store/copr: revert pr/35975, do not reduce concurrency for keep order coprocessor request #55141
Conversation
Hi @tiancaiamao. Thanks for your PR. PRs from untrusted users cannot be marked as trusted with I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
Codecov Report: All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

@@             Coverage Diff              @@
##            master     #55141     +/-   ##
================================================
+ Coverage   72.8357%   74.7715%   +1.9358%
================================================
  Files          1563       1564         +1
  Lines        438992     440293      +1301
================================================
+ Hits         319743     329214      +9471
+ Misses        99562      90861      -8701
- Partials      19687      20218       +531
================================================

Flags with carried forward coverage won't be shown.
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: lcwangchao, mjonss. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
[LGTM Timeline notifier] Timeline:
/hold
Reproduce steps:
- create a cluster
- prepare data
- run an extra tidb built from this PR on port 4001, to compare with tidb nightly on port 4000
With keep order reducing concurrency, explain analyze takes 7.95s.
With keep order not reducing concurrency, 2.45s.
Now we focus on this query's explain analyze. If the keep order coprocessor reduces concurrency, it takes 11.18s.
And if the keep order coprocessor request does not reduce concurrency, the same query takes 5.26s.
So far so good. The surprise comes from reading the result set from the mysql client:
Why? Why does concurrency not help in that case, while it seems much better in 'explain analyze'?
Conclusion: when the result set is large, the performance bottleneck is the client receiving data through the MySQL protocol, rather than tidb reading from tikv concurrently. Because the client receives data over a single connection, there is no concurrency on that path. When the result set is small, the MySQL client's reading speed is not a bottleneck and TiDB memory usage is not a problem, so the higher the concurrency, the better.
/unhold
… coprocessor request (pingcap#55141) ref pingcap#54969
In response to a cherrypick label: new pull request created to branch
What problem does this PR solve?
Issue Number: close #54969
Problem Summary:
What changed and how does it work?
In the past, #35975 reduced coprocessor concurrency to 2 for keep order queries.
But recently we met an on-call case where that change caused a performance regression.
Balancing memory usage and concurrency is a difficult job;
we need more tests and experiments to reach a conclusion. For now we can simply set TiFlash to not reduce concurrency: when a user is using TiFlash, tidb memory usage would not be a top concern (their hardware must be better).

Check List
Tests
See comments below
One line change, just revert part of the previous optimization.
Side effects
Documentation
Release note
Please refer to Release Notes Language Style Guide to write a quality release note.