Scheduler no throw in destruction and avoid updated min-tso query hang #4367
Conversation
[REVIEW NOTIFICATION] This pull request has been approved by:
To complete the pull request process, please ask the reviewers in the list to review. The full list of commands accepted by this bot can be found here. Reviewers can indicate their review by submitting an approval review.
Signed-off-by: fzhedu <[email protected]>
I think it is not a good idea to schedule a new task in the destructor; a destructor should only do cleanup. Can you find a way so that the destructor only releases the used_threads, and some other thread schedules the new task?
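A minimal sketch of the suggested split, assuming a simplified scheduler (the names `MiniScheduler`, `releaseThreads`, and `scheduleWaitingTasks` are hypothetical, not the TiFlash API): the destructor path only updates the thread bookkeeping, and a separate scheduler pass, run by another thread, picks up the waiting tasks.

```cpp
#include <functional>
#include <mutex>
#include <queue>

// Hypothetical sketch: destructors call releaseThreads() only; a dedicated
// scheduler thread calls scheduleWaitingTasks() to run deferred scheduling.
class MiniScheduler
{
public:
    void acquire(int n)
    {
        std::lock_guard<std::mutex> lock(mu);
        used_threads += n;
    }

    // Called from a task's destructor: bookkeeping only, no scheduling.
    void releaseThreads(int n)
    {
        std::lock_guard<std::mutex> lock(mu);
        used_threads -= n;
        release_pending = true; // capacity was freed; a scheduler pass will follow
    }

    // Called by a dedicated scheduler thread, never by a destructor.
    int scheduleWaitingTasks()
    {
        std::lock_guard<std::mutex> lock(mu);
        if (!release_pending)
            return 0;
        release_pending = false;
        int scheduled = 0;
        while (!waiting.empty())
        {
            waiting.front()(); // run the deferred scheduling callback
            waiting.pop();
            ++scheduled;
        }
        return scheduled;
    }

    void addWaiting(std::function<void()> f)
    {
        std::lock_guard<std::mutex> lock(mu);
        waiting.push(std::move(f));
    }

    int usedThreads()
    {
        std::lock_guard<std::mutex> lock(mu);
        return used_threads;
    }

private:
    std::mutex mu;
    int used_threads = 0;
    bool release_pending = false;
    std::queue<std::function<void()>> waiting;
};
```

This keeps the destructor free of scheduling side effects: it only marks freed capacity, and the actual wake-up of waiting tasks happens on the scheduler thread.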
@@ -87,42 +87,29 @@ void MinTSOScheduler::deleteCancelledQuery(const UInt64 tso, MPPTaskManager & ta
waiting_set.erase(tso);
GET_METRIC(tiflash_task_scheduler, type_waiting_queries_count).Set(waiting_set.size());
GET_METRIC(tiflash_task_scheduler, type_active_queries_count).Set(active_set.size());
LOG_FMT_DEBUG(log, "{} query {} (is min = {}) is deleted from active set {} left {} or waiting set {} left {}.", is_cancelled ? "Cancelled" : "Finished", tso, tso == min_tso, active_set.find(tso) != active_set.end(), active_set.size(), waiting_set.find(tso) != waiting_set.end(), waiting_set.size());
I think both `active_set.find(tso) != active_set.end()` and `waiting_set.find(tso) != waiting_set.end()` will return false, because you already erased them in L86-L87.
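The pitfall being pointed out can be shown in a few lines: once the query is erased from the set, `find()` can no longer report whether it was there, so the membership has to be captured before the erase. The helper name `eraseAndReport` is illustrative.

```cpp
#include <set>

// Capture set membership BEFORE erasing, so the log can still report it.
bool eraseAndReport(std::set<unsigned long> & s, unsigned long tso)
{
    bool was_present = s.find(tso) != s.end(); // capture before erase
    s.erase(tso);
    // After the erase, s.find(tso) == s.end() always holds, so logging
    // the find() result at this point would always print "false".
    return was_present;
}
```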
Oh, my mistake.
/// NOTE: if the cancelled query hangs, it will block the min_tso, possibly resulting in deadlock. So here we force scheduling waiting tasks of the updated min_tso query.
if (updateMinTSO(tso, true, "when cancelling it."))
/// NOTE: if the updated min_tso query has waiting tasks, they should be scheduled, especially when the soft-limited threads are almost used up and active tasks are in a resource deadlock which cannot release threads soon.
if (updateMinTSO(tso, true, is_cancelled ? "when cancelling it" : "as deleting it"))
`as deleting it` => `as finish it`?
ok
if (isWaiting)
{
    /// cancel this task, then TiDB will finally notify this tiflash node to cancel all tasks of this tso and update metrics.
    task->cancel(msg);
Will `cancel` trigger ops on `waiting_set` again?
Yes, it conducts the cancel processing.
Since `cancel` will also trigger ops on `waiting_set`, will the `waiting_set.erase(tso)` at L197 be a duplicated op?
As its comment says: /// avoid the left waiting tasks of this query reaching here many times.
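A short sketch of why the early erase is not harmful: `std::set::erase` by key is idempotent, so the first waiting task of the query removes the tso and every later task of the same query becomes a no-op on this path. The helper name `removeFromWaiting` is illustrative.

```cpp
#include <set>

// erase() by key returns how many elements were removed:
// 1 on the first call for this tso, 0 on every repeated call.
bool removeFromWaiting(std::set<unsigned long> & waiting_set, unsigned long tso)
{
    return waiting_set.erase(tso) > 0;
}
```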
Signed-off-by: fzhedu <[email protected]>
/run-all-tests
Coverage for changed files
Coverage summary
full coverage report (for internal network access only)
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
LGTM
Signed-off-by: fzhedu <[email protected]>
terminated as expected.
/run-all-tests
dbms/src/Flash/Mpp/MPPTask.cpp
Outdated
{
    std::unique_lock lock(schedule_mu);
    schedule_cv.wait(lock, [&] { return schedule_state != ScheduleState::WAITING; });
    time_cost = stopwatch.elapsedSeconds();
    GET_METRIC(tiflash_task_scheduler_waiting_duration_seconds).Observe(time_cost);
    if (schedule_state == ScheduleState::FAILED)
I would like to distinguish the case where a task fails to schedule because resources are exhausted from the case where the related query is cancelled.
OK, I will add an enum flag: EXCEEDED.
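A hedged sketch of the extra enum value under discussion. The state names follow the conversation; the helper `checkScheduleResult` is hypothetical, not the actual MPPTask method: EXCEEDED fails loudly at the wait site, while FAILED defers to the cancellation path.

```cpp
#include <stdexcept>
#include <string>

enum class ScheduleState
{
    WAITING,
    SCHEDULED,
    FAILED,   // query is being cancelled; later runImpl handling reports it
    EXCEEDED  // thread hard limit exceeded: fail loudly right here
};

std::string checkScheduleResult(ScheduleState state, const std::string & id)
{
    if (state == ScheduleState::EXCEEDED)
        throw std::runtime_error(id + " failed to schedule: thread hard limit exceeded in min-tso scheduler");
    if (state == ScheduleState::FAILED)
        return "cancelled"; // the cancellation path surfaces the error later
    return "scheduled";
}
```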
Signed-off-by: fzhedu <[email protected]>
/run-all-tests
if (schedule_state == ScheduleState::EXCEEDED)
{
    throw Exception("{} is failed to schedule because of exceeding the thread hard limit in min-tso scheduler.", id.toString());
}
Why not throw an error if the schedule state is FAILED?
The later processing in MPPTask::runImpl() will throw, because this query is to be cancelled and its state is not RUNNING.
But after the task is scheduled with the FAILED schedule state, two threads run concurrently:
- the runImpl thread begins to run
- the cancel thread marks the MPPTask as cancelled
There is no guarantee that the runImpl thread will see the Cancelled status right after it is woken up.
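The race described above can be sketched with an atomic status that each side claims with compare-and-swap, so exactly one transition wins and runImpl never runs a task that cancel has already claimed. `FakeTask` and its methods are illustrative, not the MPPTask API.

```cpp
#include <atomic>

enum class TaskStatus { INITIALIZING, RUNNING, CANCELLED };

struct FakeTask
{
    std::atomic<TaskStatus> status{TaskStatus::INITIALIZING};

    // runImpl-side: re-check the status instead of assuming it can run.
    bool tryRun()
    {
        TaskStatus expected = TaskStatus::INITIALIZING;
        return status.compare_exchange_strong(expected, TaskStatus::RUNNING);
    }

    // cancel-side: only succeeds if runImpl has not claimed the task yet.
    bool cancel()
    {
        TaskStatus expected = TaskStatus::INITIALIZING;
        return status.compare_exchange_strong(expected, TaskStatus::CANCELLED);
    }
};
```

Whichever thread's compare-and-swap succeeds decides the outcome; the loser observes the final state and backs off, closing the wake-up visibility window.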
Updated to address this.
Signed-off-by: fzhedu <[email protected]>
/run-all-tests
LGTM
/merge
@windtalker: It seems you want to merge this PR, I will help you trigger all the tests: /run-all-tests. If you have any questions about the PR merge process, please refer to the pr process. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the ti-community-infra/tichi repository.
This pull request has been accepted and is ready to merge. Commit hash: 3e563cc
@fzhedu: Your PR was out of date, I have automatically updated it for you. At the same time I will also trigger all tests for you: /run-all-tests. If the CI test fails, you just re-trigger the test that failed and the bot will merge the PR for you after the CI passes. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the ti-community-infra/tichi repository.
In response to a cherrypick label: new pull request created: #4399.
Signed-off-by: fzhedu [email protected]
What problem does this PR solve?
Issue Number: close #4366
Problem Summary:
What is changed and how it works?
Check List
Tests
Side effects
Documentation
Release note