[Core] Allow spillback while hitting scheduling class cap #28804
Conversation
Hmm, I understand the PR description, but I'm a bit confused by the solution (I probably just don't have the full context on the scheduler logic). Can you explain the individual changes a bit more? Maybe some comments for these changes would also help.
Also, I'm having some trouble understanding the effect on high-level behavior. Is there a Python-level test that shows the difference?
@@ -370,7 +374,7 @@ bool LocalTaskManager::TrySpillback(const std::shared_ptr<internal::Work> &work,
                                     bool &is_infeasible) {
   auto scheduling_node_id = cluster_resource_scheduler_->GetBestSchedulableNode(
       work->task.GetTaskSpecification(),
-      work->PrioritizeLocalNode(),
+      /*prioritize_local_node*/ true,
Why was this change needed?
Oh, once a node is selected by the cluster task manager, the task is added to that node's local task manager queue. From then on we should only spill back if the local node has no resources (in other words, we should prefer staying local if possible).
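For illustration, here is a minimal sketch of that preference (hypothetical names and a simplified resource model, not Ray's actual `GetBestSchedulableNode`): the local node is returned whenever it still has enough resources, so spillback only happens when the task genuinely cannot run locally.

```cpp
// Minimal sketch (not the actual Ray implementation) of
// "prioritize the local node" during spillback selection.
#include <optional>
#include <string>
#include <unordered_map>

struct NodeResources {
  double available_cpu;
};

// Hypothetical stand-in for GetBestSchedulableNode(..., /*prioritize_local_node=*/true, ...).
std::optional<std::string> BestSchedulableNode(
    const std::string &local_node,
    const std::unordered_map<std::string, NodeResources> &nodes,
    double required_cpu) {
  // Prefer the local node if it can still fit the task.
  auto local = nodes.find(local_node);
  if (local != nodes.end() && local->second.available_cpu >= required_cpu) {
    return local_node;
  }
  // Otherwise fall back to any remote node with enough resources (spillback target).
  for (const auto &[node_id, res] : nodes) {
    if (node_id != local_node && res.available_cpu >= required_cpu) {
      return node_id;
    }
  }
  return std::nullopt;  // Infeasible everywhere right now.
}
```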
@@ -228,11 +237,6 @@ void LocalTaskManager::DispatchScheduledTasksToWorkers() {
             internal::UnscheduledWorkCause::WAITING_FOR_RESOURCES_AVAILABLE);
         break;
       }
-      num_unschedulable_task_spilled_++;
Why do we move this code?
This is moved into TrySpillback() since it needs to be called whenever a spill happens. By moving it inside TrySpillback() we make sure it always runs when a spill happens, and we also avoid code duplication since TrySpillback() is now called in two places.
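As a rough sketch of that refactor (hypothetical class and simplified signature, not the real LocalTaskManager), incrementing the counter inside the spill path itself keeps the bookkeeping correct for every call site:

```cpp
// Rough sketch, not the actual Ray code: the metric is bumped inside the
// spill helper, so callers cannot forget to update it.
#include <cstdint>

class LocalTaskManagerSketch {
 public:
  bool TrySpillback(bool has_remote_target) {
    if (!has_remote_target) {
      return false;  // Nothing to spill to; caller keeps the task queued.
    }
    // Increment here (rather than in each caller) so the count stays
    // correct no matter which code path triggered the spill.
    num_unschedulable_task_spilled_++;
    // ... actual spill logic would go here ...
    return true;
  }

  int64_t num_spilled() const { return num_unschedulable_task_spilled_; }

 private:
  int64_t num_unschedulable_task_spilled_ = 0;
};
```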
Added a Python test and some comments. Before, when a task hit the scheduling class cap, it had to wait for the cap to be lifted (which can take a long time) before it had a chance to be spilled back. With this PR, it can be spilled back immediately.
Signed-off-by: Jiajun Yao <[email protected]>
Why are these changes needed?
While we are hitting the scheduling class cap and cannot run the task locally, we should still see if we can run it on other nodes and spill back to them accordingly. This way, tasks won't be stuck on the local node waiting for the cap to be lifted.
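For illustration, a simplified sketch of the intended dispatch-loop behavior (hypothetical helpers passed in as callbacks; not Ray's actual interfaces): when the cap is hit, the loop attempts a spillback instead of leaving the task queued until the cap is lifted.

```cpp
// Illustrative sketch only: try to spill a capped task to another node
// rather than blocking it behind the per-scheduling-class cap.
#include <deque>
#include <functional>

struct Task {
  int scheduling_class;
};

void DispatchScheduledTasks(
    std::deque<Task> &queue,
    const std::function<bool(int)> &at_scheduling_class_cap,
    const std::function<bool(const Task &)> &try_spillback,
    const std::function<void(const Task &)> &dispatch_locally) {
  while (!queue.empty()) {
    const Task &task = queue.front();
    if (at_scheduling_class_cap(task.scheduling_class)) {
      // New behavior: even though the task cannot run locally right now,
      // check whether another node can take it and spill back immediately.
      if (try_spillback(task)) {
        queue.pop_front();
        continue;
      }
      // No remote node can take it either; wait for the cap to be lifted.
      break;
    }
    dispatch_locally(task);
    queue.pop_front();
  }
}
```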
Related issue number
Checks
- I've signed off every commit (git commit -s) in this PR.
- I've run scripts/format.sh to lint the changes in this PR.