
[Core] Allow spillback while hitting scheduling class cap #28804

Merged: 7 commits merged into ray-project:master from jjyao/cap_spill on Sep 30, 2022

Conversation

@jjyao (Collaborator) commented Sep 27, 2022

Signed-off-by: Jiajun Yao [email protected]

Why are these changes needed?

When we hit the scheduling class cap and cannot run a task locally, we should still check whether other nodes can run it and spill it back to them accordingly. This way, tasks won't be stuck on the local node waiting for the cap to be lifted.
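
To make the intended flow concrete, here is a rough sketch of the dispatch-loop behavior this change aims for. It is not the actual Ray code; the helper names (SchedulingClassCapReached, the exact shape of TrySpillback) and the loop structure are simplified assumptions.

// Illustrative sketch only, simplified from the real LocalTaskManager logic.
void DispatchScheduledTasksToWorkers() {
  for (auto &work : dispatch_queue) {
    if (SchedulingClassCapReached(work)) {
      // Previously the task just waited here until the cap was lifted.
      // Now we first check whether another node can run it and, if so,
      // spill it back instead of leaving it stuck in the local queue.
      if (TrySpillback(work)) {
        continue;  // Task handed off to another node.
      }
      break;  // No candidate node; keep waiting locally for the cap.
    }
    // ... normal local dispatch path (acquire resources, pop a worker, run) ...
  }
}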

Related issue number

Checks

  • I've signed off every commit (by using the -s flag, i.e., git commit -s) in this PR.
  • I've run scripts/format.sh to lint the changes in this PR.
  • I've included any doc changes needed for https://docs.ray.io/en/master/.
  • I've made sure the tests are passing. Note that there might be a few flaky tests, see the recent failures at https://flakey-tests.ray.io/
  • Testing Strategy
    • Unit tests
    • Release tests
    • This PR is not tested :(

@jjyao jjyao marked this pull request as ready for review September 29, 2022 00:12
@jjyao changed the title from "Allow spillback while hitting scheduling class cap" to "[Core] Allow spillback while hitting scheduling class cap" on Sep 29, 2022
@stephanie-wang (Contributor) left a comment

Hmm, I understand the PR description, but I'm a bit confused by the solution (I probably just don't have the full context on the scheduler logic). Can you explain the individual changes a bit more? Maybe some comments for these changes would also help.

Also, I'm having some trouble understanding the effect on high-level behavior. Is there a Python-level test that shows the difference?

@@ -370,7 +374,7 @@ bool LocalTaskManager::TrySpillback(const std::shared_ptr<internal::Work> &work,
bool &is_infeasible) {
auto scheduling_node_id = cluster_resource_scheduler_->GetBestSchedulableNode(
work->task.GetTaskSpecification(),
-    work->PrioritizeLocalNode(),
+    /*prioritize_local_node*/ true,
Contributor

Why was this change needed?

Collaborator Author

Oh, when a node is selected by the cluster task manager, the task is added to that node's local task manager queue. At that point we should only spill back if the local node has no resources (in other words, we should prefer staying local if possible).
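
As a minimal sketch of the semantics described here, assuming simplified names rather than the real ClusterResourceScheduler API:

// Illustrative only: with prioritize_local_node = true, the scheduler keeps the
// task on the local node whenever it fits, and only returns a remote node when
// the local node cannot run it, which is exactly when we want to spill back.
NodeID GetBestSchedulableNode(const TaskSpec &task, bool prioritize_local_node) {
  if (prioritize_local_node && LocalNodeCanRun(task)) {
    return local_node_id;               // Prefer staying local.
  }
  return BestFeasibleRemoteNode(task);  // May be Nil if no node fits anywhere.
}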

@@ -228,11 +237,6 @@ void LocalTaskManager::DispatchScheduledTasksToWorkers() {
internal::UnscheduledWorkCause::WAITING_FOR_RESOURCES_AVAILABLE);
break;
}
-  num_unschedulable_task_spilled_++;
Contributor

Why do we move this code?

Collaborator Author

This is moved into TrySpillback() since it needs to run whenever a spill happens. By moving it inside TrySpillback() we make sure it always happens on spill, and we also avoid code duplication since TrySpillback() is now called in two places.
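
A hedged sketch of the resulting shape (the signature and arguments are simplified assumptions, not the exact code):

// Illustrative pseudo-implementation. The point is that the spill counter is
// incremented inside TrySpillback(), so both call sites, the existing dispatch
// path and the new cap-hit path, count spills consistently without duplication.
bool TrySpillback(const Work &work) {
  NodeID target = GetBestSchedulableNode(work.task_spec, /*prioritize_local_node=*/true);
  if (target.IsNil() || target == local_node_id) {
    return false;  // No spill: no candidate, or local is still the preferred node.
  }
  num_unschedulable_task_spilled++;  // Counted here so every spill path is covered.
  Spillback(target, work);
  return true;
}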

@stephanie-wang added the @author-action-required label (the PR author is responsible for the next step; remove the tag to send back to the reviewer) on Sep 30, 2022
Signed-off-by: Jiajun Yao <[email protected]>
@jjyao (Collaborator, Author) commented Sep 30, 2022

Added a python test and some comments.

Before, when a task hit the scheduling class cap, it had to wait for the cap to be lifted (which could take a long time) before it had a chance to be spilled back. With this PR, it can be spilled back immediately.

@jjyao removed the @author-action-required label on Sep 30, 2022
@stephanie-wang stephanie-wang merged commit da3edda into ray-project:master Sep 30, 2022
@jjyao jjyao deleted the jjyao/cap_spill branch October 1, 2022 04:15
WeichenXu123 pushed a commit to WeichenXu123/ray that referenced this pull request Dec 19, 2022
…t#28804)

While we are hitting the scheduling class cap and cannot run the task locally, we should still see if we can run it on other nodes and spill back to them accordingly. This way, tasks won't be stuck on the local node waiting for the cap to be lifted.

Signed-off-by: Jiajun Yao <[email protected]>
Signed-off-by: Weichen Xu <[email protected]>