[tune/execution] Update staged resources in a fixed counter for faster lookup #32087
Merged: krfricke merged 6 commits into ray-project:master from krfricke:tune/cache-staged-resources on Jan 31, 2023
Conversation
cadedaniel reviewed Jan 31, 2023
xwjiang2010 reviewed Jan 31, 2023
rkooo567 approved these changes Jan 31, 2023
xwjiang2010 approved these changes Jan 31, 2023
edoakes pushed a commit to edoakes/ray that referenced this pull request on Mar 22, 2023
[tune/execution] Update staged resources in a fixed counter for faster lookup (ray-project#32087)
Signed-off-by: Kai Fricke [email protected]
Why are these changes needed?
In #30016 we migrated Ray Tune to use a new resource management interface. In the same PR, we simplified the resource consolidation logic. This led to a performance regression first identified in #31337.
After manual profiling, the regression seems to come from `RayTrialExecutor._count_staged_resources`. We have 1000 staged trials, and this function is called on every step, executing a linear scan through all trials.
This PR fixes this performance bottleneck by keeping state of the resource counter instead of dynamically recreating it every time. This is simple as we can just add/subtract the resources whenever we add/remove from the `RayTrialExecutor._staged_trials` set.
Manual testing confirmed this improves the runtime of `tune_scalability_result_throughput_cluster` from ~132 seconds to ~122 seconds, bringing it back to the same level as before the refactor.
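As an illustration of the caching pattern described above (not the actual `RayTrialExecutor` code), here is a minimal Python sketch; `Trial`, `Resources`, and `StagedTrialTracker` are hypothetical stand-ins for the real Ray Tune classes:

```python
# Minimal sketch of the counter-caching pattern, assuming hypothetical
# stand-ins (Trial, Resources, StagedTrialTracker) rather than the actual
# Ray Tune RayTrialExecutor internals touched by this PR.
from collections import Counter
from dataclasses import dataclass
from typing import Dict


@dataclass(frozen=True)
class Resources:
    """Hypothetical per-trial resource request (hashable so it can key a Counter)."""
    cpu: float = 1.0
    gpu: float = 0.0


@dataclass
class Trial:
    trial_id: str
    resources: Resources


class StagedTrialTracker:
    """Keeps a running counter of staged resources instead of rescanning.

    Before: counting staged resources meant a linear scan over all staged
    trials on every step. After: the counter is updated incrementally when
    a trial is staged or unstaged, so the lookup is O(1).
    """

    def __init__(self) -> None:
        self._staged_trials: Dict[str, Trial] = {}
        self._staged_resources: Counter = Counter()

    def stage(self, trial: Trial) -> None:
        self._staged_trials[trial.trial_id] = trial
        # Add the trial's resources to the cached counter.
        self._staged_resources[trial.resources] += 1

    def unstage(self, trial: Trial) -> None:
        if self._staged_trials.pop(trial.trial_id, None) is None:
            return
        # Subtract the resources; drop zero entries to keep the counter small.
        self._staged_resources[trial.resources] -= 1
        if self._staged_resources[trial.resources] <= 0:
            del self._staged_resources[trial.resources]

    def count_staged_resources(self) -> Counter:
        # O(1): return the cached counter instead of scanning all trials.
        return self._staged_resources
```

The design choice is the usual space-for-time trade: the counter is maintained on every stage/unstage, so reading it no longer requires a scan over the ~1000 staged trials on each step.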
Related issue number
Closes #32077
Checks
I've signed off every commit (`git commit -s`) in this PR.
I've run `scripts/format.sh` to lint the changes in this PR.