[train] Fix ScalingConfig(accelerator_type) to request a small fraction of the accelerator label #44225
Conversation
Signed-off-by: Justin Yu <[email protected]>
@@ -206,7 +206,7 @@ def _resources_per_worker_not_none(self):

         if self.accelerator_type:
             accelerator = f"{RESOURCE_CONSTRAINT_PREFIX}{self.accelerator_type}"
-            resources_per_worker.setdefault(accelerator, 1)
+            resources_per_worker.setdefault(accelerator, 0.001)
Can we make this a constant (or use an existing one if it already exists)?
ray/python/ray/_private/utils.py
Lines 389 to 392 in 2747c80

if accelerator_type is not None:
    resources[
        f"{ray_constants.RESOURCE_CONSTRAINT_PREFIX}{accelerator_type}"
    ] = 0.001
It seems the core team uses 0.001 directly here.
@jjyao cool if we extract this into a constant? Gives it some concrete meaning 🙂
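For context, a minimal sketch of what that extraction might look like; the constant name and its placement are assumptions for illustration, not the actual change:

# Hypothetical constant name, shown only to illustrate the suggestion above.
# A tiny fractional request suffices because the accelerator_type label marks
# that a node has this accelerator type; it is not a count of accelerators.
ACCELERATOR_TYPE_FRACTIONAL_REQUEST = 0.001

if self.accelerator_type:
    accelerator = f"{RESOURCE_CONSTRAINT_PREFIX}{self.accelerator_type}"
    resources_per_worker.setdefault(accelerator, ACCELERATOR_TYPE_FRACTIONAL_REQUEST)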
Thanks for the fix!
[train] Fix ScalingConfig(accelerator_type) to request a small fraction of the accelerator label (ray-project#44225)

Make Ray Train's accelerator type resource request match Ray Core by setting it to a fractional value (0.001). This is needed to fix autoscaling behavior to request the correct number of GPUs.

Signed-off-by: Justin Yu <[email protected]>
Why are these changes needed?

accelerator_type is currently implemented as a custom resource with a quantity of 1 if an instance has an accelerator of that type. For example, both a machine with 1 A10G GPU and a machine with 4 A10G GPUs will have {"accelerator_type:A10G": 1.0}. This label is just an indicator of whether the machine contains the accelerator, rather than a count of the number of accelerators of that type.

This PR makes our accelerator type resource request match Ray Core by setting it to a fractional value (0.001). This is needed to fix autoscaling behavior to request the correct number of GPUs.
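To make the failure mode concrete, here is a sketch of how the old value of 1 blocked multi-worker scheduling on a single node; the cluster shape and worker count are illustrative assumptions:

from ray.train import ScalingConfig

# Suppose a single node has 4 A10G GPUs. Its resources include
# {"GPU": 4.0, "accelerator_type:A10G": 1.0} -- the label is 1.0 regardless
# of how many A10G GPUs the node actually has.
scaling_config = ScalingConfig(
    num_workers=4,
    use_gpu=True,
    accelerator_type="A10G",
)

# Old behavior: each worker requested {"GPU": 1, "accelerator_type:A10G": 1},
# so 4 workers needed "accelerator_type:A10G": 4.0 in total, which no single
# node can provide; the autoscaler would try to bring up extra nodes.
# New behavior: each worker requests {"GPU": 1, "accelerator_type:A10G": 0.001},
# so all 4 workers fit on the one 4-GPU node.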
Related issue number
Checks

- I've signed off every commit (git commit -s) in this PR.
- I've run scripts/format.sh to lint the changes in this PR.
- If I've added a method in Tune, I've added it in doc/source/tune/api/ under the corresponding .rst file.