[RLlib] To improve performance, do not wait for sync weight calls by default. #30509
Conversation
…default. Signed-off-by: Jun Gong <[email protected]>
A couple of questions; looks good otherwise.
@@ -397,6 +398,10 @@ def sync_weights(
            weights to. If None (default), sync to all remote workers.
        global_vars: An optional global vars dict to set this
            worker to. If None, do not update the global_vars.
        timeout_seconds: Timeout in seconds to wait for the sync weights
What would None do then?
None would wait indefinitely until all the object_refs are ready. This is standard ray.wait behavior; see the documentation for ray.wait(timeout=...): https://docs.ray.io/en/latest/ray-core/package-ref.html#ray-wait. The default timeout there is None.
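For reference, a minimal illustration of the ray.wait timeout semantics being discussed; apply_weights here is a made-up stand-in task, not RLlib code:

```python
import time
import ray

ray.init()


@ray.remote
def apply_weights():
    # Stand-in for a remote worker applying a new weights dict.
    time.sleep(5)
    return "done"


refs = [apply_weights.remote() for _ in range(4)]

# timeout=0: return immediately with whatever is already ready; the
# remaining calls keep running in the background (fire-and-forget).
ready, not_ready = ray.wait(refs, num_returns=len(refs), timeout=0)

# timeout=None (ray.wait's own default): block until num_returns refs are ready.
ready, not_ready = ray.wait(refs, num_returns=len(refs), timeout=None)
```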
def set_weight(w):
-    w.set_weights(ray.get(weights_ref), global_vars)
+    w.set_weights(weights, global_vars)
Would from_worker.get_weights(policies) return an object ref, or the actual weights? If it's an object ref, would set_weight work for the case where from_worker is a remote worker? And if so, do we test this behavior in a unit test?
from_worker has to be a local RolloutWorker, so weights here must be raw weights.
The reason we have from_worker is that oftentimes evaluation_worker_set doesn't have a local worker to sync from; you need to sync weights from rollout_workers.local_worker() to evaluation_workers.remote_workers().
Also, the reason I got rid of ray.get/put here is that, while testing everything, I noticed slight improvements when we don't force a ray.put on every single weights dict. It seems Ray core may optimize things: if all the remote workers are on the same instance, it can skip serialization and simply copy the data over. I still need to confirm this, though.
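To make this concrete, here is a rough, simplified sketch of the pattern described above; the worker classes are toy stand-ins, not the actual RLlib RolloutWorker API:

```python
import ray

ray.init()


@ray.remote
class RemoteRolloutWorker:
    # Toy stand-in for a remote RolloutWorker.
    def __init__(self):
        self.weights = None

    def set_weights(self, weights, global_vars=None):
        self.weights = weights


class LocalRolloutWorker:
    # Toy stand-in for the local RolloutWorker we sync from.
    def get_weights(self):
        return {"default_policy": [1.0, 2.0, 3.0]}


local_worker = LocalRolloutWorker()
remote_workers = [RemoteRolloutWorker.remote() for _ in range(2)]

# Pull raw weights from the local worker (no explicit ray.put()) and push
# them to every remote worker. Ray serializes the argument itself and may
# avoid extra copies for workers on the same node (unconfirmed, per the
# discussion above).
weights = local_worker.get_weights()
refs = [w.set_weights.remote(weights) for w in remote_workers]

# Don't block on the calls; just fire them off (timeout=0).
ray.wait(refs, num_returns=len(refs), timeout=0)
```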
Signed-off-by: Jun Gong <[email protected]>
Signed-off-by: Jun Gong <[email protected]>
I confirmed that this PR fixes the A3C regression issues.
…default. (ray-project#30509) Also batch weight sync calls, and skip synching to local worker. Signed-off-by: Jun Gong <[email protected]> Signed-off-by: Weichen Xu <[email protected]>
Signed-off-by: Jun Gong [email protected]
Why are these changes needed?
This improves throughput by almost 2x for many of our algorithms. As an example, A3C (see the benchmark attached to the original PR).
This was also the default behavior before the elastic training PR.
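The mechanism behind the throughput gain can be seen with a small, self-contained toy benchmark (not the actual RLlib measurement): when the driver doesn't block on the weight-sync calls, the sync overlaps with the next training step.

```python
import time
import ray

ray.init()


@ray.remote
class ToyWorker:
    def set_weights(self, weights):
        time.sleep(0.1)  # pretend applying weights is slow


workers = [ToyWorker.remote() for _ in range(4)]
weights = {"default_policy": [0.0] * 1000}


def train_step(block_on_sync: bool):
    refs = [w.set_weights.remote(weights) for w in workers]
    # Old behavior: wait for every worker. New default: don't wait (timeout=0).
    ray.wait(refs, num_returns=len(refs), timeout=None if block_on_sync else 0)
    time.sleep(0.1)  # pretend to do a local training update


for block_on_sync in (True, False):
    start = time.time()
    for _ in range(10):
        train_step(block_on_sync)
    print(f"block_on_sync={block_on_sync}: {time.time() - start:.2f}s")
```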
Related issue number
Checks
I've signed off every commit (by using the -s flag, i.e., git commit -s) in this PR.
I've run scripts/format.sh to lint the changes in this PR.