[RLlib] Fix SAC/DQN/CQL GPU and multi-GPU. #47179
Conversation
Signed-off-by: sven1977 <[email protected]>
LGTM. Great PR with a big achievement. Multi-GPU on SAC is awesome!
```
tags = ["team:rllib", "exclusive", "learning_tests", "torch_only", "learning_tests_discrete", "learning_tests_pytorch_use_all_core", "gpu"],
size = "large",
srcs = ["tuned_examples/dqn/cartpole_dqn.py"],
args = ["--as-test", "--enable-new-api-stack", "--num-gpus=1"]
```
Does `num-gpus=1` use a local or remote Learner? Imo, we should test with both. What do you think @sven1977?
For IMPALA/APPO, we should add a validation that these should never be run with a local Learner, b/c these are async algos that suffer tremendously from the Learner not being async. Will add this check/error in a separate PR ... (see the sketch below for the local-vs-remote distinction).
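For reference, a minimal sketch of the local-vs-remote Learner distinction on the new API stack (`APPOConfig` is used purely for illustration; exact `learners()` argument names may differ across Ray versions):

```python
from ray.rllib.algorithms.appo import APPOConfig

config = APPOConfig().environment("CartPole-v1")

# num_learners=0 -> a local Learner running inside the main Algorithm
# process (what the validation above would forbid for async algos).
config.learners(num_learners=0)

# num_learners >= 1 -> remote Learner actor(s) that update asynchronously,
# outside the main process.
config.learners(num_learners=1, num_gpus_per_learner=1)
```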
```
tags = ["team:rllib", "exclusive", "learning_tests", "torch_only", "learning_tests_discrete", "learning_tests_pytorch_use_all_core", "gpu"],
size = "large",
srcs = ["tuned_examples/dqn/multi_agent_cartpole_dqn.py"],
args = ["--as-test", "--enable-new-api-stack", "--num-agents=2", "--num-cpus=4", "--num-gpus=1"]
```
Interesting, I thought this combination does not work: `--num-gpus > 0` together with `--num-cpus > 0` :)
Good point. We need to get rid of this confusion some time soon. Note that these are the command line options, not directly translatable to Algo config properties. Here:
`--num-cpus` is the number of CPUs Ray provides for the entire cluster.
`--num-gpus` is the number of Learner workers; note that if no GPUs are available, `--num-gpus` still sets the number of Learner workers, but then each worker gets 1 CPU (instead of 1 GPU). :|
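A hedged sketch of how these flags typically map onto a run (illustrative only, not the actual internals of the example scripts; `learners()` usage assumed from the new API stack):

```python
import ray
from ray.rllib.algorithms.dqn import DQNConfig

# --num-cpus=4: CPUs made available to the whole (local) Ray cluster.
ray.init(num_cpus=4)

config = (
    DQNConfig()
    .environment("CartPole-v1")
    # --num-gpus=1: one Learner worker; it gets 1 GPU if available,
    # otherwise it falls back to running on 1 CPU.
    .learners(num_learners=1, num_gpus_per_learner=1)
)
```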
```
main = "tuned_examples/sac/multi_agent_pendulum_sac.py",
tags = ["team:rllib", "exclusive", "learning_tests", "torch_only", "learning_tests_continuous"],
size = "large",
srcs = ["tuned_examples/sac/multi_agent_pendulum_sac.py"],
```
Do we actually need the `srcs` for files that can be executed directly via python?
```python
# Reduce EnvRunner metrics over the n EnvRunners.
self.metrics.merge_and_log_n_dicts(
    env_runner_results, key=ENV_RUNNER_RESULTS
)

# Add the sampled experiences to the replay buffer.
with self.metrics.log_time((TIMERS, REPLAY_BUFFER_ADD_DATA_TIMER)):
```
Nice :)
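For context, a hedged sketch of the MetricsLogger pattern used in this hunk; the key names and the `reduce()` call here are illustrative assumptions, not RLlib's actual constants:

```python
import time

from ray.rllib.utils.metrics.metrics_logger import MetricsLogger

metrics = MetricsLogger()

# Time a code block under a (nested) key, as in the hunk above.
with metrics.log_time(("timers", "replay_buffer_add_data")):
    time.sleep(0.01)  # stand-in for `replay_buffer.add(episodes)`

# Merge n result dicts (one per EnvRunner) under a common key.
env_runner_results = [{"num_env_steps": 100}, {"num_env_steps": 102}]
metrics.merge_and_log_n_dicts(env_runner_results, key="env_runner_results")

print(metrics.reduce())  # the reduced/averaged metrics tree
```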
```python
# here). This is different from doing `.detach()` or `with torch.no_grad()`,
# as these two methods would fully block all gradient recordings, including
# the needed policy ones.
all_params = (
```
Nice!
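For context, a minimal PyTorch sketch of this freeze-via-`requires_grad` technique; `pi` and `qf` are hypothetical stand-ins, not RLlib's actual modules:

```python
import torch
import torch.nn as nn

pi = nn.Linear(4, 2)   # stand-in policy network
qf = nn.Linear(6, 1)   # stand-in Q-network
obs = torch.randn(8, 4)

# Freeze the Q-net's own weights (instead of `.detach()`/`torch.no_grad()`),
# so gradients still flow through the Q-value back into the policy, but no
# grads get recorded for the Q-net parameters themselves.
for p in qf.parameters():
    p.requires_grad_(False)

actions = pi(obs)
actor_loss = -qf(torch.cat([obs, actions], dim=-1)).mean()
actor_loss.backward()  # populates grads on `pi` only, not on `qf`

# Unfreeze again for the subsequent critic update.
for p in qf.parameters():
    p.requires_grad_(True)
```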
Fix DQN/SAC/CQL GPU and multi-GPU.
[DQN | SAC] x [single-agent | multi-agent] x [CPU Learner | GPU Learner | 2 CPU Learners | 2 GPU Learners]
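A hedged sketch of what one cell of this matrix ("SAC with 2 GPU Learners") might look like as a config; method names follow the new API stack and may differ across Ray versions:

```python
from ray.rllib.algorithms.sac import SACConfig

config = (
    SACConfig()
    .environment("Pendulum-v1")
    # "2 GPU Learners" cell: two remote Learner actors with 1 GPU each;
    # gradients are averaged across the Learners.
    .learners(num_learners=2, num_gpus_per_learner=1)
)
```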
Why are these changes needed?
Related issue number
Checks
- I've signed off every commit (by using the -s flag, i.e., `git commit -s`) in this PR.
- I've run `scripts/format.sh` to lint the changes in this PR.
- If I added a method in Tune, I've added it in `doc/source/tune/api/` under the corresponding `.rst` file.