
[RLlib] Use config (not self.config) in Learner.compute_loss_for_module to prepare these for multi-agent-capability. #45053

Merged

Conversation

@sven1977 sven1977 commented Apr 30, 2024

Use config (not self.config) in Learner.compute_loss_for_module to prepare these loss computations for multi-agent capability.

  • Also fixes SAC's compute_gradients to no longer use DEFAULT_POLICY, but to actually loop through the different RLModules (see the sketches below).
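
For reference, a minimal, self-contained sketch of the pattern this change moves toward: per-module settings are handed to the loss method instead of being read from the Learner-wide self.config. All names here (ToyLearner, ModuleConfig, gamma) are illustrative assumptions, not the actual RLlib Learner API.

```python
# Sketch only: per-module config is passed into the loss function rather than
# read from a Learner-wide `self.config`.
from dataclasses import dataclass


@dataclass
class ModuleConfig:  # stand-in for a (possibly overridden) per-module config
    gamma: float = 0.99
    double_q: bool = True


class ToyLearner:  # stand-in for an RLlib Learner subclass
    def __init__(self, config_per_module):
        # Maps module_id -> the config that applies to that RLModule.
        self.config_per_module = config_per_module

    def compute_loss_for_module(self, *, module_id, config, batch):
        # Use the passed-in `config` for THIS module. Reading `self.config`
        # here would silently ignore per-module overrides in multi-agent runs.
        target = batch["reward"] + config.gamma * batch["next_value"]
        return (batch["value"] - target) ** 2

    def compute_losses(self, batch_per_module):
        return {
            module_id: self.compute_loss_for_module(
                module_id=module_id,
                config=self.config_per_module[module_id],
                batch=batch,
            )
            for module_id, batch in batch_per_module.items()
        }


# Example: two agents with different discount factors.
learner = ToyLearner({"agent_0": ModuleConfig(gamma=0.99),
                      "agent_1": ModuleConfig(gamma=0.95)})
losses = learner.compute_losses({
    "agent_0": {"reward": 1.0, "next_value": 0.5, "value": 1.2},
    "agent_1": {"reward": 0.0, "next_value": 0.8, "value": 0.4},
})
```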

Why are these changes needed?

Related issue number

Checks

  • I've signed off every commit (by using the -s flag, i.e., git commit -s) in this PR.
  • I've run scripts/format.sh to lint the changes in this PR.
  • I've included any doc changes needed for https://docs.ray.io/en/master/.
    • I've added any new APIs to the API Reference. For example, if I added a
      method in Tune, I've added it in doc/source/tune/api/ under the
      corresponding .rst file.
  • I've made sure the tests are passing. Note that there might be a few flaky tests, see the recent failures at https://flakey-tests.ray.io/
  • Testing Strategy
    • Unit tests
    • Release tests
    • This PR is not tested :(

Signed-off-by: sven1977 <[email protected]>
…ics_do_over_03_learner_on_new_metrics_logger

Signed-off-by: sven1977 <[email protected]>

# Conflicts:
#	rllib/algorithms/ppo/ppo_learner.py
#	rllib/algorithms/ppo/tf/ppo_tf_learner.py
#	rllib/algorithms/ppo/torch/ppo_torch_learner.py
Signed-off-by: sven1977 <[email protected]>
@sven1977 sven1977 added the tests-ok The tagger certifies test failures are unrelated and assumes personal liability. label Apr 30, 2024
@simonsays1980 simonsays1980 left a comment


LGTM. Great PR! Getting multi-agent off-policy ready.

@@ -72,7 +72,7 @@ def compute_loss_for_module(
     trajectory_len=rollout_frag_or_episode_len,
     recurrent_seq_len=recurrent_seq_len,
 )
-if self.config.enable_env_runner_and_connector_v2:
+if config.enable_env_runner_and_connector_v2:
Collaborator

I wonder whether the new env runners work with APPO/IMPALA. In my test case they do not work in the multi-agent case, where a list of episodes ends up being passed to compressed_if_needed.

Contributor Author

IMPALA and APPO are WIP on the new EnvRunners, officially not supported yet.

https://docs.ray.io/en/master/rllib/rllib-new-api-stack.html

@@ -61,7 +61,7 @@ def compute_loss_for_module(
 ).squeeze()

 # Use double Q learning.
-if self.config.double_q:
+if config.double_q:
Collaborator

Great catch. I had made the same change in another PR on the side. This would have led to some bugs in MA off-policy.
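
For context, a flag like double_q typically toggles double Q-learning, where the online network selects the next action and the target network evaluates it. A generic NumPy sketch of that target computation (illustrative only, not the RLlib learner code):

```python
import numpy as np

def td_targets(q_next_online, q_next_target, rewards, dones, gamma, double_q):
    """Generic TD(0) targets; `double_q` switches between the two variants."""
    if double_q:
        # Double Q-learning: the online net picks the next action, the target
        # net scores it (reduces overestimation bias of max-Q bootstrapping).
        next_actions = q_next_online.argmax(axis=1)
        next_q = q_next_target[np.arange(len(next_actions)), next_actions]
    else:
        # Vanilla Q-learning: the target net both selects and evaluates.
        next_q = q_next_target.max(axis=1)
    return rewards + gamma * (1.0 - dones) * next_q
```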

).items()
}
)
for module_id in set(loss_per_module.keys()) - {ALL_MODULES}:
Collaborator

Awesome! MA-ready.
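
The loop in that hunk is what makes the gradient pass multi-agent-ready. A rough, self-contained PyTorch sketch of the idea (names such as optim_per_module and the ALL_MODULES value are assumptions for illustration, not the exact RLlib SAC learner code):

```python
import torch

ALL_MODULES = "__all_modules__"  # assumed sentinel key for the combined loss

def compute_gradients(loss_per_module, optim_per_module):
    """Backprop each RLModule's own loss instead of a single default policy."""
    for module_id in set(loss_per_module.keys()) - {ALL_MODULES}:
        # Reset gradients of this module's optimizer(s) before the backward pass.
        for optim in optim_per_module[module_id]:
            optim.zero_grad(set_to_none=True)
        # Each module's loss only touches that module's parameters here.
        loss_per_module[module_id].backward()
    # Gradients are now stored on each module's parameters (param.grad).
```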

…self_config_in_learner_compute_loss

Signed-off-by: sven1977 <[email protected]>

# Conflicts:
#	rllib/algorithms/sac/torch/sac_torch_learner.py
@sven1977 sven1977 merged commit d069247 into ray-project:master May 2, 2024
5 checks passed
@sven1977 sven1977 deleted the fix_self_config_in_learner_compute_loss branch May 2, 2024 13:13