
[RLlib] Add example: Pre-train an RLModule single-agent, then bring checkpoint into multi-agent setup and continue training. #44674

Merged

Conversation

@simonsays1980 simonsays1980 (Collaborator) commented Apr 11, 2024

Why are these changes needed?

So far, we have no example that shows users how to pre-train certain policies and load their checkpoints.

This PR shows users how to pre-train a module in single-agent mode and then, in a second training run, load its checkpoint into a multi-agent (MARL) setup.
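
For orientation, here is a minimal sketch of the two-phase flow the example implements, assuming the new API stack of this period. PPOConfig, SingleAgentRLModuleSpec, and MultiAgentRLModuleSpec are real RLlib APIs; the load_state_path field, the module IDs, and the checkpoint subpath ("learner/module_state/default_policy") are assumptions about the era's checkpoint layout, not quotes from the merged script:

from ray.rllib.algorithms.ppo import PPOConfig
from ray.rllib.core.rl_module.rl_module import SingleAgentRLModuleSpec
from ray.rllib.core.rl_module.marl_module import MultiAgentRLModuleSpec

# Phase 1: pre-train a single-agent module and write a checkpoint.
config = (
    PPOConfig()
    .experimental(_enable_new_api_stack=True)
    .environment("CartPole-v1")
)
algo = config.build()
for _ in range(10):
    algo.train()
# In the Ray versions of this PR, save() returns the checkpoint directory.
checkpoint_path = algo.save()
algo.stop()

# Phase 2: in a new multi-agent run, one module starts from the
# pre-trained state while the other learns from scratch.
marl_spec = MultiAgentRLModuleSpec(
    module_specs={
        # Assumed checkpoint layout; verify against your Ray version.
        "pretrained_policy": SingleAgentRLModuleSpec(
            load_state_path=f"{checkpoint_path}/learner/module_state/default_policy",
        ),
        "fresh_policy": SingleAgentRLModuleSpec(),
    },
)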

Related issue number

Related to #44263

Checks

  • I've signed off every commit (by using the -s flag, i.e., git commit -s) in this PR.
  • I've run scripts/format.sh to lint the changes in this PR.
  • I've included any doc changes needed for https://docs.ray.io/en/master/.
    • I've added any new APIs to the API Reference. For example, if I added a
      method in Tune, I've added it in doc/source/tune/api/ under the
      corresponding .rst file.
  • I've made sure the tests are passing. Note that there might be a few flaky tests, see the recent failures at https://flakey-tests.ray.io/
  • Testing Strategy
    • Unit tests
    • Release tests
    • This PR is not tested :(

…define the model config for 'RLModule' in a unified way without interfering with the old stack. Reconfigured DQN Rainbow with it.

Signed-off-by: Simon Zehnder <[email protected]>
…ordingly. In addition, fixed some typos.

Signed-off-by: Simon Zehnder <[email protected]>
…g_dict' in 'AlgorithmConfig.rl_module' as they were failing. Something is still wrong with the VisionNet in the 'connector_v2_frame_stacking' example.

Signed-off-by: Simon Zehnder <[email protected]>
…emains b/c low priority.

Signed-off-by: Simon Zehnder <[email protected]>
Signed-off-by: Simon Zehnder <[email protected]>
…rl_module_api' needed a 'False' for error - so only a warning.

Signed-off-by: Simon Zehnder <[email protected]>
Signed-off-by: Simon Zehnder <[email protected]>
Signed-off-by: Simon Zehnder <[email protected]>
…ot using the corresponding default model configuration of the training algorithm. Also added a pre-training example for MARL.

Signed-off-by: Simon Zehnder <[email protected]>
…in single module and load its checkpoint into a MARL setting for one policy.

Signed-off-by: Simon Zehnder <[email protected]>
@sven1977 sven1977 changed the title from "RLModule pre-training example for multi-agent setup" to "[RLlib] RLModule pre-training example for multi-agent setup." on Apr 11, 2024
simonsays1980 and others added 5 commits April 11, 2024 18:08
… external module did not use the default model config of the algorithm.

Signed-off-by: Simon Zehnder <[email protected]>
Signed-off-by: sven1977 <[email protected]>
@sven1977 sven1977 marked this pull request as ready for review April 15, 2024 11:18
config = (
    PPOConfig()
    # Enable the new API stack (RLModule and Learner APIs).
    .experimental(_enable_new_api_stack=True)
Review comment (Contributor):

This is done automatically by the run_rllib_example_script_experiment util.
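
For context, run_rllib_example_script_experiment lives in rllib/utils/test_utils.py (the file also touched later in this review). A rough sketch of how example scripts of this period use it; both helper names are real RLlib utilities, but treat the exact signatures as approximate:

from ray.rllib.algorithms.ppo import PPOConfig
from ray.rllib.utils.test_utils import (
    add_rllib_example_script_args,
    run_rllib_example_script_experiment,
)

parser = add_rllib_example_script_args()
args = parser.parse_args()

base_config = PPOConfig().environment("CartPole-v1")
# The util toggles the new API stack (among other common settings)
# based on the parsed command line args, so the script does not need
# to set `_enable_new_api_stack` itself.
run_rllib_example_script_experiment(base_config, args)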

marl_module_spec = MultiAgentRLModuleSpec(module_specs=module_specs)

# Register our environment with tune if we use multiple agents.
if args.num_agents > 0:
Review comment (Contributor):

Is this if-block needed? We assert that this command line arg is >0 above.

Reply (Collaborator Author):

Yeah I guess we can remove this here. Good catch @sven1977 !

Reply (Collaborator Author):

Great catch! I removed this in the follow-up commit.
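
For readers following the thread: the guard in question wrapped the tune env registration. With args.num_agents already asserted to be positive earlier in the script, the registration reduces to something like the sketch below (the env class, its import path, and the registration name are illustrative; the examples folder layout moved around in this period):

from ray import tune
from ray.rllib.examples.env.multi_agent import MultiAgentCartPole

# `args.num_agents > 0` is asserted earlier in the script, so no extra
# if-block is needed around the registration.
tune.register_env(
    "multi_agent_cartpole",
    lambda cfg: MultiAgentCartPole({"num_agents": args.num_agents}),
)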

@sven1977 sven1977 (Contributor) left a comment:

Super nice example and PR! Thanks @simonsays1980!
Just a few nits, and two things we're waiting for:

  1. We must add this great example to the BUILD file!
  2. Can we rename the script to something more descriptive, like pretraining_single_agent_training_multi_agent, i.e., a name that describes the exact sequence of steps in the example?

@sven1977 (Contributor) commented:

Ok, cool! Can we also add this example script to BUILD?

@simonsays1980 simonsays1980 self-assigned this Apr 16, 2024
Signed-off-by: Simon Zehnder <[email protected]>
Signed-off-by: Simon Zehnder <[email protected]>
@@ -2873,7 +2873,14 @@ py_test(
size = "small",
srcs = ["examples/rl_modules/classes/mobilenet_rlm.py"],
)

py_test(
Review comment (Contributor):

Awesome!
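
The diff hunk above is truncated at the opening py_test(. For orientation, a typical RLlib BUILD entry for an example script looks roughly like the following; the name, tags, size, and args here are assumptions (using the script name suggested in review), not the merged content:

py_test(
    name = "examples/rl_modules/pretraining_single_agent_training_multi_agent",
    main = "examples/rl_modules/pretraining_single_agent_training_multi_agent.py",
    tags = ["team:rllib", "examples"],
    size = "medium",
    srcs = ["examples/rl_modules/pretraining_single_agent_training_multi_agent.py"],
    args = ["--enable-new-api-stack", "--num-agents=2"],
)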

rllib/utils/test_utils.py: review thread outdated, resolved.
@sven1977 sven1977 merged commit d8c7234 into ray-project:master Apr 16, 2024
5 checks passed