[RLlib; Offline RL] Offline performance cleanup. #47731

Merged: 16 commits into ray-project:master on Sep 25, 2024

Conversation

@simonsays1980 (Collaborator) commented Sep 18, 2024

Why are these changes needed?

The map_batches call in offline RL learning used to be very slow for unknown reasons. This PR proposes multiple changes to the offline data pipeline that boost performance many times over (illustrative sketches follow the list). These changes are:

  • Materialization of raw data in memory, if resources are available, via the option materialize_data (default: False), so that users can control memory usage.
  • Materialization of mapped data in memory, if resources are available, via the option materialize_mapped_data (default: True), so that users can control memory usage. This materialization applies the OfflinePreLearner to the raw data a priori and can be used by algorithms whose connector (ConnectorV2) pipelines do not need an up-to-date RLModule and/or states (e.g., BC or CQL).
  • An iterator that is instantiated once and reinitialized whenever it is exhausted, for the single-learner case (in the multi-learner case, iterators are built on the remote learners anyway).
  • A batch size of 1 after the map_batches call, because rows now contain MultiAgentBatches with train_batch_size_per_learner environment steps each.
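
For illustration, here is a minimal sketch of how these options might look on an AlgorithmConfig. The option names follow this PR's description, while the environment, dataset path, and batch size are hypothetical placeholders:

    from ray.rllib.algorithms.bc import BCConfig

    # Sketch only: the dataset path and sizes below are hypothetical placeholders.
    config = (
        BCConfig()
        .environment(env="CartPole-v1")
        .offline_data(
            input_="local:///tmp/cartpole/train",
            # Keep raw data out of memory (the default) to limit memory usage.
            materialize_data=False,
            # Pre-apply the OfflinePreLearner and keep the mapped batches in
            # memory (the default); suitable for algorithms like BC or CQL
            # whose connector pipelines need no up-to-date RLModule state.
            materialize_mapped_data=True,
        )
        .training(train_batch_size_per_learner=256)
    )

And a schematic of the single-learner iterator behavior described above, with all names hypothetical:

    # Hypothetical sketch: build the iterator once and rebuild it only when
    # the underlying dataset is exhausted, instead of once per iteration.
    class ReinitializingIterator:
        def __init__(self, make_iterator):
            self._make_iterator = make_iterator
            self._iterator = iter(self._make_iterator())

        def __iter__(self):
            return self

        def __next__(self):
            try:
                return next(self._iterator)
            except StopIteration:
                # Exhausted: reinitialize instead of ending the training loop.
                self._iterator = iter(self._make_iterator())
                return next(self._iterator)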

In addition, it fixes an important bug in MARWIL's loss, which ignored the value function during training.
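
To make the fix concrete, here is a schematic reconstruction of a MARWIL-style loss with the value-function term wired to the forward_train output. This is illustrative only, not the PR's actual code, and all names are hypothetical:

    import torch

    # Illustrative MARWIL-style loss (hypothetical names, not RLlib's code).
    # logp and vf_preds must both come from the same forward_train() output;
    # if vf_preds is taken from elsewhere, the value head receives no
    # gradient, which is the kind of bug this PR fixes.
    def marwil_loss(logp, vf_preds, returns, beta=1.0, vf_coeff=1.0):
        # Advantage weights for the behavior-cloning term.
        advantages = returns - vf_preds.detach()
        weights = torch.clamp(torch.exp(beta * advantages), max=20.0)
        policy_loss = -(weights * logp).mean()
        # Value-function regression term; omitting it leaves V(s) untrained.
        vf_loss = vf_coeff * (returns - vf_preds).pow(2).mean()
        return policy_loss + vf_loss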

These changes lead to enormous performance boosts:

  • Learning CartPole-v1 with BC in single-learner mode below 7 secs (multi-learner mode < 12 secs).
  • Learning CartPole-v1 with MARWIL in single-learner mode below 50 secs (multi-learner mode < 217 secs).
  • Learning Pendulum-v1 with CQL in single-learner mode below 311 secs (multi-learner mode < 116 secs).

Related issue number

Checks

  • I've signed off every commit (by using the -s flag, i.e., git commit -s) in this PR.
  • I've run scripts/format.sh to lint the changes in this PR.
  • I've included any doc changes needed for https://docs.ray.io/en/master/.
    • I've added any new APIs to the API Reference. For example, if I added a
      method in Tune, I've added it in doc/source/tune/api/ under the
      corresponding .rst file.
  • I've made sure the tests are passing. Note that there might be a few flaky tests, see the recent failures at https://flakey-tests.ray.io/
  • Testing Strategy
    • Unit tests
    • Release tests
    • This PR is not tested :(

…'. This was initialized at each iteration and slowed down our 'OfflineData' sampling. In addition, tuned all Offline examples for the changes made.

Signed-off-by: simonsays1980 <[email protected]>
…e added an option for users to materialize the dataset if needed and enough memory is available.

Signed-off-by: simonsays1980 <[email protected]>
…ialization and reinitialization of iterators in single-learner mode. Furthermore, changed the after-mapping batch size to 1 because rows are then 'MultiAgentBatches' of 'train_batch_size_per_learner' environment steps each. In addition, added two further options to 'AlgorithmConfig' such that users can control memory usage and performance.

Signed-off-by: simonsays1980 <[email protected]>
…alue function output from 'forward_train' and did therefore not train the value function.

Signed-off-by: simonsays1980 <[email protected]>
…d and is already converted to a generator we need to rebuild it.

Signed-off-by: simonsays1980 <[email protected]>
@@ -204,6 +204,12 @@ def add_rllib_example_script_args(
help="How many (tune.Tuner.fit()) experiments to execute - if possible in "
"parallel.",
)
parser.add_argument(
Contributor commented:

👍


    # Define the config for Behavior Cloning.
    config = (
        BCConfig()
        .environment(
            env="WrappedALE/Pong-v5",
            # TODO (sven): Does this have any influence in connectors?
Contributor commented:
Great point! You are right, and this setting is NOT propagated to the connectors. Not relevant for Pong, as its rewards are all 1 anyway, but for other Atari benchmarks this could matter.

Collaborator (author) commented:
Thanks for the clarification. There is actually another one:

        # TODO (sven): Has this any influence in the connectors?
        actions_in_input_normalized=True,

Does this have an influence, or should it? It is not yet recognized in the offline API.

simonsays1980 and others added 3 commits September 19, 2024 14:46
Co-authored-by: Sven Mika <[email protected]>
Signed-off-by: simonsays1980 <[email protected]>
Co-authored-by: Sven Mika <[email protected]>
Signed-off-by: simonsays1980 <[email protected]>
@sven1977 (Contributor) left a comment:

Approved! Thanks @simonsays1980 for this awesome PR. :)

@sven1977 sven1977 changed the title [RLlib; Offline RL] - Offline performance cleanup. [RLlib; Offline RL] Offline performance cleanup. Sep 19, 2024
@sven1977 sven1977 enabled auto-merge (squash) September 19, 2024 13:41
@github-actions github-actions bot added the go add ONLY when ready to merge, run all tests label Sep 19, 2024
@sven1977 sven1977 merged commit f17bb99 into ray-project:master Sep 25, 2024
5 checks passed
ujjawal-khare pushed a commit to ujjawal-khare-27/ray that referenced this pull request Oct 15, 2024
Labels
go add ONLY when ready to merge, run all tests
2 participants