
[RLlib] Cleanup examples folder 07: Translate custom_experiment, custom_logger, custom_progress_reporter to new API stack. #44735

Conversation

sven1977 (Contributor)
Cleanup examples folder 07: Translate custom_experiment, custom_logger, custom_progress_reporter to new API stack.

Plus:

  • Fix a bug in Algorithm: on the new API stack, Algorithm.restore() did not update the weights of the local EnvRunner. The weights did get synced eventually, but immediately after the restore() call, the local EnvRunner (and all remote EnvRunners) still held stale weights (see the sketch after this list).
  • Enhance example script running utility function.
  • Add missing docstrings to more example scripts.
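
A hedged sketch of the pre-fix failure mode and the manual workaround it required. `Algorithm.restore()` and `sync_weights()` are real RLlib APIs; the algorithm, environment, and the exact checkpoint-path access are illustrative only and vary across Ray versions:

```python
from ray.rllib.algorithms.ppo import PPOConfig

config = PPOConfig().environment("CartPole-v1")  # illustrative choices

algo = config.build()
algo.train()
checkpoint_dir = algo.save().checkpoint.path  # return shape varies by Ray version

# Restore into a fresh Algorithm. Before this PR, the restored weights were not
# pushed to the EnvRunners right away, so rollouts collected immediately after
# restore() still used the stale (freshly initialized) weights.
algo2 = config.build()
algo2.restore(checkpoint_dir)

# Manual workaround that was needed pre-fix: force a weight sync from the
# LearnerGroup out to the local and remote EnvRunners. (The group attribute is
# `workers` in the Ray version this PR targets; newer versions rename it.)
algo2.workers.sync_weights(from_worker_or_learner_group=algo2.learner_group)
```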

Why are these changes needed?

Related issue number

Checks

  • I've signed off every commit (by using the -s flag, i.e., git commit -s) in this PR.
  • I've run scripts/format.sh to lint the changes in this PR.
  • I've included any doc changes needed for https://docs.ray.io/en/master/.
    • I've added any new APIs to the API Reference. For example, if I added a
      method in Tune, I've added it in doc/source/tune/api/ under the
      corresponding .rst file.
  • I've made sure the tests are passing. Note that there might be a few flaky tests, see the recent failures at https://flakey-tests.ray.io/
  • Testing Strategy
    • Unit tests
    • Release tests
    • This PR is not tested :(

@simonsays1980 (Collaborator) left a comment:

LGTM. Another bunch of awesome examples!

On this docstring excerpt from the example script (only the last row of its results table survived extraction):

| 71.7485 | 100000 | 476.51 | 476.51 |
+------------------+--------+----------+--------------------+

When running without parallel evaluation (`--evaluation-not-parallel-to-training` flag),

@simonsays1980 commented:

Yeah, this is really cool! Parallel to training.
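
For context, parallel evaluation is switched on through RLlib's `AlgorithmConfig.evaluation()` API. A minimal sketch, with illustrative values only (this is not the PR's code):

```python
from ray.rllib.algorithms.ppo import PPOConfig

config = (
    PPOConfig()
    .environment("CartPole-v1")  # illustrative env
    .evaluation(
        # Run the evaluation step concurrently with the training step.
        evaluation_parallel_to_training=True,
        evaluation_interval=1,  # evaluate every training iteration
        # Named `evaluation_num_workers` in older Ray versions.
        evaluation_num_env_runners=1,
        # With parallel evaluation, "auto" runs evaluation for (at most) as
        # long as the concurrent training step takes.
        evaluation_duration="auto",
    )
)
```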

On this snippet from the example script:

# TODO (sven): Find out why we require this hack.
import os

os.environ["RAY_AIR_NEW_OUTPUT"] = "0"

@simonsays1980 commented:

I guess it would help users if we describe in one line why we set this env variable.

@sven1977 (Author) replied:

We do this in the line above:

# Force Tuner to use old progress output as the new one silently ignores our custom
# `CLIReporter`.
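
For readers without the diff open: the pattern under discussion combines that environment variable with a custom `CLIReporter`. A minimal sketch; `CLIReporter`, `RunConfig(progress_reporter=...)`, and `RAY_AIR_NEW_OUTPUT` are real Ray Tune APIs, while the trainable, metric columns, and stop criterion are illustrative:

```python
import os

# Force Tuner to use the old progress output, because the new output engine
# silently ignores a custom `CLIReporter`. Must be set before Tune starts.
os.environ["RAY_AIR_NEW_OUTPUT"] = "0"

from ray import train, tune

tuner = tune.Tuner(
    "PPO",  # illustrative trainable
    param_space={"env": "CartPole-v1"},  # illustrative config
    run_config=train.RunConfig(
        stop={"training_iteration": 3},
        progress_reporter=tune.CLIReporter(
            metric_columns={
                "training_iteration": "iter",
                "episode_reward_mean": "return",  # illustrative metric
            }
        ),
    ),
)
results = tuner.fit()
```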

@sven1977 merged commit 460ca3b into ray-project:master on Apr 16, 2024. 5 checks passed.