[RLlib] Add "official" benchmark script for Atari PPO benchmarks (new API stack). #45697
Conversation
LGTM.
commands.append(f"--env={env_name}") | ||
commands.append(f"--wandb-run-name={env_name}") | ||
print(f"Running {env_name} through command line=`{commands}`") | ||
subprocess.run(commands) |
It is somewhat strange to me that we emulate running from the command line, which in turn runs a script that could have been triggered directly in the loop. It makes sense to me that users can run single envs, but why not trigger them directly in the loop?
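For illustration, a minimal sketch of what such a direct in-process call could look like. This assumes the tuned example exposes an importable entry point; `run_experiment` is hypothetical here, since the actual script may only execute under `__main__`:

```python
# Hypothetical sketch: assumes atari_ppo exposes its module-level parser and a
# callable entry point (it may in fact only run under `if __name__ == "__main__":`).
from ray.rllib.tuned_examples.ppo import atari_ppo

# `benchmark_envs` is the mapping of envs defined in this benchmark script.
for env_name in benchmark_envs:
    # Build the same arguments we would otherwise pass on the command line.
    args = atari_ppo.parser.parse_args(
        [f"--env={env_name}", f"--wandb-run-name={env_name}"]
    )
    atari_ppo.run_experiment(args)  # hypothetical entry point
```

One upside of the subprocess approach, though, is that each env runs in a fresh process (and thus a fresh Ray session), so no state can leak between benchmark runs.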
# AgileRL: https://github.com/AgileRL/AgileRL?tab=readme-ov-file#benchmarks
# [0] = reward to expect for DQN rainbow; [1] = timesteps to run (always 200M for
# DQN rainbow).
# Note that for PPO, we simply run everything for 6M ts.
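For context, a sketch of the table layout these comments describe; the env IDs and tuple values below are placeholders for illustration, not the benchmark's actual numbers:

```python
# Maps env ID -> (expected DQN-Rainbow reward, timesteps to run).
# Placeholder values only; see the AgileRL link above for the real numbers.
benchmark_envs = {
    "ALE/Pong-v5": (20.7, 200_000_000),
    "ALE/Breakout-v5": (417.5, 200_000_000),
}
```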
In atari_ppo.py, the timesteps are set to 3M.
- parser = add_rllib_example_script_args()
+ parser = add_rllib_example_script_args(
+     default_reward=float("inf"),
+     default_timesteps=3000000,
Here we have set 3M timesteps, while above in benchmark_atari_ppo.py the comment says 6M.
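For readers unfamiliar with the helper: a rough, simplified sketch of what add_rllib_example_script_args does with these defaults. The real helper lives in ray.rllib.utils.test_utils and registers many more options; the exact flag names below are an assumption:

```python
import argparse

# Simplified sketch of the helper; the real version adds many more options
# (WandB settings, env-runner counts, etc.). Flag names are assumed here.
def add_rllib_example_script_args(default_reward=100.0, default_timesteps=100000):
    parser = argparse.ArgumentParser()
    # Stopping criteria for the experiment, seeded from the given defaults.
    parser.add_argument("--stop-reward", type=float, default=default_reward)
    parser.add_argument("--stop-timesteps", type=int, default=default_timesteps)
    return parser
```

With default_reward=float("inf"), the reward-based stopping criterion can never trigger, so the run is effectively bounded by the 3M-timestep budget alone.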
Add "official" benchmark script for Atari PPO benchmarks (new API stack).
tuned_example
script passing through some command line args.Why are these changes needed?
Related issue number
Checks
git commit -s
) in this PR.scripts/format.sh
to lint the changes in this PR.method in Tune, I've added it in
doc/source/tune/api/
under thecorresponding
.rst
file.