
[RLlib] Cleanup examples folder #14: Add example script for policy (RLModule) inference on new API stack. #45831

Conversation

sven1977 (Contributor) commented Jun 10, 2024

Add example script for policy (RLModule) inference on new API stack.

  • Uses the existing policy inference example script and converts it to the new stack.

Checks

  • I've signed off every commit (by using the -s flag, i.e., git commit -s) in this PR.
  • I've run scripts/format.sh to lint the changes in this PR.
  • I've included any doc changes needed for https://docs.ray.io/en/master/.
    • I've added any new APIs to the API Reference. For example, if I added a
      method in Tune, I've added it in doc/source/tune/api/ under the
      corresponding .rst file.
  • I've made sure the tests are passing. Note that there might be a few flaky tests, see the recent failures at https://flakey-tests.ray.io/
  • Testing Strategy
    • Unit tests
    • Release tests
    • This PR is not tested :(

Signed-off-by: sven1977 <[email protected]>
…nup_examples_folder_14_policy_inference_examples
@sven1977 sven1977 added the tests-ok The tagger certifies test failures are unrelated and assumes personal liability. label Jun 11, 2024
@sven1977 sven1977 enabled auto-merge (squash) June 11, 2024 08:12
@github-actions github-actions bot added the go add ONLY when ready to merge, run all tests label Jun 11, 2024
simonsays1980 (Collaborator) left a comment

LGTM

```python
    policy_id="default_policy",  # <- default value
)
# Compute an action using a B=1 observation "batch".
input_dict = {Columns.OBS: torch.from_numpy(obs).unsqueeze(0)}
```
simonsays1980 (Collaborator) commented:
Should we use the module's input specs here to infer the keys? It's simple here, but in other scenarios it could help users figure out what to feed in.

sven1977 (Contributor, author) replied:
I'm not 100% sure. The input specs don't tell us much about what exact data is required for the keys they list. Honestly, I'm thinking about removing the specs altogether at some point (long-term).

For example: if my RLModule - right now - says "I need obs and prev_rewards", I still don't know, for example, how many of the previous rewards are required. This detailed information - crucial for building the batch - is not something my model would tell me; I would have to provide proper ConnectorV2 logic along with it.
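The point about specs underspecifying the batch can be sketched in plain Python. Note that `RewardWindowConnector` is a hypothetical, stand-alone class for illustration, not an RLlib API: the module's specs may only name a `prev_rewards` key, while the window size needed to actually build that key lives in connector-style logic.

```python
from collections import deque


class RewardWindowConnector:
    """Hypothetical ConnectorV2-style helper (not an RLlib class): the
    connector, not the module's input specs, knows that `prev_rewards`
    means the last N rewards, zero-padded at episode start."""

    def __init__(self, num_prev_rewards: int):
        self.num_prev_rewards = num_prev_rewards
        self._rewards = deque(maxlen=num_prev_rewards)

    def __call__(self, obs, reward=None):
        # Record the newest reward, if any (none on the reset step).
        if reward is not None:
            self._rewards.append(reward)
        # Zero-pad so the batch always has exactly N previous rewards.
        padding = [0.0] * (self.num_prev_rewards - len(self._rewards))
        return {
            "obs": obs,
            # The spec only names "prev_rewards"; the window size N=3
            # is knowledge that lives here, in the connector.
            "prev_rewards": padding + list(self._rewards),
        }


connector = RewardWindowConnector(num_prev_rewards=3)
batch = connector(obs=[0.1, 0.2])
print(batch["prev_rewards"])  # [0.0, 0.0, 0.0] at episode start
```

A module's spec alone could never tell the caller that N=3 here; that is exactly the per-key detail the comment above says must come from accompanying connector logic.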

```diff
 # Send the computed action `a` to the env.
-obs, reward, done, truncated, _ = env.step(a)
 episode_reward += reward
+obs, reward, terminated, truncated, _ = env.step(action)
```
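For context, the new-stack line above fits into an inference loop like the following minimal sketch. `DummyEnv` and `forward_inference` here are stand-in dummies (not RLlib or Gymnasium classes), used only to show the `terminated`/`truncated` control flow of the new 5-tuple step API:

```python
class DummyEnv:
    """Stand-in for a gymnasium-style env with the 5-tuple step API."""

    def __init__(self):
        self._t = 0

    def reset(self):
        self._t = 0
        return 0.0, {}  # (obs, info)

    def step(self, action):
        self._t += 1
        obs = float(self._t)
        reward = 1.0
        terminated = self._t >= 5  # natural episode end
        truncated = False          # time-limit cut-off (unused here)
        return obs, reward, terminated, truncated, {}


def forward_inference(obs):
    """Stand-in for RLModule inference: always pick a fixed action."""
    return 0


env = DummyEnv()
obs, _ = env.reset()
episode_return = 0.0
terminated = truncated = False
while not (terminated or truncated):
    action = forward_inference(obs)
    # New-stack step signature: separate terminated/truncated flags.
    obs, reward, terminated, truncated, _ = env.step(action)
    episode_return += reward
print(episode_return)  # 5.0
```

The loop condition `not (terminated or truncated)` is the key change from the old single `done` flag: an episode can end either naturally or by truncation, and both must stop the loop.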
simonsays1980 (Collaborator) commented:
This is a beautiful example to show how simply this runs now. Let's think about some ways to simplify it for modules that use connectors as well.

sven1977 (Contributor, author) replied:
Awesome! Yeah, I agree. Writing these examples feels very fast and easy; I don't have to do much debugging at all to make them run right from the get-go. There is another PR that does something very similar, but with a connector (one that handles the LSTM states).
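The mention of "a connector that handles the LSTM states" refers to carrying recurrent state across inference calls. A minimal sketch of that pattern, with `StatefulModule` as a hypothetical stand-in (not an RLlib class) for a recurrent module whose state the caller must loop back in:

```python
class StatefulModule:
    """Stand-in for a recurrent module: inference takes a state and
    returns an updated one, which the caller (or a connector) must
    feed back in on the next step."""

    def forward_inference(self, obs, state):
        # Toy "recurrent" update: accumulate observations into the state.
        new_state = state + obs
        # Action depends on the accumulated history, not just `obs`.
        action = 0 if new_state < 10 else 1
        return action, new_state


module = StatefulModule()
state = 0.0  # initial state, analogous to an LSTM's zero hidden state
actions = []
for obs in [2.0, 3.0, 4.0, 5.0]:
    action, state = module.forward_inference(obs, state)
    actions.append(action)
print(actions)  # [0, 0, 0, 1]
```

The bookkeeping in the loop (threading `state` through every call) is exactly what a connector can hide from the user, which is the simplification discussed above.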

@sven1977 sven1977 merged commit 94937a1 into ray-project:master Jun 11, 2024
7 of 8 checks passed
@sven1977 sven1977 deleted the cleanup_examples_folder_14_policy_inference_examples branch June 11, 2024 12:28
richardsliu pushed a commit to richardsliu/ray that referenced this pull request Jun 12, 2024
…r policy (RLModule) inference on new API stack. (ray-project#45831)

Signed-off-by: Richard Liu <[email protected]>