
[RLlib] Update autoregressive actions example. #47829

Merged

Conversation

@simonsays1980 simonsays1980 (Collaborator) commented Sep 26, 2024

Why are these changes needed?

The autoregressive actions example used an environment in which the agent could cheat by looking only at the state when choosing both actions, a1 and a2. This PR proposes a new environment for testing autoregressive action modules, in which the agent has to consider both the state and the first action a1 to choose the second action a2 optimally. Rewards are the negative absolute deviation between the desired action for a2 and the action actually taken.

Furthermore, this PR introduces the ValueFunctionAPI for the AutoregressiveActionsRLM in the corresponding example, which simplifies the code and actually fixes an error caused by the old _compute_values definition.
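For illustration, here is a minimal sketch of what such an environment could look like (the class name, the action spaces, and the exact target for a2 are assumptions for exposition, not necessarily the code merged in this PR):

import gymnasium as gym
import numpy as np


class CorrelatedActionsEnv(gym.Env):
    """Sketch: a2 is only optimal if it accounts for both state and a1."""

    def __init__(self, config=None):
        # 1-D state in [-1, 1].
        self.observation_space = gym.spaces.Box(-1.0, 1.0, shape=(1,))
        # a1 is discrete, a2 is continuous.
        self.action_space = gym.spaces.Tuple(
            (gym.spaces.Discrete(2), gym.spaces.Box(-2.0, 2.0, shape=(1,)))
        )
        self.state = None

    def reset(self, *, seed=None, options=None):
        super().reset(seed=seed)
        # Randomly initialize the state between -1 and 1.
        self.state = np.random.uniform(-1, 1, size=(1,))
        return self.state, {}

    def step(self, action):
        a1, a2 = action
        # The optimal a2 depends on both the state and a1, so an agent
        # that ignores a1 cannot reach the maximum reward of 0.0.
        desired_a2 = self.state[0] + float(a1)
        # Reward: negative absolute deviation between desired and actual a2.
        reward = -abs(desired_a2 - float(a2[0]))
        self.state = np.random.uniform(-1, 1, size=(1,))
        return self.state, reward, True, False, {}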

Related issue number

Closes #44662

Checks

  • I've signed off every commit (by using the -s flag, i.e., git commit -s) in this PR.
  • I've run scripts/format.sh to lint the changes in this PR.
  • I've included any doc changes needed for https://docs.ray.io/en/master/.
    • I've added any new APIs to the API Reference. For example, if I added a
      method in Tune, I've added it in doc/source/tune/api/ under the
      corresponding .rst file.
  • I've made sure the tests are passing. Note that there might be a few flaky tests; see the recent failures at https://flakey-tests.ray.io/
  • Testing Strategy
    • Unit tests
    • Release tests
    • This PR is not tested :(

@simonsays1980 simonsays1980 marked this pull request as ready for review September 26, 2024 17:22
@sven1977 sven1977 changed the title [RLlib] - Update autoregressive actions example [RLlib] Update autoregressive actions example. Sep 26, 2024
@@ -160,6 +160,13 @@ class _MLPConfig(ModelConfig):
"_" are allowed.
output_layer_bias_initializer_config: Configuration to pass into the
initializer defined in `output_layer_bias_initializer`.
clip_log_std: If the log std should be clipped by `log_std_clip_param`.
Contributor:

nit: I feel like this comment is confusing. We should write that clipping is only applied to those action distribution parameters that encode the log-std for a DiagGaussian action distribution. Any other node's output (or if there is no DiagGaussian) is not clipped.

Mentioning the value function makes it confusing.
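As a rough sketch of the suggested wording in code (assuming the common DiagGaussian parameterization in which the head's output vector is split into means and log-stds; the helper name is hypothetical):

import torch


def maybe_clip_log_std(
    dist_params: torch.Tensor, clip_log_std: bool, log_std_clip_param: float
) -> torch.Tensor:
    # For a DiagGaussian action distribution the head outputs
    # [means, log_stds]. Only the log-std half is clipped; the means
    # (and all outputs of non-Gaussian heads) pass through unchanged.
    if not clip_log_std:
        return dist_params
    means, log_stds = torch.chunk(dist_params, 2, dim=-1)
    log_stds = torch.clamp(log_stds, -log_std_clip_param, log_std_clip_param)
    return torch.cat([means, log_stds], dim=-1)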

@@ -187,6 +194,8 @@ def build_pi_head(self, framework: str) -> Model:
hidden_layer_activation=self.pi_and_qf_head_activation,
output_layer_dim=required_output_dim,
output_layer_activation="linear",
clip_log_std=is_diag_gaussian,
log_std_clip_param=self._model_config_dict["log_std_clip_param"],
Contributor:

Should we do .get here to be defensive against any custom models that use custom model_config_dicts that are NOT derived from our gigantic (old) model config?
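For example (a sketch of the defensive variant; the infinite default, which effectively disables clipping, is an assumption here):

# Raises KeyError for custom model configs that lack the key:
log_std_clip_param=self._model_config_dict["log_std_clip_param"],

# Defensive variant with a fallback default:
log_std_clip_param=self._model_config_dict.get("log_std_clip_param", float("inf")),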

@@ -100,7 +102,7 @@
# exceeds 150 in evaluation.
stop = {
f"{NUM_ENV_STEPS_SAMPLED_LIFETIME}": 100000,
f"{EVALUATION_RESULTS}/{ENV_RUNNER_RESULTS}/{EPISODE_RETURN_MEAN}": 150.0,
f"{EVALUATION_RESULTS}/{ENV_RUNNER_RESULTS}/{EPISODE_RETURN_MEAN}": -0.012,
Contributor:

nice!

Contributor:

Where does it roughly start?

Collaborator Author:

It roughly starts at around -0.55 to -0.6.

Collaborator Author:

[Plot: ray_tune_evaluation_env_runners_agent_episode_returns_mean_default_agent]

Contributor:

Niiice!!

super().reset(seed=seed)

# Randomly initialize the state between -1 and 1
self.state = np.random.uniform(-1, 1, size=(1,))
Contributor:

Nice that this can be negative, too. Makes sense!

…annot only watch the state but needs to also watch the first action. Furthermore, implemented the 'ValueFunctionAPI' in the 'AutoregressiveActionsRLM' and ran some tests.

Signed-off-by: simonsays1980 <[email protected]>
@simonsays1980 simonsays1980 force-pushed the update-autoregressive-actions-setup branch from 5083233 to 2b04f6c Compare September 27, 2024 10:01
@sven1977 sven1977 enabled auto-merge (squash) September 27, 2024 17:34
@github-actions github-actions bot added the "go" label (add ONLY when ready to merge, run all tests) Sep 27, 2024
@sven1977 sven1977 (Contributor) left a comment:

LGTM! Thanks @simonsays1980 :)

@simonsays1980 simonsays1980 added the "rllib" and "rllib-models" labels Sep 28, 2024
@sven1977 sven1977 merged commit c8aa7f1 into ray-project:master Sep 30, 2024
5 checks passed
ujjawal-khare pushed a commit to ujjawal-khare-27/ray that referenced this pull request Oct 15, 2024
Labels
  • go (add ONLY when ready to merge, run all tests)
  • rllib (RLlib related issues)
  • rllib-models (An issue related to RLlib (default or custom) Models.)
Development

Successfully merging this pull request may close these issues.

[RLlib] Connectors API get_actions does not compute action_logp when actions are present.