[RLlib] Finish testing matthewearl's Gaussian squashed gaussian PR #13292
Conversation
Still some bugs to fix
…sian_squashed_gaussian
…sian_squashed_gaussian # Conflicts: # rllib/models/catalog.py
Is there any plan to finalize this PR? Or, alternatively, is there any way to use a fixed value of the variance of the policy distribution? (perhaps even using the
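As a side note on the question above, here is a hypothetical sketch in plain PyTorch (not an RLlib API; the class name `FixedStdPolicyHead` and the `fixed_std` parameter are made up for illustration) of a Gaussian policy head whose standard deviation is a fixed hyperparameter rather than a learned quantity:

```python
# Hypothetical illustration only: a Gaussian policy head with a constant,
# non-trainable standard deviation. Not part of RLlib or this PR.
import torch
import torch.nn as nn
from torch.distributions import Normal


class FixedStdPolicyHead(nn.Module):
    """Outputs only the action mean; the std is a fixed hyperparameter."""

    def __init__(self, obs_dim: int, act_dim: int, fixed_std: float = 0.3):
        super().__init__()
        self.mean_net = nn.Sequential(
            nn.Linear(obs_dim, 64), nn.Tanh(), nn.Linear(64, act_dim)
        )
        # A buffer moves with .to(device) but is excluded from the optimizer.
        self.register_buffer("fixed_std", torch.tensor(fixed_std))

    def forward(self, obs: torch.Tensor) -> Normal:
        mean = self.mean_net(obs)
        # The scalar std broadcasts against the per-dimension means.
        return Normal(mean, self.fixed_std)


# Usage: sample actions and compute log-probs for a PPO-style update.
head = FixedStdPolicyHead(obs_dim=4, act_dim=2)
dist = head(torch.randn(8, 4))
actions = dist.sample()
logp = dist.log_prob(actions).sum(-1)
```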
This pull request has been automatically marked as stale because it has not had recent activity. It will be closed in 14 days if no further activity occurs. Thank you for your contributions.
Hi again! The issue will be closed because there has been no more activity in the 14 days since the last message. Please feel free to reopen or open a new issue if you'd still like it to be addressed. Again, you can always ask for help on our discussion forum or Ray's public slack channel. Thanks again for opening the issue!
This is a follow-up PR to Matthew Earl's PR, which adds a GaussianSquashedGaussian distribution (supporting entropy and KL methods) to be used with PPO.
#7609
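For context, a minimal, self-contained sketch of the general idea behind a squashed Gaussian action distribution that still exposes entropy and KL (plain PyTorch, with tanh squashing purely for illustration; the class and method names are hypothetical, and this is not necessarily the approach taken in #7609 or in this PR): the log-prob needs a change-of-variables correction for the squashing, the KL between two distributions squashed by the same bijector equals the KL between their underlying Gaussians, and the entropy can be estimated by sampling when no closed form exists.

```python
# Illustrative sketch only, not the implementation from this PR: a squashed
# Gaussian whose log-prob uses a change-of-variables correction, whose KL is
# computed on the underlying Gaussians (KL is invariant when both
# distributions are squashed by the same bijector), and whose entropy is
# estimated by Monte Carlo since the tanh case has no closed form.
import math

import torch
from torch.distributions import Normal, kl_divergence


class TanhSquashedGaussian:
    def __init__(self, mean, std, low=-1.0, high=1.0):
        self.base = Normal(mean, std)
        self.low, self.high = low, high

    def _squash(self, z):
        # Map R -> (low, high): tanh followed by an affine rescaling.
        return self.low + (torch.tanh(z) + 1.0) * 0.5 * (self.high - self.low)

    def sample(self):
        return self._squash(self.base.rsample())

    def log_prob(self, action):
        # Invert the squashing, then subtract log|d action / d z|.
        u = (action - self.low) / (self.high - self.low) * 2.0 - 1.0
        u = u.clamp(-0.999999, 0.999999)  # numerical safety at the bounds
        z = torch.atanh(u)
        log_det = torch.log1p(-u.pow(2)) + math.log(0.5 * (self.high - self.low))
        return (self.base.log_prob(z) - log_det).sum(-1)

    def entropy(self, num_samples=32):
        # Monte Carlo estimate: -E[log p(a)] over samples from the distribution.
        actions = self._squash(self.base.rsample((num_samples,)))
        return -self.log_prob(actions).mean(0)

    def kl(self, other):
        # Same bijector on both sides, so the Jacobian terms cancel.
        return kl_divergence(self.base, other.base).sum(-1)


# Usage: two distributions over a 2-D action in [-1, 1]^2.
p = TanhSquashedGaussian(torch.zeros(2), torch.ones(2))
q = TanhSquashedGaussian(torch.full((2,), 0.5), torch.ones(2) * 0.8)
a = p.sample()
print(p.log_prob(a), p.entropy(), p.kl(q))
```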
Why are these changes needed?
Related issue number
Checks
I've run scripts/format.sh to lint the changes in this PR.