[RLlib] Remove `return_info` from `reset()` in `pettingzoo_env.py` #33470

Conversation
(force-pushed from c0b2090 to 280c040)
The DCO check failed because the commits were not signed, but I believe I signed them. My only thought is that maybe the internal tests are using an older version of gymnasium/PettingZoo which still has `return_info`.

Edit: I found this file for RLlib pinning version 1.22.1, whereas 1.22.3 is the most widely used and 1.22.4 is the current release as of today.

Edit 2: I just now tried doing `git commit -S` and still got the DCO error (it needs to be a lowercase `-s`, my bad), so I need to rebase again.
(force-pushed from a1a6aeb to 25b591f)
@ArturNiederfahrenhorst, any chance you could help look this over? I would like to use RLlib with the most recent PettingZoo release, but this currently makes it impossible without modifying RLlib locally.
Thanks for the fix @elliottower !
This looks good to me. Can you pull ray/master into your PR so we can see the latest test runs?
@@ -154,7 +154,7 @@ def close(self):
         self.env.close()

     def render(self):
-        return self.env.render(self.render_mode)
+        return self.env.render()
In the newest PettingZoo version, are render modes no longer configurable? If so, the whole PR pretty much looks good to me.
You need to specify the render mode nowadays in your `gym.make` call and declare the available modes in `Env.metadata["render_modes"] = ["human", "rgb_array", ...]`: https://gymnasium.farama.org/api/env/#gymnasium.Env.render
PettingZoo doesn't use `gym.make`, but yes, the render mode now has to be specified on initialization of the environment rather than in the `render()` call.
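As a sketch of the pattern being described (a toy stand-in class, not the actual Gymnasium or PettingZoo implementation):

```python
# Toy illustration of the new render API: the mode is chosen once at
# construction time and render() takes no arguments. ToyEnv is a
# hypothetical stand-in, not a real Gymnasium/PettingZoo class.

class ToyEnv:
    # Supported modes are declared in the class metadata.
    metadata = {"render_modes": ["human", "rgb_array"]}

    def __init__(self, render_mode=None):
        if render_mode is not None and render_mode not in self.metadata["render_modes"]:
            raise ValueError(f"unsupported render_mode: {render_mode!r}")
        self.render_mode = render_mode

    def render(self):
        # Old API: env.render(mode); new API: reads self.render_mode.
        if self.render_mode == "rgb_array":
            return [[0, 0, 0]]  # placeholder "frame"
        return None

env = ToyEnv(render_mode="rgb_array")
frame = env.render()  # → [[0, 0, 0]]
```

This is why the diff above drops the argument from `self.env.render(...)`: with the mode fixed at construction, a bare `render()` call is all the wrapper needs.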
Oh, and please run the linter on your changes: `./ci/lint/format.sh`
@@ -15,7 +15,7 @@ kaggle_environments==1.7.11
 #mlagents==0.28.0
 mlagents_envs==0.28.0
 # For tests on PettingZoo's multi-agent envs.
-pettingzoo==1.22.1; python_version >= '3.7'
+pettingzoo==1.22.4; python_version >= '3.7'
I think we also need to update the gymnasium version to make these work. What about SuperSuit? Let's upgrade that here as well; otherwise this could lead to further problems.
We should actually be able to bump the gymnasium version with no problems here. Can you please do so, @elliottower? I'd be happy to assist you with the upgrade if any issues come up.
Okay, I'll change it to the most recent versions of all of them. Gymnasium released 0.28 late last week and 0.28.1 a day or two later with hotfixes. The update made a slight change to the spaces, such as the Sequence action space, which led to an error in the PettingZoo CI (previous `spaces.Sequence()` calls need to specify `spaces.Sequence(stack=True)` to get the same behavior as before). I've fixed that issue and the PettingZoo CI tests there are now passing. I did a search and found no uses of `spaces.Sequence` in Ray's repo, so I think that should be fine, but I can run the pytests and try to find other potential issues with Gymnasium 0.28.1.

We are finishing up a PettingZoo release to fix this Gymnasium 0.28.1 compatibility, which we are hoping to get out by the end of this week. I can run the tests with the master branch of PettingZoo to ensure they pass and start debugging errors with them now; once the full release is out, I can change the requirements to list the new version of PettingZoo.
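To illustrate what the stacked-versus-unstacked distinction means here (a plain-NumPy sketch of the idea, not the Gymnasium `Sequence` API itself):

```python
# Plain-NumPy illustration of "unstacked" (a tuple of per-step elements)
# versus "stacked" (one batched array) samples. This sketches the idea
# behind the stack option, not Gymnasium's actual Sequence space.
import numpy as np

steps = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]

unstacked = tuple(steps)   # tuple of three shape-(2,) arrays
stacked = np.stack(steps)  # one array of shape (3, 2)

print(stacked.shape)  # → (3, 2)
```

Code written against the tuple form breaks when handed the batched form (and vice versa), which is why call sites had to opt in explicitly after the release.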
Awesome PR. Still some minor fixes to do before we can merge this. See my comments.
Thanks again! :)
Could you help me diagnose the failing tests? It says 100 successful and 11 failing. I'm running the PettingZoo pytests locally to see if I can find any issue, but I can't see anything wrong, and I'm not sure why my change would affect things like `ci-build-pr/java`.
Also, I'm trying to install the repository locally and am having issues with the command I ran.
I think the only two tests of relevance are those failing with this error here:

It does seem like a SuperSuit/PettingZoo version-mismatch issue?
Ah yeah, this was a typo which was fixed in the most recent PettingZoo update, thanks for pointing it out. I'll work on getting those issues fixed. Going to wait to finalize this until we publish the next major release, though.
* Revert "[Datasets] Revert "Enable streaming executor by default (ray-project#32493)" (ray-project#33485)". This reverts commit 5c79954. Fix tensorarray to numpy conversion. Signed-off-by: elliottower <[email protected]>
* …g executor (ray-project#34120). Fix test failure caused by lack of ordering in default streaming executor. Signed-off-by: elliottower <[email protected]>
* cluster_tune_scale_up_down, long_running_horovod_tune_test. Signed-off-by: Kai Fricke <[email protected]> Signed-off-by: elliottower <[email protected]>
* Even though we have a perf regression on the v2 stack, at least the tests can run. Currently, starting 65 nodes has a very low success rate on the v1 stack. Signed-off-by: elliottower <[email protected]>
Going to do a new PR because this one seems to have gotten messed up from a re-signing issue.
Why are these changes needed?
The `return_info` parameter of `reset()` was removed from the PettingZoo API a few weeks ago: Farama-Foundation/PettingZoo#890

`render()` has also not taken the `render_mode` argument since October 2022: Farama-Foundation/PettingZoo@a74a933

The motivation is that the current PettingZoo version cannot run with RLlib without locally modifying these files to get rid of `return_info` (tested with both parallel and AEC envs, see Farama-Foundation/PettingZoo#899).
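As a sketch of the call-site change this PR adapts to (`ToyParallelEnv` is a hypothetical stand-in, not RLlib's actual wrapper or a real PettingZoo class):

```python
# Sketch of the reset() API change: reset() now always returns
# (observations, infos), and the old return_info flag no longer exists.
# ToyParallelEnv is a hypothetical stand-in for a PettingZoo env.

class ToyParallelEnv:
    def reset(self, seed=None, options=None):
        observations = {"agent_0": 0}
        infos = {"agent_0": {}}
        return observations, infos

env = ToyParallelEnv()

# Old call site: obs, info = env.reset(return_info=True)
# New call site unpacks both values unconditionally:
obs, infos = env.reset(seed=42)
print(obs)  # → {'agent_0': 0}
```

Passing `return_info=True` to the new `reset()` raises a `TypeError`, which is exactly the breakage the PR description reports when running current PettingZoo under unmodified RLlib.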
Related issue number
Closes #32889
Checks

- I've signed off every commit (by using the -s flag, i.e., `git commit -s`) in this PR.
- I've run `scripts/format.sh` to lint the changes in this PR.
- If I added a method in Tune, I've added it in `doc/source/tune/api/` under the corresponding `.rst` file.