[Tune; RLlib] Missing stopping criterion should not error (just warn). #45613
Conversation
Signed-off-by: sven1977 <[email protected]>
Hey @justinvyu, could you take a look at this PR?
Thanks! I think this generally makes sense.
Could we just add a small unit test here, asserting that the warning is logged?
https://github.com/ray-project/ray/blob/master/python/ray/tune/tests/test_trial.py#L5
python/ray/tune/experiment/trial.py
Outdated
elif criterion not in result:
    if log_once("tune_trial_stop_criterion_not_found"):
        logger.warning(
            f"Stopping criterion {criterion} not found in result dict. "
Add a quick note about the behavior that will follow:
"If this metric is never reported, the run will continue running until training is finished instead."
done
…tune_stop_criterium_not_found_erro
…tune_stop_criterium_not_found_erro
Signed-off-by: sven1977 <[email protected]>
Hey @justinvyu, thanks for the review! Could you take another look? Added the test cases and enhanced the warning message. Thanks! :)
Signed-off-by: sven1977 <[email protected]>
Thanks! Seems like the caplog is not working -- you may need to include the propagate_logs fixture.
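A hedged sketch of what such a test could look like (the trial-construction helper is hypothetical, and the exact warning text is an assumption):

```python
def test_should_stop_warns_on_missing_criterion(propagate_logs, caplog):
    # Hypothetical helper: builds a Trial whose stop config references a
    # metric that the trainable never reports (construction details omitted).
    trial = make_trial_with_stop_criterion({"missing_metric": 100.0})

    with caplog.at_level("WARNING"):
        # A result dict without the criterion should no longer raise a
        # TuneError ...
        assert not trial.should_stop({"some_other_key": True})

    # ... but should log a (one-time) warning instead.
    assert "Stopping criterion" in caplog.text
```

Here, propagate_logs is the fixture mentioned above, which lets Ray's logger output reach pytest's caplog.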
python/ray/tune/tests/test_trial.py
Outdated
result = _TrainingResult(
    checkpoint=None, metrics={"a": 9.999, "b/c": 0.0, "some_other_key": True}
)
assert not trial.should_stop(result.metrics)
Can we just pass in a dict without the _TrainingResult wrapper?
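I.e., roughly this (adapting the snippet above; just an illustration of the suggested simplification):

```python
assert not trial.should_stop({"a": 9.999, "b/c": 0.0, "some_other_key": True})
```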
done
Signed-off-by: sven1977 <[email protected]>
…tune_stop_criterium_not_found_erro
Signed-off-by: sven1977 <[email protected]>
…tune_stop_criterium_not_found_erro
Signed-off-by: sven1977 <[email protected]>
Thanks!
…tune_stop_criterium_not_found_erro
Signed-off-by: sven1977 <[email protected]>
Signed-off-by: sven1977 <[email protected]>
Missing stopping criterion should not error (just warn).
Some RLlib algorithms take a few iterations to sample enough data from their EnvRunners to complete an RL episode. Only after that point are episode stats published in the result dicts returned by Algorithm.train() (Algorithm is-a tune.Trainable).
The current hacky fix is to prophylactically add common episode stats (such as episode_return_mean) to the initialized algorithm's result dict as NaNs. However, this is not a sustainable solution: users might add their own stats and will then run into the TuneError(Stopping criteria ... not provided) error without any idea how to fix it. A one-time warning here would be the better solution.
Why are these changes needed?
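For illustration, a hedged sketch of the kind of setup that hits this path (the algorithm, env, and metric name are illustrative and depend on the Ray/RLlib version in use):

```python
from ray import tune

# The stop criterion is keyed on an episode metric that RLlib only reports
# once the first RL episode has completed. Previously, Tune raised a TuneError
# as soon as the key was missing from an early result dict; with this change
# it logs a one-time warning and keeps running until the metric appears (or
# training finishes).
tune.run(
    "PPO",
    config={"env": "CartPole-v1"},
    stop={"env_runners/episode_return_mean": 200.0},
)
```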
Related issue number
Checks
- I've signed off every commit (by using the -s flag, i.e., git commit -s) in this PR.
- I've run scripts/format.sh to lint the changes in this PR.
- I've added any new APIs to the API Reference. For example, if I added a method in Tune, I've added it in doc/source/tune/api/ under the corresponding .rst file.