[tune] Incorrect end of trial increment for function API #3834
Comments
Proposed fix for this is to improve documentation (no way to detect otherwise).
Could this be caused by the fact that the function API will only log at most once per second, dropping results given to the reporter when it is called faster than that?
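For illustration, here is a minimal sketch of the suspected behavior: a reporter that forwards at most one result per interval and silently drops the rest. This is a hypothetical simplification written for this discussion, not Tune's actual implementation.

```python
import time

class ThrottledReporter:
    """Hypothetical illustration: forwards at most one result per
    min_interval_s seconds; anything reported in between is dropped."""

    def __init__(self, min_interval_s=1.0):
        self._min_interval_s = min_interval_s
        self._last_log_time = float("-inf")

    def __call__(self, **result):
        now = time.monotonic()
        if now - self._last_log_time >= self._min_interval_s:
            self._last_log_time = now
            print("logged:", result)  # forwarded to the loggers
        # else: the result is silently dropped

reporter = ThrottledReporter()
for step in range(10):
    reporter(step=step)  # reporting every 0.2 s: most calls are dropped
    time.sleep(0.2)
```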
Sort of, but not exactly; I think the main workaround here is that the user should set
This question might be better in a standalone issue, but why is the function runner implementation so complex? Is there a reason why a multi-threaded solution with sleeps was used? I think I might be able to propose/contribute an alternative design that would unify the logging behavior of the function API and the trainable API (and simplify function_runner.py), but I want to make sure I understand a bit better what led to this implementation.
@gehring Sorry, I missed your message. The function runner allows users to quickly integrate their original training loops into Tune without restructuring their code, only adding a reporter call. Does that make sense? Let me know what you had in mind.
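For context, a minimal sketch of that workflow, based roughly on the Tune function API of that era (exact entry points and argument names vary across Ray versions): an existing training loop needs only a `reporter(...)` call added, and the function is passed to Tune as the `run` target.

```python
import ray
from ray import tune

def train_fn(config, reporter):
    # An existing training loop, with only a reporter call added.
    accuracy = 0.0
    for step in range(100):
        accuracy += config["lr"] * 0.01  # stand-in for a real update
        reporter(timesteps_total=step, mean_accuracy=accuracy)

ray.init()
tune.run_experiments({
    "reporter_example": {
        "run": train_fn,
        "config": {"lr": 0.1},
    },
})
```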
@richardliaw Thanks for getting back to me! First, let me say that I am a big fan of the function runner API and approach. The API is simple and powerful. However, there is a lot of complexity in the runner that ends up adding a lot of counterintuitive behavior, which I believe is easy to fix, but not without completely rewriting the runner.

The simplest but most significant change would be to drop the reporter altogether and go for a generator-based solution. If we have users yield a dict of results in the same way a Trainable would return one, then wrapping an arbitrary function in a Trainable becomes pretty simple. Below is a skeleton of what I had in mind.

The function's API remains simple, only removing the reporter:

```python
def trainable(config):
    """
    Args:
        config (dict): Parameters provided from the search algorithm
            or variant generation.
    """
    while True:
        # ...
        yield my_results
```

The above example should cover most (if not all) of the intended use cases for the function API. The function runner is straightforward. Copying the overriding-of-`_trainable_func` approach from the current runner, we have:

```python
class FunctionRunner(Trainable):
    def _setup(self, config):
        self._func_config = config.copy()
        self._generator = None

    def _trainable_func(self, config):
        """Subclasses can override this to set the trainable func."""
        raise NotImplementedError

    def _train(self):
        if self._generator is None:
            self._generator = self._trainable_func(self._func_config)
        try:
            result = next(self._generator)
            return result
        except StopIteration:
            return {"done": True}
```

Let me know if there are some use cases I might be forgetting. I'd be happy to contribute a fully fleshed out implementation if this is a direction you are interested in pursuing!

EDIT: fixed the runner to use the return-statement-based logic of
Hm, this looks promising! I think the only thing is seeing whether or not you can actually serialize the generator and pass it around to multiple processes/machines. That being said, if there are no weird gotchas, a prototype would be awesome! It looks way cleaner (and obviously we would want to maintain support for the reporter and Trainable for backwards compatibility).

Richard
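One quick way to see the concern: live generator objects cannot be pickled in CPython, so naively shipping one between processes fails. A small check, independent of Tune:

```python
import pickle

def gen():
    yield 1

try:
    pickle.dumps(gen())
except TypeError as err:
    # CPython raises: cannot pickle 'generator' object
    print("cannot pickle a generator:", err)
```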
Great! I opened a standalone issue (#3956) so we can close this one. I will probably be able to start writing a prototype this week. I wouldn't expect serialization to be any harder than it is in the current solution, so hopefully I won't encounter any weird issues!
#4011

## What do these changes do?

This is a re-implementation of the `FunctionRunner` which enforces some synchronicity between the thread running the training function and the thread running the Trainable which logs results. The main purpose is to make logging consistent across APIs in anticipation of a new function API which will be generator based (through `yield` statements). Without these changes, it would be impossible for the (possibly soon to be deprecated) reporter-based API to behave the same as the generator-based API.

This new implementation provides additional guarantees to prevent results from being dropped. This makes the logging behavior more intuitive and consistent with how results are handled in custom subclasses of Trainable.

New guarantees for the Tune function API:

- Every reported result, i.e., every `reporter(**kwargs)` call, is forwarded to the appropriate loggers instead of being dropped if not enough time has elapsed since the last result.
- The wrapped function only runs if the `FunctionRunner` expects a result, i.e., when `FunctionRunner._train()` has been called. This removes the possibility that a result will be generated by the function but never logged.
- The wrapped function is not called until the first `_train()` call. Currently, the wrapped function is started during the setup phase, which could result in dropped results if the trial is cancelled between `_setup()` and the first `_train()` call.
- Exceptions raised by the wrapped function are not propagated until all results have been logged, to prevent dropped results.
- The thread running the wrapped function is explicitly stopped when the `FunctionRunner` is stopped with `_stop()`.
- If the wrapped function terminates without reporting `done=True`, a duplicate of the last reported result with `{"done": True}` set is reported to explicitly terminate the trial; components will be notified with this duplicate, but it will not be logged.

## Related issue number

Closes #3956. #3949 #3834
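A minimal sketch of the hand-off pattern the PR describes, as I understand it (a hypothetical simplification, not the actual implementation): a size-1 queue acts as a rendezvous point, so the training thread blocks in `reporter(...)` until the runner consumes each result, the thread is started lazily on the first `train()` call, and no result can be dropped.

```python
import queue
import threading

class SyncRunner:
    """Hypothetical simplification of a synchronized function runner."""

    def __init__(self, func):
        self._func = func
        self._results = queue.Queue(maxsize=1)  # rendezvous between threads
        self._thread = None

    def _reporter(self, **result):
        # Blocks while a previous result is still waiting to be consumed,
        # so the function cannot outrun the logger.
        self._results.put(result)

    def train(self):
        # Lazy start: nothing runs before the first train() call.
        if self._thread is None:
            self._thread = threading.Thread(
                target=self._func, args=(self._reporter,), daemon=True)
            self._thread.start()
        return self._results.get()  # wait for exactly one result

def training_function(reporter):
    for step in range(3):
        reporter(step=step)
    reporter(done=True)

runner = SyncRunner(training_function)
for _ in range(4):
    print(runner.train())
```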
Seems to be an indexing problem when running experiments with the functional API. From the mailing list: