
[tune] Incorrect end of trial increment for function API #3834

Closed
richardliaw opened this issue Jan 24, 2019 · 8 comments · Fixed by #3906
@richardliaw
Contributor

Seems to be an indexing problem when running experiments with the functional API. From the mailing list:

I'm running with a simple trainable function (so no Trainable sub-classing) but I still seem to have every trial run have '2' as the iteration count, e.g. TERMINATED [pid=3942], 327 s, 2 iter <--- how is this possible? Surely there's only 'one' iteration per trial to be done if I'm not doing any checkpointing etc?

@richardliaw
Contributor Author

Proposed fix for this is to improve documentation (no way to detect otherwise).

@gehring
Contributor

gehring commented Feb 2, 2019

Could this be caused by the fact that the function API only logs at most once per second, dropping results passed to the reporter whenever they arrive faster than that?

@richardliaw
Contributor Author

Sort of, but not exactly; basically, last_result is provided twice (once when invoked, and once again when the StatusReporter._done attribute is set to True). There are cases where, if the reporter finishes quickly enough, last_result is only provided once.

I think the main workaround here is that the user should set done=True when training completes.
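
As a rough sketch of that workaround (the metric names, the num_steps config key, and the run_one_step helper below are placeholders, not part of Tune's API):

def trainable(config, reporter):
    acc = 0.0
    for step in range(config["num_steps"]):
        acc = run_one_step()  # placeholder for the user's actual training step
        reporter(training_iteration=step, mean_accuracy=acc)
    # Explicitly mark the trial as done so the runner does not have to
    # re-report the last result just to terminate the trial.
    reporter(done=True, mean_accuracy=acc)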

@gehring
Contributor

gehring commented Feb 2, 2019

This question might be better in a standalone issue, but why is the function runner implementation so complex? Is there a reason why a multi-threaded solution with sleeps was used?

I think I might be able to propose/contribute an alternative design that would unify the logging behavior of the function API and the trainable API (and simplify function_runner.py) but I want to make sure I understand a bit better what led to this implementation.

@richardliaw
Contributor Author

@gehring Sorry, I missed your message - the function runner allows users to quickly integrate their original training loops into Tune without restructuring their code, only adding a reporter call. Does that make sense? Let me know what you had in mind.
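
For context, a minimal sketch of that integration with the current reporter-based API (the config keys and the original_training_step call are illustrative placeholders):

from ray import tune

def my_train_func(config, reporter):
    # The user's existing training loop, unchanged except for the reporter call.
    for step in range(config["steps"]):
        loss = original_training_step()  # placeholder for existing code
        reporter(training_iteration=step, loss=loss)

tune.run(my_train_func, config={"steps": 100})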

richardliaw reopened this Feb 4, 2019
@gehring
Contributor

gehring commented Feb 4, 2019

@richardliaw Thanks for getting back to me! First, let me say that I am a big fan of the function runner API and approach. The API is simple and powerful. However, there is a lot of complexity in the runner that ends up adding a lot of counter-intuitive behavior, which I believe is easy to fix, but not without completely rewriting the runner.

The simplest but most significant change would be to drop the reporter altogether and go for a generator-based solution. If we have users yield a dict of results in the same way a Trainable would return one, then wrapping an arbitrary function in a Trainable becomes pretty simple. Below is a skeleton of what I had in mind.

The function's API remains simple, only removing the reporter:

def trainable(config):
    """
    Args:
        config (dict): Parameters provided from the search algorithm
            or variant generation.
    """

    while True:
        # ... run one iteration of training and collect metrics ...
        yield my_results

The above example should cover most (if not all) of the intended use cases for the function API. The function runner is straightforward. Copying the approach of overriding _trainable_func from the current runner, we have:

from ray.tune import Trainable


class FunctionRunner(Trainable):

    def _setup(self, config):
        self._func_config = config.copy()
        self._generator = None

    def _trainable_func(self, config):
        """Subclasses can override this to set the trainable func."""
        raise NotImplementedError

    def _train(self):
        if self._generator is None:
            self._generator = self._trainable_func(self._func_config)
        try:
            return next(self._generator)
        except StopIteration:
            return {"done": True}

Let me know if there are some use cases I might be forgetting. I'd be happy to contribute a fully fleshed out implementation if this is a direction you are interested in pursuing!

EDIT: fixed the runner to use the return-statement-based logic of Trainable. I had accidentally used it as a generator.

@richardliaw
Contributor Author

richardliaw commented Feb 5, 2019 via email

@gehring
Contributor

gehring commented Feb 5, 2019

Great! I opened a standalone issue (#3956) so we can close this one. I will probably be able to start writing a prototype this week. I wouldn't expect serialization to be any harder than it is in the current solution, so hopefully I won't encounter any weird issues!

richardliaw pushed a commit that referenced this issue Mar 19, 2019 (#4011)

## What do these changes do?

This is a re-implementation of the `FunctionRunner` which enforces some synchronization between the thread running the training function and the thread running the Trainable which logs results. The main purpose is to make logging consistent across APIs in anticipation of a new function API which will be generator-based (through `yield` statements). Without these changes, it will be impossible for the (possibly soon to be deprecated) reporter-based API to behave the same as the generator-based API.

This new implementation provides additional guarantees to prevent results from being dropped. This makes the logging behavior more intuitive and consistent with how results are handled in custom subclasses of Trainable.

New guarantees for the tune function API:

- Every reported result, i.e., every `reporter(**kwargs)` call, is forwarded to the appropriate loggers instead of being dropped when not enough time has elapsed since the last result.
- The wrapped function only runs if the `FunctionRunner` expects a result, i.e., when `FunctionRunner._train()` has been called. This removes the possibility that a result will be generated by the function but never logged.
- The wrapped function is not called until the first `_train()` call. Currently, the wrapped function is started during the setup phase which could result in dropped results if the trial is cancelled between `_setup()` and the first `_train()` call.
- Exceptions raised by the wrapped function won't be propagated until all results are logged to prevent dropped results.
- The thread running the wrapped function is explicitly stopped when the `FunctionRunner` is stopped with `_stop()`.
- If the wrapped function terminates without reporting `done=True`, a duplicate of the last reported result with `{"done": True}` set is reported to explicitly terminate the trial; components are notified with this duplicate, but it is not logged.
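
As a rough illustration of the kind of hand-off these guarantees imply (a minimal sketch, not the PR's actual implementation; the class and attribute names are made up):

import queue
import threading

class SyncedFunctionThread:
    """Runs a generator-based training function one step per train() call."""

    def __init__(self, train_func, config):
        self._results = queue.Queue(maxsize=1)
        self._step_requested = threading.Event()
        self._thread = threading.Thread(
            target=self._run, args=(train_func, config), daemon=True)
        self._thread.start()

    def _run(self, train_func, config):
        generator = train_func(config)  # lazy; no user code runs yet
        while True:
            self._step_requested.wait()  # block until a result is wanted
            self._step_requested.clear()
            try:
                self._results.put(next(generator))  # run exactly one iteration
            except StopIteration:
                self._results.put({"done": True})  # explicit termination result
                return

    def train(self):
        self._step_requested.set()
        return self._results.get()  # block until the training thread hands over a result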

## Related issue number

Closes #3956.
#3949
#3834