Commit

doc update
xiaowu0162 committed Jul 13, 2021
1 parent 2ec6393 commit 2eff6f7
Showing 1 changed file with 5 additions and 4 deletions.
docs/en_US/hpo_benchmark.rst
@@ -4,18 +4,19 @@ HPO Benchmarks
.. toctree::
   :hidden:

-   HPO Benchmark Example Statistics <hpo_benchmark_statistics>
+   HPO Benchmark Example Statistics <hpo_benchmark_stats>

-We provide a benchmarking tool to compare the performance of tuners provided by NNI (and users' custom tuners) on different tasks. The implementation of this tool is based on the automlbenchmark repository (https://github.com/openml/automlbenchmark), which provides services of running different *frameworks* against different *benchmarks* consisting of multiple *tasks*. The tool is located in ``examples/trials/benchmarking/automlbenchmark``. This document provides a brief introduction to the tool and its usage.
+We provide a benchmarking tool to compare the performance of tuners provided by NNI (and users' custom tuners) on different
+types of tasks. This tool uses the `automlbenchmark repository <https://github.com/openml/automlbenchmark>`_ to run different *benchmarks* on the NNI *tuners*.
+The tool is located in ``examples/trials/benchmarking/automlbenchmark``. This document provides a brief introduction to the tool and its usage.

Terminology
^^^^^^^^^^^


* **task**\ : a task can be thought of as a (dataset, evaluator) pair. It provides a dataset split into (train, valid, test), and the evaluator computes a given metric on the received predictions (e.g., mse for regression, f1 for classification).
* **benchmark**\ : a benchmark is a set of tasks, along with other external constraints such as time and resource limits.
* **framework**\ : given a task, a framework formulates a solution to the proposed regression or classification problem and produces predictions. Note that automlbenchmark itself places no restrictions on the hypothesis space of a framework. In our implementation in this folder, each framework is a tuple (tuner, architecture), where the architecture provides the hypothesis space (and the search space for the tuner), and the tuner determines the strategy of hyperparameter optimization.
-* **tuner**\ : a tuner or advisor defined in the hpo folder, or a custom tuner provided by the user.
+* **tuner**\ : a `tuner or advisor <https://nni.readthedocs.io/en/stable/builtin_tuner.html>`_ provided by NNI, or a custom tuner provided by the user.
* **architecture**\ : an architecture is a specific method for solving the tasks, along with a set of hyperparameters to optimize (i.e., the search space). In our implementation, the architecture calls the tuner multiple times to obtain candidate hyperparameter configurations, and produces the final prediction for a task. See ``./nni/extensions/NNI/architectures`` for examples.

Note: currently, the only supported architecture is random forest. The architecture implementation and search space definition can be found in ``./nni/extensions/NNI/architectures/run_random_forest.py``. The tasks in the benchmarks "nnivalid" and "nnismall" are suitable for solving with random forests.
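
To make the split between *tuner* and *architecture* concrete, here is a minimal sketch of the interaction, assuming a tuner object that exposes the standard NNI ``Tuner`` interface (``update_search_space``, ``generate_parameters``, ``receive_trial_result``). The search space, trial budget, and ``train_and_score`` callback are illustrative assumptions, not the actual contents of ``run_random_forest.py``.

.. code-block:: python

    # Sketch: an "architecture" driving an NNI tuner. The search space and
    # train_and_score callback are illustrative assumptions.

    search_space = {
        "n_estimators": {"_type": "randint", "_value": [8, 512]},
        "max_depth": {"_type": "choice", "_value": [4, 8, 16, 32, 64, 128]},
    }

    def optimize(tuner, train_and_score, budget=20):
        """Query the tuner for configurations, report results, keep the best."""
        tuner.update_search_space(search_space)
        best_score, best_params = float("-inf"), None
        for trial_id in range(budget):
            params = tuner.generate_parameters(trial_id)  # next configuration
            score = train_and_score(params)               # e.g., validation f1
            tuner.receive_trial_result(trial_id, params, score)
            if score > best_score:
                best_score, best_params = score, params
        return best_params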
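Finally, to actually launch a benchmark run, use the entry point documented inside ``examples/trials/benchmarking/automlbenchmark``; the ``runbenchmark_nni.sh`` script name and the ``TPE`` tuner argument below are assumptions based on the repository layout, shown only as a hedged illustration.

.. code-block:: python

    # Hedged illustration of launching a benchmark run from Python; the script
    # name and tuner argument are assumptions, not a documented interface.
    import subprocess

    subprocess.run(
        ["bash", "runbenchmark_nni.sh", "TPE"],  # benchmark the TPE tuner
        cwd="examples/trials/benchmarking/automlbenchmark",
        check=True,
    )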
