(documentation) grammatical fixes in trial-evaluation.md (facebook#1652)
Summary:
The changes made are grammatical and do not affect the ideas communicated in the file.

Pull Request resolved: facebook#1652

Reviewed By: bernardbeckerman

Differential Revision: D46581530

Pulled By: Balandat

fbshipit-source-id: 928601c8259d3c7417079ff2cb4e219cdf386d3c
LiamSwayne authored and facebook-github-bot committed Jun 9, 2023
1 parent 363d0d9 commit 516ad32
Showing 1 changed file with 5 additions and 5 deletions.
10 changes: 5 additions & 5 deletions docs/trial-evaluation.md
@@ -34,16 +34,16 @@ For example, this evaluation function computes mean and SEM for [Hartmann6](http
from ax.utils.measurement.synthetic_functions import hartmann6
def hartmann_evaluation_function(parameterization):
    x = np.array([parameterization.get(f"x{i+1}") for i in range(6)])
-    # Standard error is 0, since we are computing a synthetic function.
+    # Standard error is 0 since we are computing a synthetic function.
    return {"hartmann6": (hartmann6(x), 0.0), "l2norm": (np.sqrt((x ** 2).sum()), 0.0)}
```

-This function computes just the objective mean and SEM, assuming the [Branin](https://www.sfu.ca/~ssurjano/branin.html) function is the objective on the experiment:
+This function computes just the objective mean and SEM, assuming the [Branin](https://www.sfu.ca/~ssurjano/branin.html) function is the objective of the experiment:

```python
from ax.utils.measurement.synthetic_functions import branin
def branin_evaluation_function(parameterization):
-    # Standard error is 0, since we are computing a synthetic function.
+    # Standard error is 0 since we are computing a synthetic function.
    return (branin(parameterization.get("x1"), parameterization.get("x2")), 0.0)
```

@@ -69,7 +69,7 @@ It can also accept a `weight` parameter, a nullable `float` representing the fra

The Developer API is supported by the [```Experiment```](/api/core.html#module-ax.core.experiment) class. In this paradigm, the user specifies:
* [`Runner`](../api/core.html#ax.core.runner.Runner): Defines how to deploy the experiment.
-* List of [`Metrics`](../api/core.html#ax.core.metric.Metric): Each defining how to compute/fetch data for a given objective or outcome.
+* List of [`Metrics`](../api/core.html#ax.core.metric.Metric): Each defines how to compute/fetch data for a given objective or outcome.

The experiment requires a `generator_run` to create a new trial or batch trial. A generator run can be generated by a model. The trial then has its own `run` and `mark_complete` methods.
```python
@@ -90,7 +90,7 @@ for i in range(15):

### Custom Metrics

-Similar to trial evaluation in the Service API, a custom metric computes a mean and SEM for each arm of a trial. However, the metric's `fetch_trial_data` method will be called automatically by the experiment's [```fetch_data```](/api/core.html#ax.core.base_trial.BaseTrial.fetch_data) method. If there are multiple objetives or outcomes that need to be optimized for, each needs its own metric.
+Similar to a trial evaluation in the Service API, a custom metric computes a mean and SEM for each arm of a trial. However, the metric's `fetch_trial_data` method will be called automatically by the experiment's [```fetch_data```](/api/core.html#ax.core.base_trial.BaseTrial.fetch_data) method. If there are multiple objectives or outcomes that need to be optimized for, each needs its own metric.

```python
class MyMetric(Metric):
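The body of `MyMetric` is collapsed in this diff. As a rough sketch of the custom-metric pattern described in the paragraph above (not part of this commit: the `BoothMetric` name, the `x1`/`x2` parameters, and the synthetic mean are illustrative, and on newer Ax versions `fetch_trial_data` may need to wrap its result in a `MetricFetchResult`), such a metric might look like:

```python
import pandas as pd

from ax.core.data import Data
from ax.core.metric import Metric


class BoothMetric(Metric):  # hypothetical example, not the metric from the docs
    def fetch_trial_data(self, trial):
        records = []
        for arm_name, arm in trial.arms_by_name.items():
            params = arm.parameters
            records.append({
                "arm_name": arm_name,
                "metric_name": self.name,
                "trial_index": trial.index,
                # Mean and SEM for this arm; SEM is 0.0 because the value is
                # computed deterministically from the arm's parameters.
                "mean": (params["x1"] + 2 * params["x2"] - 7) ** 2
                + (2 * params["x1"] + params["x2"] - 5) ** 2,
                "sem": 0.0,
            })
        return Data(df=pd.DataFrame.from_records(records))
```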
