Documentation: Custom models #2463
Conversation
docs/custom_models.md
Outdated
> To create a local model deployments configuration file, determine the location of your local configuration folder. If you are using the `--local-path` flag with `helm-run`, the folder specified by that flag is the local configuration folder. Otherwise, the local configuration folder defaults to `./prod_env/` under your current working directory.
>
> Create a file called `model_deployments.yaml` underneath that directory (e.g. `./prod_env/model_deployments.yaml`).
Maybe call this `custom_model_deployments.yaml` or something, since `model_deployments.yaml` is already a filename, just to avoid a clash.
This requires a code change, so I'll defer it.
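A minimal sketch of what such a file might contain, assuming the deployment schema shown in the excerpts further down in this conversation; all names are placeholders, and the fields beyond `name` and `model_name` are assumptions modeled on HELM's built-in configuration:

```
# ./prod_env/model_deployments.yaml (placeholder values)
model_deployments:
  - name: your-org/your-model        # deployment name passed to helm-run as model=...
    model_name: your-org/your-model  # the underlying model this deployment serves
    tokenizer_name: your-org/your-tokenizer
    max_sequence_length: 2048
    client_spec:
      class_name: "helm.clients.huggingface_client.HuggingFaceClient"
```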
> # Custom Models
>
> HELM comes with more than a hundred built-in models. If you want to run a HELM evaluation on a model that is not built-in, you can configure HELM to add your own model. This also allows you to evaluate private models that not publicly accessible, such as a model checkpoint on local disk, or a model server on a private network
Give a concrete example where you would need to override the class.
Done.
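For instance, overriding the class means pointing a deployment's `client_spec` at your own `Client` subclass, e.g. to reach a model server on a private network. A sketch under the same assumed schema; the module path and class name here are hypothetical:

```
model_deployments:
  - name: mycompany/private-model
    model_name: mycompany/private-model
    client_spec:
      # Hypothetical Client subclass that talks to an in-house inference
      # server instead of any of HELM's built-in clients.
      class_name: "mycompany.helm_extensions.private_client.PrivateServerClient"
```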
docs/custom_models.md
Outdated
> ## Creating a model deployments configuration
>
> To create a local model deployments configuration file, determine the location of your local configuration folder. If you are using the `--local-path` flag with `helm-run`, the folder specified by that flag is the local configuration folder. Otherwise, the local configuration folder defaults to `./prod_env/` under your current working directory.
Maybe give some motivation for why to create model deployments and what they are.
Added definition of deployment.
docs/custom_models.md
Outdated
> @@ -0,0 +1,104 @@
> # Custom Models
>
> HELM comes with more than a hundred built-in models. If you want to run a HELM evaluation on a model that is not built-in, you can configure HELM to add your own model. This also allows you to evaluate private models that not publicly accessible, such as a model checkpoint on local disk, or a model server on a private network
`that not publicly` -> `that are not publicly`
Good catch, fixed.
docs/custom_models.md
Outdated
> If you wish to evaluate a model not covered by an existing `Client` and `Tokenizer`, you can implement your own `Client` and `Tokenizer` subclasses. Instructions for adding custom `Client` and `Tokenizer` subclasses will be added to the documentation in the future.
>
> ## Creating a model deployments configuration
`deployments` should be singular here?
Changed.
docs/custom_models.md
Outdated
> If you wish to evaluate a model not covered by an existing `Client` and `Tokenizer`, you can implement your own `Client` and `Tokenizer` subclasses. Instructions for adding custom `Client` and `Tokenizer` subclasses will be added to the documentation in the future.
>
> ## Creating a model deployments configuration
Is it clear what a deployment is?
Added definition of deployment.
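Roughly, a deployment is one concrete way of serving a model: it binds a model (`model_name`) to a serving mechanism (`client_spec`) under its own `name`, so the same model can have several deployments. A sketch under the assumed schema, with illustrative class paths:

```
model_deployments:
  # Two deployments of the same underlying model:
  - name: local/my-model            # served from local disk
    model_name: myorg/my-model
    client_spec:
      class_name: "helm.clients.huggingface_client.HuggingFaceClient"
  - name: together/my-model         # served through a hosted API
    model_name: myorg/my-model
    client_spec:
      class_name: "helm.clients.together_client.TogetherClient"
```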
docs/custom_models.md
Outdated
> Note: If `together_model` is omitted, the Together model with `model_name` (_not_ `name`) will be used by default.
>
> Note: This model may not be currently available on Together AI. Consult [Together AI's Inference Models documentation](https://docs.together.ai/docs/inference-models) for a list of currently available models and corresponding model strings.
Add a note to remember to register Together credentials?
Added.
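Registering Together credentials amounts to putting an API key in the credentials file in the same local configuration folder; the file name and key name below follow HELM's usual convention, but check them against the current documentation:

```
# ./prod_env/credentials.conf
togetherApiKey: your-api-key-here
```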
docs/custom_models.md
Outdated
> ```
> helm-run --run-entries mmlu:subject=anatomy,model=eleutherai/pythia-70m --suite my_suite --max-eval-instances 5
> ```
>
> Note: This uses Hugging Face local inference. It will attempt to use GPU inference if available, and use CPU inference otherwise. It is only able to use the first GPU. Multi-GPU inference is not supported. Every model needed by `helm-run` will be loaded on the same GPU - if evaluating multiple models, it is prudent to evaluate each model with a separate `helm-run` invocation.
"Hugging Face" -> "Hugging Face's" or "the Hugging Face"
Fixed: "local inference with Hugging Face".
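For context, the `helm-run` invocation quoted above resolves `model=eleutherai/pythia-70m` through a deployment entry along these lines; the tokenizer choice and client arguments here are illustrative assumptions rather than text from the diff:

```
model_deployments:
  - name: eleutherai/pythia-70m
    model_name: eleutherai/pythia-70m
    tokenizer_name: EleutherAI/gpt-neox-20b
    max_sequence_length: 2048
    client_spec:
      class_name: "helm.clients.huggingface_client.HuggingFaceClient"
      args:
        # Loads the model for local inference; GPU if available, else CPU.
        pretrained_model_name_or_path: EleutherAI/pythia-70m
```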
667860a to e39cea9 (Compare)
This will eventually replace the "Adding new models" page because it documents the new way of adding models that does not require forking HELM.

Fixes #2379