Consider adding a flag to change model temporarily #68
Comments
I like this idea. Maybe the configuration should be an array instead of a single config. Then you can "target" a different model on the fly. So you may have different llama, perplexity and chatgpt settings and just switch the model you are targeting rather than having to reset all values. This will be a little tedious to implement so it may take a minute. Great idea though.
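As a rough illustration of the array idea, here is a minimal sketch of what a multi-target config.yaml could look like; every key name (active, targets, name, model) is an assumption made for this example, not chatgpt-cli's actual schema:

```yaml
# Hypothetical multi-target config.yaml; field names and values are illustrative only
active: openai               # which target below is currently selected
targets:
  - name: openai
    model: gpt-4
    max_tokens: 4096
  - name: llama
    model: llama-2-70b-chat
  - name: perplexity
    model: pplx-70b-online
```

Switching providers would then just mean flipping the active key instead of resetting every value by hand.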
@woerndl I totally misread this. For a long time I have been thinking about having a quick way to set the LLM. Not the model :) -- So set llama, perplexity, openai etc. Anyway, you can already set the model like this:
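The snippet that followed did not survive the page capture; based on the --set-model flag referenced later in the thread, it was presumably something along these lines (the exact invocation and model name are assumptions):

```shell
# Persistently change the configured model; the new value is written to config.yaml
chatgpt --set-model gpt-4
```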
Is this something that works for you? I am down to create a flag as well if that's easier. It would not be hard to implement.
FYI, created a new issue here for the LLM targeting: #74
Fixed in this commit: 1ad9eeb. Release: https://github.com/kardolus/chatgpt-cli/releases/tag/v1.7.0
Thank you for your effort, Guillermo. In my opinion, runtime value overrides are a great addition, especially when working in the terminal, where all commands now start with chatgpt.
Thanks for the kind words! I appreciate it. Glad you are liking the new flags.
I run almost all of my commands with one of the simple models, but for more complex tasks I sometimes want to use a more advanced model. If I were to use --set-model I would have to add it to every command I run (to set and reset), changing the config.yaml way more than necessary. Do you think it would make sense to add a flag like --model or --use-model for single-command usage that differs from the default model in the config?
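For reference, a sketch of the per-command override requested here, which the comments above indicate shipped in v1.7.0; the --model spelling and the model names are assumptions rather than confirmed CLI syntax:

```shell
# Everyday commands keep using the default model from config.yaml
chatgpt "summarize this diff"

# One-off override for a heavier task, without touching config.yaml
chatgpt --model gpt-4 "review this architecture proposal"
```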