Problem
Currently, the ChatGPT CLI uses a single configuration for the LLM and model. This works well if you're only using one LLM at a time, but it becomes cumbersome when switching between different providers (OpenAI, Perplexity, Llama, etc.) or models.
For example, the current configuration holds a single provider setup along these lines (the field names below are illustrative, not necessarily the CLI's exact schema):
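```yaml
# Illustrative single-provider setup (keys are assumptions, not the CLI's confirmed schema)
name: openai
model: gpt-4o
api_key: ${OPENAI_API_KEY}
url: https://api.openai.com/v1/chat/completions
```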
When switching between providers or models, users have to either edit the configuration file or rely on environment variables. This makes switching between multiple setups more complicated and less efficient.
Proposed Solution
**Introduce Array-Based Configuration for LLMs:**
Convert the current configuration from a single setup to an array-based format where multiple configurations for different LLMs and models can be stored.
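Example (a sketch of one possible array-based layout; the key names here are assumptions, not a confirmed schema):

```yaml
# Each entry names a provider/model pair that can later be selected by name
llms:
  - name: openai
    model: gpt-4o
    api_key: ${OPENAI_API_KEY}
  - name: llama
    model: llama3:70b
    url: http://localhost:11434
  - name: perplexity
    model: sonar
    api_key: ${PERPLEXITY_API_KEY}
```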
**Add a `--target` Flag to Dynamically Select a Configuration:**
Add a `--target` flag that allows users to select which configuration (provider and model) to use for a specific command.
Example:
```sh
chatgpt --target openai "Who is Max Verstappen?"
chatgpt --target llama "Tell me a joke"
chatgpt --target perplexity "Summarize this article"
```
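Each `--target` value would presumably resolve to the `name` of a matching entry in the config array (as in the sketch above).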
This way, users can quickly switch between configurations without having to edit the `config.yaml` file or rely on environment variables.
Benefits
- **Ease of Use:** With array-based configurations and the `--target` flag, users can easily switch between LLM providers and models.
- **Cleaner Configuration:** Avoids the need for multiple environment variables or manual configuration file edits for each LLM.
- **Better Flexibility:** Supports different LLM providers (OpenAI, Llama, Perplexity, etc.) and models without requiring reconfiguration.