How can I connect PrivateGPT to my local Llama API? #1983
-
I have an Ollama instance running on one of my servers. I've managed to get PrivateGPT up and running, but how can I configure it to use the Llama3 model hosted on that server instead of downloading a model locally? I was following the setup instructions.
Replies: 1 comment
-
Managed to solve this: go to settings.py under private_gpt/settings, scroll down to line 223 (the OllamaSettings class), and change the API URL.
```python
from pydantic import BaseModel, Field


class OllamaSettings(BaseModel):
    api_base: str = Field(
        "ollama_url",  # replace with your Ollama server, e.g. "http://<server-ip>:11434"
        description="Base URL of Ollama API. Example: 'http://localhost:11434'.",
    )
    embedding_api_base: str = Field(
        "ollama_url",  # replace with your Ollama server, e.g. "http://<server-ip>:11434"
        description="Base URL of Ollama embedding API. Example: 'http://localhost:11434'.",
    )
    llm_model: str = Field(
        None,
        description="Model to use. Example: 'llama2-uncensored'.",
    )
    embedding_model: str = Field(
        None,
        description="Model to use. Example: 'nomic-embed-text'.",
    )
    keep_alive: str = Field(
        "5m",
        description="Time the model will stay loaded in memory after a request. examples: 5m",
    )
```