Add support for Ollama open source models #70

Merged · 2 commits merged into brainlid:main on Jan 24, 2024

Conversation

@medoror (Contributor) commented Jan 20, 2024

Adds support for open source models via Ollama, which allows LLMs to be run locally. Per the current version of the API, function calling does not appear to be supported yet. See this issue to track the status of function calling.

This PR should close #67
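For context, a minimal usage sketch, assuming ChatOllamaAI plugs into LLMChain the same way the library's other chat models do and that a local Ollama server is running with the llama2 model pulled (this mirrors existing examples and is not the exact merged API):

alias LangChain.ChatModels.ChatOllamaAI
alias LangChain.Chains.LLMChain
alias LangChain.Message

# Build a chain around the Ollama-backed chat model and run a single user
# message through it. Model name and return shape are assumptions based on
# how the other chat models in the library are used.
{:ok, _updated_chain, response} =
  %{llm: ChatOllamaAI.new!(%{model: "llama2"})}
  |> LLMChain.new!()
  |> LLMChain.add_message(Message.new_user!("Why is the sky blue?"))
  |> LLMChain.run()

IO.puts(response.content)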

@brainlid (Owner) left a comment

Thanks for doing all the work to implement this! There are a few changes needed before merging it.

  • Please refer to this PR where retries for failed Mint connections were added. Req tries to handle these, but it's currently not enough. That PR does library-level retries: when we detect that the Mint connection pulled from the pool is already closed, we trigger a retry.
  • Having module-level docs and docs on important public functions is really helpful for other developers. For the module doc, assume the reader doesn't know what ChatOllamaAI is at all, so a brief overview and links to relevant external docs is helpful (a sketch of what such a module doc could look like follows this list).
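
For illustration only, a module doc along those lines might look something like this; the wording and links are a sketch, not the text that was merged:

# Sketch of the kind of @moduledoc being asked for (illustrative wording).
defmodule LangChain.ChatModels.ChatOllamaAI do
  @moduledoc """
  A chat model backed by a locally running Ollama server (https://ollama.ai),
  making open source models such as llama2 or mistral usable through the
  LangChain chat APIs.

  Requests are sent to `http://localhost:11434/api/chat` by default; set the
  `:endpoint` field when the server is exposed on a different host or port.
  The generation options map onto Ollama's API parameters, documented at
  https://github.com/ollama/ollama/blob/main/docs/api.md
  """
end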

    quote do
      @receive_timeout 60_000 * 5

      field :endpoint, :string, default: "http://localhost:11434/api/chat"
@brainlid (Owner) commented on the snippet above:

Is there any support for overriding this endpoint? This default value should be explained in the ChatOllamaAI module doc, along with information or an example of how to override it.

@medoror (Contributor, Author) replied:

Will make a note in the module docs on this. The app within the image exposes port 11434, but there is a world where a client may want to deploy the image with a different port mapping, thus requiring an override.
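
For illustration, overriding the default endpoint might look like this; the host and port below are hypothetical, and the new!/1 constructor is assumed from the library's other chat models:

# Point the chat model at an Ollama instance mapped to a non-default port
# (hypothetical host and port; the default is http://localhost:11434/api/chat).
model =
  LangChain.ChatModels.ChatOllamaAI.new!(%{
    model: "llama2",
    endpoint: "http://ollama.internal:11500/api/chat"
  })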

defmodule LangChain.ChatModels.OllamaAIFields do
  defmacro fields do
    quote do
      @receive_timeout 60_000 * 5
@brainlid (Owner) commented on the snippet above:

Is this really a 5-minute timeout? Why so long? Is it really slow to run these models locally? We might want the ability to override the timeout in that case.

@medoror (Contributor, Author) replied:

No clue why. The only information I found about timeouts was here: ollama/ollama#1257

The client should be able to override it because it is a valid field in the struct.
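
A quick sketch of what that override might look like (value in milliseconds; the new!/1 constructor and :receive_timeout field name are assumptions based on the struct shown above):

# Shorten the receive timeout from the 5-minute default to 1 minute.
model =
  LangChain.ChatModels.ChatOllamaAI.new!(%{
    model: "llama2",
    # value is in milliseconds
    receive_timeout: 60_000
  })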


# Enable Mirostat sampling for controlling perplexity.
# (default: 0, 0 = disabled, 1 = Mirostat, 2 = Mirostat 2.0)
field :mirostat, :integer, default: 0
@brainlid (Owner) commented on the snippet above:

For all these fields with special docs, it would be helpful to include links to the docs where these come from.
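
One way to do that, sketched as a schema fragment like the one above; the options are assumed to correspond to Ollama's model parameters, and the exact doc URL is an assumption:

# Mirostat sampling option, as described in Ollama's parameter reference
# (assumed location): https://github.com/ollama/ollama/blob/main/docs/modelfile.md
# (default: 0, 0 = disabled, 1 = Mirostat, 2 = Mirostat 2.0)
field :mirostat, :integer, default: 0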

@medoror (Contributor, Author) commented Jan 23, 2024

@brainlid Addressed those comments. Let me know if I'm missing anything else

@medoror medoror requested a review from brainlid January 24, 2024 00:38
@brainlid (Owner) left a comment

Minor comment tweak suggested. Looks good!

Review comment on lib/chat_models/chat_ollama_ai.ex (outdated, resolved)
@brainlid (Owner):

Thanks for the work and your patience!

❤️💛💙💜

@brainlid brainlid merged commit ba2b8d3 into brainlid:main Jan 24, 2024
1 check passed
medoror added a commit to medoror/ollama that referenced this pull request Jan 25, 2024
The Elixir LangChain Library now supports Ollama Chat with this [PR](brainlid/langchain#70)
medoror added a commit to medoror/ollama that referenced this pull request Feb 20, 2024
The Elixir LangChain Library now supports Ollama Chat with this [PR](brainlid/langchain#70)
jmorganca pushed a commit to ollama/ollama that referenced this pull request Feb 20, 2024
The Elixir LangChain Library now supports Ollama Chat with this [PR](brainlid/langchain#70)
zhewang1-intc pushed a commit to zhewang1-intc/ollama that referenced this pull request May 13, 2024
The Elixir LangChain Library now supports Ollama Chat with this [PR](brainlid/langchain#70)
Successfully merging this pull request may close: Request for Community ChatModel: ChatOllama