add ollama caching #297
Comments
Hey Kai, as we discussed over chat, vLLM is typically the go-to for serving concurrent production traffic. Does that work for you, or is Ollama caching still important for you?
Hey Nick, Ollama is just easier because you don't need an account with a token, and you don't have to request access to the corresponding models. From my point of view, this makes it easier to get started. So I still think it would make sense to support caching for Ollama as well.
Sounds good. Thank you.
https://github.com/substratusai/kubeai/blob/main/api/v1/model_types.go#L24
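For discussion, here is a minimal sketch of the Model spec fields this request touches. The field names and JSON tags are illustrative, based on what api/v1/model_types.go appears to define (URL, Engine, CacheProfile); they are not confirmed against the linked line.

```go
// Trimmed-down sketch of the Model spec fields relevant to this request.
// Names and tags are assumptions for illustration, not a verbatim copy of
// api/v1/model_types.go.
package v1

type ModelSpec struct {
	// URL of the model source, e.g. "hf://..." for Hugging Face or
	// "ollama://..." for Ollama.
	URL string `json:"url"`

	// Engine that serves the model, e.g. "VLLM" or "OLlama".
	Engine string `json:"engine"`

	// CacheProfile selects a shared-storage profile used to pre-download
	// model weights. The ask in this issue is for this field to also be
	// honored when the model is pulled via an "ollama://" URL, so an
	// Ollama-served model is not re-downloaded on every pod start.
	CacheProfile string `json:"cacheProfile,omitempty"`
}
```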