
# Ollama4j


A Java library (wrapper/binding) for the Ollama server.

Find more details on the website.



## How does it work?

```mermaid
flowchart LR
    o4j[Ollama4j]
    o[Ollama Server]
    o4j -->|Communicates with| o;
    m[Models]
    subgraph Ollama Deployment
        direction TB
        o -->|Manages| m
    end
```
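
In code, that interaction looks roughly like the sketch below: a client object pointed at a running Ollama server, used here to check connectivity and list the models the server manages. This is a minimal sketch based on the documented `OllamaAPI` entry point; package and method names may differ between versions.

```java
import io.github.amithkoujalgi.ollama4j.core.OllamaAPI;

public class QuickCheck {
    public static void main(String[] args) throws Exception {
        // Point the client at a locally running Ollama server (default port 11434).
        OllamaAPI ollamaAPI = new OllamaAPI("http://localhost:11434");

        // Check that the server is reachable.
        System.out.println("Ollama server reachable: " + ollamaAPI.ping());

        // Print the models managed by the server.
        ollamaAPI.listModels().forEach(System.out::println);
    }
}
```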

## Requirements

- Java
- Ollama (installed locally, or running as a Docker container)

## Installation

In your Maven project, add this dependency:

```xml
<dependency>
    <groupId>io.github.amithkoujalgi</groupId>
    <artifactId>ollama4j</artifactId>
    <version>1.0.70</version>
</dependency>
```

or

In your Gradle project, add the dependency using either the Kotlin DSL or the Groovy DSL:

Kotlin DSL:

```kotlin
dependencies {

    val ollama4jVersion = "1.0.70"

    implementation("io.github.amithkoujalgi:ollama4j:$ollama4jVersion")
}
```

Groovy DSL:

```groovy
dependencies {
    implementation("io.github.amithkoujalgi:ollama4j:1.0.70")
}
```

Find the latest release on Maven Central.
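
With the dependency on the classpath, a first prompt round-trip looks roughly like the sketch below. It assumes the `ask` API from the 1.0.x line, a server at the default address, and a `llama2` model that has already been pulled; treat the package paths and method names as illustrative rather than authoritative.

```java
import io.github.amithkoujalgi.ollama4j.core.OllamaAPI;
import io.github.amithkoujalgi.ollama4j.core.models.OllamaResult;

public class FirstPrompt {
    public static void main(String[] args) throws Exception {
        OllamaAPI ollamaAPI = new OllamaAPI("http://localhost:11434");

        // Model responses can be slow on CPU-only machines, so allow a generous timeout.
        ollamaAPI.setRequestTimeoutSeconds(120);

        // Ask a model that has already been pulled on the Ollama server.
        OllamaResult result = ollamaAPI.ask("llama2", "Why is the sky blue?");
        System.out.println(result.getResponse());
    }
}
```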

## API Spec

Find the full API specifications on the website.

## Development

Build:

```shell
make build
```

Run unit tests:

```shell
make ut
```

Run integration tests:

```shell
make it
```

## Releases

Releases (newer artifact versions) are published automatically whenever code is pushed to the main branch, via the GitHub Actions CI workflow.

## Who's using Ollama4j?

## Traction

[Star History Chart]

## Areas of improvement

- Use Java naming conventions for attributes in the request/response models instead of snake_case (possibly with Jackson's `@JsonProperty`; see the sketch after this list)
- Fix deprecated HTTP client code
- Set up logging
- Use Lombok
- Update request body creation with Java objects
- Async APIs for images
- Add custom headers to requests
- Add additional params for the ask APIs, such as:
  - `options`: additional model parameters for the Modelfile, such as temperature (see the supported params)
  - `system`: system prompt (overrides what is defined in the Modelfile)
  - `template`: the full prompt or prompt template (overrides what is defined in the Modelfile)
  - `context`: the context parameter returned from a previous request, which can be used to keep a short conversational memory
  - `stream`: add support for streaming responses from the model
- Add test cases
- Handle exceptions better (maybe throw more appropriate exceptions)
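
As a sketch of the first item above: Jackson's `@JsonProperty` lets a response model keep camelCase attributes while still mapping the snake_case keys returned by the Ollama REST API. The class and field names below are hypothetical, chosen only for illustration.

```java
import com.fasterxml.jackson.annotation.JsonProperty;
import com.fasterxml.jackson.databind.ObjectMapper;

// Hypothetical response model: Java-style camelCase fields mapped to snake_case JSON keys.
public class ModelDetail {

    @JsonProperty("model_format")
    private String modelFormat;

    @JsonProperty("parameter_size")
    private String parameterSize;

    public String getModelFormat() { return modelFormat; }
    public String getParameterSize() { return parameterSize; }

    public static void main(String[] args) throws Exception {
        String json = "{\"model_format\":\"gguf\",\"parameter_size\":\"7B\"}";
        ModelDetail detail = new ObjectMapper().readValue(json, ModelDetail.class);
        System.out.println(detail.getModelFormat() + " / " + detail.getParameterSize());
    }
}
```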

## Get Involved

Contributions are most welcome! Whether it's reporting a bug, proposing an enhancement, or helping with code, any sort of contribution is much appreciated.

## Credits

The nomenclature and the icon have been adopted from the incredible Ollama project.
