Langdroid | Summaries for Kotlin and Android

This Kotlin Multiplatform library is motivated by 🦜 LangChain.

The main idea is to use a general LangDroidModel which implements the functionality of the selected LLM (Large Language Model; currently OpenAI and Gemini are available) under the hood.

Example of a summary of the Wikipedia article about Immanuel Kant, written by OpenAI GPT-3.5:

[Sample app GIF]

The sample app contains an example of the Langdroid Summary implementation.

⚙️ Setup

Install the Langdroid :summary module by adding the following dependency to your build.gradle file:

repositories {
    mavenCentral()
    maven { url 'https://jitpack.io' }
}

dependencies {
    implementation "com.github.DimaBrody.LangDroid:summary:0.3.0"
}

If you need only text generation and token calculation functionality, you can use the :core module instead of :summary:

dependencies {
    // The :summary module already includes :core and exposes its functionality by default
    implementation "com.github.DimaBrody.LangDroid:core:0.3.0"
}

Multiplatform

The :core and :summary implementations are available for Kotlin Multiplatform (Gemini is not implemented for the JVM target).
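
In a Kotlin Multiplatform project, the dependency would typically be declared in the commonMain source set. A minimal sketch, assuming the same coordinates as above:

kotlin {
    sourceSets {
        commonMain {
            dependencies {
                implementation "com.github.DimaBrody.LangDroid:summary:0.3.0"
            }
        }
    }
}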

💻 Getting Started

Note

It's best practice to use environment variables for storing API keys. For guidance, refer to this Kotlin tutorial on reading environment variables and learn how to implement the Google Secrets Gradle plugin for Android.
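
For JVM targets, for example, the key can be read from an environment variable. A minimal sketch (the variable name OPENAI_API_KEY is just an example):

// Read the API key from the environment instead of hardcoding it
val openAiKey: String = System.getenv("OPENAI_API_KEY")
    ?: error("OPENAI_API_KEY environment variable is not set")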

1. Langdroid Model

First, create a LangDroidModel<*>, which requires an API key and provides text completion functionality:

// This variable is set with the Google Secrets Gradle plugin
val openAiKey = BuildConfig.OPENAI_API_KEY

val model = LangDroidModel(
    OpenAiModel.Gpt3_5Plus(openAiKey)
)

// Google Gemini models are also available: 
// GeminiModel.Pro(geminiApiKey) 

You can customize the GenerativeConfig for the selected model (more about LLM configuration):

val model = LangDroidModel(
    OpenAiModel.Gpt3_5Plus(openAiKey),
    GenerativeConfig.create {
        temperature = 0.2f
        topP = 0.8f
        maxOutputTokens = 1024
    }
)
Functionality currently available on the Langdroid model:
  • generateText(String | List<ChatPrompt>) : Result<String>
  • generateTextStream(String | List<ChatPrompt>) : Result<Flow<String>> - flow of chat outputs
  • calculateTokens(List<ChatPrompt>) : Result<Int>
  • sanityCheck() : Boolean - returns true if the API key is valid and there are no problems with the model
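
For example, a quick sanity check followed by a one-off completion could look like this (a minimal sketch, assuming the functions above are suspending):

suspend fun quickCompletion(model: LangDroidModel<*>) {
    // Verify the API key and model availability before doing real work
    if (!model.sanityCheck()) return

    // generateText returns Result<String>, so errors can be handled explicitly
    model.generateText("Explain Kotlin coroutines in one sentence.")
        .onSuccess { text -> println(text) }
        .onFailure { t -> println("Generation failed: ${t.message}") }
}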

2. Summary chain

Create a chain that consumes text and emits states describing the summarization progress.

How it works

summary prompt = input prompt + input text to summarize + expected maxOutputTokens.

  • If the summary prompt is larger than the context window available for the selected model, the text is split into smaller chunks that fit the context window, and each chunk is summarized. The summarized chunks are then map-reduced to create the final summary (LangChain Map Reduce); see the sketch after this list.
  • If the summary prompt is small enough to fit the context window, it is summarized directly (LangChain Stuff Chain).
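
Conceptually, the strategy choice looks like this (illustrative pseudocode only, not the library's actual internals; countTokens, splitToFitContext and stuffSummarize are hypothetical helpers):

// Illustrative sketch of the strategy selection described above
suspend fun summarize(text: String): String {
    val promptTokens = countTokens(inputPrompt + text) + maxOutputTokens
    return if (promptTokens <= contextWindow) {
        // Stuff Chain: everything fits into the context window
        stuffSummarize(text)
    } else {
        // Map Reduce: summarize each chunk, then summarize the summaries
        val chunks = splitToFitContext(text)
        val partialSummaries = chunks.map { chunk -> stuffSummarize(chunk) }
        stuffSummarize(partialSummaries.joinToString("\n"))
    }
}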

Implementation of the summary chain:

val summaryChain = SummaryChain(model)

// You can invoke the chain and get a flow of states:
val summaryFlow = summaryChain.invokeAndGetFlow(text)

// Suspends until the summary task is completed
summaryFlow.collectUntilFinished { state ->
    when(state){
        is SummaryState.Idle -> { /* Nothing happens */ }
        is SummaryState.TextSplitting -> { /* When text is too large and being split */ }
        is SummaryState.Reduce -> { /* Reducing text by summarizing chunks of it; `state.processedChunks, state.allChunks`*/ }
        is SummaryState.Summarizing -> { /* Can be returned when content is being summarized and isStream = false */ }
        is SummaryState.Output -> { /* `state.text`;
            isStream = true: returns pieces of outputs like ... Output("Hel"), Output("lo, how"), Output(" are you?");
            isStream = false: returns the whole text Output("...")
        */ }
        is SummaryState.Success -> { /* Summary has finished successfully */ }
        is SummaryState.Failure -> { /* Summary has failed; `state.t as Throwable` */ }
    }
}
This call does two things:
  • Connects to the chain's state producer
  • Invokes the chain and passes the text

There are also other ways to connect to and invoke the summary chain:
// The chain can be invoked and observed directly:
summaryChain.invokeAndObserve(text) { state ->
    ...
}

// Or you can separate invocation and state consumption (pay attention not to create 2+ observers).
// Create LiveData if you are using Android:
val liveData = summaryChain.liveData()
liveData.observe { state ->
    ...
}
// Or access the chain's flow directly:
summaryChain.processingState.collect { state ->
    ...
}
// (!) But don't forget to call the suspending summaryChain invoke() to start the process:
summaryChain(text)
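
When separating invocation from collection, both typically run in their own coroutines. A minimal sketch, where `scope` stands for any CoroutineScope you own (e.g. viewModelScope on Android):

// Collect states in one coroutine...
scope.launch {
    summaryChain.processingState.collect { state ->
        // handle state
    }
}
// ...and start the chain in another; invoke() suspends until the summary completes
scope.launch {
    summaryChain(text)
}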

3. (Optional) Set your own prompts and other chain settings

Custom prompts must contain a {text} placeholder marking where the input text will be inserted.

// IMPORTANT! Use {text} in your prompts to mark where the input text will be inserted during processing
private const val WIKIPEDIA_FINAL_PROMPT = """
Write a very detailed summary of the Wikipedia page given in the following text, delimited by triple backquotes.
Return your response as bullet points covering the most important key points of the text, sequentially and coherently.
```{text}```
BULLET POINT SUMMARY:
"""

// Default prompts are used if null or nothing is passed
val promptsAndMessage = PromptsAndMessage(
    // The system message is prepended before all other messages and is always taken into account by the LLM
    systemMessage = "You are the Wikipedia oracle",
    // Prompt for final chunk mapping and overall summarization
    finalPrompt = WIKIPEDIA_FINAL_PROMPT,
    // chunkPrompt is omitted here, so the default will be used
)

val summaryChain = SummaryChain(
    model = model,
    // You can get the output as a stream or as a final text result
    isStream = false,
    promptsAndMessage = promptsAndMessage
)

🛠️ Library Development

Initially these were two modules for my Science App, which summarizes arXiv scientific papers. As the library worked well for me, I decided to publish it on GitHub in case someone needs the same functionality, even though there is a lot left to develop to cover the majority of things an LLM can do for the user, especially compared to 🦜 LangChain or LlamaIndex.

There is a lot of functionality to develop; the priorities are:

  • Allow using different models for chunk summaries and the final summary
  • Extend the settings to configure chunk size and splitter type
  • Add the Anthropic Claude model
  • Add HuggingFace API and custom model setup
  • Improve logging and the clarity of error exceptions
  • Extend support to iOS

If this library turns out to be useful to developers, I will find time to implement the functionality above and fix issues as they emerge. You can star/fork the repo to show your interest and message me on Telegram at @Dima_Brody to suggest your ideas.
