LangChain support for Orchestration Client #176

Open
kay-schmitteckert opened this issue Sep 25, 2024 · 0 comments
Labels
feature request New feature or request

Comments


kay-schmitteckert commented Sep 25, 2024

Describe the Problem

The Orchestration Client already simplifies developing and kickstarting GenAI projects and communicating with foundation models. LangChain support is currently available only for the OpenAI client; instead of writing a separate LangChain wrapper for each vendor, the idea is to provide a single LangChain wrapper for the Orchestration Client, which would cover all supported foundation models at once.

Propose a Solution

A LangChain wrapper for the Orchestration Client, e.g.:

import { LLM, type BaseLLMParams } from "@langchain/core/language_models/llms";
import type { CallbackManagerForLLMRun } from "@langchain/core/callbacks/manager";

import { OrchestrationClient, OrchestrationModuleConfig, ChatMessages } from "@sap-ai-sdk/orchestration";

export interface CustomLLMInput extends BaseLLMParams {
    deploymentId: string;
    resourceGroup?: string;
    modelName: string;
    modelParams?: Record<string, unknown>;
    modelVersion?: string;
}

export class GenerativeAIHubCompletion extends LLM {
    deploymentId: string;
    resourceGroup: string;
    modelName: string;
    modelParams: Record<string, unknown>;
    modelVersion: string;

    constructor(fields: CustomLLMInput) {
        super(fields);
        this.deploymentId = fields.deploymentId;
        this.resourceGroup = fields.resourceGroup || "default";
        this.modelName = fields.modelName;
        this.modelParams = fields.modelParams || {};
        this.modelVersion = fields.modelVersion || "latest";
    }

    _llmType() {
        return "Generative AI Hub - Orchestration Service";
    }

    async _call(
        prompt: string,
        options: this["ParsedCallOptions"],
        runManager?: CallbackManagerForLLMRun
    ): Promise<string> {
       
        // Configuration & Prompt
        const llmConfig = {
            model_name: this.modelName,
            model_params: this.modelParams,
            model_version: this.modelVersion
        };

        // The {{?prompt}} placeholder is filled via inputParams below.
        const template: ChatMessages = [{ role: "user", content: "{{?prompt}}" }];
        const config: OrchestrationModuleConfig = {
            templating: { template },
            llm: llmConfig
        };

        // Orchestration Client
        const orchestrationClient = new OrchestrationClient(config, {
            resourceGroup: this.resourceGroup
            //deploymentId: this.deploymentId
        });

        // Call the orchestration service.
        const response = await orchestrationClient.chatCompletion({
            inputParams: { prompt }
        });
        // Access the response content (fall back to an empty string if none).
        return response.getContent() ?? "";
    }
}
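For context on the `{{?prompt}}` placeholder used above: the orchestration templating module substitutes `{{?param}}` placeholders in the template with the values passed via `inputParams` at call time. The substitution behavior can be sketched locally as follows (an illustrative re-implementation for clarity, not the SDK's actual code; `renderTemplate` is a hypothetical helper):

```typescript
// Illustrative sketch of {{?param}} placeholder substitution as performed
// by the orchestration templating module (NOT the SDK's real implementation).
function renderTemplate(
    template: string,
    inputParams: Record<string, string>
): string {
    // Replace each {{?name}} with inputParams[name]; leave unknown
    // placeholders untouched.
    return template.replace(/\{\{\?(\w+)\}\}/g, (match, name) =>
        name in inputParams ? inputParams[name] : match
    );
}

console.log(renderTemplate("{{?prompt}}", { prompt: "Hello!" })); // "Hello!"
```

This is why `_call` above can pass the raw LangChain prompt through unchanged: the template stays static and the prompt arrives via `inputParams: { prompt }`.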

Describe Alternatives

No response

Affected Development Phase

Getting Started

Impact

Inconvenience

Timeline

No response

Additional Context

No response

@ZhongpinWang ZhongpinWang added the feature request New feature or request label Sep 25, 2024