CustomLLM cannot be used to build Chains when using LLM or Runnable #15880

Closed
6 of 15 tasks
FaresKi opened this issue Jan 11, 2024 · 5 comments
Labels
🤖:bug Related to a bug, vulnerability, unexpected error with an existing feature Ɑ: models Related to LLMs or chat model modules

Comments


FaresKi commented Jan 11, 2024

Checked other resources

  • I added a very descriptive title to this issue.
  • I searched the LangChain documentation with the integrated search.
  • I used the GitHub search to find a similar question and didn't find it.

Example Code

  • With LLM:
import os
from typing import Any, List

import requests
from langchain.callbacks.base import Callbacks
from langchain.chains import LLMChain
from langchain.chains.base import Chain
from langchain.prompts import PromptTemplate
from langchain_core.callbacks.manager import CallbackManagerForLLMRun
from langchain_core.language_models.llms import LLM

required_envs = ["API_BASE", "API_KEY", "DEPLOYMENT_NAME"]
for env in required_envs:
    if env not in os.environ:
        raise ValueError(f"Missing required environment variable: {env}")


class CustomLLM(LLM):
    @property
    def _llm_type(self) -> str:
        return "CustomLLM"

    def _call(
        self,
        prompt: str,
        stop: List[str] | None = None,
        run_manager: CallbackManagerForLLMRun | None = None,
        **kwargs: Any
    ) -> str:
        """Call the API with the given prompt and return the result."""
        self._api_endpoint: str = str(os.getenv("API_BASE"))
        self._api_key: str = str(os.getenv("API_KEY"))
        self._deployment_name: str = str(os.getenv("DEPLOYMENT_NAME"))

        result = requests.post(
            f"{self._api_endpoint}/llm/deployments/{self._deployment_name}/chat/completions?api-version=2023-05-15",
            headers={
                "Content-Type": "application/json",
                "api-key": self._api_key,
            },
            json={
                "messages": prompt,
                "temperature": 0,
                "top_p": 0,
                "model": "gpt-4-32k",
            },
        )

        if result.status_code != 200:
            raise RuntimeError(
                f"Failed to call API: {result.status_code} {result.content}"
            )
        else:
            return result.json()["choices"][0]["message"]


def get_chain(prompt: PromptTemplate, callbacks: Callbacks = []) -> Chain:
    """
    This function initializes and returns an LLMChain with a given prompt and callbacks.

    Args:
        prompt (PromptTemplate): The prompt to initialize the LLMChain with.
        callbacks (Callbacks): LangChain callbacks for the chain.

    Returns:
        Chain: An instance of LLMChain.
    """
    llm = CustomLLM()
    chain = LLMChain(llm=llm, prompt=prompt, callbacks=callbacks)
    return chain


if __name__ == "__main__":
    prompt_template = """
    You are an insurance agent. You are provided with instructions, and you must provide an answer.
    Question: {question}
    """
    prompt = PromptTemplate(
        template=prompt_template,
        input_variables=["question"],
    )
    chain = get_chain(prompt)

    result = chain.invoke({"question": "What is the best insurance policy for me?"})
    print(result)
  • With Runnable:
import os
from typing import Any, List, Optional

import requests
from langchain.callbacks.base import Callbacks
from langchain.chains import LLMChain
from langchain.chains.base import Chain
from langchain.prompts import PromptTemplate
from langchain.schema.language_model import LanguageModelInput
from langchain.schema.runnable import Runnable, RunnableConfig
from langchain_core.callbacks.manager import CallbackManagerForLLMRun
from langchain_core.language_models.llms import LLM

required_envs = ["API_BASE", "API_KEY", "DEPLOYMENT_NAME"]
for env in required_envs:
    if env not in os.environ:
        raise ValueError(f"Missing required environment variable: {env}")


class CustomLLM(LLM):
    @property
    def _llm_type(self) -> str:
        return "CustomLLM"

    def invoke(
        self, input: LanguageModelInput, config: RunnableConfig | None = None
    ) -> str:
        return super().invoke(input)

    def _call(
        self,
        prompt: str,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
    ) -> str:
        """Call the API with the given prompt and return the result."""
        self._api_endpoint: str = str(os.getenv("API_BASE"))
        self._api_key: str = str(os.getenv("API_KEY"))
        self._deployment_name: str = str(os.getenv("DEPLOYMENT_NAME"))

        result = requests.post(
            f"{self._api_endpoint}/llm/deployments/{self._deployment_name}/chat/completions?api-version=2023-05-15",
            headers={
                "Content-Type": "application/json",
                "api-key": self._api_key,
            },
            json={
                "messages": prompt,
                "temperature": 0,
                "top_p": 0,
                "model": "gpt-4-32k",
            },
        )

        if result.status_code != 200:
            raise RuntimeError(
                f"Failed to call API: {result.status_code} {result.content}"
            )
        else:
            return result.json()["choices"][0]["message"]


def get_chain(prompt: PromptTemplate, callbacks: Callbacks = []) -> Chain:
    """
    This function initializes and returns an LLMChain with a given prompt and callbacks.

    Args:
        prompt (PromptTemplate): The prompt to initialize the LLMChain with.
        callbacks (Callbacks): LangChain callbacks for the chain.

    Returns:
        Chain: An instance of LLMChain.
    """
    llm = CustomLLM()
    chain = LLMChain(llm=llm, prompt=prompt, callbacks=callbacks)
    return chain


if __name__ == "__main__":
    prompt_template = """
    You are an insurance agent. You are provided with instructions, and you must provide an answer.
    Question: {question}
    """
    prompt = PromptTemplate(
        template=prompt_template,
        input_variables=["question"],
    )
    chain = get_chain(prompt)

    result = chain.invoke({"question": "What is the best insurance policy for me?"})
    print(result)

Description

Hi!
I'm not exactly sure whether this is a bug or expected behavior.

I'm in a situation where I cannot use the LLM directly, and instead need to use APIs that interact with the LLM itself.
I've therefore decided to create a CustomLLM, following the documentation here, so that I can keep leveraging Chain features.
Here are the problems I've been facing:

  • When using the LLM class as the Base class of my CustomLLM class, I run into the following error:
Traceback (most recent call last):
  File "custom_llm.py", line 83, in <module>
    chain = get_chain(prompt)
            ^^^^^^^^^^^^^^^^^
  File "custom_llm.py", line 70, in get_chain
    chain = LLMChain(llm=llm, prompt=prompt, callbacks=callbacks)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File ".venv/lib/python3.11/site-packages/langchain/load/serializable.py", line 97, in __init__
    super().__init__(**kwargs)
  File ".venv/lib/python3.11/site-packages/pydantic/v1/main.py", line 341, in __init__
    raise validation_error
pydantic.v1.error_wrappers.ValidationError: 2 validation errors for LLMChain
llm
  instance of Runnable expected (type=type_error.arbitrary_type; expected_arbitrary_type=Runnable)
llm
  instance of Runnable expected (type=type_error.arbitrary_type; expected_arbitrary_type=Runnable)
  • Following this error, I decided to modify the class so that it extends from Runnable (cf. the second code snippet above), but when running the new code I get the following error (a sketch of a corrected override is included after the traceback below):
Traceback (most recent call last):
  File "utils/custom_llm.py", line 90, in <module>
    result = chain.invoke({"question": "What is the best insurance policy for me?"})
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File ".venv/lib/python3.11/site-packages/langchain/chains/base.py", line 87, in invoke
    return self(
           ^^^^^
  File ".venv/lib/python3.11/site-packages/langchain/chains/base.py", line 310, in __call__
    raise e
  File ".venv/lib/python3.11/site-packages/langchain/chains/base.py", line 304, in __call__
    self._call(inputs, run_manager=run_manager)
  File ".venv/lib/python3.11/site-packages/langchain/chains/llm.py", line 108, in _call
    response = self.generate([inputs], run_manager=run_manager)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File ".venv/lib/python3.11/site-packages/langchain/chains/llm.py", line 127, in generate
    results = self.llm.bind(stop=stop, **self.llm_kwargs).batch(
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File ".venv/lib/python3.11/site-packages/langchain/schema/runnable/base.py", line 2658, in batch
    return self.bound.batch(
           ^^^^^^^^^^^^^^^^^
  File ".venv/lib/python3.11/site-packages/langchain/schema/runnable/base.py", line 321, in batch
    return cast(List[Output], [invoke(inputs[0], configs[0])])
                               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File ".venv/lib/python3.11/site-packages/langchain/schema/runnable/base.py", line 317, in invoke
    return self.invoke(input, config, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: CustomLLM.invoke() got an unexpected keyword argument 'stop'
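
For reference, a minimal sketch of an invoke override that forwards extra keyword arguments (such as the stop argument that LLMChain binds) is shown below. This is only illustrative, not a confirmed fix: the class name CustomLLMSketch and the stubbed _call body are placeholders, and it assumes the class keeps LLM as its base class.

from typing import Any, List, Optional

from langchain.schema.language_model import LanguageModelInput
from langchain.schema.runnable import RunnableConfig
from langchain_core.callbacks.manager import CallbackManagerForLLMRun
from langchain_core.language_models.llms import LLM


class CustomLLMSketch(LLM):
    @property
    def _llm_type(self) -> str:
        return "CustomLLM"

    def _call(
        self,
        prompt: str,
        stop: Optional[List[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> str:
        return "stub response"  # placeholder for the real API call

    # If invoke is overridden at all, it must accept **kwargs so that
    # arguments bound by LLMChain (e.g. stop) are forwarded to the base
    # implementation instead of being rejected with a TypeError.
    def invoke(
        self,
        input: LanguageModelInput,
        config: Optional[RunnableConfig] = None,
        **kwargs: Any,
    ) -> str:
        return super().invoke(input, config, **kwargs)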

System Info

langchain==0.0.329
langchain-core==0.1.9

Platform: MacOS 13.6.2
Python: 3.11

Related Components

  • LLMs/Chat Models
  • Embedding Models
  • Prompts / Prompt Templates / Prompt Selectors
  • Output Parsers
  • Document Loaders
  • Vector Stores / Retrievers
  • Memory
  • Agents / Agent Executors
  • Tools / Toolkits
  • Chains
  • Callbacks/Tracing
  • Async

dosubot bot commented Jan 11, 2024

🤖

Hello @FaresKi! I'm Dosu, a friendly bot here to assist you with your issues and queries related to LangChain. While we wait for a human maintainer, feel free to ask me anything you'd like to know or need help with. I'm here to make your coding journey smoother!

Thank you for providing detailed information about the issue you're facing. It seems like you're trying to create a custom LLM and use it with the LLMChain class.

The error you're encountering is due to the fact that the LLMChain class expects an instance of Runnable as its llm attribute, but your CustomLLM class is not a subclass of Runnable.

To resolve this issue, you can follow the example provided in the LLMCheckerChain class. Here's how you can do it:

from langchain.llms import YourCustomLLM  # replace with your custom LLM
from langchain.chains import LLMCheckerChain
from langchain_core.prompts import PromptTemplate

# Instantiate your custom LLM
llm = YourCustomLLM(temperature=0.7)  # replace with your custom LLM and its parameters

# Define your custom prompts
create_draft_answer_prompt = PromptTemplate.from_template("Your prompt here")
list_assertions_prompt = PromptTemplate.from_template("Your prompt here")
check_assertions_prompt = PromptTemplate.from_template("Your prompt here")
revised_answer_prompt = PromptTemplate.from_template("Your prompt here")

# Instantiate the LLMCheckerChain with your custom LLM and prompts
checker_chain = LLMCheckerChain.from_llm(
    llm,
    create_draft_answer_prompt,
    list_assertions_prompt,
    check_assertions_prompt,
    revised_answer_prompt
)

In the above code, replace YourCustomLLM with your custom LLM class, and replace "Your prompt here" with your custom prompts. The from_llm class method of LLMCheckerChain is used to create an instance of LLMCheckerChain using your custom LLM and prompts. This method internally calls the _load_question_to_checked_assertions_chain function to create a SequentialChain of LLMChain instances using your custom LLM and prompts. This SequentialChain is then used to instantiate the LLMCheckerChain.

This approach ensures that you are using the correct parameters and methods to build your Chains and should prevent any validation errors or unexpected keyword argument errors.
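
A hypothetical call to the chain built above could then look like this (the "query" input key and "result" output key are what LLMCheckerChain is expected to use; please verify them against your installed version):

result = checker_chain.invoke({"query": "What is the best insurance policy for me?"})
print(result["result"])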

You can find more information about the LLMCheckerChain class in the LangChain codebase.

I hope this helps! If you have any other questions or run into any issues, please let me know.


@dosubot dosubot bot added Ɑ: models Related to LLMs or chat model modules 🤖:bug Related to a bug, vulnerability, unexpected error with an existing feature labels Jan 11, 2024

FaresKi commented Jan 11, 2024

@dosu-bot Unfortunately, when applying your changes, the same problem occurs: Python still complains that the custom LLM isn't a Runnable. With the following get_chain:

def get_chain(prompt: PromptTemplate, callbacks: Callbacks = []) -> Chain:
    """
    This function initializes and returns an LLMChain with a given prompt and callbacks.

    Args:
        prompt (str): The prompt to initialize the LLMChain with.
        callbacks (Callbacks): Langchain callbacks fo

    Returns:
        Chain: An instance of LLMChain.
    """
    llm = CustomLLM()
    chain = LLMCheckerChain.from_llm(llm, prompt)
    return chain

I still get the same error.

How would you design the YourCustomLLM class?


dosubot bot commented Jan 11, 2024

🤖

I see that you're trying to create a custom LLM that extends from Runnable. The error you're encountering is due to the fact that your CustomLLM class doesn't fully implement the Runnable interface.

The Runnable class in LangChain is an abstract base class that provides a common interface for its subclasses. This interface includes methods like invoke, ainvoke, batch, abatch, stream, and properties like InputType, OutputType, config_specs, get_input_schema, get_output_schema. These methods and properties need to be implemented in your CustomLLM class.

Here is a basic example of how you can structure your CustomLLM class:

from typing import Any, Iterator, List, Optional, Type, Union

from pydantic import BaseModel
from langchain_core.runnables import Runnable, RunnableConfig
from langchain_core.runnables.utils import ConfigurableFieldSpec, Input, Output

class CustomLLM(Runnable[Input, Output]):
    def invoke(self, input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) -> Output:
        # Implement your logic here
        pass

    async def ainvoke(self, input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) -> Output:
        # Implement your logic here
        pass

    def batch(self, inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) -> List[Output]:
        # Implement your logic here
        pass

    async def abatch(self, inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) -> List[Output]:
        # Implement your logic here
        pass

    def stream(self, input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) -> Iterator[Output]:
        # Implement your logic here
        pass

    @property
    def InputType(self) -> Any:
        # Implement your logic here
        pass

    @property
    def OutputType(self) -> Any:
        # Implement your logic here
        pass

    @property
    def config_specs(self) -> List[ConfigurableFieldSpec]:
        # Implement your logic here
        pass

    def get_input_schema(self, config: Optional[RunnableConfig] = None) -> Type[BaseModel]:
        # Implement your logic here
        pass

    def get_output_schema(self, config: Optional[RunnableConfig] = None) -> Type[BaseModel]:
        # Implement your logic here
        pass

This is a basic structure. You need to replace the # Implement your logic here comments with your actual implementation.

For more details, you can refer to the Runnable class in the LangChain codebase.


@dosubot dosubot bot added the stale Issue has not had recent activity or appears to be solved. Stale issues will be automatically closed label Apr 11, 2024
@dosubot dosubot bot closed this as not planned Won't fix, can't repro, duplicate, stale Apr 18, 2024
@dosubot dosubot bot removed the stale Issue has not had recent activity or appears to be solved. Stale issues will be automatically closed label Apr 18, 2024
@ssling0817

@FaresKi Did you find a solution to this issue? I have the same problem. Thank you!


FaresKi commented Jun 5, 2024

Hey @ssling0817 !

We ended up taking a different approach than the one provided in the documentation.
We've decided to build a custom class inheriting from the SimpleChatModel class:

from typing import Any, List

import requests
from langchain.chat_models.base import SimpleChatModel
from langchain_core.callbacks.manager import CallbackManagerForLLMRun
from langchain_core.messages import BaseMessage


class CustomLLM(SimpleChatModel):
    @property
    def _llm_type(self) -> str:
        return "<custom llm name>"

    def _call(
        self,
        messages: List[BaseMessage],
        stop: List[str] | None = None,
        run_manager: CallbackManagerForLLMRun | None = None,
        **kwargs: Any,
    ) -> str:
        """Call the API with the given messages and return the result."""
        # Placeholder: call your API here and return the generated text.
        content = requests.post()  # handle your content generation process
        return content

It's now chainable and invokable. It uses the same underlying base class that the OpenAI chat model integrations in LangChain are built on.
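
As a quick illustration, here is a minimal usage sketch composing the class above with a prompt; the prompt text and question are placeholders:

from langchain.prompts import ChatPromptTemplate

prompt = ChatPromptTemplate.from_template(
    "You are an insurance agent. Answer the question.\nQuestion: {question}"
)

# SimpleChatModel subclasses are Runnables, so they compose with | directly.
chain = prompt | CustomLLM()
result = chain.invoke({"question": "What is the best insurance policy for me?"})
print(result.content)  # chat models return an AIMessage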
