CustomLLM cannot be used to build Chains when using LLM or Runnable #15880
🤖 Hello @FaresKi! I'm Dosu, a friendly bot here to assist you with your issues and queries related to LangChain. While we wait for a human maintainer, feel free to ask me anything you'd like to know or need help with. I'm here to make your coding journey smoother!

Thank you for providing detailed information about the issue you're facing. It seems like you're trying to create a custom LLM and use it with the LLMCheckerChain, and the error you're encountering comes from how that chain is being built from your custom LLM. To resolve this issue, you can follow the example below:

```python
from langchain.llms import YourCustomLLM  # replace with your custom LLM
from langchain.chains import LLMCheckerChain
from langchain_core.prompts import PromptTemplate

# Instantiate your custom LLM
llm = YourCustomLLM(temperature=0.7)  # replace with your custom LLM and its parameters

# Define your custom prompts
create_draft_answer_prompt = PromptTemplate.from_template("Your prompt here")
list_assertions_prompt = PromptTemplate.from_template("Your prompt here")
check_assertions_prompt = PromptTemplate.from_template("Your prompt here")
revised_answer_prompt = PromptTemplate.from_template("Your prompt here")

# Instantiate the LLMCheckerChain with your custom LLM and prompts
checker_chain = LLMCheckerChain.from_llm(
    llm,
    create_draft_answer_prompt,
    list_assertions_prompt,
    check_assertions_prompt,
    revised_answer_prompt,
)
```

In the above code, replace YourCustomLLM and its parameters with your own custom LLM, and replace the placeholder prompts with your own. This approach ensures that you are using the correct parameters and methods to build your LLMCheckerChain. You can find more information about LLMCheckerChain in the LangChain documentation.

I hope this helps! If you have any other questions or run into any issues, please let me know.

This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research. Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant. If you want to continue the conversation, start your reply with @dosu-bot.
@dosu-bot Unfortunately, when applying your changes, the same problem occurs. Python complains that the custom LLM isn't a Runnable.

```python
def get_chain(prompt: PromptTemplate, callbacks: Callbacks = []) -> Chain:
    """
    This function initializes and returns an LLMChain with a given prompt and callbacks.

    Args:
        prompt (str): The prompt to initialize the LLMChain with.
        callbacks (Callbacks): LangChain callbacks for the chain.

    Returns:
        Chain: An instance of LLMChain.
    """
    llm = CustomLLM()
    chain = LLMCheckerChain.from_llm(llm, prompt)
    return chain
```

I still get the same error. How would you design the CustomLLM class?
🤖 I see that you're trying to create a custom LLM that extends from the Runnable class. The Runnable interface has several methods that need to be implemented. Here is a basic example of how you can structure your CustomLLM class:

```python
from typing import Any, Iterator, List, Optional, Type, Union

from pydantic import BaseModel
from langchain_core.runnables.base import Runnable, RunnableConfig, ConfigurableFieldSpec
from langchain_core.runnables.utils import Input, Output  # type variables used in the signatures below


class CustomLLM(Runnable[Input, Output]):
    def invoke(self, input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) -> Output:
        # Implement your logic here
        pass

    async def ainvoke(self, input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) -> Output:
        # Implement your logic here
        pass

    def batch(self, inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) -> List[Output]:
        # Implement your logic here
        pass

    async def abatch(self, inputs: List[Input], config: Optional[Union[RunnableConfig, List[RunnableConfig]]] = None, *, return_exceptions: bool = False, **kwargs: Optional[Any]) -> List[Output]:
        # Implement your logic here
        pass

    def stream(self, input: Input, config: Optional[RunnableConfig] = None, **kwargs: Optional[Any]) -> Iterator[Output]:
        # Implement your logic here
        pass

    @property
    def InputType(self) -> Any:
        # Implement your logic here
        pass

    @property
    def OutputType(self) -> Any:
        # Implement your logic here
        pass

    @property
    def config_specs(self) -> List[ConfigurableFieldSpec]:
        # Implement your logic here
        pass

    def get_input_schema(self, config: Optional[RunnableConfig] = None) -> Type[BaseModel]:
        # Implement your logic here
        pass

    def get_output_schema(self, config: Optional[RunnableConfig] = None) -> Type[BaseModel]:
        # Implement your logic here
        pass
```

This is a basic structure. You need to replace the placeholder implementations with your own logic.

For more details, you can refer to the Runnable class in the LangChain codebase.

This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research. Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant. If you want to continue the conversation, start your reply with @dosu-bot.
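As a minimal sketch of what the Runnable contract buys you (assuming langchain_core is installed), anything satisfying this interface composes with LCEL's pipe operator; RunnableLambda is the quickest way to see it, and the lambdas below are purely illustrative:

```python
# Minimal illustration of the Runnable contract, not tied to this issue's
# CustomLLM: RunnableLambda wraps plain functions as Runnables, and the pipe
# operator chains them. Both lambdas here are placeholders.
from langchain_core.runnables import RunnableLambda

to_upper = RunnableLambda(lambda text: text.upper())
add_suffix = RunnableLambda(lambda text: text + "!")

pipeline = to_upper | add_suffix
print(pipeline.invoke("hello"))  # -> "HELLO!"
```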
@FaresKi Did you find any solution to this issue? I also have the same problem. Thank you!
Hey @ssling0817! We ended up taking a different approach than the one provided in the documentation.

```python
from typing import Any, List

import requests
from langchain.callbacks.manager import CallbackManager
from langchain.chat_models.base import SimpleChatModel


class CustomLLM(SimpleChatModel):
    @property
    def _llm_type(self) -> str:
        return "<custom llm name>"

    def _call(
        self,
        messages,
        stop: List[str] | None = None,
        run_manager: CallbackManager | None = None,
        **kwargs: Any,
    ) -> str:
        """Call the API with the given prompt and return the result."""
        content = requests.post()  # handle your content generation process
        return content
```

It's now chainable & invokable. It uses the underlying class used by the OpenAI libs in LangChain.
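A hedged usage sketch of the SimpleChatModel approach above; the prompt, question, and zero-argument CustomLLM() constructor are made up for illustration:

```python
# Hypothetical usage of the CustomLLM sketched above. Because chat models
# implement the Runnable interface, both the pipe operator and .invoke() work.
from langchain_core.prompts import ChatPromptTemplate

llm = CustomLLM()
prompt = ChatPromptTemplate.from_messages([("human", "{question}")])
chain = prompt | llm

response = chain.invoke({"question": "What does LLMCheckerChain check?"})
print(response.content)  # text returned by the custom _call implementation
```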
Checked other resources
Example Code
LLM:
Runnable:
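Roughly, the LLM-based variant follows the documented custom-LLM pattern sketched below; the class body and return value are placeholders, not the reporter's actual snippet:

```python
# Sketch of the documented custom-LLM pattern (subclass LLM, implement
# _llm_type and _call); the body of _call is a placeholder, not the
# reporter's real API call.
from typing import Any, List, Optional

from langchain.callbacks.manager import CallbackManagerForLLMRun
from langchain.llms.base import LLM


class CustomLLM(LLM):
    @property
    def _llm_type(self) -> str:
        return "custom"

    def _call(
        self,
        prompt: str,
        stop: Optional[List[str]] = None,
        run_manager: Optional[CallbackManagerForLLMRun] = None,
        **kwargs: Any,
    ) -> str:
        # Call the external API that fronts the model and return its text.
        return "placeholder response"
```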
Description
Hi!
I'm not exactly sure whether this is a bug or expected behavior.
I'm in a situation where I cannot use the LLM directly, and instead need to use APIs that interact with the LLM itself. I've hence decided to create a CustomLLM using the documentation here to keep leveraging Chain features.

Here are the problems I've been facing:
- When using the LLM class as the base class of my CustomLLM class, I run into the following error:
- When using Runnable instead (cf. the second code snippet in the example), running the new code gives me this:

System Info
langchain==0.0.329
langchain-core==0.1.9
Platform: macOS 13.6.2
Python: 3.11
Related Components