Fallback LLMs with AsyncCallbackHandler are not consistent with regular LLM #26524
Hey @phenom-xplus! 👋 I'm here to help you with bugs, questions, and even becoming a contributor. Let's tackle this issue together while we wait for a human maintainer.

The difference in the output format of your `on_chain_end` callback comes from how the chat model converts the raw response dict into a message object. Here is the relevant code snippet:

```python
from typing import Any, Mapping

from langchain_core.messages import (
    AIMessage,
    BaseMessage,
    ChatMessage,
    FunctionMessage,
    HumanMessage,
    SystemMessage,
)


def _convert_dict_to_message(_dict: Mapping[str, Any]) -> BaseMessage:
    role = _dict["role"]
    if role == "user":
        return HumanMessage(content=_dict["content"])
    elif role == "assistant":
        # Content may be absent or None; normalize to an empty string.
        content = _dict.get("content", "") or ""
        additional_kwargs = {}
        if _dict.get("function_call"):
            additional_kwargs["function_call"] = dict(_dict["function_call"])
        if _dict.get("tool_calls"):
            additional_kwargs["tool_calls"] = _dict["tool_calls"]
        return AIMessage(content=content, additional_kwargs=additional_kwargs)
    elif role == "system":
        return SystemMessage(content=_dict["content"])
    elif role == "function":
        return FunctionMessage(content=_dict["content"], name=_dict["name"])
    else:
        return ChatMessage(content=_dict["content"], role=role)
```

When the role is `"assistant"`, the function checks for `function_call` and `tool_calls` entries and returns an `AIMessage` carrying them in `additional_kwargs`. Additionally, the `content` field defaults to an empty string when it is missing or `None`. In the context of the fallback LLM, the result is therefore surfaced to your callbacks as an `AIMessage` rather than as a dict keyed by your `output_key`.
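For illustration, a hypothetical call (assuming the imports above) shows how an assistant dict becomes an `AIMessage`:

```python
# Hypothetical usage of the conversion function quoted above.
msg = _convert_dict_to_message({"role": "assistant", "content": "Paris"})
print(type(msg).__name__)  # AIMessage
print(msg.content)         # Paris
```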
Example Code
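A minimal sketch of the setup described in the Description below, assuming an `LLMChain` with a custom `output_key` and an async callback handler; the handler, model names, and prompt are illustrative assumptions, not the author's actual code.

```python
# Hypothetical reconstruction of the reported setup; handler, models,
# prompt, and output_key are assumptions for illustration only.
import asyncio
from typing import Any, Dict
from uuid import UUID

from langchain.callbacks.base import AsyncCallbackHandler
from langchain.chains import LLMChain
from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate


class LoggingHandler(AsyncCallbackHandler):
    async def on_chain_end(
        self, outputs: Dict[str, Any], *, run_id: UUID, **kwargs: Any
    ) -> None:
        # Without fallbacks this receives a dict keyed by output_key;
        # with fallbacks the wrapping runnable also reports its own
        # output, which arrives as an AIMessage.
        print(type(outputs), outputs)


async def main() -> None:
    primary = ChatOpenAI(model="gpt-4", temperature=0)
    fallback = ChatOpenAI(model="gpt-3.5-turbo", temperature=0)
    llm = primary.with_fallbacks([fallback])

    chain = LLMChain(
        llm=llm,
        prompt=PromptTemplate.from_template("{question}"),
        output_key="answer",
    )
    await chain.acall(
        {"question": "What is LangChain?"}, callbacks=[LoggingHandler()]
    )


asyncio.run(main())
```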
Description
I'm trying to use the LLM `with_fallbacks` method to handle any unexpected failures in my chain.

One issue that I've been seeing is that when I pass my callback handler without the fallbacks, the output I receive in my `on_chain_end` method is a dict with my `output_key`. However, when I use the fallback LLM, the output is an `AIMessage`.

Is this an issue that was fixed in a later version of langchain? My current version of the langchain stack is listed under System Info below.
Please help! 🙏
System Info
```
langchain==0.0.351
langchain-community==0.0.6
langchain-core==0.1.3
langchain-experimental==0.0.42
```