
AgentExecutor cannot be used #26760

Open
5 tasks done
lm970585581 opened this issue Sep 23, 2024 · 1 comment
Labels
🤖:bug (Related to a bug, vulnerability, unexpected error with an existing feature) · Ɑ: core (Related to langchain-core) · investigate

Comments

lm970585581 commented Sep 23, 2024

Checked other resources

  • I added a very descriptive title to this issue.
  • I searched the LangChain documentation with the integrated search.
  • I used the GitHub search to find a similar question and didn't find it.
  • I am sure that this is a bug in LangChain rather than my code.
  • The bug is not resolved by updating to the latest stable version of LangChain (or the specific integration package).

Example Code

The following code is copied from the example provided by LangChain to reproduce this problem.
ChatOpenAI points at a locally deployed Xinference server.

    from langchain import hub
    from langchain.agents import AgentExecutor, create_tool_calling_agent
    from langchain_openai import ChatOpenAI

    tools = [add, multiply]  # @tool-decorated functions (defined elsewhere)
    llm = ChatOpenAI(model=LLM_MODELS, openai_api_key="111",
                     openai_api_base="http://192.168.0.84:9997/v1")
    query = "What is 3 * 12?"
    prompt = hub.pull("hwchase17/openai-functions-agent")
    agent = create_tool_calling_agent(llm, tools, prompt)
    agent_executor = AgentExecutor(agent=agent, tools=tools)
    print(agent_executor.invoke({"input": query}))

Error Message and Stack Trace (if applicable)

Description

Executing the code provided above results in the following error:

Traceback (most recent call last):
File "D:\workSpace\algorithm-platform\project\gdsf\llm\tests\test1.py", line 47, in
print(agent_executor.invoke({"input": query}))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\workSpace\algorithm-platform\project\gdsf\llm\venv_test\Lib\site-packages\langchain\chains\base.py", line 170, in invoke
raise e
File "D:\workSpace\algorithm-platform\project\gdsf\llm\venv_test\Lib\site-packages\langchain\chains\base.py", line 160, in invoke
self._call(inputs, run_manager=run_manager)
File "D:\workSpace\algorithm-platform\project\gdsf\llm\venv_test\Lib\site-packages\langchain\agents\agent.py", line 1629, in _call
next_step_output = self._take_next_step(
^^^^^^^^^^^^^^^^^^^^^
File "D:\workSpace\algorithm-platform\project\gdsf\llm\venv_test\Lib\site-packages\langchain\agents\agent.py", line 1335, in _take_next_step
[
File "D:\workSpace\algorithm-platform\project\gdsf\llm\venv_test\Lib\site-packages\langchain\agents\agent.py", line 1335, in
[
File "D:\workSpace\algorithm-platform\project\gdsf\llm\venv_test\Lib\site-packages\langchain\agents\agent.py", line 1363, in _iter_next_step
output = self._action_agent.plan(
^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\workSpace\algorithm-platform\project\gdsf\llm\venv_test\Lib\site-packages\langchain\agents\agent.py", line 580, in plan
for chunk in self.runnable.stream(inputs, config={"callbacks": callbacks}):
File "D:\workSpace\algorithm-platform\project\gdsf\llm\venv_test\Lib\site-packages\langchain_core\runnables\base.py", line 3405, in stream
yield from self.transform(iter([input]), config, **kwargs)
File "D:\workSpace\algorithm-platform\project\gdsf\llm\venv_test\Lib\site-packages\langchain_core\runnables\base.py", line 3392, in transform
yield from self._transform_stream_with_config(
File "D:\workSpace\algorithm-platform\project\gdsf\llm\venv_test\Lib\site-packages\langchain_core\runnables\base.py", line 2193, in _transform_stream_with_config
chunk: Output = context.run(next, iterator) # type: ignore
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\workSpace\algorithm-platform\project\gdsf\llm\venv_test\Lib\site-packages\langchain_core\runnables\base.py", line 3355, in _transform
yield from final_pipeline
File "D:\workSpace\algorithm-platform\project\gdsf\llm\venv_test\Lib\site-packages\langchain_core\runnables\base.py", line 1409, in transform
for ichunk in input:
File "D:\workSpace\algorithm-platform\project\gdsf\llm\venv_test\Lib\site-packages\langchain_core\runnables\base.py", line 5550, in transform
yield from self.bound.transform(
File "D:\workSpace\algorithm-platform\project\gdsf\llm\venv_test\Lib\site-packages\langchain_core\runnables\base.py", line 1427, in transform
yield from self.stream(final, config, **kwargs)
File "D:\workSpace\algorithm-platform\project\gdsf\llm\venv_test\Lib\site-packages\langchain_core\language_models\chat_models.py", line 418, in stream
raise e
File "D:\workSpace\algorithm-platform\project\gdsf\llm\venv_test\Lib\site-packages\langchain_core\language_models\chat_models.py", line 398, in stream
for chunk in self._stream(messages, stop=stop, **kwargs):
File "D:\workSpace\algorithm-platform\project\gdsf\llm\venv_test\Lib\site-packages\langchain_openai\chat_models\base.py", line 626, in _stream
chunk = chunk.model_dump()
^^^^^^^^^^^^^^^^^^
File "D:\workSpace\algorithm-platform\project\gdsf\llm\venv_test\Lib\site-packages\pydantic\main.py", line 390, in model_dump
return self.pydantic_serializer.to_python(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: 'MockValSer' object cannot be converted to 'SchemaSerializer'

Setting the environment variable DEFER_PYDANTIC_BUILD=0 resolved this first error.
However, when I executed the code again, a new error occurred that I don't know how to solve. I suspect there is a compatibility issue between LangChain's OpenAI interface and the server. The error message is as follows:

D:\workSpace\algorithm-platform\project\gdsf\llm\venv_test\Lib\site-packages\pydantic\main.py:390: UserWarning: Pydantic serializer warnings:
Expected ChoiceDeltaToolCall but got list with value [ChoiceDeltaToolCall(inde...ply'), type='function')] - serialized value may not be as expected
return self.pydantic_serializer.to_python(
Traceback (most recent call last):
File "D:\workSpace\algorithm-platform\project\gdsf\llm\tests\test1.py", line 47, in
print(agent_executor.invoke({"input": query}))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\workSpace\algorithm-platform\project\gdsf\llm\venv_test\Lib\site-packages\langchain\chains\base.py", line 170, in invoke
raise e
File "D:\workSpace\algorithm-platform\project\gdsf\llm\venv_test\Lib\site-packages\langchain\chains\base.py", line 160, in invoke
self._call(inputs, run_manager=run_manager)
File "D:\workSpace\algorithm-platform\project\gdsf\llm\venv_test\Lib\site-packages\langchain\agents\agent.py", line 1629, in _call
next_step_output = self._take_next_step(
^^^^^^^^^^^^^^^^^^^^^
File "D:\workSpace\algorithm-platform\project\gdsf\llm\venv_test\Lib\site-packages\langchain\agents\agent.py", line 1335, in _take_next_step
[
File "D:\workSpace\algorithm-platform\project\gdsf\llm\venv_test\Lib\site-packages\langchain\agents\agent.py", line 1335, in
[
File "D:\workSpace\algorithm-platform\project\gdsf\llm\venv_test\Lib\site-packages\langchain\agents\agent.py", line 1363, in _iter_next_step
output = self._action_agent.plan(
^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\workSpace\algorithm-platform\project\gdsf\llm\venv_test\Lib\site-packages\langchain\agents\agent.py", line 580, in plan
for chunk in self.runnable.stream(inputs, config={"callbacks": callbacks}):
File "D:\workSpace\algorithm-platform\project\gdsf\llm\venv_test\Lib\site-packages\langchain_core\runnables\base.py", line 3405, in stream
yield from self.transform(iter([input]), config, **kwargs)
File "D:\workSpace\algorithm-platform\project\gdsf\llm\venv_test\Lib\site-packages\langchain_core\runnables\base.py", line 3392, in transform
yield from self._transform_stream_with_config(
File "D:\workSpace\algorithm-platform\project\gdsf\llm\venv_test\Lib\site-packages\langchain_core\runnables\base.py", line 2193, in _transform_stream_with_config
chunk: Output = context.run(next, iterator) # type: ignore
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\workSpace\algorithm-platform\project\gdsf\llm\venv_test\Lib\site-packages\langchain_core\runnables\base.py", line 3355, in _transform
yield from final_pipeline
File "D:\workSpace\algorithm-platform\project\gdsf\llm\venv_test\Lib\site-packages\langchain_core\runnables\base.py", line 1409, in transform
for ichunk in input:
File "D:\workSpace\algorithm-platform\project\gdsf\llm\venv_test\Lib\site-packages\langchain_core\runnables\base.py", line 5550, in transform
yield from self.bound.transform(
File "D:\workSpace\algorithm-platform\project\gdsf\llm\venv_test\Lib\site-packages\langchain_core\runnables\base.py", line 1427, in transform
yield from self.stream(final, config, **kwargs)
File "D:\workSpace\algorithm-platform\project\gdsf\llm\venv_test\Lib\site-packages\langchain_core\language_models\chat_models.py", line 418, in stream
raise e
File "D:\workSpace\algorithm-platform\project\gdsf\llm\venv_test\Lib\site-packages\langchain_core\language_models\chat_models.py", line 398, in stream
for chunk in self._stream(messages, stop=stop, **kwargs):
File "D:\workSpace\algorithm-platform\project\gdsf\llm\venv_test\Lib\site-packages\langchain_openai\chat_models\base.py", line 627, in _stream
generation_chunk = _convert_chunk_to_generation_chunk(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\workSpace\algorithm-platform\project\gdsf\llm\venv_test\Lib\site-packages\langchain_openai\chat_models\base.py", line 309, in _convert_chunk_to_generation_chunk
message_chunk = _convert_delta_to_message_chunk(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\workSpace\algorithm-platform\project\gdsf\llm\venv_test\Lib\site-packages\langchain_openai\chat_models\base.py", line 248, in _convert_delta_to_message_chunk
tool_call_chunks = [
^
File "D:\workSpace\algorithm-platform\project\gdsf\llm\venv_test\Lib\site-packages\langchain_openai\chat_models\base.py", line 250, in
name=rtc["function"].get("name"),
~~~^^^^^^^^^^^^
TypeError: list indices must be integers or slices, not str
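The final TypeError indicates that each entry in `raw_tool_calls` is a list rather than a dict, so the string index `"function"` fails. A minimal sketch of that failure mode and a client-side normalization, assuming (hypothetically) that the server's streamed delta nests each tool-call entry in an extra list:

```python
# Hypothetical reproduction: the streamed delta's tool_calls deserializes to
# a list of lists instead of a list of dicts (note the extra nesting).
raw_tool_calls = [[{
    "index": 0,
    "id": "call_1",
    "type": "function",
    "function": {"name": "multiply", "arguments": '{"a": 3, "b": 12}'},
}]]

# langchain_openai indexes each entry with a string key; when the entry is
# itself a list, Python raises the TypeError seen in the traceback above.
try:
    [rtc["function"].get("name") for rtc in raw_tool_calls]
except TypeError as exc:
    print(exc)  # list indices must be integers or slices, not str

# Flattening one accidental level of nesting restores the expected shape.
flattened = [
    item
    for rtc in raw_tool_calls
    for item in (rtc if isinstance(rtc, list) else [rtc])
]
print([rtc["function"]["name"] for rtc in flattened])  # ['multiply']
```

If this nesting is confirmed in the server's streamed output, the real fix belongs in the server's OpenAI-compatible API or in langchain-openai's delta parsing; the flattening above only illustrates the shape mismatch.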

System Info

System Information

OS: Windows
OS Version: 10.0.19045
Python Version: 3.11.7 | packaged by Anaconda, Inc. | (main, Dec 15 2023, 18:05:47) [MSC v.1916 64 bit (AMD64)]

Package Information

langchain_core: 0.3.5
langchain: 0.3.0
langsmith: 0.1.125
langchain_openai: 0.2.0
langchain_text_splitters: 0.3.0
langgraph: 0.2.23

Optional packages not installed

langserve

Other Dependencies

aiohttp: 3.10.5
async-timeout: Installed. No version info available.
httpx: 0.27.2
jsonpatch: 1.33
langgraph-checkpoint: 1.0.10
numpy: 1.26.4
openai: 1.47.0
orjson: 3.10.7
packaging: 24.1
pydantic: 2.9.2
PyYAML: 6.0.2
requests: 2.32.3
SQLAlchemy: 2.0.35
tenacity: 8.5.0
tiktoken: 0.7.0
typing-extensions: 4.12.2

@langcarl langcarl bot added the investigate label Sep 23, 2024
@dosubot dosubot bot added the Ɑ: core and 🤖:bug labels Sep 23, 2024
lm970585581 (Author) commented Sep 23, 2024

I am using a locally deployed Xinference instance, and the model I am running is Qwen1.5.
