Checked other resources

- I added a very descriptive title to this question.
- I searched the LangChain documentation with the integrated search.
- I used the GitHub search to find a similar question and didn't find it.

Commit to Help

- I commit to help with one of those options 👆
Example Code
```python
from langchain import hub
from langchain.agents import AgentExecutor, create_react_agent
from langchain_community.tools.tavily_search import TavilySearchResults
from clients.llm_client import LLMClient
import os
import time
import asyncio

os.environ["TAVILY_API_KEY"] = "YOUR_API_KEY"

tools = [TavilySearchResults(max_results=1)]
llm = LLMClient().get_genai_chat_v2()
prompt = hub.pull("hwchase17/react")
agent = create_react_agent(llm, tools, prompt)
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

user_input = "what is the height of the eiffel tower?"

def call_agent_sync():
    start = time.time()
    response = agent_executor.invoke({"input": user_input})
    end = time.time()
    print(f"Sync call took {end - start} seconds")

async def call_agent_async():
    start = time.time()
    response = await agent_executor.ainvoke({"input": user_input})
    end = time.time()
    print(f"Async call took {end - start} seconds")

# call_agent_sync()
asyncio.run(call_agent_async())
```
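Single-shot timings like the ones above are noisy: network latency, rate limiting, and cold connections can easily dominate one run. A small helper that averages several runs gives a more trustworthy comparison. This is a generic sketch, not a LangChain API; `bench_sync` and `bench_async` are illustrative names.

```python
import asyncio
import time
from typing import Awaitable, Callable

def bench_sync(fn: Callable[[], object], runs: int = 3) -> float:
    # Average wall-clock time of a blocking callable over several runs.
    times = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        times.append(time.perf_counter() - start)
    return sum(times) / runs

def bench_async(fn: Callable[[], Awaitable[object]], runs: int = 3) -> float:
    # Average wall-clock time of a coroutine factory over several runs,
    # reusing one event loop so asyncio.run startup cost is paid once.
    async def _run() -> list[float]:
        times = []
        for _ in range(runs):
            start = time.perf_counter()
            await fn()
            times.append(time.perf_counter() - start)
        return times
    return sum(asyncio.run(_run())) / runs
```

With these, the comparison would be `bench_sync(lambda: agent_executor.invoke({"input": user_input}))` versus `bench_async(lambda: agent_executor.ainvoke({"input": user_input}))`, ideally after one warm-up call each.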
Description
The sync call is taking about 12s while the async call is taking a whopping 52s. I understand that async implementations have an overhead but is the overhead really that huge?
Note: we are using `llm_client`, which is a wrapper around the OpenAI client.
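One way to narrow this down is to check whether the event loop itself adds meaningful overhead, or whether the extra 40 s comes from the async path of the underlying client. The sketch below replaces the LLM call with a simulated fixed latency; if the sync and async timings match here (as they should, to within milliseconds), the slowdown is in the wrapper's `ainvoke` path, not in asyncio. All names here are illustrative stand-ins, not the actual client.

```python
import asyncio
import time

def sync_call(delay: float) -> str:
    # Stand-in for a blocking LLM call with a fixed latency.
    time.sleep(delay)
    return "ok"

async def async_call(delay: float) -> str:
    # Stand-in for a non-blocking LLM call with the same latency.
    await asyncio.sleep(delay)
    return "ok"

def time_sync(delay: float) -> float:
    start = time.perf_counter()
    sync_call(delay)
    return time.perf_counter() - start

def time_async(delay: float) -> float:
    start = time.perf_counter()
    asyncio.run(async_call(delay))
    return time.perf_counter() - start

if __name__ == "__main__":
    d = 0.1
    print(f"sync:  {time_sync(d):.3f}s")
    print(f"async: {time_async(d):.3f}s")
```

If the simulated timings are close but the real agent's are not, the next place to look is whether the custom wrapper actually implements the async methods natively, or whether its async path is doing something expensive per call.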
System Info
System Information
OS: Darwin
OS Version: Darwin Kernel Version 23.6.0: Wed Jul 31 20:48:44 PDT 2024; root:xnu-10063.141.1.700.5~1/RELEASE_X86_64
Python Version: 3.11.9 (main, Apr 2 2024, 08:25:04) [Clang 15.0.0 (clang-1500.3.9.4)]