
Langchain Tools and Agents #7269

Closed
Ajaypawar02 opened this issue Jul 6, 2023 · 5 comments
Labels
🤖:improvement Medium size change to existing code to handle new use-cases 🤖:question A specific question about the codebase, product, project, or how to use a feature

Comments


Ajaypawar02 commented Jul 6, 2023

Issue you'd like to raise.

I am using Tools and Agents to query different vectorstores. But when I ask a question that is not covered by the vectorstore, it responds "I don't know." Is there an approach where, if the answer is not in the vectorstore, the agent can carry on the conversation like ChatGPT? If so, can you please let me know how to integrate this?

Suggestion:

No response

@dosubot dosubot bot added 🤖:improvement Medium size change to existing code to handle new use-cases 🤖:question A specific question about the codebase, product, project, or how to use a feature labels Jul 6, 2023

dosubot bot commented Jul 6, 2023

Answer generated by a 🤖

Answer

I found related issues that might be helpful. I did my best to summarize the solutions, but I recommend looking at the related issues yourself.

Related Issues

Closed Issues

Can I use vectorstore with LLMChain?

The suggested solution is:

from langchain.chains import LLMChain
from langchain.memory import VectorStoreRetrieverMemory

retriever = vectorstore.as_retriever(search_kwargs=dict(k=1))
memory = VectorStoreRetrieverMemory(retriever=retriever)
chain = LLMChain(llm=llm, prompt=prompt, verbose=True, memory=memory)

Open Issues

Issue: What if I want the langchain agent to answer an unseen type of question with its own knowledge from its pre-trained embedding?

The suggested solution is:

  1. Use a router chain (RC) which can dynamically select the next chain to use for a given input. There are two main types of router chains: LLM Router Chains and Embedding Router Chains.

  2. Replace the agent's llm_chain.prompt attribute with your own prompt. This small change makes the agent aware of the knowledge given to it in addition to the tool: it will draw on its prompt knowledge first and only use the tool when needed.
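
The fallback idea behind both options above can be sketched in plain Python, without any LangChain classes. In this sketch, `retrieve` and `ask_llm` are hypothetical stand-ins for a vectorstore retriever and a chat-model call, and the `min_score` threshold is an assumption:

```python
def answer(question, retrieve, ask_llm, min_score=0.75):
    """Route a question: use the vectorstore hit when it is relevant
    enough, otherwise fall back to the LLM's own knowledge.

    `retrieve(question)` returns a (document, similarity_score) pair,
    with document None when nothing matched; `ask_llm(prompt)` is a
    stand-in for a chat-model call. Both are hypothetical.
    """
    doc, score = retrieve(question)
    if doc is not None and score >= min_score:
        # A relevant document was found: answer grounded in it.
        return ask_llm(f"Answer using this context:\n{doc}\n\nQ: {question}")
    # No relevant document: answer from the model's pre-trained knowledge
    # instead of replying "I don't know".
    return ask_llm(question)
```

This is essentially what a router chain decides automatically: whether to send the input to a retrieval-backed chain or to a plain conversational chain.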

Is there any way to combine chatbot and question answering over docs?

The suggested solution is:

  1. The user submits a question to the Frontend client application.
  2. The question is sent to the Backend server over websockets.
  3. The Backend server normalizes the user's question and uses OpenAI's GPT model to generate a condensed version of the question using the LLMChain instance with the CONDENSE_PROMPT prompt.
  4. The server creates a Pinecone index to store embeddings of the text documents and retrieves the most similar documents to the user's condensed question, along with the condensed question itself and the chat history (if possible and available).
  5. If the retrieved documents are not satisfactory, the server sends the condensed question and chat history to the LLMChain instance to generate a better version of the user's question with the QA_PROMPT prompt.
  6. The server merges or concatenates multiple responses generated by the GPT model using techniques like summarization, fusion, or generation, and sends the response back to the Frontend client application over websockets for display to the user.

Another approach is to use the chat_vector_db approach as mentioned in the link: https://python.langchain.com/en/latest/modules/chains/index_examples/chat_vector_db.html. This approach mixes chat history and knowledge base.

For a non-agent, non-server approach, the solution is to have the chatbot build up a chat-history memory for whatever topic the user is interested in. When the time comes to use the vector DB to answer a precise question, extract the chat-history memory and feed it into the QA retrieval step as a question plus chat-history context.
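
That "question + chat history context" step can be sketched as a small condensing function. The template below is illustrative only, not LangChain's actual CONDENSE_PROMPT:

```python
def condense_question(chat_history, question):
    """Fold prior turns into a standalone question string suitable
    for a QA retrieval chain.

    `chat_history` is a list of (role, text) pairs; the prompt
    template here is an illustrative assumption.
    """
    if not chat_history:
        # Nothing to fold in: the question already stands alone.
        return question
    history = "\n".join(f"{role}: {text}" for role, text in chat_history)
    return (
        "Given the conversation below, restate the follow-up question "
        f"so it stands alone.\n\n{history}\n\nFollow-up: {question}"
    )
```

The condensed string is then what gets embedded and matched against the index, so follow-ups like "how do I index docs there?" retrieve documents about the topic discussed earlier rather than matching the bare pronoun.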

Agent answer questions that is not related to my custom data

No solution found.


This response is meant to be useful, save you time, and share context. It is not meant to be a precise solution, but rather a starting point for your own research.

Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.

@Kuramdasu-ujwala-devi

I am using Tools and Agents to query different vectorstores. But when I ask a question that is not from the vectorstore, it still responds. Is there an approach where, if the answer is not in the vectorstore, it responds that it didn't find the answer?


@khurramwbox

I encountered a similar issue myself, but I discovered a solution. Just include this instruction in your system message prompt exactly as it is, and it should function very effectively.

system_message = SystemMessage(
    content=(
        "REMEMBER : If you're unable to find an answer using the retriever, "
        "utilize the OpenAI Language Model to generate a response. Avoid "
        "responses like 'I don't know'; instead, provide helpful answers."
    )
)
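
To see why such an instruction changes behavior, here is a toy, framework-free sketch of an agent loop that honors a fallback-style system message: tools are tried first, and when none yields an answer the model is asked directly instead of replying "I don't know". `tools` and `llm` here are hypothetical stand-ins, not LangChain objects:

```python
def run_agent(question, tools, llm, system_message):
    """Toy agent loop honoring a fallback-style system message.

    Each entry in `tools` is a callable that returns an answer string
    or None when it cannot answer; `llm` is a hypothetical stand-in
    for a direct chat-model call.
    """
    for tool in tools:
        answer = tool(question)
        if answer is not None:
            # A tool (e.g. the retriever) could answer: use it.
            return answer
    # No tool could answer: per the system message, fall back to the
    # model's own knowledge rather than saying "I don't know".
    return llm(f"{system_message}\n\nUser question: {question}")
```

In the real agent, the system message steers the model toward the same decision; in LangChain's agent setup it is typically passed in when the agent is constructed (the exact wiring depends on the agent type being used).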


dosubot bot commented Nov 24, 2023

Hi, @Ajaypawar02! I'm Dosu, and I'm here to help the LangChain team manage their backlog. I wanted to let you know that we are marking this issue as stale.

From what I understand, the issue is about integrating ChatGPT-style conversation into LangChain Tools and Agents so that the conversation can continue even when a question is not covered by the vectorstore. It seems the issue has been resolved by including a system message prompt, as suggested by khurramwbox.

Before we close this issue, we wanted to check with you if it is still relevant to the latest version of the LangChain repository. If it is, please let us know by commenting on the issue. Otherwise, feel free to close the issue yourself or it will be automatically closed in 7 days.

Thank you for your contribution to the LangChain repository!

@dosubot dosubot bot added the stale Issue has not had recent activity or appears to be solved. Stale issues will be automatically closed label Nov 24, 2023
@dosubot dosubot bot closed this as not planned Won't fix, can't repro, duplicate, stale Dec 1, 2023
@dosubot dosubot bot removed the stale Issue has not had recent activity or appears to be solved. Stale issues will be automatically closed label Dec 1, 2023
3 participants