Is there any way to combine chatbot and question answering over docs? #2185
Comments
I am working to make these two work together. I will update here if it works.
Here's a basic plan for you:
@punitvara, I have tested it a little bit with a proof of concept, and it works perfectly.
No need to use memory, @sergerdn? Do we just need to use CONDENSE_PROMPT? I am trying the above task with the QA_with_source chain.
I didn't use any memory storage because I only created a proof of concept. The user's browser sends the chat history to the backend directly, so when users reload their web page, all of their chat history is cleared. Maybe using CONDENSE_PROMPT is not the best solution, because I didn't realise that the chat history might be important at this stage. I used Pinecone exclusively for vector storage, but I don't recommend it as a long-term storage solution for this use case; I used it solely for demonstration purposes. There are several ways to accomplish the same outcome. However, I may not be able to provide extensive consultation, as my experience with OpenAI is limited. Please note that the libraries I'm using have several bugs, so be prepared to encounter them.
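The stateless condense-question pattern described above can be sketched in plain Python. This is only an illustration of the flow, not actual library code: the names `condense_question`, `CONDENSE_TEMPLATE`, and the injected `call_llm` callable are all hypothetical, and a real implementation would send the prompt to an LLM API.

```python
# Stateless "condense question" sketch: the client sends the full chat
# history with every request, so the server keeps no per-user state.
# All names here are illustrative, not part of any library.

CONDENSE_TEMPLATE = (
    "Given the following conversation and a follow-up question, "
    "rephrase the follow-up question to be a standalone question.\n\n"
    "Chat history:\n{history}\n\nFollow-up question: {question}\n"
    "Standalone question:"
)

def format_history(history):
    """Render [(user, assistant), ...] turns as plain text."""
    return "\n".join(f"Human: {u}\nAssistant: {a}" for u, a in history)

def condense_question(history, question, call_llm):
    """Build the condense prompt and ask the LLM for a standalone question.

    `call_llm` is injected so the sketch stays testable without an API key.
    """
    if not history:  # nothing to condense on the first turn
        return question
    prompt = CONDENSE_TEMPLATE.format(
        history=format_history(history), question=question
    )
    return call_llm(prompt)
```

The condensed question can then be passed to the retrieval chain on its own, which is why the chat history can be dropped after this step (and also why some nuance from the history may be lost, as noted above).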
I have been working on this as well. I have something which technically works but isn't very good right now. The agent gets kind of wacky: it often retrieves good data but doesn't share it, or summarizes it too much. In theory this approach should work, and you should be able to add arbitrary tools (search, etc.). I don't have a fully working code sample to share yet, but essentially:
Code (doesn't compile but gives an idea)
So far I have found gpt-3.5-turbo to be more effective than the default. Cheers, and looking forward to others' contributions! This seems like a major use case.
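The agent-with-retrieval-tool approach above can be illustrated with a minimal, library-free sketch. Everything here is hypothetical stand-in code: `docs_qa_tool` plays the role of a RetrievalQA chain over the vector store, `choose_tool` plays the role of the agent LLM's tool-selection step, and a real agent would loop until it produces a final answer rather than stopping after one observation.

```python
# Library-free sketch of the agent-with-tools pattern: the doc QA chain
# is exposed as one tool among others, and the agent decides per turn
# whether to call it.

def docs_qa_tool(query):
    """Stand-in for a RetrievalQA chain over the vector store."""
    knowledge = {"pricing": "The pro plan costs $20/month."}
    for key, answer in knowledge.items():
        if key in query.lower():
            return answer
    return "No relevant documents found."

def search_tool(query):
    """Stand-in for a web-search tool."""
    return f"(search results for: {query})"

TOOLS = {"docs_qa": docs_qa_tool, "search": search_tool}

def run_agent(question, choose_tool):
    """One-step agent loop: pick a tool, run it, return the observation.

    `choose_tool` is injected in place of the LLM so the sketch is
    testable; a real agent would iterate until a final answer.
    """
    tool_name = choose_tool(question, list(TOOLS))
    observation = TOOLS[tool_name](question)
    return f"[{tool_name}] {observation}"
```

The "wacky" behaviour mentioned above usually shows up in the tool-selection and final-summarization steps, which this sketch deliberately replaces with a deterministic callable.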
Take a look at the world of JavaScript. There are a few solutions available, but I believe they may not be ready for production use yet, as they are still proofs of concept.
https://python.langchain.com/en/latest/modules/chains/index_examples/chat_vector_db.html Actually, I guess this is the answer that mixes chat history and a knowledge base. When did this come up?
Perhaps not the most valuable input to this conversation, but for a non-agent, non-server approach, I think the simple solution is:
That's how I'm dealing with this problem, at least. I think the chat_vector_db approach that derekhsu posted is probably the closest thing to my simple use case. The problem with that, for me, is that the chat history is ALWAYS tied to the retrieval mechanism, whereas in my case the chat history should ALWAYS be tied to the chat sequence portion of the application, and only tied to the retrieval portion when the vector DB is needed. Anyway, again, maybe not super relevant, but I can't find anywhere else to discuss this.
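The routing idea described above can be sketched as follows. The chat history always feeds the conversational prompt, and the vector DB is consulted only when the message looks like a precise question. The keyword heuristic and all function names (`needs_retrieval`, `build_prompt`, the injected `retrieve` callable) are illustrative assumptions; in practice the routing decision would more likely be an LLM call or a classifier.

```python
# Sketch: chat history always included; retrieved context only when the
# message looks like a question. The trigger-word heuristic is a
# placeholder for a smarter routing step.

RETRIEVAL_TRIGGERS = ("what", "how", "when", "why", "who", "where", "?")

def needs_retrieval(message):
    """Crude heuristic: question-like messages go to the vector DB."""
    lowered = message.lower()
    return any(t in lowered for t in RETRIEVAL_TRIGGERS)

def build_prompt(history, message, retrieve):
    """Always include chat history; add retrieved context only if needed."""
    parts = ["Chat history:"]
    parts += [f"{role}: {text}" for role, text in history]
    if needs_retrieval(message):
        parts.append(f"Context from docs:\n{retrieve(message)}")
    parts.append(f"Human: {message}")
    return "\n".join(parts)
```

This keeps the memory tied to the chat sequence unconditionally, while the retrieval mechanism is invoked only on demand, which is the separation the comment above is asking for.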
Hi, @derekhsu! I'm Dosu, and I'm helping the LangChain team manage their backlog. I wanted to let you know that we are marking this issue as stale.

Based on my understanding, you were asking for advice on combining a chatbot and a question answering bot that retrieves information from documents based on embeddings. It looks like there have been some discussions and progress made by users "punitvara" and "sergerdn" in making these two bots work together. User "zakkl13" also shared their approach using a RetrievalQA chain and a ZeroShotAgent. Additionally, user "eddiesaltaccount" suggested a simple solution of using the chat history memory for the topic of interest when utilizing the vector DB for answering precise questions.

Before we close this issue, we wanted to check if it is still relevant to the latest version of the LangChain repository. If it is, please let us know by commenting on this issue. Otherwise, feel free to close the issue yourself, or it will be automatically closed in 7 days. Thank you for your understanding and contributions to the LangChain project!
Thanks for your effort. I think this should be enough for my request. |
Hi, I read the docs and examples and then tried to make a chatbot and a question-answering bot over docs. I wonder, is there any way to combine these two functions?
From my point of view, it's basically a chatbot that uses the memory module to carry on a conversation with users. If the user asks a question, the chatbot retrieves docs based on embeddings and gets the answer.
Then I change the prompt of the conversation and add the answer to it, asking the chatbot to respond based on the memory and the answer. Will it work? Or is there another convenient way or chain to combine these two types of bots?
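The flow described in the question can be sketched in a few lines of plain Python. This is only an illustration under stated assumptions: the word-overlap scoring stands in for real embedding similarity, and `retrieve` and `answer_with_memory` are hypothetical names, not library functions.

```python
# Minimal sketch of the asked-about flow: retrieve the best-matching
# document for the question, then fold it into the conversational prompt
# along with the memory. Word overlap stands in for embedding similarity.

def retrieve(question, docs):
    """Return the doc sharing the most words with the question."""
    q_words = set(question.lower().split())
    return max(docs, key=lambda d: len(q_words & set(d.lower().split())))

def answer_with_memory(memory, question, docs):
    """Combine conversation memory and the retrieved doc into one prompt."""
    context = retrieve(question, docs)
    return (
        "Conversation so far:\n" + "\n".join(memory)
        + f"\n\nRelevant document:\n{context}\n\nQuestion: {question}\nAnswer:"
    )
```

The resulting prompt would then be sent to the chat model, which is exactly the "change the prompt and add the answer to it" step the question proposes, so the approach should work in principle.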