
Adding memory to the prompt node solely #5706

Closed
HGamalElDin opened this issue Sep 2, 2023 · 4 comments

Comments

@HGamalElDin

I can see the Agent makes it much easier to build an extensive application with many tools and integrations. However, I'm working on a simple application that needs exactly one PromptNode, but with memory. Is there any workaround to do so in the meantime?

Adding memory to a plain prompt node already exists in many other tools; here we have to go through agents to get it.

@julian-risch
Member

Hi @HGamalElDin, for use cases like this we have the ConversationalAgent class, which is a lightweight Agent. You could say it's basically a PromptNode with memory, and you don't need to provide tools. You can read more about it here: https://haystack.deepset.ai/blog/memory-conversational-agents

Or just try out the following code:

```python
from haystack.nodes import PromptNode
from haystack.agents.conversational import ConversationalAgent

# ConversationalAgent wraps the PromptNode and adds a conversation memory
prompt_node = PromptNode("gpt-3.5-turbo", api_key=MY_API_KEY, max_length=256)
conversational_agent = ConversationalAgent(prompt_node=prompt_node)
conversational_agent.run("your query here")
```
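Conceptually, what the agent's memory adds is simple: the running transcript is prepended to every new prompt before it is sent to the model. A minimal, dependency-free sketch of that mechanism (the class and function names are illustrative, not Haystack's actual internals):

```python
class ConversationMemory:
    """Toy transcript store mimicking what a conversational agent's memory does."""

    def __init__(self):
        self.turns = []  # list of (speaker, text) pairs

    def save(self, human, ai):
        self.turns.append(("Human", human))
        self.turns.append(("AI", ai))

    def load(self):
        # Serialize the transcript as "Human: ...\nAI: ...\n" lines
        return "".join(f"{speaker}: {text}\n" for speaker, text in self.turns)


def build_prompt(memory, query):
    # The saved transcript is injected ahead of the new user query
    return f"{memory.load()}Human: {query}\nAI:"


memory = ConversationMemory()
memory.save("What is the capital of France?", "Paris is the capital of France.")
prompt = build_prompt(memory, "And of Germany?")
```

Because the transcript travels inside the prompt, the model can resolve follow-up questions like "And of Germany?" without any extra machinery.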

@ryanholtschneider2

What if we need a pipeline that uses memory as a variable to fill in the prompt template?

I.e., I want to run some nodes to retrieve documents, then pass the memory in as well.

Also, how can we create arbitrary variables that can be filled in as part of a PromptNode, not just the defaults like documents, memory, query, etc.?
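On the arbitrary-variables question above: a prompt template is, at bottom, plain placeholder substitution, so any name declared in the template text can in principle be filled at invocation time. A version-agnostic sketch of that mechanism in plain Python (the template string and the `tone` variable are made up for illustration, not Haystack defaults):

```python
import string

# A template with one arbitrary, non-default placeholder ("tone")
template = (
    "Context: $documents\n"
    "Tone: $tone\n"
    "Question: $query\n"
)

def fill(template_text, **variables):
    # substitute() raises KeyError when a declared placeholder is missing,
    # roughly the "needed variables" error the agent reports
    return string.Template(template_text).substitute(**variables)

prompt = fill(
    template,
    documents="Paris is in France.",
    tone="concise",
    query="Where is Paris?",
)
```

The point is that the set of valid variable names is defined by the template text itself; the invocation just has to supply a value for every placeholder the template declares.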

@HGamalElDin
Author

HGamalElDin commented Sep 10, 2023

> Hi @HGamalElDin For use cases like this we have the ConversationalAgent class, which is a lightweight Agent. You could say it's basically a PromptNode with memory and you don't need to provide tools. You can read more about it here: https://haystack.deepset.ai/blog/memory-conversational-agents
>
> Or just try out the following code:
>
> ```python
> from haystack.nodes import PromptNode
> from haystack.agents.conversational import ConversationalAgent
>
> prompt_node = PromptNode("gpt-3.5-turbo", api_key=MY_API_KEY, max_length=256)
> conversational_agent = ConversationalAgent(prompt_node=prompt_node)
> conversational_agent.run("your query here")
> ```

Thank you for getting back to me so quickly. However, I have already created a pipeline with a custom file-converter node, a Retriever, and a PromptNode that takes query, memory, and documents as inputs, but I'm encountering unexpected errors from the agent about the variables required in the prompt (documents, memory).

@julian-risch, would you please showcase how to implement this case using conversational agents? The documentation lacks a little clarity here.

@HGamalElDin
Author

HGamalElDin commented Sep 24, 2023

Can you confirm whether it is possible to implement a RAG pipeline with chat memory without using agents?
Notes that might give you some intuition about my case:

  1. I have a custom prompt template as follows:
````
[INS]<<SYS>>
You are the *** assistant bot. You are representing a bot answering ***'s employees' frequently asked questions from the context of ***'s handbooks.
Derive the concise and exact answer for the human's question concisely from the given context. You must answer the question exactly without including any redundant additional information.
Rely exclusively on the provided context. Don't include any additional text out of the question's scope.
Maintain an impartial and journalistic tone. Avoid repetition or redundant information. Do not mention anything regarding the context or *** handbooks unless it's requested by the human.
If you can not find the answer within the provided context, in this case just say that ```**** is still in learning mode. You can switch to ChatGPT mode for general knowledge and public information.```
If a summary of the context is requested by the human only, summarize the context without omitting important information in a structured answer.
Never include any emojis in your responses.
Your answer must be structured as better as it can look.
If the Human greeted you, greet him back in a journalistic way.
<</SYS>>

Context: {documents};
Human Question: {input};[/INS]
According to the provided context, Your Answer:
````

So here I have two variables.
Question 1: how do I provide the documents variable to the agent you created in your comment?

I tried the following snippet; however, the default memory injected into the agent doesn't store the human input at all.

```python
from haystack.nodes import PromptNode
from haystack.agents.conversational import ConversationalAgent

model_name = "gpt-3.5-turbo"
prompt_node = PromptNode(model_name, api_key="my_key", default_prompt_template=r"QA-Prompt.yaml", max_length=256)
conversational_agent = ConversationalAgent(prompt_node=prompt_node, memory=summary_memory)

conversational_agent.run({"input": "what is france's captial?", "documents": """Working Days: Sunday to Thursday
Working Hours: 9:00 AM to 5:00 PM

Working Days (Ramadan): Sunday to Thursday
Working Hours (Ramadan): 10:30 AM to 4:00 PM
"""})
```

When I call `conversational_agent.memory.load()`, I get:

```
Human: input\nAI: Paris is the capital of France.\nHuman: input\nAI: During Ramadan, working hours are 10:30 AM to 4:00 PM, Sunday to Thursday.\nHuman: input\nAI: France's capital is Paris.\n
```
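That transcript suggests the memory saved the literal string "input" rather than the question: `run()` was handed a dict of prompt variables, and (an assumption based on the symptom, not on the Haystack source) the memory writer recorded a placeholder key instead of its resolved value. A toy reproduction of that failure mode:

```python
def save_turn(memory, query):
    """Toy memory writer: if handed a dict of prompt variables instead of the
    raw question string, it ends up recording a key name, not the question."""
    if isinstance(query, dict):
        query = next(iter(query))  # "input" -- the placeholder key, not its value
    memory.append(f"Human: {query}\n")

memory = []
# Dict form: reproduces the "Human: input" entries seen in the transcript above
save_turn(memory, {"input": "what is france's capital?", "documents": "..."})
# String form: the actual question survives into memory
save_turn(memory, "what is france's capital?")
```

If this is indeed what is happening, passing the query as a plain string (as in `conversational_agent.run("your query here")` earlier in this thread) and supplying the documents through the template parameters rather than through the `run()` dict should keep the transcript correct.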

Also, the prompt template I provided should constrain the responses: no answer should be generated outside the given context, otherwise the specific fallback message should be returned. This works well when I use the PromptNode alone, but with the agent it's as if the prompt template is ignored. What could be the possible cause of this?

I need help please! @ZanSara @julian-risch
