
Enhancing Chatbot Memory: Recursive Summarization in Large Language Models #802

Open
ShellLM opened this issue Apr 12, 2024 · 1 comment
Labels

- ai-platform: model hosts and APIs
- chat-templates: llm prompt templates for chat models
- dataset: public datasets and embeddings
- llm: Large Language Models
- llm-experiments: experiments with large language models
- New-Label: Choose this option if the existing labels are insufficient to describe the content accurately

Comments

ShellLM (Collaborator) commented Apr 12, 2024

Enhancing Chatbot Memory: Recursive Summarization in Large Language Models

Snippet

"Recursively Summarizing Enables Long-Term Dialogue Memory in Large Language Models

Qingyue Wang, Liang Ding, Yanan Cao, Zhiliang Tian, Shi Wang, Dacheng Tao, Li Guo Recently, large language models (LLMs), such as GPT-4, stand out remarkable conversational abilities, enabling them to engage in dynamic and contextually relevant dialogues across a wide range of topics. However, given a long conversation, these chatbots fail to recall past information and tend to generate inconsistent responses. To address this, we propose to recursively generate summaries/ memory using large language models (LLMs) to enhance long-term memory ability. Specifically, our method first stimulates LLMs to memorize small dialogue contexts and then recursively produce new memory using previous memory and following contexts. Finally, the chatbot can easily generate a highly consistent response with the help of the latest memory. We evaluate our method on both open and closed LLMs, and the experiments on the widely-used public dataset show that our method can generate more consistent responses in a long-context conversation. Also, we show that our strategy could nicely complement both long-context (e.g., 8K and 16K) and retrieval-enhanced LLMs, bringing further long-term dialogue performance. Notably, our method is a potential solution to enable the LLM to model the extremely long context. The code and scripts will be released later."

Read the full paper here
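For readers skimming the abstract, here is a minimal sketch of the recursive-summarization loop it describes. The prompt wording, the `chunk_size` parameter, and the `LLM` callable are illustrative assumptions, not the authors' implementation; their prompts and scripts had not been released when this issue was filed.

```python
from typing import Callable, Iterable, List

# Any prompt -> completion function; plug in your own client
# (OpenAI, a local model, etc.). Nothing below depends on a specific API.
LLM = Callable[[str], str]


def update_memory(llm: LLM, memory: str, new_turns: List[str]) -> str:
    """Fold a new chunk of dialogue into the running summary ("memory")."""
    prompt = (
        "Previous conversation memory:\n"
        f"{memory or '(none)'}\n\n"
        "New dialogue turns:\n" + "\n".join(new_turns) + "\n\n"
        "Write an updated, concise memory that keeps every fact the chatbot "
        "will need to stay consistent later in the conversation."
    )
    return llm(prompt)


def build_recursive_memory(llm: LLM, dialogue: Iterable[str],
                           chunk_size: int = 6) -> str:
    """Recursively summarize the dialogue, chunk_size turns at a time:
    each new memory is produced from the previous memory plus the next chunk."""
    memory: str = ""
    chunk: List[str] = []
    for turn in dialogue:
        chunk.append(turn)
        if len(chunk) == chunk_size:
            memory = update_memory(llm, memory, chunk)
            chunk = []
    if chunk:  # summarize any trailing partial chunk
        memory = update_memory(llm, memory, chunk)
    return memory


def respond(llm: LLM, memory: str, recent_turns: List[str],
            user_message: str) -> str:
    """Answer conditioned on the latest memory plus the most recent turns,
    so the reply can stay consistent with facts from far earlier in the chat."""
    prompt = (
        f"Conversation memory:\n{memory}\n\n"
        "Recent turns:\n" + "\n".join(recent_turns) + "\n\n"
        f"User: {user_message}\nAssistant:"
    )
    return llm(prompt)
```

The chunk size and prompt phrasing are the main knobs here; the paper's point is that conditioning each new summary on the previous one keeps the memory bounded in length no matter how long the conversation grows.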

Suggested labels

{"label-name": "long-term-memory", "label-description": "Enhancing dialogue memory in large language models for long-context conversations.", "confidence": 57.43}

@ShellLM added the ai-platform, chat-templates, dataset, llm, llm-experiments, and New-Label labels on Apr 12, 2024
ShellLM (Collaborator, Author) commented Apr 12, 2024

Related content

#317 similarity score: 0.9
#332 similarity score: 0.9
#706 similarity score: 0.86
#459 similarity score: 0.86
#681 similarity score: 0.86
#363 similarity score: 0.86
