Describe the bug
I downloaded the Llama2-Chat-13B model locally and used the ChatLocalAI component together with the Conversational Retrieval QA Chain component to ask the model questions. I found that the Llama 2 prompt length is limited:
2:17AM DBG GRPC(ggml-model-q4_0.gguf-127.0.0.1:34280): stderr llama_predict: error: prompt is too long (1125 tokens, max 508)
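The 508-token cap looks like LocalAI's default context window (512 tokens, with a few reserved for generation). A minimal sketch of a model definition that raises it, assuming a YAML file placed next to the .gguf in LocalAI's models directory (the file name and the `name` value below are hypothetical):

```yaml
# models/llama2-chat.yaml -- hypothetical file name
name: llama2-chat              # name to request via the API (assumption)
context_size: 4096             # raise the context window above the 512 default
parameters:
  model: ggml-model-q4_0.gguf  # the quantized file produced in step 3 below
```

With a larger context_size, the 1125-token prompt from the log above should fit.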
How can I change the question parameter from a single prompt to chat messages?
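Since LocalAI exposes an OpenAI-compatible API, the same question can be sent as chat messages rather than a raw prompt. A hedged sketch (the host, port, and model name are assumptions; the port in the log above is the internal gRPC backend, not the HTTP API):

```bash
# Hypothetical host/port; LocalAI's HTTP API defaults to :8080.
curl http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "ggml-model-q4_0.gguf",
        "messages": [{"role": "user", "content": "What is Llama 2?"}]
      }'
```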
To Reproduce
Steps to reproduce the behavior:
1. Install LocalAI.
2. Download the Llama2-Chat-13B model.
3. Convert the Llama2-Chat-13B model to ggml-model-q4_0.gguf (see the sketch after this list).
4. Install Flowise.
5. Add the ChatLocalAI component and the Conversational Retrieval QA Chain component, then ask a question.
6. Flowise reports an error.
7. LocalAI logs the prompt-length error shown above.
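A sketch of step 3 using llama.cpp's conversion tools (all paths are hypothetical, and the exact script names depend on the llama.cpp revision):

```bash
# Convert the downloaded weights to a full-precision GGUF,
# then quantize it to q4_0. Paths are hypothetical.
python3 convert.py /path/to/Llama-2-13b-chat/ --outfile ggml-model-f16.gguf
./quantize ggml-model-f16.gguf ggml-model-q4_0.gguf q4_0
```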
Expected behavior
Questions sent through the Conversational Retrieval QA Chain are answered instead of failing with a prompt-length error.
Setup
Linux localhost.localdomain 3.10.0-1160.99.1.el7.x86_64 #1 SMP Wed Sep 13 14:19:20 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux