I felt like the chatbot was losing memory whenever the conversation exceeded the context size. I searched the amica project files for n_keep, intending to specify how many initial tokens llama.cpp should keep, but I could not find any mention of n_keep in the code.
I would like to make sure the entire initial prompt is kept. I don't want the chatbot to lose track of who it is in the middle of a conversation.
Until then, I will try changing the llama.cpp server default.
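For reference, n_keep is a real parameter of the llama.cpp server's /completion endpoint (n_keep = -1 retains all prompt tokens when the context window overflows). Below is a minimal sketch of how a client such as Amica could forward it; the server URL, n_predict value, and the function itself are illustrative assumptions, not Amica's actual code:

```ts
// Sketch: forwarding n_keep to a llama.cpp server /completion request.
// n_keep and n_predict are documented llama.cpp server parameters;
// the endpoint URL and function name are placeholders.
async function complete(prompt: string): Promise<string> {
  const res = await fetch("http://127.0.0.1:8080/completion", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      prompt,
      n_predict: 256,
      n_keep: -1, // keep all initial prompt tokens on context overflow
    }),
  });
  const data = await res.json();
  return data.content; // non-streaming responses carry the text in `content`
}
```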
Brief: Add advanced settings for n_keep and context size, plus the ability to save these in a user-built list of presets (e.g. for a particular model); a rough sketch of the settings shape follows the bounty instructions below.
Please read carefully:
To begin work on a bounty, reply by saying “I claim this bounty” - you will have 48 hours to submit your PR before someone else may attempt to claim this bounty.
To complete the bounty, reply within 48 hours of claiming with a link to your PR referencing this issue and an Ethereum address. You must comply with reviewers' comments and have the PR merged to receive the bounty reward. Please focus on quality submissions to minimize the time reviewers must spend.
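As a starting point, here is a hypothetical shape for the requested per-model presets, persisted in localStorage. All names here (the interface, storage key, and helpers) are illustrative assumptions, not Amica's existing settings API:

```ts
// Hypothetical advanced-settings preset; field names are illustrative.
interface LlamaAdvancedSettings {
  name: string;        // user-chosen preset label, e.g. a model name
  nKeep: number;       // initial prompt tokens to retain (-1 = all)
  contextSize: number; // context window size the preset targets
}

const PRESETS_KEY = "llamaAdvancedPresets"; // assumed storage key

// Save or overwrite a named preset in the user-built list.
function savePreset(preset: LlamaAdvancedSettings): void {
  const presets: LlamaAdvancedSettings[] = JSON.parse(
    localStorage.getItem(PRESETS_KEY) ?? "[]",
  );
  const next = presets.filter((p) => p.name !== preset.name).concat(preset);
  localStorage.setItem(PRESETS_KEY, JSON.stringify(next));
}

// Look up a preset by its label.
function loadPreset(name: string): LlamaAdvancedSettings | undefined {
  const presets: LlamaAdvancedSettings[] = JSON.parse(
    localStorage.getItem(PRESETS_KEY) ?? "[]",
  );
  return presets.find((p) => p.name === name);
}
```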