
Add chatml fallback for cpp llama_chat_apply_template #8160

Merged
merged 2 commits into from
Jun 27, 2024

Conversation

ngxson
Collaborator

@ngxson ngxson commented Jun 27, 2024

Fixes a problem with the DeepSeek V2 chat model: #8068 (comment) (cc @fairydreaming)

DeepSeek no longer crashes, but it now falls back to chatml (not ideal). We can add a proper template for DeepSeek later.

Demo:

make llama-cli && ./llama-cli -m ../DeepSeek-Coder-V2-Lite-Instruct-Q2_K.gguf -cnv -p "You are Bob" -c 256

> hi
Hello! How can I assist you today?

> who are u
I am an AI assistant developed by DeepSeek Company. I can help you answer questions and provide information. How can I assist you today?

>

@mofosyne mofosyne added the Review Complexity : Low Trivial changes to code that most beginner devs (or those who want a break) can tackle. e.g. UI fix label Jun 27, 2024
Collaborator

@fairydreaming fairydreaming left a comment


DeepSeek-V2-Lite no longer crashes with these fixes, but I think there's one unnecessary llama_chat_apply_template() call that can be removed.

Comment on lines 2647 to 2650
if (fallback) {
res = llama_chat_apply_template(nullptr, "chatml", chat.data(), chat.size(), add_ass, buf.data(), buf.size());
}

Collaborator


Is this really necessary? You already called llama_chat_apply_template(nullptr, "chatml", ...) in the else branch above, so why call it again?

Collaborator Author


Yeah, that's right. I forgot to delete the line inside the else branch.

Collaborator Author


I deleted this line of code (keeping the line in the else branch). I tested it once more and confirmed that it still works.

This will be merged once CI passes.

@ngxson ngxson added the merge ready indicates that this may be ready to merge soon and is just holding out in case of objections label Jun 27, 2024
@ngxson ngxson merged commit 16791b8 into ggerganov:master Jun 27, 2024
53 checks passed
Nexesenex pushed a commit to Nexesenex/croco.cpp that referenced this pull request Jun 28, 2024
* add chatml fallback for cpp `llama_chat_apply_template`

* remove redundant code
arthw pushed a commit to arthw/llama.cpp that referenced this pull request Jun 30, 2024
* add chatml fallback for cpp `llama_chat_apply_template`

* remove redundant code
MagnusS0 pushed a commit to MagnusS0/llama.cpp-normistral-tokenizer that referenced this pull request Jul 1, 2024
* add chatml fallback for cpp `llama_chat_apply_template`

* remove redundant code
Successfully merging this pull request may close these issues.

Bug: llama-cli templating does buf.resize(-1) if the model's template is not supported, causing crash