Proposal: An alternative to chat templates #6726
Comments
@ngxson, what do you think about the proposal, please?
Sounds cool, and I'd say take it further: why even template or search-and-replace within a role? Just change it to "prefix" and "suffix":

```cpp
// Blind code, probably wrong, but that's the idea.
// Each role just has a prefix & suffix.
std::unordered_map<std::string, std::pair<std::string, std::string>> chatML = {
    {"system",    {"<|im_start|>system\n",    "<|im_end|>\n"}},
    {"user",      {"<|im_start|>user\n",      "<|im_end|>\n"}},
    {"assistant", {"<|im_start|>assistant\n", "<|im_end|>\n"}},
};
```
You can even pre-tokenize the prefix/suffix too. And to round it off, add a config for "stop token(s)" as well, because llama 3 uses `eot_id`, which throws off all the default configs. Something like:

```cpp
struct ChatTemplate {
    std::string start_of_conversation; // Because bos is a thing
    std::unordered_map<std::string, std::pair<std::string, std::string>> roles;
    std::vector<std::string> stop_tokens;
};

// std::pair<std::string, std::string> should be more like `RoleConfig` with:
struct RoleConfig {
    std::string prefix;
    std::string suffix;
    // Maybe more config in the future, like:
    bool is_machine_generated;
};
```

Llama-3 expressed in YAML would be:

```yaml
start_of_conversation: "<|begin_of_text|>"
roles:
  system:
    prefix: |
      <|start_header_id|>system<|end_header_id|>

    suffix: "<|eot_id|>"
  user:
    prefix: |
      <|start_header_id|>user<|end_header_id|>

    suffix: "<|eot_id|>"
  assistant:
    prefix: |
      <|start_header_id|>assistant<|end_header_id|>

    suffix: "<|eot_id|>"
stop_tokens:
  - "<|eot_id|>"
```

The double line break is intentional.
That said, once it's that embedded in the format, should the prefix/suffix just be pre-tokenized instead? GGUF even has nested arrays for metadata. Although I do recall some models that end with:

Explicitly listing the roles is even better than Hugging Face's approach. Again, I think it's a great idea.

Edit: You can even create an "auto convert" script that "works most of the time" with arbitrary templates.
The proposal here is pretty much the same as #5922, so I suggest moving the discussion there. The main problem is that even with this level of flexibility, some templates can't be supported without some code logic (for example, the llama 2 template).
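For context, the Llama 2 chat format nests the system prompt inside the first user turn, roughly:

```
<s>[INST] <<SYS>>
{{ system_prompt }}
<</SYS>>

{{ first_user_message }} [/INST] {{ assistant_reply }} </s><s>[INST] {{ next_user_message }} [/INST]
```

so the system message has no standalone prefix/suffix of its own; it has to be merged into the first user message, which is the kind of conditional logic a flat prefix/suffix table can't express.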
Please do have a look at the PR below. Around the time llama3 came out, I had a need to look at llama.cpp, and in turn I worked on the below, to see whether one can have a generic flow, driven by a config file, that accommodates different models/chat-handshake-template-standards in a flexible way. The idea is that if a new template standard is added during fine-tuning of a model, or if a new model or standard comes out that follows a sane convention matching the commonality I have noticed across many models/standards, then the generic code flow itself can be used by just updating the config file, without having to add a custom template block. This in turn can be used by example/main as well as example/server and others.

Currently, main has been patched to use this config-file-based flow, piggybacking to a great extent on its existing interactive mode and its in-prefix, in-suffix, and antiprompt. Based on some minimal testing at my end, I seem to be able to handle the nitty-gritties of around 8 (+1) models using this generic code + config file flow.

Currently JSON is used for the config file, but if needed this can be switched to a simpler text-based config format, to avoid users of the llama.cpp library needing to depend on a JSON library. The generic flow uses a concept similar to what this PR is also proposing, but driven by a config file rather than hardcoded, so that new models or variations can be added without recompiling in many cases. And also the generic flow additionally takes care of
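As an illustration only (the key names below are hypothetical and not necessarily the schema used in that PR), such a per-model config entry might look like:

```json
{
  "model": "llama3",
  "begin_of_text": "<|begin_of_text|>",
  "roles": {
    "system":    { "prefix": "<|start_header_id|>system<|end_header_id|>\n\n",    "suffix": "<|eot_id|>" },
    "user":      { "prefix": "<|start_header_id|>user<|end_header_id|>\n\n",      "suffix": "<|eot_id|>" },
    "assistant": { "prefix": "<|start_header_id|>assistant<|end_header_id|>\n\n", "suffix": "<|eot_id|>" }
  },
  "reverse_prompts": ["<|eot_id|>"]
}
```

The point is that supporting a new model then becomes a config edit rather than a code change and recompile.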
UPDATE: I noticed that this is closed and refers to #5922, so I have added an equivalent note there.
Prerequisites
Please answer the following questions for yourself before submitting an issue.
Feature Description
Jinja template support has already been discussed extensively, and I'd place the main tension between:
I'm opening this issue to propose an alternative that potentially satisfies both. As a placeholder, let's call it role templates instead of chat templates:
Just loop through the messages, get the corresponding role, and find-replace `{{content}}`. And `add_generation_prompt` is just the substring in front of the next message's `{{content}}`.

This format itself could be anything — JSON, YAML, key-value pairs — making it easy to adopt in non-llama.cpp contexts as well.
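A rough sketch of that mechanism, assuming a hypothetical `role_templates` map (using the ChatML format purely as an example):

```cpp
#include <string>
#include <unordered_map>

// Hypothetical per-role templates; each contains the literal placeholder {{content}}.
static const std::unordered_map<std::string, std::string> role_templates = {
    {"system",    "<|im_start|>system\n{{content}}<|im_end|>\n"},
    {"user",      "<|im_start|>user\n{{content}}<|im_end|>\n"},
    {"assistant", "<|im_start|>assistant\n{{content}}<|im_end|>\n"},
};

// Find-replace {{content}} in the role's template with the message content.
static std::string apply_role_template(const std::string & role, const std::string & content) {
    std::string tmpl = role_templates.at(role);
    const std::string placeholder = "{{content}}";
    const size_t pos = tmpl.find(placeholder);
    if (pos != std::string::npos) {
        tmpl.replace(pos, placeholder.size(), content);
    }
    return tmpl;
}

// add_generation_prompt: the substring in front of {{content}} in the
// assistant template, i.e. "<|im_start|>assistant\n" for ChatML.
static std::string generation_prompt() {
    const std::string & tmpl = role_templates.at("assistant");
    return tmpl.substr(0, tmpl.find("{{content}}"));
}
```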
Motivation
For llama.cpp maintainers / model authors:
For end users:
For client apps / front ends:
It's a viable alternative to the current state, where every chat client that uses llama.cpp's completion API maintains its own library of chat templates. The fact that llama.cpp doesn't support all templates means that every downstream chat client still needs to reinvent the wheel.
For open models, in general:
Personally, my experience adding chat templates opened my eyes to just how messy the template landscape is right now. Open models don't just lag in scale, but also have to deal with compatibility and usability issues that the closed models can sidestep.
Chat templates feel like an important thing to get right, and I think llama.cpp can greatly simplify this for the many projects that depend on it.
Possible Implementation

`tests/test-chat-template`.

`llama_chat_apply_template_internal` could be refactored to use role templates under the hood so that the existing `--chat-template` flag still works.

Happy to submit a PR or collaborate if this is a direction folks are interested in.