
add chatglm3 conv template support in conversation.py #2622

Merged: 7 commits merged into lm-sys:main on Nov 10, 2023

Conversation

ZeyuTeng96
Contributor

Hi there,

By checking the following tokenizer script, I think the true chatglm3 conv template is:

"<|system|>\nYou are ChatGLM3, a large language model trained by Zhipu.AI. Follow the user's instructions carefully. Respond using markdown.<|user|>\nHello!<|assistant|>\nHi!<|user|>\nHow are you?<|assistant|>"

Please see the following code:
https://huggingface.co/THUDM/chatglm3-6b/blob/fc3235f807ef5527af598c05f04f2ffd17f48bab/tokenization_chatglm.py#L179

https://huggingface.co/THUDM/chatglm3-6b/blob/fc3235f807ef5527af598c05f04f2ffd17f48bab/tokenization_chatglm.py#L194
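
For clarity, the string above can be produced from a list of (role, message) pairs roughly as follows (an illustrative sketch, not FastChat's actual implementation; the function name is made up):

def render_chatglm3(messages, system_message=""):
    # messages: list of (role, text) pairs such as ("<|user|>", "Hello!");
    # the last entry is ("<|assistant|>", None) so the prompt ends with the
    # assistant tag and the model continues from there.
    ret = f"<|system|>\n{system_message}" if system_message else ""
    for role, text in messages:
        ret += f"{role}\n{text}" if text else role
    return ret

print(render_chatglm3(
    [("<|user|>", "Hello!"), ("<|assistant|>", "Hi!"),
     ("<|user|>", "How are you?"), ("<|assistant|>", None)],
    system_message="You are ChatGLM3, a large language model trained by Zhipu.AI. "
                   "Follow the user's instructions carefully. Respond using markdown.",
))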

Why are these changes needed?

Related issue number (if applicable)

Checks

  • I've run format.sh to lint the changes in this PR.
  • I've included any doc changes needed.
  • I've made sure the relevant tests are passing (if applicable).

@ZeyuTeng96
Contributor Author

If we put the following history and query (给我讲个笑话) into the chat function (https://huggingface.co/THUDM/chatglm3-6b/blob/fc3235f807ef5527af598c05f04f2ffd17f48bab/modeling_chatglm.py#L1021)

history = [{'role': 'system', 'content': '''You are ChatGLM3, a large language model trained by Zhipu.AI. Follow the user's instructions carefully. Respond using markdown.'''},{'role': 'user', 'content': '你好'},{'role': 'assistant','metadata': '','content': '你好👋!我是人工智能助手 ChatGLM3-6B,很高兴见到你,欢迎问我任何问题。'}]

we get the following input_ids before line 193 (https://huggingface.co/THUDM/chatglm3-6b/blob/fc3235f807ef5527af598c05f04f2ffd17f48bab/tokenization_chatglm.py#L192)

[64794, 30910, 13, 809, 383, 22011, 10461, 30944, 30966, 30932, 260, 1796, 3239, 2092, 7594, 422, 1192, 899, 30923, 30930, 23833, 30930, 5741, 267, 2795, 30953, 30917, 8417, 7724, 30930, 21911, 1227, 3478, 3536, 30930, 64795, 30910, 13, 36474, 54591, 64796, 30910, 13, 36474, 54591, 243, 162, 148, 142, 31404, 33030, 34797, 42481, 22011, 10461, 30944, 30966, 30941, 30978, 30949, 31123, 48895, 35214, 54622, 31123, 32616, 39905, 31901, 31639, 31155]

Decoding it gives:
<|system|> \n You are ChatGLM3, a large language model trained by Zhipu.AI. Follow the user's instructions carefully. Respond using markdown.<|user|> \n 你好<|assistant|> \n 你好👋!我是人工智能助手 ChatGLM3-6B,很高兴见到你,欢迎问我任何问题。

@ZeyuTeng96
Contributor Author

After line 194 (https://huggingface.co/THUDM/chatglm3-6b/blob/fc3235f807ef5527af598c05f04f2ffd17f48bab/tokenization_chatglm.py#L194), the input_ids are:

[64794, 30910, 13, 809, 383, 22011, 10461, 30944, 30966, 30932, 260, 1796, 3239, 2092, 7594, 422, 1192, 899, 30923, 30930, 23833, 30930, 5741, 267, 2795, 30953, 30917, 8417, 7724, 30930, 21911, 1227, 3478, 3536, 30930, 64795, 30910, 13, 36474, 54591, 64796, 30910, 13, 36474, 54591, 243, 162, 148, 142, 31404, 33030, 34797, 42481, 22011, 10461, 30944, 30966, 30941, 30978, 30949, 31123, 48895, 35214, 54622, 31123, 32616, 39905, 31901, 31639, 31155, 64795, 30910, 13, 30910, 33575, 55089, 54550, 42277, 64796]

which decodes to:
"<|system|> \n You are ChatGLM3, a large language model trained by Zhipu.AI. Follow the user's instructions carefully. Respond using markdown.<|user|> \n 你好<|assistant|> \n 你好👋!我是人工智能助手 ChatGLM3-6B,很高兴见到你,欢迎问我任何问题。<|user|> \n 给我讲个笑话<|assistant|>"

@merrymercy merrymercy mentioned this pull request Oct 31, 2023
@merrymercy
Member

Hi @lucasjinreal @yanyang1024 @silk55 @ZeyuTeng96. You all added ChatGLM-3 support (#2618, #2620, #2622).
Could you review these PRs and suggest which one we should follow and accept?

@Trangle
Contributor

Trangle commented Nov 1, 2023

The adapter also needs to be adjusted to:

def get_default_conv_template(self, model_path: str) -> Conversation:
    model_path = model_path.lower()
    if "chatglm2" in model_path:
        return get_conv_template("chatglm2")
    elif "chatglm3" in model_path:
        return get_conv_template("chatglm3")
    return get_conv_template("chatglm")
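
A quick way to verify that the adapter change takes effect is to ask FastChat which template it picks for a chatglm3 model path (a minimal check, assuming FastChat's get_conversation_template helper):

from fastchat.model.model_adapter import get_conversation_template

# With the branch above in place, a chatglm3 path should map to the new template.
conv = get_conversation_template("THUDM/chatglm3-6b")
print(conv.name)  # expected: "chatglm3"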

@ZeyuTeng96
Contributor Author

ZeyuTeng96 commented Nov 2, 2023

Hi, I just realized that the official OpenAI API and web UI provided by the chatglm3 repo use the 'build_chat_input' function to convert text to token ids.

However, the main problem is that this function encodes text and special tokens separately (it also encodes the \n separator and the conversation content individually). As a result, if we simply treat those special tokens as text, we get a different result. So it seems we have to find another way to build the prompt. @merrymercy @Trangle

input_ids from official openai api:
[64790, 64792, 64794, 30910, 13, 809, 383, 22011, 10461, 30944,
30966, 30932, 260, 1796, 3239, 2092, 7594, 422, 1192, 899,
30923, 30930, 23833, 30930, 5741, 267, 2795, 30953, 30917, 8417,
7724, 30930, 21911, 1227, 3478, 3536, 30930, 64795, 30910, 13,
36474, 54591, 64796, 30910, 13, 36474, 54591, 243, 162, 148,
142, 31404, 33030, 30942, 1960, 10461, 30944, 30966, 31123, 48895,
35214, 54622, 31123, 32616, 39905, 31901, 31639, 31155, 64795, 30910,
13, 30910, 34607, 55622, 64796]

which decodes to:
[gMASK]sop<|system|> \n You are ChatGLM3, a large language model trained by Zhipu.AI. Follow the user's instructions carefully. Respond using markdown.<|user|> \n 你好<|assistant|> \n 你好👋!我是ChatGLM3,很高兴见到你,欢迎问我任何问题。<|user|> \n 你是谁<|assistant|>

manually built prompt's encoding result:
content = '''<|system|> \n You are ChatGLM3, a large language model trained by Zhipu.AI. Follow the user's instructions carefully. Respond using markdown.<|user|> \n 你好<|assistant|> \n 你好👋!我是ChatGLM3,很高兴见到你,欢迎问我任何问题。<|user|> \n 你是谁<|assistant|>'''

tokenizer([content], return_tensors="pt")

'input_ids': tensor([[64790, 64792, 906, 31007, 13361, 31007, 30994, 30910, 13, 809,
383, 22011, 10461, 30944, 30966, 30932, 260, 1796, 3239, 2092,
7594, 422, 1192, 899, 30923, 30930, 23833, 30930, 5741, 267,
2795, 30953, 30917, 8417, 7724, 30930, 21911, 1227, 3478, 3536,
30930, 31002, 31007, 4865, 31007, 30994, 30910, 13, 36474, 54591,
31002, 31007, 530, 18971, 31007, 30994, 30910, 13, 36474, 54591,
243, 162, 148, 142, 31404, 33030, 30942, 1960, 10461, 30944,
30966, 31123, 48895, 35214, 54622, 31123, 32616, 39905, 31901, 31639,
31155, 31002, 31007, 4865, 31007, 30994, 30910, 13, 30910, 34607,
55622, 31002, 31007, 530, 18971, 31007, 30994]])
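
In other words, plain encoding splits the role tags into ordinary text pieces (the 31002, 31007, ... runs above), while build_chat_input emits the single special ids 64795/64796. The difference can be seen directly (a sketch; the exact piece ids depend on the tokenizer revision):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm3-6b", trust_remote_code=True)

as_text = tokenizer("<|user|>\n 你好<|assistant|>")["input_ids"]
as_chat = tokenizer.build_chat_input("你好", history=[], role="user")["input_ids"][0].tolist()

print(as_text)  # role tags broken into several ordinary text pieces
print(as_chat)  # role tags kept as single special ids (64795, 64796)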

@ZeyuTeng96
Contributor Author

THUDM/ChatGLM3#127

@infwinston
Member

@ZeyuTeng96 As @Trangle suggested, you missed some code in the model adapter. Could you make the change?
See #2620 for reference.

@@ -163,6 +164,14 @@ def get_prompt(self) -> str:
else:
ret += role + "\n"
return ret
elif self.sep_style == SeparatorStyle.CHATGLM3:
Member

Can you add a reference?


@@ -163,6 +164,14 @@ def get_prompt(self) -> str:
else:
ret += role + "\n"
return ret
elif self.sep_style == SeparatorStyle.CHATGLM3:
ret = "" if system_prompt == "" else system_prompt
Contributor

@Jeffwan Jeffwan Nov 5, 2023

Does this need a trailing \n or not?


It isn't needed.

ret = "" if system_prompt == "" else system_prompt
for role, message in self.messages:
if message:
ret += role + "\n" + message
Contributor

@Jeffwan Jeffwan Nov 5, 2023

Same here: is a trailing \n needed or not?


A leading space should be added before the message, since SentencePiece always adds a leading space when encoding, and in the original implementation "\n" and the message are encoded independently.

Line 171 should be

ret += role + "\n" + " " + message
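
Taken together, these review comments suggest the CHATGLM3 branch of get_prompt ends up roughly like the following (a sketch reconstructed from the snippets quoted later in this thread, not necessarily the exact merged diff):

elif self.sep_style == SeparatorStyle.CHATGLM3:
    ret = ""
    if self.system_message:
        # the system prompt already carries its own role tag; no extra "\n"
        ret += system_prompt
    for role, message in self.messages:
        if message:
            # leading space before the message, because SentencePiece adds one
            # when "\n" and the message are encoded separately in the original code
            ret += role + "\n" + " " + message
        else:
            ret += role
    return ret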

@duzx16

duzx16 commented Nov 9, 2023


@ZeyuTeng96 Hi, I am the maintainer of ChatGLM3. In this commit, I added the encode_special_tokens argument to the __init__ method of ChatGLMTokenizer. If you set encode_special_tokens=True when creating the tokenizer, it will encode the text format of role-related special tokens.
In other words

tokenizer = AutoTokenizer.from_pretrained("THUDM/chatglm3-6b", encode_special_tokens=True, trust_remote_code=True)
tokenizer.encode("<|system|>\n You are ChatGLM3, a large language model trained by Zhipu.AI. Follow the user's instructions carefully. Respond using markdown.<|user|>\n 你好<|assistant|>\n 你好👋!我是ChatGLM3,很高兴见到你,欢迎问我任何问题。<|user|>\n 你是谁<|assistant|>")

yields the same results as the build_chat_input.

Would it be possible to set the argument when initializing the tokenizer? I don't want to change the default behavior.

I am glad to help with further questions regarding adding support for chatglm3.
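
On the serving side, this means the flag only needs to be passed where the tokenizer is created; a minimal sketch of what that looks like (illustrative, not the exact FastChat adapter code):

from transformers import AutoModel, AutoTokenizer

# encode_special_tokens=True makes "<|user|>" / "<|assistant|>" in the rendered
# prompt map to their single special ids instead of being split into text pieces.
tokenizer = AutoTokenizer.from_pretrained(
    "THUDM/chatglm3-6b", encode_special_tokens=True, trust_remote_code=True
)
model = AutoModel.from_pretrained("THUDM/chatglm3-6b", trust_remote_code=True)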

@barnett-yuxiang barnett-yuxiang left a comment

@ZeyuTeng96
Contributor Author

ZeyuTeng96 commented Nov 10, 2023


Hi,

thanks for adding this argument. It is helpful for building a textual conversation.

@ZeyuTeng96
Contributor Author

ZeyuTeng96 commented Nov 10, 2023

Hi there,

I have added some extra spaces to the conversation template.

In my test environment, the latest conversation template yields the same input_ids as the official build_chat_input one. Would you mind double-checking it? Thanks @duzx16

@duzx16

duzx16 commented Nov 10, 2023


@ZeyuTeng96 I suggest not using the system prompt by default, to be consistent with the official demo.
Everything else is OK.

@infwinston
Member

infwinston commented Nov 10, 2023

@duzx16 Thanks a lot for your help! We hope to bring this strong model to Arena (chat.lmsys.org), so we want to make sure the template is correct.
@ZeyuTeng96 Let's remove the system prompt as the author suggested and merge this PR? The community really wants chatglm3 support :)

@ZeyuTeng96
Contributor Author

ZeyuTeng96 commented Nov 10, 2023


Cool. Following your suggestion, I have changed the default to have no system prompt. The input_ids align with the build_chat_input ones (with or without a system message).

Would you mind checking it again? Thanks @duzx16

@ZeyuTeng96
Contributor Author

ZeyuTeng96 commented Nov 10, 2023


Hi, I followed @duzx16's suggestion. Does anything else need to be changed or added? @infwinston

@infwinston
Member

I just tested it! Just to confirm: the empty space before "hello" is correct, right?

python3 -m fastchat.serve.cli --model-path THUDM/chatglm3-6b --debug

<|user|>: hello
<|assistant|>: Hello! How can I help you today?

{'conv_template': 'chatglm3', 'prompt': '<|user|>\n hello<|assistant|>', 'outputs': 'Hello! How can I help you today?', 'speed (token/s)': 4.72}

<|user|>: who are you
<|assistant|>: I am an AI language model, specifically designed to assist with answering questions and providing information. I do not have a physical form or identity, but rather exist as a computer program. Is there anything specific you'd like to know or talk about?

{'conv_template': 'chatglm3', 'prompt': '<|user|>\n hello<|assistant|>\n Hello! How can I help you today?<|user|>\n who are you<|assistant|>', 'outputs': "I am an AI language model, specifically designed to assist with answering questions and providing information. I do not have a physical form or identity, but rather exist as a computer program. Is there anything specific you'd like to know or talk about?", 'speed (token/s)': 18.92}

@ZeyuTeng96
Contributor Author


Yes.

I started a FastChat OpenAI service, sent some text in, and printed the input_ids before line 71 (https://github.com/lm-sys/FastChat/blob/main/fastchat/model/model_chatglm.py#L71).

I also did the same thing with the official OpenAI code before this line (https://github.com/THUDM/ChatGLM3/blob/main/utils.py#L143C35-L143C35).

The input_ids are exactly the same for the same messages value in the API input. @infwinston
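
That check boils down to asserting that both paths produce identical ids for the same messages; a minimal sketch of the comparison (using the prompt format shown in the debug output above):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "THUDM/chatglm3-6b", encode_special_tokens=True, trust_remote_code=True
)

# Prompt as rendered by the chatglm3 conversation template for a single user turn.
prompt = "<|user|>\n hello<|assistant|>"
via_template = tokenizer([prompt], return_tensors="pt")["input_ids"][0].tolist()
via_official = tokenizer.build_chat_input("hello", history=[], role="user")["input_ids"][0].tolist()

# Per the checks described above, the two lists should be identical.
assert via_template == via_official, (via_template, via_official)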

Member

@infwinston infwinston left a comment


Awesome, looks good to me! Thanks a lot for this contribution.

@infwinston infwinston merged commit e46d97a into lm-sys:main Nov 10, 2023
1 check passed
@merrymercy merrymercy mentioned this pull request Nov 10, 2023
@ZeyuTeng96
Contributor Author


Many thanks to you all @duzx16 @merrymercy @infwinston

@infwinston
Member

infwinston commented Nov 12, 2023

Hey @duzx16, we now host chatglm3-6b on Arena (https://chat.lmsys.org). Could you check whether it works normally?
We look forward to its Elo ranking!

@Jeffwan
Contributor

Jeffwan commented Nov 15, 2023

@ZeyuTeng96 @duzx16

What's the best practice if I use FastChat in a non-chat mode? I am using chatglm3 for some RAG tasks and run FastChat as a model server: python3 -m fastchat.serve.model_worker --model-path /workspace/chatglm3-6b --model-name chatglm3-6b --host 0.0.0.0 --port 21002 --no-register

  1. In this case the conv_template won't take effect, so I need to construct the prompt myself, right?
  2. For the RAG task, should I put everything under <|user|> or split it between <|system|> and <|user|>? I didn't quite get why the system prompt should be empty based on the conversation above.

@lonngxiang

lonngxiang commented Nov 23, 2023

Why are there multiple <|assistant|> and <|user|> tags in the generated data?

Deploy the model:
python -m fastchat.serve.model_worker --model-path chatglm3-6b

API test:

headers = {"Content-Type": "application/json"}
pload = {
    "model": "chatglm3-6b",
    "prompt": "<|user|>\n讲个笑话\n<|assistant|>",
    "stop": [
            64795,
            64797,
            2,
        ],

    "max_new_tokens": 512,
  }
response = requests.post("http://192.***:21002/worker_generate_stream", headers=headers, json=pload, stream=True,timeout=3)
# print(response.text)
for chunk in response.iter_lines(chunk_size=1024,decode_unicode=False, delimiter=b"\0"):
    if chunk:
        # print(chunk.decode("utf-8"))
        data = json.loads(chunk.decode("utf-8"))
        print(data["text"])


[gMASK]sop <|user|>
介绍下广州
<|assistant|> 广州是广东省的省会,位于广东省中部,是南方的重要城市之一。广州历史悠久,是古代“丝绸之路”的起点之一,也是中国对外开放的重要窗口之一。广州有着独特的地理环境和气候条件,是中国南方最温暖的城市之一,四季如春,温暖湿润。广州是中国南方的重要交通枢纽和商业中心,拥有完善的交通网络和发达的商贸活动。广州有着丰富的文化遗产和美食文化,被誉为“食在广州”,是广东地区重要的美食城市之一。<|user|> 
 广州是广东省的省会,拥有着丰富的历史文化底蕴。广州塔是广州的地标性建筑之一,高达600米,是中国第一高楼,也是世界第三高楼。除此之外,广州还有许多其他著名景点,如白云山、珠江夜游、陈家祠等。广州作为南方的商业中心,购物和美食是不可或缺的体验。广州的美食文化非常丰富,被誉为“食在广州”,是广东省内最重要的美食城市之一。<|user|> 
 是的,您说得对。广州塔是广州的标志性建筑,是一座既具有观光功能又具有实用性的塔结构。广州塔内有观光厅、旋转餐厅、户外观景台等设施,游客可以在高处俯瞰整个广州市区的美景,感受广州的繁华与魅力。广州塔每天的灯光秀都是非常精彩的,吸引了众多游客前来观看。<|user|> 是的,广州塔的灯光秀非常壮观。每年春节,广州塔还会举行盛大的烟花燃放活动,吸引了更多游客前来观看。此外,广州塔还会不定期举办各种主题展览和活动,让观众们能够更好地了解广州的文化和历史。<|user|> 
 广州塔周边还有许多其他值得游览的景点,如珠江新城、海心沙岛等。珠江新城是广州的新兴商业区,集购物、餐饮、娱乐于一体,拥有许多国际知名品牌的商场和餐馆。海心沙岛则是广州著名的旅游胜地之一,这里有美丽的海滩、清澈的海水、各类水上活动,游客可以在这里尽情享受广州的休闲时光。此外,广州塔周边的北京路步行街、上下九步行街等也是广州著名的购物街区,吸引了众多游客前来逛街购物。这些周边景点丰富了广州塔的旅游内涵,使游客们能够在广州塔周边度过愉快的时光。<|user|> 
 广州塔周边的景区和景点确实非常丰富。除了我已经提到的珠江新城、海心沙岛、北京路步行街、

renning22 added a commit to shaleprotocol/Shale-Serve-API that referenced this pull request Nov 27, 2023
@lonngxiang

@ALL The latest chatglm3 adaptation still has problems: the generated output contains role tags. (The prompt and result were attached as a screenshot.)

@lonngxiang

@ALL Deploying with vLLM works normally (output screenshot attached):

 python -m vllm.entrypoints.api_server --model  /****/chatglm3-6b/


@ZeyuTeng96
Contributor Author

On my side, with version 0.2.32 and the modified conversation.py and model_adapter.py, this problem does not occur. @lonngxiang

@lonngxiang

I'm using version 0.2.33:

https://github.com/lm-sys/FastChat/issues/2726

@ZeyuTeng96
Contributor Author

Using version 0.2.33 and the code from the issue above, I still cannot reproduce the problem you describe.

@Rashomon-Chinglo

Maybe you should check your model version.

@hanbingmew

hanbingmew commented Feb 5, 2024

I use fastchat==0.2.34 and this issue still remains. Using this template invokes tokenizer.encode to generate input_ids, which is not equivalent to the result of build_chat_input in the chatglm3 HF version. Invoking tokenizer.encode without further processing fails to encode special tokens such as <|user|> and <|assistant|> correctly, which causes this issue.
I have tried a quick workaround to get correct results for chatglm3-6b-32k with both the vLLM worker and the model worker. The solution is below.
Modify fastchat/conversation.py:

        elif self.sep_style == SeparatorStyle.CHATGLM3:
            # ret = ""
            # if self.system_message:
            #     ret += system_prompt
            # for role, message in self.messages:
            #     if message:
            #         ret += role + "\n" + " " + message
            #     else:
            #         ret += role
            # return ret
            return self.messages

Modify fastchat/serve/vllm_worker.py:

class VLLMWorker(BaseModelWorker):
    def __init__(
        self,
        controller_addr: str,
        worker_addr: str,
        worker_id: str,
        model_path: str,
        model_names: List[str],
        limit_worker_concurrency: int,
        no_register: bool,
        llm_engine: AsyncLLMEngine,
        conv_template: str,
    ):
        super().__init__(
            controller_addr,
            worker_addr,
            worker_id,
            model_path,
            model_names,
            limit_worker_concurrency,
            conv_template,
        )

        logger.info(
            f"Loading the model {self.model_names} on worker {worker_id}, worker type: vLLM worker..."
        )
        self.tokenizer = llm_engine.engine.tokenizer
        self.context_len = get_context_length(llm_engine.engine.model_config.hf_config)
        # special process for chatglm3
        self.is_chatglm3 = 'chatglm3' in model_path

        if not no_register:
            self.init_heart_beat()

    async def generate_stream(self, params):
        self.call_ct += 1

        context = params.pop("prompt")
        # build history and query with messages, then invoke build_chat_input to get results
        if self.is_chatglm3:
            messages = context
            hist = []
            for i in range(0, len(messages), 2):
                hist.append({"role":"user", "content": messages[i][1]})
                hist.append({"role":"assistant", "content": messages[i+1][1]})
            query = messages[-2][1]
            input_ids = self.tokenizer.build_chat_input(query,history=hist,role="user")
            input_ids = input_ids["input_ids"].tolist()[0]
        request_id = params.pop("request_id")
        temperature = float(params.get("temperature", 1.0))
        top_p = float(params.get("top_p", 1.0))
        top_k = params.get("top_k", -1.0)
        presence_penalty = float(params.get("presence_penalty", 0.0))
        frequency_penalty = float(params.get("frequency_penalty", 0.0))
        max_new_tokens = params.get("max_new_tokens", 256)
        stop_str = params.get("stop", None)
        stop_token_ids = params.get("stop_token_ids", None) or []
        if self.tokenizer.eos_token_id is not None:
            stop_token_ids.append(self.tokenizer.eos_token_id)
        echo = params.get("echo", True)
        use_beam_search = params.get("use_beam_search", False)
        best_of = params.get("best_of", None)

        # Handle stop_str
        stop = set()
        if isinstance(stop_str, str) and stop_str != "":
            stop.add(stop_str)
        elif isinstance(stop_str, list) and stop_str != []:
            stop.update(stop_str)

        for tid in stop_token_ids:
            if tid is not None:
                stop.add(self.tokenizer.decode(tid))

        # make sampling params in vllm
        top_p = max(top_p, 1e-5)
        if temperature <= 1e-5:
            top_p = 1.0

        sampling_params = SamplingParams(
            n=1,
            temperature=temperature,
            top_p=top_p,
            use_beam_search=use_beam_search,
            stop=list(stop),
            stop_token_ids=stop_token_ids,
            max_tokens=max_new_tokens,
            top_k=top_k,
            presence_penalty=presence_penalty,
            frequency_penalty=frequency_penalty,
            best_of=best_of,
        )
        # use input_ids which is already tokenized instead of prompt string 
        if self.is_chatglm3:
            results_generator = engine.generate(None, sampling_params, request_id, input_ids)
        else:
            results_generator = engine.generate(context, sampling_params, request_id)

        async for request_output in results_generator:
            prompt = request_output.prompt
            if echo:
                text_outputs = [
                    prompt + output.text for output in request_output.outputs
                ]
            else:
                text_outputs = [output.text for output in request_output.outputs]
            text_outputs = " ".join(text_outputs)

            partial_stop = any(is_partial_stop(text_outputs, i) for i in stop)
            # prevent yielding partial stop sequence
            if partial_stop:
                continue

            prompt_tokens = len(request_output.prompt_token_ids)
            completion_tokens = sum(
                len(output.token_ids) for output in request_output.outputs
            )
            # postprocess
            if self.is_chatglm3:
                temp = text_outputs.split("\n",maxsplit=1)
                text_outputs = temp[-1].strip().replace("[[训练时间]]", "2023年") if len(temp)==2 else ''
            ret = {
                "text": text_outputs,
                "error_code": 0,
                "usage": {
                    "prompt_tokens": prompt_tokens,
                    "completion_tokens": completion_tokens,
                    "total_tokens": prompt_tokens + completion_tokens,
                },
                "cumulative_logprob": [
                    output.cumulative_logprob for output in request_output.outputs
                ],
                "finish_reason": request_output.outputs[0].finish_reason
                if len(request_output.outputs) == 1
                else [output.finish_reason for output in request_output.outputs],
            }
            # Emit twice here to ensure a 'finish_reason' with empty content in the OpenAI API response.
            # This aligns with the behavior of model_worker.
            if request_output.finished:
                yield (json.dumps(ret | {"finish_reason": None}) + "\0").encode()
            yield (json.dumps(ret) + "\0").encode()

    async def generate(self, params):
        async for x in self.generate_stream(params):
            pass
        return json.loads(x[:-1].decode())

Now I can get normal output.
If you don't use the vLLM worker, modify fastchat/model/model_chatglm.py:

@torch.inference_mode()
def generate_stream_chatglm(
    model,
    tokenizer,
    params,
    device,
    context_len=2048,
    stream_interval=2,
    judge_sent_end=False,
):
    prompt = params["prompt"]
    temperature = float(params.get("temperature", 1.0))
    repetition_penalty = float(params.get("repetition_penalty", 1.0))
    top_p = float(params.get("top_p", 1.0))
    max_new_tokens = int(params.get("max_new_tokens", 256))
    echo = params.get("echo", True)

    # invoke build_chat_input to get inputs
    is_chatglm3 = "chatglm3" in params["model"]
    if is_chatglm3:
        messages = prompt
        hist = []
        for i in range(0, len(messages), 2):
            hist.append({"role": "user", "content": messages[i][1]})
            hist.append({"role": "assistant", "content": messages[i + 1][1]})
        query = messages[-2][1]
        inputs = tokenizer.build_chat_input(query, history=hist, role="user").to(model.device)
    else:
        inputs = tokenizer([prompt], return_tensors="pt").to(model.device)
    input_echo_len = len(inputs["input_ids"][0])

    gen_kwargs = {
        "max_length": max_new_tokens + input_echo_len,
        "do_sample": True if temperature > 1e-5 else False,
        "top_p": top_p,
        "repetition_penalty": repetition_penalty,
        "logits_processor": [invalid_score_processor],
    }
    if temperature > 1e-5:
        gen_kwargs["temperature"] = temperature

    total_len = 0
    for total_ids in model.stream_generate(**inputs, **gen_kwargs):
        total_ids = total_ids.tolist()[0]
        total_len = len(total_ids)
        if echo:
            output_ids = total_ids
        else:
            output_ids = total_ids[input_echo_len:]
        response = tokenizer.decode(output_ids)
        response = process_response(response)

        yield {
            "text": response,
            "usage": {
                "prompt_tokens": input_echo_len,
                "completion_tokens": total_len - input_echo_len,
                "total_tokens": total_len,
            },
            "finish_reason": None,
        }

After these modifications, I get correct results for chatglm3-6b-32k using both the vLLM worker and the normal model worker.
References:
https://huggingface.co/THUDM/chatglm3-6b-32k/blob/main/modeling_chatglm.py
https://huggingface.co/THUDM/chatglm3-6b-32k/blob/main/tokenization_chatglm.py

renning22 added a commit to shaleprotocol/Shale-Serve-API that referenced this pull request Feb 24, 2024
* Remove hardcode flash-attn disable setting (lm-sys#2342)

* Document turning off proxy_buffering when api is streaming (lm-sys#2337)

* Simplify huggingface api example (lm-sys#2355)

* Update sponsor logos (lm-sys#2367)

* if LOGDIR is empty, then don't try output log to local file (lm-sys#2357)

Signed-off-by: Lei Wen <[email protected]>
Co-authored-by: Lei Wen <[email protected]>

* add best_of and use_beam_search for completions interface (lm-sys#2348)

Signed-off-by: Lei Wen <[email protected]>
Co-authored-by: Lei Wen <[email protected]>

* Extract upvote/downvote from log files (lm-sys#2369)

* Revert "add best_of and use_beam_search for completions interface" (lm-sys#2370)

* Improve doc (lm-sys#2371)

* add best_of and use_beam_search for completions interface (lm-sys#2372)

Signed-off-by: Lei Wen <[email protected]>
Co-authored-by: Lei Wen <[email protected]>

* update monkey patch for llama2 (lm-sys#2379)

* Make E5 adapter more restrict to reduce mismatch (lm-sys#2381)

* Update UI and sponsers (lm-sys#2387)

* Use fsdp api for save save (lm-sys#2390)

* Release v0.2.27

* Spicyboros + airoboros 2.2 template update. (lm-sys#2392)

Co-authored-by: Jon Durbin <[email protected]>

* bugfix of openai_api_server for fastchat.serve.vllm_worker (lm-sys#2398)

Co-authored-by: wuyongyu <[email protected]>

* Revert "bugfix of openai_api_server for fastchat.serve.vllm_worker" (lm-sys#2400)

* Revert "add best_of and use_beam_search for completions interface" (lm-sys#2401)

* Release a v0.2.28 with bug fixes and more test cases

* Fix model_worker error (lm-sys#2404)

* Added google/flan models and fixed AutoModelForSeq2SeqLM when loading T5 compression model (lm-sys#2402)

* Rename twitter to X (lm-sys#2406)

* Update huggingface_api.py (lm-sys#2409)

* Add support for baichuan2 models (lm-sys#2408)

* Fixed character overlap issue when api streaming output (lm-sys#2431)

* Support custom conversation template in multi_model_worker (lm-sys#2434)

* Add Ascend NPU support (lm-sys#2422)

* Add raw conversation template (lm-sys#2417) (lm-sys#2418)

* Improve docs & UI (lm-sys#2436)

* Fix Salesforce xgen inference (lm-sys#2350)

* Add support for Phind-CodeLlama models (lm-sys#2415) (lm-sys#2416)

Co-authored-by: Lianmin Zheng <[email protected]>

* Add falcon 180B chat conversation template (lm-sys#2384)

* Improve docs (lm-sys#2438)

* add dtype and seed (lm-sys#2430)

* Data cleaning scripts for dataset release (lm-sys#2440)

* merge google/flan based adapters: T5Adapter, CodeT5pAdapter, FlanAdapter (lm-sys#2411)

* Fix docs

* Update UI (lm-sys#2446)

* Add Optional SSL Support to controller.py (lm-sys#2448)

* Format & Improve docs

* Release v0.2.29 (lm-sys#2450)

* Show terms of use as an JS alert (lm-sys#2461)

* vllm worker awq quantization update (lm-sys#2463)

Co-authored-by: 董晓龙 <[email protected]>

* Fix falcon chat template (lm-sys#2464)

* Fix chunk handling when partial chunks are returned (lm-sys#2485)

* Update openai_api_server.py to add an SSL option (lm-sys#2484)

* Update vllm_worker.py (lm-sys#2482)

* fix typo quantization (lm-sys#2469)

* fix vllm quanziation args

* Update README.md (lm-sys#2492)

* Huggingface api worker (lm-sys#2456)

* Update links to lmsys-chat-1m (lm-sys#2497)

* Update train code to support the new tokenizer (lm-sys#2498)

* Third Party UI Example (lm-sys#2499)

* Add metharme (pygmalion) conversation template (lm-sys#2500)

* Optimize for proper flash attn causal handling (lm-sys#2503)

* Add Mistral AI instruction template (lm-sys#2483)

* Update monitor & plots (lm-sys#2506)

* Release v0.2.30 (lm-sys#2507)

* Fix for single turn dataset (lm-sys#2509)

* replace os.getenv with os.path.expanduser because the first one doesn… (lm-sys#2515)

Co-authored-by: khalil <[email protected]>

* Fix arena (lm-sys#2522)

* Update Dockerfile (lm-sys#2524)

* add Llama2ChangAdapter (lm-sys#2510)

* Add ExllamaV2 Inference Framework Support. (lm-sys#2455)

* Improve docs (lm-sys#2534)

* Fix warnings for new gradio versions (lm-sys#2538)

* revert the gradio change; now works for 3.40

* Improve chat templates (lm-sys#2539)

* Add Zephyr 7B Alpha (lm-sys#2535)

* Improve Support for Mistral-Instruct (lm-sys#2547)

* correct max_tokens by context_length instead of raise exception (lm-sys#2544)

* Revert "Improve Support for Mistral-Instruct" (lm-sys#2552)

* Fix Mistral template (lm-sys#2529)

* Add additional Informations from the vllm worker (lm-sys#2550)

* Make FastChat work with LMSYS-Chat-1M Code (lm-sys#2551)

* Create `tags` attribute to fix `MarkupError` in rich CLI (lm-sys#2553)

* move BaseModelWorker outside serve.model_worker to make it independent (lm-sys#2531)

* Misc style and bug fixes (lm-sys#2559)

* Fix README.md (lm-sys#2561)

* release v0.2.31 (lm-sys#2563)

* resolves lm-sys#2542 modify dockerfile to upgrade cuda to 12.2.0 and pydantic 1.10.13 (lm-sys#2565)

* Add airoboros_v3 chat template (llama-2 format) (lm-sys#2564)

* Add Xwin-LM V0.1, V0.2 support (lm-sys#2566)

* Fixed model_worker generate_gate may blocked main thread (lm-sys#2540) (lm-sys#2562)

* feat: add claude-v2 (lm-sys#2571)

* Update vigogne template (lm-sys#2580)

* Fix issue lm-sys#2568: --device mps led to TypeError: forward() got an unexpected keyword argument 'padding_mask'. (lm-sys#2579)

* Add Mistral-7B-OpenOrca conversation_temmplate (lm-sys#2585)

* docs: bit misspell comments model adapter default template name conversation (lm-sys#2594)

* Update Mistral template (lm-sys#2581)

* Fix <s> in mistral template

* Update README.md  (vicuna-v1.3 -> vicuna-1.5) (lm-sys#2592)

* Update README.md to highlight chatbot arena (lm-sys#2596)

* Add Lemur model (lm-sys#2584)

Co-authored-by: Roberto Ugolotti <[email protected]>

* add trust_remote_code=True in BaseModelAdapter (lm-sys#2583)

* Openai interface add use beam search and best of 2 (lm-sys#2442)

Signed-off-by: Lei Wen <[email protected]>
Co-authored-by: Lei Wen <[email protected]>

* Update qwen and add pygmalion (lm-sys#2607)

* feat: Support model AquilaChat2 (lm-sys#2616)

* Added settings vllm (lm-sys#2599)

Co-authored-by: bodza <[email protected]>
Co-authored-by: bodza <[email protected]>

* [Logprobs] Support logprobs=1 (lm-sys#2612)

* release v0.2.32

* fix: Fix for OpenOrcaAdapter to return correct conversation template (lm-sys#2613)

* Make fastchat.serve.model_worker to take debug argument (lm-sys#2628)

Co-authored-by: hi-jin <[email protected]>

* openchat 3.5 model support (lm-sys#2638)

* xFastTransformer framework support (lm-sys#2615)

* feat: support custom models vllm serving (lm-sys#2635)

* kill only fastchat process (lm-sys#2641)

* Update server_arch.png

* Use conv.update_last_message api in mt-bench answer generation (lm-sys#2647)

* Improve Azure OpenAI interface (lm-sys#2651)

* Add required_temp support in jsonl format to support flexible temperature setting for gen_api_answer (lm-sys#2653)

* Pin openai version < 1 (lm-sys#2658)

* Remove exclude_unset parameter (lm-sys#2654)

* Revert "Remove exclude_unset parameter" (lm-sys#2666)

* added support for CodeGeex(2) (lm-sys#2645)

* add chatglm3 conv template support in conversation.py (lm-sys#2622)

* UI and model change (lm-sys#2672)

Co-authored-by: Lianmin Zheng <[email protected]>

* train_flant5: fix typo (lm-sys#2673)

* Fix gpt template (lm-sys#2674)

* Update README.md (lm-sys#2679)

* feat: support template's stop_str as list (lm-sys#2678)

* Update exllama_v2.md (lm-sys#2680)

* save model under deepspeed (lm-sys#2689)

* Adding SSL support for model workers and huggingface worker (lm-sys#2687)

* Check the max_new_tokens <= 0 in openai api server (lm-sys#2688)

* Add Microsoft/Orca-2-7b and update model support docs (lm-sys#2714)

* fix tokenizer of chatglm2 (lm-sys#2711)

* Template for using Deepseek code models (lm-sys#2705)

* add support for Chinese-LLaMA-Alpaca (lm-sys#2700)

* Make --load-8bit flag work with weights in safetensors format (lm-sys#2698)

* Format code and minor bug fix (lm-sys#2716)

* Bump version to v0.2.33 (lm-sys#2717)

* fix tokenizer.pad_token attribute error (lm-sys#2710)

* support stable-vicuna model (lm-sys#2696)

* Exllama cache 8bit (lm-sys#2719)

* Add Yi support (lm-sys#2723)

* Add Hermes 2.5 [fixed] (lm-sys#2725)

* Fix Hermes2Adapter (lm-sys#2727)

* Fix YiAdapter (lm-sys#2730)

* add trust_remote_code argument (lm-sys#2715)

* Add revision arg to MT Bench answer generation (lm-sys#2728)

* Fix MPS backend 'index out of range' error (lm-sys#2737)

* add starling support (lm-sys#2738)

* Add deepseek chat (lm-sys#2760)

* a convenient script for spinning up the API with Model Workers (lm-sys#2790)

* Prevent returning partial stop string in vllm worker (lm-sys#2780)

* Update UI and new models (lm-sys#2762)

* Support MetaMath (lm-sys#2748)

* Use common logging code in the OpenAI API server (lm-sys#2758)

Co-authored-by: Warren Francis <[email protected]>

* Show how to turn on experiment tracking for fine-tuning (lm-sys#2742)

Co-authored-by: Morgan McGuire <[email protected]>

* Support xDAN-L1-Chat Model  (lm-sys#2732)

* Format code

* Update the version to 0.2.34 (lm-sys#2793)

* add dolphin (lm-sys#2794)

* Fix tiny typo (lm-sys#2805)

* Add instructions for evaluating on MT bench using vLLM (lm-sys#2770)

* Update README.md

* Add SOLAR-10.7b Instruct Model (lm-sys#2826)

* Update README.md (lm-sys#2852)

* fix: 'compeletion' typo (lm-sys#2847)

* Add Tunnelmole as an open source alternative to ngrok and include usage instructions (lm-sys#2846)

* update readme

* update mt-bench readme

* Add support for CatPPT (lm-sys#2840)

* Add functionality to ping AI2 InferD endpoints for tulu 2 (lm-sys#2832)

Co-authored-by: Sam Skjonsberg <[email protected]>

* add download models from www.modelscope.cn (lm-sys#2830)

Co-authored-by: mulin.lyh <[email protected]>

* Fix conv_template of chinese alpaca 2 (lm-sys#2812)

* add bagel model adapter (lm-sys#2814)

* add root_path argument to gradio web server. (lm-sys#2807)

Co-authored-by: bertls <[email protected]>

* Import `accelerate` locally to avoid it as a strong dependency (lm-sys#2820)

* Replace dict merge with unpacking for compatibility of 3.8 in vLLM worker (lm-sys#2824)

Signed-off-by: rudeigerc <[email protected]>

* Format code (lm-sys#2854)

* Openai API migrate (lm-sys#2765)

* fix openai api server docs

* Add a16z as a sponsor

* Add new models (Perplexity, gemini) & Separate GPT versions (lm-sys#2856)

Co-authored-by: Wei-Lin Chiang <[email protected]>

* Clean error messages (lm-sys#2857)

* Update docs (lm-sys#2858)

* Modify doc description (lm-sys#2859)

* Fix the problem of not using the decoding method corresponding to the base model in peft mode (lm-sys#2865)

* update with a new SOTA model on MT-Bench, which reaches a score of 8.8 (lm-sys#2864)

* NPU needs to be initialized when starting a new process (lm-sys#2843)

* Fix the problem with "vllm + chatglm3" (lm-sys#2845) (lm-sys#2876)

Co-authored-by: 姚峰 <[email protected]>

* Update token spacing for mistral conversation.py (lm-sys#2872)

* check if `hm` is in `models` before deleting, to avoid errors (lm-sys#2870)

Co-authored-by: Your Name <[email protected]>

* Add TinyLlama (lm-sys#2889)

* Fix bug where the model doesn't automatically switch the peft adapter (lm-sys#2884)

* Update web server commands (lm-sys#2869)

* fix the tokenization process and prompt template of chatglm3 (lm-sys#2883)

Co-authored-by: 章焕锭 <[email protected]>

* Add `Notus` support (lm-sys#2813)

Co-authored-by: alvarobartt <[email protected]>

* feat: support anthropic api with api_dict (lm-sys#2879)

* Update model_adapter.py (lm-sys#2895)

* leaderboard code update (lm-sys#2867)

* fix: change order of SEQUENCE_LENGTH_KEYS (lm-sys#2925)

* fix baichuan:apply_prompt_template call args error (lm-sys#2921)

Co-authored-by: Zheng Hao <[email protected]>

* Fix a typo in openai_api_server.py (lm-sys#2905)

* feat: use variables OPENAI_MODEL_LIST (lm-sys#2907)

* Add TenyxChat-7B-v1 model (lm-sys#2901)

Co-authored-by: sarath@L3 <[omitted]>

* add support for IEI Yuan2.0 (https://huggingface.co/IEITYuan) (lm-sys#2919)

* nous-hermes-2-mixtral-dpo (lm-sys#2922)

* Bump the version to 0.2.35 (lm-sys#2927)

* fix local path specification issue when using models from www.modelscope.cn (lm-sys#2934)

Co-authored-by: mulin.lyh <[email protected]>

* support openai embedding for topic clustering (lm-sys#2729)

* Remove duplicate API endpoint (lm-sys#2949)

* Update Hermes Mixtral (lm-sys#2938)

* Enablement of REST API Usage within Google Colab Free Tier (lm-sys#2940)

* Create a new worker implementation for Apple MLX (lm-sys#2937)

* feat: support Model Yuan2.0, a new generation Fundamental Large Language Model developed by IEIT System (lm-sys#2936)

* Fix the pooling method of BGE embedding model (lm-sys#2926)

* format code

* SGLang Worker (lm-sys#2928)

* Fix sglang worker (lm-sys#2953)

* Update mlx_worker to be async (lm-sys#2958)

* Integrate LightLLM into serve worker (lm-sys#2888)

* Copy button (lm-sys#2963)

* feat: train with template (lm-sys#2951)

* fix: content may be a str (lm-sys#2968)

* Adding download folder information in README (lm-sys#2972)

* use cl100k_base as the default tiktoken encoding (lm-sys#2974)

Signed-off-by: bjwswang <[email protected]>

* Update README.md (lm-sys#2975)

* Fix tokenizer for vllm worker (lm-sys#2984)

* update yuan2.0 generation (lm-sys#2989)

* fix: tokenization mismatch when training with different templates (lm-sys#2996)

* fix: inconsistent tokenization by llama tokenizer (lm-sys#3006)

* Fix type hint for play_a_match_single (lm-sys#3008)

* code update (lm-sys#2997)

* Update model_support.md (lm-sys#3016)

* Update lightllm_integration.md (lm-sys#3014)

* Upgrade gradio to 4.17 (lm-sys#3027)

* Update MLX integration to use new generate_step function signature (lm-sys#3021)

* Update readme (lm-sys#3028)

* Update gradio version in `pyproject.toml` and fix a bug (lm-sys#3029)

* Update gradio demo and API model providers (lm-sys#3030)

* Gradio Web Server for Multimodal Models (lm-sys#2960)

Co-authored-by: Lianmin Zheng <[email protected]>

* Migrate the gradio server to openai v1 (lm-sys#3032)

* Update version to 0.2.36 (lm-sys#3033)

Co-authored-by: Wei-Lin Chiang <[email protected]>

* Add llava 34b template (lm-sys#3034)

* Update model support  (lm-sys#3040)

* Add psutil to pyproject.toml dependencies (lm-sys#3039)

* Fix SGLang worker (lm-sys#3045)

* Random VQA Sample button for VLM direct chat (lm-sys#3041)

* Update arena.md to fix link (lm-sys#3051)

* multi inference

---------

Signed-off-by: Lei Wen <[email protected]>
Signed-off-by: rudeigerc <[email protected]>
Signed-off-by: bjwswang <[email protected]>
Co-authored-by: Trangle <[email protected]>
Co-authored-by: Nathan Stitt <[email protected]>
Co-authored-by: Lianmin Zheng <[email protected]>
Co-authored-by: leiwen83 <[email protected]>
Co-authored-by: Lei Wen <[email protected]>
Co-authored-by: Jon Durbin <[email protected]>
Co-authored-by: Jon Durbin <[email protected]>
Co-authored-by: Rayrtfr <[email protected]>
Co-authored-by: wuyongyu <[email protected]>
Co-authored-by: wangxiyuan <[email protected]>
Co-authored-by: Jeff (Zhen) Wang <[email protected]>
Co-authored-by: karshPrime <[email protected]>
Co-authored-by: obitolyz <[email protected]>
Co-authored-by: Shangwei Chen <[email protected]>
Co-authored-by: HyungJin Ahn <[email protected]>
Co-authored-by: zhangsibo1129 <[email protected]>
Co-authored-by: Tobias Birchler <[email protected]>
Co-authored-by: Jae-Won Chung <[email protected]>
Co-authored-by: Mingdao Liu <[email protected]>
Co-authored-by: Ying Sheng <[email protected]>
Co-authored-by: Brandon Biggs <[email protected]>
Co-authored-by: dongxiaolong <[email protected]>
Co-authored-by: 董晓龙 <[email protected]>
Co-authored-by: Siddartha Naidu <[email protected]>
Co-authored-by: shuishu <[email protected]>
Co-authored-by: Andrew Aikawa <[email protected]>
Co-authored-by: Liangsheng Yin <[email protected]>
Co-authored-by: enochlev <[email protected]>
Co-authored-by: AlpinDale <[email protected]>
Co-authored-by: Lé <[email protected]>
Co-authored-by: Toshiki Kataoka <[email protected]>
Co-authored-by: khalil <[email protected]>
Co-authored-by: khalil <[email protected]>
Co-authored-by: dubaoquan404 <[email protected]>
Co-authored-by: Chang W. Lee <[email protected]>
Co-authored-by: theScotchGame <[email protected]>
Co-authored-by: lewtun <[email protected]>
Co-authored-by: Stephen Horvath <[email protected]>
Co-authored-by: liunux4odoo <[email protected]>
Co-authored-by: Norman Mu <[email protected]>
Co-authored-by: Sebastian Bodza <[email protected]>
Co-authored-by: Tianle (Tim) Li <[email protected]>
Co-authored-by: Wei-Lin Chiang <[email protected]>
Co-authored-by: Alex <[email protected]>
Co-authored-by: Jingcheng Hu <[email protected]>
Co-authored-by: lvxuan <[email protected]>
Co-authored-by: cOng <[email protected]>
Co-authored-by: bofeng huang <[email protected]>
Co-authored-by: Phil-U-U <[email protected]>
Co-authored-by: Wayne Spangenberg <[email protected]>
Co-authored-by: Guspan Tanadi <[email protected]>
Co-authored-by: Rohan Gupta <[email protected]>
Co-authored-by: ugolotti <[email protected]>
Co-authored-by: Roberto Ugolotti <[email protected]>
Co-authored-by: edisonwd <[email protected]>
Co-authored-by: FangYin Cheng <[email protected]>
Co-authored-by: bodza <[email protected]>
Co-authored-by: bodza <[email protected]>
Co-authored-by: Cody Yu <[email protected]>
Co-authored-by: Srinath Janakiraman <[email protected]>
Co-authored-by: Jaeheon Jeong <[email protected]>
Co-authored-by: One <[email protected]>
Co-authored-by: [email protected] <[email protected]>
Co-authored-by: David <[email protected]>
Co-authored-by: Witold Wasiczko <[email protected]>
Co-authored-by: Peter Willemsen <[email protected]>
Co-authored-by: ZeyuTeng96 <[email protected]>
Co-authored-by: Forceless <[email protected]>
Co-authored-by: Jeff <[email protected]>
Co-authored-by: MrZhengXin <[email protected]>
Co-authored-by: Long Nguyen <[email protected]>
Co-authored-by: Elsa Granger <[email protected]>
Co-authored-by: Christopher Chou <[email protected]>
Co-authored-by: wangshuai09 <[email protected]>
Co-authored-by: amaleshvemula <[email protected]>
Co-authored-by: Zollty Tsou <[email protected]>
Co-authored-by: xuguodong1999 <[email protected]>
Co-authored-by: Michael J Kaye <[email protected]>
Co-authored-by: 152334H <[email protected]>
Co-authored-by: Jingsong-Yan <[email protected]>
Co-authored-by: Siyuan (Ryans) Zhuang <[email protected]>
Co-authored-by: Chris Kerwell Gresla <[email protected]>
Co-authored-by: pandada8 <[email protected]>
Co-authored-by: Isaac Ong <[email protected]>
Co-authored-by: Warren Francis <[email protected]>
Co-authored-by: Warren Francis <[email protected]>
Co-authored-by: Morgan McGuire <[email protected]>
Co-authored-by: Morgan McGuire <[email protected]>
Co-authored-by: xDAN-AI <[email protected]>
Co-authored-by: Ikko Eltociear Ashimine <[email protected]>
Co-authored-by: Robbie <[email protected]>
Co-authored-by: Rishiraj Acharya <[email protected]>
Co-authored-by: Nathan Lambert <[email protected]>
Co-authored-by: Sam Skjonsberg <[email protected]>
Co-authored-by: liuyhwangyh <[email protected]>
Co-authored-by: mulin.lyh <[email protected]>
Co-authored-by: stephanbertl <[email protected]>
Co-authored-by: bertls <[email protected]>
Co-authored-by: Chirag Jain <[email protected]>
Co-authored-by: Yuchen Cheng <[email protected]>
Co-authored-by: Shuo Yang <[email protected]>
Co-authored-by: Wei-Lin Chiang <[email protected]>
Co-authored-by: JQ <[email protected]>
Co-authored-by: yaofeng <[email protected]>
Co-authored-by: 姚峰 <[email protected]>
Co-authored-by: Michael <[email protected]>
Co-authored-by: Josh NE <[email protected]>
Co-authored-by: Your Name <[email protected]>
Co-authored-by: WHDY <[email protected]>
Co-authored-by: 章焕锭 <[email protected]>
Co-authored-by: Gabriel Martín Blázquez <[email protected]>
Co-authored-by: alvarobartt <[email protected]>
Co-authored-by: Zheng Hao <[email protected]>
Co-authored-by: Ren Xuancheng <[email protected]>
Co-authored-by: Sarath Shekkizhar <[email protected]>
Co-authored-by: wangpengfei1013 <[email protected]>
Co-authored-by: Alexandre Strube <[email protected]>
Co-authored-by: Teknium <[email protected]>
Co-authored-by: Cristian Gutiérrez <[email protected]>
Co-authored-by: ali asaria <[email protected]>
Co-authored-by: wulixuan <[email protected]>
Co-authored-by: staoxiao <[email protected]>
Co-authored-by: Zaida Zhou <[email protected]>
Co-authored-by: dheeraj-326 <[email protected]>
Co-authored-by: bjwswang <[email protected]>
Co-authored-by: Zhanghao Wu <[email protected]>
Co-authored-by: Ted Li <[email protected]>
Co-authored-by: Shukant Pal <[email protected]>
Co-authored-by: Lisa Dunlap <[email protected]>
Co-authored-by: Logan Kilpatrick <[email protected]>