
Mistral models output gibberish #6

Closed
TheLounger opened this issue Dec 5, 2023 · 1 comment
TheLounger commented Dec 5, 2023

Testing on the oobabooga webui, where support is being implemented here.
Llama-2 models (13B, 2-bit/4-bit) work as expected.

Tested models:

Typical output:

tsengalb99 (Contributor) commented:

I'm pretty sure this is because the webui repo uses the Llama tokenizer, while Mistral uses a different tokenizer. If you use the Mistral tokenizer / AutoTokenizer, you should get reasonable output. For example, when running interactive_gen.py (our "chat" script) with 4-bit Mistral:

Please enter your prompt or 'quit' (without quotes) to quit: Call me Ishmael
Setting `pad_token_id` to `eos_token_id`:2 for open-end generation.

Model Output:  Call me Ishmael.

I’m an avid reader of Moby Dick, a book that I read every year or so. It’s one of my favorite books, and the reason for that is simple: Ishmael is my alter ego.

In fact, I
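Since the fix is just loading the tokenizer that matches the model, here is a minimal sketch of what that looks like with Hugging Face `transformers`. The model ID is an assumption; substitute whichever base model your quantized checkpoint was derived from.

```python
# Minimal sketch, not the repo's exact code: load the tokenizer that
# matches the quantized model instead of hard-coding the Llama tokenizer.
from transformers import AutoTokenizer

# Assumption: the quantized checkpoint was derived from Mistral-7B-v0.1;
# replace this with the actual base model ID.
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")

prompt = "Call me Ishmael"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
# input_ids then goes to the quantized model's generate() exactly as
# before; only the tokenizer changes.
```

With the Llama tokenizer, Mistral's vocabulary maps to the wrong token IDs, which is why the generations come out as gibberish rather than errors.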
