chatllm: fix loading of chats after #2676 (#2693)

Merged: 2 commits into main on Jul 19, 2024

Conversation

@cebtenzzre (Member) commented on Jul 18, 2024

After #2676, attempting to load old chats crashes GPT4All. This is because the values of LLModelType, which are serialized to disk, have changed. The new builds of GPT4All misinterpret the old values of LLModelType and read bad data (Llama is loaded as ChatAPI).
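
As an illustration of the failure mode (a minimal sketch; the enumerator names and exact values here are assumptions based on this description, not the actual GPT4All diff): removing the GPT-J entry from an implicitly numbered enum shifts every later value down, so the raw value an old build wrote for Llama now decodes as the API model type.

```cpp
#include <cstdint>
#include <iostream>

// Values as serialized by builds before #2676 (GPT-J still present).
enum class OldLLModelType : int32_t { GPTJ = 0, LLAMA = 1, API = 2 };

// After #2676 dropped GPT-J, the implicit numbering shifted down by one.
enum class NewLLModelType : int32_t { LLAMA = 0, API = 1 };

int main() {
    // An old chat file stores Llama as the raw value 1...
    int32_t onDisk = static_cast<int32_t>(OldLLModelType::LLAMA);

    // ...which a new build reads back and misinterprets as ChatAPI.
    auto misread = static_cast<NewLLModelType>(onDisk);
    std::cout << std::boolalpha
              << (misread == NewLLModelType::API) << '\n'; // prints true
}
```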

This PR adds back the old enumeration values and tries to make it as clear as possible that they are not arbitrary and should not be changed.
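
Concretely, the shape of the fix is to pin each enumerator to its historical value and say so in the code (a sketch under the same assumed names as above, not the exact diff):

```cpp
#include <cstdint>

// These values are written to disk as part of the chat format.
// They are NOT arbitrary: never renumber or reorder them.
enum class LLModelType : int32_t {
    GPTJ  = 0, // model no longer supported; value kept for old chats
    LLAMA = 1,
    API   = 2,
};
```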

Serialization for the GPT-J model type is also restored so GPT4All does not hit Q_UNREACHABLE on exit if there is an existing chat with one of these models.
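
The crash on exit comes from the save path: switching over the model type with no GPT-J case falls through to Q_UNREACHABLE (a real Qt macro that is undefined behavior when reached in release builds). A sketch of that path with a hypothetical serializer function; only Q_UNREACHABLE and the general pattern are taken from the description:

```cpp
#include <QtGlobal> // Q_UNREACHABLE
#include <cstdint>

enum class LLModelType : int32_t { GPTJ = 0, LLAMA = 1, API = 2 };

// Hypothetical serializer. Before this PR the GPTJ case was missing,
// so saving a pre-existing GPT-J chat on exit reached Q_UNREACHABLE.
int32_t serializeModelType(LLModelType type) {
    switch (type) {
    case LLModelType::GPTJ:  return 0; // restored by this PR
    case LLModelType::LLAMA: return 1;
    case LLModelType::API:   return 2;
    }
    Q_UNREACHABLE();
}
```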

This fixes a regression in commit ca72428 ("Remove support for GPT-J models. (#2676)").

Signed-off-by: Jared Van Bortel <[email protected]>
@cebtenzzre requested a review from manyoso on July 18, 2024 at 21:26
@cebtenzzre marked this pull request as draft on July 18, 2024 at 21:34
Otherwise, we will hit Q_UNREACHABLE and crash.

Signed-off-by: Jared Van Bortel <[email protected]>
@cebtenzzre changed the title from "chatllm: restore original values of LLModelType" to "chatllm: fix loading of chats after #2676" on Jul 18, 2024
@cebtenzzre marked this pull request as ready for review on July 18, 2024 at 21:40
@manyoso merged commit 56d5a23 into main on Jul 19, 2024
6 of 12 checks passed