ERROR - failed to load model from ./models/gpt4all-lora-quantized-ggml.bin #96
Comments
What are your system specs? OS? CPU? RAM? Did you download the model correctly? It may be corrupted; try downloading it with a browser, then copy it to the /models/ folder.
The CPU seems to support AVX2. I'm running this in an Ubuntu VM and it seems to load just fine.
Well, I did a git pull just now on the VM, and I can't load it anymore either.
@ParisNeo something is borked again. Maybe pyllamacpp... my hate towards Python keeps growing...
You got "bad magic" too.
Most likely the model-loading script was updated. This UI relies on and depends on other packages/repos, so when they get updated it takes some time for the main dev of this repo to look through the code and fix it. He's on vacation right now, so just hang tight; it will be fixed eventually.
@Datou Hi, try this model; it works. The original model is messed up, I don't know why.
For me it loaded after I pulled the newest changes from git and redownloaded the model.
Sorry guys, I have a very slow connection these days and I lost the connection yesterday. It should work now. If the problem is solved, please make sure to close the issue.
Current Behavior
The default model file (gpt4all-lora-quantized-ggml.bin) already exists. Do you want to replace it? Press B to download it with a browser (faster). [Y,N,B]?N
Skipping download of model file...
Cleaning tmp folder
Virtual environment created and packages installed successfully.
Launching application...
Checking discussions database...
[2023-04-18 10:11:49,423] {model.py:73} INFO - Loading model ...
llama_model_load: loading model from './models/gpt4all-lora-quantized-ggml.bin' - please wait ...
llama_model_load: invalid model file './models/gpt4all-lora-quantized-ggml.bin' (bad magic)
[2023-04-18 10:11:49,424] {model.py:75} ERROR - failed to load model from ./models/gpt4all-lora-quantized-ggml.bin
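The "bad magic" error in the log above means the first four bytes of the model file don't match the magic number the loader expects, which usually points to a truncated download or a file in a newer/older format than the loader understands. As a quick sanity check, you can read those bytes yourself; this is a generic sketch, and the magic values below are the ones commonly used by the ggml/llama.cpp family, not something confirmed in this thread:

```python
import struct

# Magic numbers used by ggml-family model files (assumed values from the
# llama.cpp ecosystem): stored as a little-endian uint32 at offset 0.
KNOWN_MAGICS = {
    0x67676D6C: "ggml (unversioned)",
    0x67676D66: "ggmf (versioned)",
    0x67676A74: "ggjt (mmap-able)",
}

def identify_model(path: str) -> str:
    """Read the first 4 bytes of a model file and report its format."""
    with open(path, "rb") as f:
        raw = f.read(4)
    if len(raw) < 4:
        return "file too short (truncated download?)"
    (magic,) = struct.unpack("<I", raw)
    return KNOWN_MAGICS.get(magic, f"unknown magic 0x{magic:08x} (bad magic)")
```

If this reports an unknown magic for a freshly downloaded file, the download is likely corrupt; if it reports a format the installed loader predates, updating the loader (or re-downloading a matching model) is the usual fix.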
Steps to Reproduce
run webui.bat
Screenshots