error loading model #2135
Comments
Same issue here.
You have an old version of the ggml model; download a new one. The error says it directly: error loading model: this format is no longer supported (see ggerganov/llama.cpp#1305)
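If you are not sure which format a local .bin file uses, here is a minimal sketch (mine, not from this thread) that reads the magic and version llama.cpp writes at the start of the file; the magic constants below are the ones llama.cpp used at the time, and ggjt v1 is the pre-#1405 layout that builds after llama.cpp#1305 reject:

```python
# Sketch: report the GGML container magic and version of a local .bin file.
# Assumes the file was written by llama.cpp-era converters (little-endian u32s).
import struct
import sys

MAGICS = {
    0x67676D6C: "ggml (unversioned, oldest layout)",
    0x67676D66: "ggmf (versioned)",
    0x67676A74: "ggjt (versioned, mmap-able)",
}

with open(sys.argv[1], "rb") as f:
    magic = struct.unpack("<I", f.read(4))[0]
    label = MAGICS.get(magic, f"unknown magic 0x{magic:08x}")
    if magic in (0x67676D66, 0x67676A74):
        version = struct.unpack("<I", f.read(4))[0]
        print(f"{label}, version {version}")
    else:
        print(label)
```

A file that reports ggjt version 1 (like the one in the log below) needs to be re-downloaded or re-quantized before a current build will load it.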
@strnad which version can we download? Every model I try to download gives the same error.
Same problem with "WizardLM-13B-Uncensored-GGML".
@gandolfi974 @valdesguefa try downloading stable-vicuna-13B.ggml.q5_1.bin.
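If it helps, a short sketch of pulling that file with huggingface_hub; the repo_id below is an assumption (use whichever repository actually hosts the q5_1 file), only the filename comes from the suggestion above:

```python
# Sketch: download the suggested quantization into the local HF cache.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="TheBloke/stable-vicuna-13B-GGML",   # assumed repository
    filename="stable-vicuna-13B.ggml.q5_1.bin",  # file named above
)
print("downloaded to:", path)
```

Then move or symlink the file into the webui's models/ directory, matching the layout shown in the log below.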
This issue has been closed due to inactivity for 30 days. If you believe it is still relevant, please leave a comment below.
Describe the bug
Hi, I tried to follow the manual installation steps but couldn't get the server to run. After some online research, I thought the problem might be the PyTorch installation, so I installed it with: pip install -U --pre torch torchvision -f https://download.pytorch.org/whl/nightly/cpu/torch_nightly.html.
Now the server runs; I verified it with some simple LLM models.
Then I tested it with a ggml 13B model, and here is the error:
llama.cpp: loading model from models/eachadea_ggml-vicuna-13b-1.1/ggml-old-vic13b-q4_0.bin
llama_model_load_internal: format = ggjt v1 (pre #1405)
llama_model_load_internal: n_vocab = 32000
llama_model_load_internal: n_ctx = 2048
llama_model_load_internal: n_embd = 5120
llama_model_load_internal: n_mult = 256
llama_model_load_internal: n_head = 40
llama_model_load_internal: n_layer = 40
llama_model_load_internal: n_rot = 128
llama_model_load_internal: ftype = 4 (mostly Q4_1, some F16)
llama_model_load_internal: n_ff = 13824
llama_model_load_internal: n_parts = 1
llama_model_load_internal: model size = 13B
error loading model: this format is no longer supported (see ggerganov/llama.cpp#1305)
llama_init_from_file: failed to load model
Exception ignored in: <function LlamaCppModel.__del__ at 0x13c546200>
Traceback (most recent call last):
  File "/Users/yingxiao.kong/text-generation-webui/modules/llamacpp_model.py", line 23, in __del__
    self.model.__del__()
AttributeError: 'LlamaCppModel' object has no attribute 'model'
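For what it's worth, the trailing AttributeError looks like a secondary effect: the load failed before self.model was ever assigned, so the destructor trips over the missing attribute. A minimal sketch of a guarded destructor, with the class structure assumed rather than taken from the webui source:

```python
# Sketch: guard __del__ so a failed load does not raise a second error.
# The attribute name self.model matches the traceback; everything else is assumed.
class LlamaCppModel:
    def __init__(self):
        self.model = None  # assigned only after llama.cpp loads the weights

    def __del__(self):
        model = getattr(self, "model", None)
        if model is not None:
            model.__del__()
```

This would not fix the format error itself, only the noisy exception that follows it.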
Is there an existing issue for this?
Reproduction
To reproduce it, follow all the manual installation steps in the instructions, but replace the PyTorch installation step with: pip install -U --pre torch torchvision -f https://download.pytorch.org/whl/nightly/cpu/torch_nightly.html
Screenshot
No response
Logs
System Info