File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__ pydantic.error_wrappers.ValidationError: 1 validation error for LlamaCppEmbeddings #461
Comments
Did you download and add a model?
@sandyRS: as stated in the
Yes, I have downloaded the models and set the same path in the .env file, but I am still seeing the issue. My ENV file - Can you please help me resolve this error?
Had the same issue. Moving the downloaded models into I see that you have
@albertas - thanks for the reply. I tried the recommended steps and am seeing a similar error. My ENV File -
Similar issue. I tried both putting the model directly in the .\models subfolder and in its own folder inside .\models. The only way I can get it to work is with the originally listed model, which I'd rather not use since I have a 3090. It's most likely a configuration issue in the .env file, but I'm not sure what to change when switching to a different model.
I ran into the same problem. Using an absolute path for the model path resolved the issue.
Thanks. That fixed it for me :-)
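For anyone hitting the same thing: a relative path like ./models/ggml-model-q4_0.bin is resolved against whatever directory you launch the script from, which is why the absolute path fixes it. A minimal stdlib sketch (the model filename is just the one used in this thread) that converts the relative path into an absolute one before handing it to the embeddings class:

```python
from pathlib import Path

# Expand the relative model path into an absolute one so it no
# longer depends on the directory the script is launched from.
model_path = Path("models/ggml-model-q4_0.bin").resolve()

print(model_path.is_absolute())  # True
```

You would then pass str(model_path) instead of the raw "./models/..." string.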
For me, I used an absolute path in "privateGPT.py". Here is my config in .env: PERSIST_DIRECTORY=db
Changing the path in privateGPT.py also does not fix the issue. Please help.
Any solution found to this yet?
Same here on Mac; using the absolute path didn't fix the problem.
If you set the model path correctly in the .env file and remove the extra argument n_ctx=1000, it works as expected.
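The ValidationError in the title is what pydantic raises when the constructor gets input it can't accept. A toy sketch (FakeEmbeddings is a made-up stand-in, not the real langchain class) showing the same failure mode with a plain dataclass: a class that only declares model_path rejects an unexpected n_ctx argument, which is why dropping it helps:

```python
from dataclasses import dataclass

# Toy stand-in for the pydantic-based LlamaCppEmbeddings: it only
# declares model_path, so an unexpected n_ctx argument is rejected
# at construction time, much like pydantic's ValidationError.
@dataclass
class FakeEmbeddings:
    model_path: str

try:
    FakeEmbeddings(model_path="models/ggml-model-q4_0.bin", n_ctx=1000)
except TypeError as exc:
    print("rejected extra argument:", exc)
```

The real class validates with pydantic rather than a dataclass, but the lesson is the same: only pass arguments the installed version of the wrapper actually declares.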
I am seeing the error below when I run ingest.py. Any thoughts on how I can resolve it? Kindly advise.
Error -
error loading model: this format is no longer supported (see ggerganov/llama.cpp#1305)
llama_init_from_file: failed to load model
Traceback (most recent call last):
File "/Users/FBT/Desktop/Projects/privategpt/privateGPT/ingest.py", line 39, in <module>
main()
File "/Users/FBT/Desktop/Projects/privategpt/privateGPT/ingest.py", line 30, in main
llama = LlamaCppEmbeddings(model_path="./models/ggml-model-q4_0.bin")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "pydantic/main.py", line 341, in pydantic.main.BaseModel.__init__
pydantic.error_wrappers.ValidationError: 1 validation error for LlamaCppEmbeddings
root
Could not load Llama model from path: ./models/ggml-model-q4_0.bin. Received error (type=value_error)
My ENV file -
PERSIST_DIRECTORY=db
MODEL_TYPE=GPT4All
MODEL_PATH=models/ggml-gpt4all-j-v1.3-groovy.bin
EMBEDDINGS_MODEL_NAME=/Users/FBT/Desktop/Projects/privategpt/privateGPT/models/ggml-model-q4_0.bin
MODEL_N_CTX=1000
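Since the ValidationError above ultimately means the file at the given path could not be loaded, a quick pre-flight check can surface a bad path directly. A stdlib sketch (the variable names MODEL_PATH and EMBEDDINGS_MODEL_NAME are taken from the .env file above; the .env filename itself is an assumption about your setup) that parses the .env file and reports any model path that does not exist on disk:

```python
from pathlib import Path

def check_env_paths(env_file=".env"):
    """Parse KEY=VALUE lines and report model paths that are missing on disk."""
    missing = []
    for line in Path(env_file).read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, value = line.split("=", 1)
        if key in ("MODEL_PATH", "EMBEDDINGS_MODEL_NAME") and not Path(value).is_file():
            missing.append((key, value))
    return missing
```

Running this before ingest.py would flag a wrong MODEL_PATH or EMBEDDINGS_MODEL_NAME immediately, instead of letting it surface as a pydantic ValidationError deep inside the constructor.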