Falcon 40B. Is there any way to run it with LocalAI? #1368
-
Is there any way to run the Falcon 40B model with LocalAI? I tried these models and backends without any success. The variants I tried in my model config.yml file:

- Model = wizardlm-uncensored-falcon-40b.ggccv1.q4_0.bin, backend = falcon
- Model = wizardlm-uncensored-falcon-40b.ggccv1.q4_0.bin, backend = falcon-ggml
- Model = wizardlm-uncensored-falcon-40b.ggccv1.q4_0.bin, backend line commented out
- Model = falcon-40b-instruct.ggccv1.q4_0.bin, backend = falcon
- Model = falcon-40b-instruct.ggccv1.q4_0.bin, backend = falcon-ggml
- Model = falcon-40b-instruct.ggccv1.q4_0.bin, without backend in config
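Each variant followed the usual LocalAI model-definition layout; the sketch below shows the first combination, where the context_size and threads values are only illustrative assumptions, not the exact values from my file:

```yaml
# models/falcon-40b.yaml - minimal sketch of one attempted variant
name: falcon-40b
backend: falcon            # also tried: falcon-ggml, or the line commented out
parameters:
  model: wizardlm-uncensored-falcon-40b.ggccv1.q4_0.bin
context_size: 2048         # illustrative value
threads: 8                 # illustrative value
```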
LocalAI version:
Environment, CPU architecture, OS, and Version:
Describe the bug
To Reproduce
Expected behavior
Logs
-
@mudler do you have any idea what is wrong?
-
@netandreus did you try with Falcon GGUF files? GGML files are quite outdated now. Falcon should work with the default llama-cpp backend as of now. Also, which version of LocalAI are you trying this with?
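For example, with a GGUF file the model definition can stay minimal and rely on the default backend; the sketch below is only an illustration, and the .gguf file name is an example rather than a specific release:

```yaml
# models/falcon-40b-gguf.yaml - sketch only, the .gguf file name is an example
name: falcon-40b-gguf
parameters:
  model: falcon-40b-instruct.Q4_K_M.gguf   # example quantized GGUF file
context_size: 2048
# no "backend:" line here, so the default llama-cpp backend is used
```

With a layout like this, switching quantizations only means changing the model file name.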
-
Here are my test results from today:

Not tested: