
Can compile, run, and load the models, but I have some issues with gRPC when actually running them #1701

Open
A-lx-A opened this issue Feb 12, 2024 · 2 comments
Labels: bug (Something isn't working), unconfirmed

Comments


A-lx-A commented Feb 12, 2024

./local-ai --models-path models/ --context-size 4000 --threads 4
9:09PM DBG no galleries to load
9:09PM INF Starting LocalAI using 4 threads, with models path: models/
9:09PM INF LocalAI version: v2.7.0-27-g3875e5e (3875e5e)
9:09PM INF Preloading models from models/
9:09PM INF Model name: mistral

┌───────────────────────────────────────────────────┐
│                   Fiber v2.50.0                   │
│               http://127.0.0.1:8080               │
│       (bound on host 0.0.0.0 and port 8080)       │
│                                                   │
│ Handlers ............ 73  Processes ........... 1 │
│ Prefork ....... Disabled  PID ............. 19782 │
└───────────────────────────────────────────────────┘

Okay, so after compiling the project, when I try to run it I get this issue:

9:14PM INF Trying to load the model 'misty.gguf' with all the available backends: llama-cpp, llama-ggml, llama, gpt4all, bert-embeddings, rwkv, whisper, stablediffusion, tinydream, piper
9:14PM INF [llama-cpp] Attempting to load
9:14PM INF Loading model 'misty.gguf' with backend llama-cpp
9:14PM ERR Failed starting/connecting to the gRPC service: rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:46401: connect: connection refused"
9:14PM INF [llama-cpp] Fails: grpc service not ready

Figuring this is just what happens when you try to do things manually, I tried the Docker image with code-llama.gguf instead:

docker run -ti -p 8080:8080 localai/localai:v2.8.0-ffmpeg-core # Note I have downloaded the model previously
@@@@@
Skipping rebuild
@@@@@
If you are experiencing issues with the pre-compiled builds, try setting REBUILD=true
If you are still experiencing issues with the build, try setting CMAKE_ARGS and disable the instructions set as needed:
CMAKE_ARGS="-DLLAMA_F16C=OFF -DLLAMA_AVX512=OFF -DLLAMA_AVX2=OFF -DLLAMA_FMA=OFF"
see the documentation at: https://localai.io/basics/build/index.html
Note: See also #288
@@@@@
CPU info:
model name : Intel(R) Core(TM) i7-7500U CPU @ 2.70GHz
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb pti ssbd ibrs ibpb stibp tpr_shadow flexpriority ept vpid ept_ad fsgsbase tsc_adjust sgx bmi1 avx2 smep bmi2 erms invpcid mpx rdseed adx smap clflushopt intel_pt xsaveopt xsavec xgetbv1 xsaves dtherm ida arat pln pts hwp hwp_notify hwp_act_window hwp_epp vnmi md_clear flush_l1d arch_capabilities
CPU: AVX found OK
CPU: AVX2 found OK
CPU: no AVX512 found
@@@@@
9:17PM DBG no galleries to load
9:17PM INF Starting LocalAI using 4 threads, with models path: /build/models
9:17PM INF LocalAI version: v2.8.0 (ef1306f)
9:17PM INF Preloading models from /build/models

And I get this:

Trying to load the model 'codellama-7b-gguf' with all the available backends: llama-cpp, llama-ggml, llama, gpt4all, bert-embeddings, rwkv, whisper, stablediffusion, tinydream, piper, /build/backend/python/sentencetransformers/run.sh, /build/backend/python/transformers-musicgen/run.sh, /build/backend/python/diffusers/run.sh, /build/backend/python/coqui/run.sh, /build/backend/python/exllama2/run.sh, /build/backend/python/transformers/run.sh, /build/backend/python/vllm/run.sh, /build/backend/python/mamba/run.sh, /build/backend/python/autogptq/run.sh, /build/backend/python/petals/run.sh, /build/backend/python/bark/run.sh, /build/backend/python/sentencetransformers/run.sh, /build/backend/python/vall-e-x/run.sh, /build/backend/python/exllama/run.sh
9:25PM INF [llama-cpp] Attempting to load
9:25PM INF Loading model 'codellama-7b-gguf' with backend llama-cpp
9:25PM INF [llama-cpp] Fails: could not load model: rpc error: code = Canceled desc =
9:25PM INF [llama-ggml] Attempting to load
9:25PM INF Loading model 'codellama-7b-gguf' with backend llama-ggml
9:25PM INF [llama-ggml] Fails: could not load model: rpc error: code = Unknown desc = failed loading model
9:25PM INF [llama] Attempting to load
9:25PM INF Loading model 'codellama-7b-gguf' with backend llama
9:25PM INF [llama] Fails: could not load model: rpc error: code = Canceled desc =
9:25PM INF [gpt4all] Attempting to load
9:25PM INF Loading model 'codellama-7b-gguf' with backend gpt4all
9:25PM INF [gpt4all] Fails: could not load model: rpc error: code = Unknown desc = failed loading model
9:25PM INF [bert-embeddings] Attempting to load
9:25PM INF Loading model 'codellama-7b-gguf' with backend bert-embeddings
9:25PM INF [bert-embeddings] Fails: could not load model: rpc error: code = Unknown desc = failed loading model
9:25PM INF [rwkv] Attempting to load
9:25PM INF Loading model 'codellama-7b-gguf' with backend rwkv
9:25PM INF [rwkv] Fails: could not load model: rpc error: code = Unavailable desc = error reading from server: EOF
9:25PM INF [whisper] Attempting to load
9:25PM INF Loading model 'codellama-7b-gguf' with backend whisper
9:25PM INF [whisper] Fails: could not load model: rpc error: code = Unknown desc = stat /build/models/codellama-7b-gguf: no such file or directory
9:25PM INF [stablediffusion] Attempting to load
9:25PM INF Loading model 'codellama-7b-gguf' with backend stablediffusion
9:25PM INF [stablediffusion] Fails: could not load model: rpc error: code = Unknown desc = stat /build/models/codellama-7b-gguf: no such file or directory
9:25PM INF [tinydream] Attempting to load
9:25PM INF Loading model 'codellama-7b-gguf' with backend tinydream
9:25PM INF [tinydream] Fails: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/tinydream. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS
9:25PM INF [piper] Attempting to load
9:25PM INF Loading model 'codellama-7b-gguf' with backend piper
9:25PM INF [piper] Fails: could not load model: rpc error: code = Unknown desc = unsupported model type /build/models/codellama-7b-gguf (should end with .onnx)
9:25PM INF [/build/backend/python/sentencetransformers/run.sh] Attempting to load
9:25PM INF Loading model 'codellama-7b-gguf' with backend /build/backend/python/sentencetransformers/run.sh
9:25PM INF [/build/backend/python/sentencetransformers/run.sh] Fails: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/sentencetransformers/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS
9:25PM INF [/build/backend/python/transformers-musicgen/run.sh] Attempting to load
9:25PM INF Loading model 'codellama-7b-gguf' with backend /build/backend/python/transformers-musicgen/run.sh
9:25PM INF [/build/backend/python/transformers-musicgen/run.sh] Fails: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/transformers-musicgen/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS
9:25PM INF [/build/backend/python/diffusers/run.sh] Attempting to load
9:25PM INF Loading model 'codellama-7b-gguf' with backend /build/backend/python/diffusers/run.sh
9:25PM INF [/build/backend/python/diffusers/run.sh] Fails: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/diffusers/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS
9:25PM INF [/build/backend/python/coqui/run.sh] Attempting to load
9:25PM INF Loading model 'codellama-7b-gguf' with backend /build/backend/python/coqui/run.sh
9:25PM INF [/build/backend/python/coqui/run.sh] Fails: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/coqui/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS
9:25PM INF [/build/backend/python/exllama2/run.sh] Attempting to load
9:25PM INF Loading model 'codellama-7b-gguf' with backend /build/backend/python/exllama2/run.sh
9:25PM INF [/build/backend/python/exllama2/run.sh] Fails: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/exllama2/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS
9:25PM INF [/build/backend/python/transformers/run.sh] Attempting to load
9:25PM INF Loading model 'codellama-7b-gguf' with backend /build/backend/python/transformers/run.sh
9:25PM INF [/build/backend/python/transformers/run.sh] Fails: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/transformers/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS
9:25PM INF [/build/backend/python/vllm/run.sh] Attempting to load
9:25PM INF Loading model 'codellama-7b-gguf' with backend /build/backend/python/vllm/run.sh
9:25PM INF [/build/backend/python/vllm/run.sh] Fails: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/vllm/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS
9:25PM INF [/build/backend/python/mamba/run.sh] Attempting to load
9:25PM INF Loading model 'codellama-7b-gguf' with backend /build/backend/python/mamba/run.sh
9:25PM INF [/build/backend/python/mamba/run.sh] Fails: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/mamba/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS
9:25PM INF [/build/backend/python/autogptq/run.sh] Attempting to load
9:25PM INF Loading model 'codellama-7b-gguf' with backend /build/backend/python/autogptq/run.sh
9:25PM INF [/build/backend/python/autogptq/run.sh] Fails: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/autogptq/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS
9:25PM INF [/build/backend/python/petals/run.sh] Attempting to load
9:25PM INF Loading model 'codellama-7b-gguf' with backend /build/backend/python/petals/run.sh
9:25PM INF [/build/backend/python/petals/run.sh] Fails: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/petals/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS
9:25PM INF [/build/backend/python/bark/run.sh] Attempting to load
9:25PM INF Loading model 'codellama-7b-gguf' with backend /build/backend/python/bark/run.sh
9:25PM INF [/build/backend/python/bark/run.sh] Fails: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/bark/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS
9:25PM INF [/build/backend/python/sentencetransformers/run.sh] Attempting to load
9:25PM INF Loading model 'codellama-7b-gguf' with backend /build/backend/python/sentencetransformers/run.sh
9:25PM INF [/build/backend/python/sentencetransformers/run.sh] Fails: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/sentencetransformers/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS
9:25PM INF [/build/backend/python/vall-e-x/run.sh] Attempting to load
9:25PM INF Loading model 'codellama-7b-gguf' with backend /build/backend/python/vall-e-x/run.sh
9:25PM INF [/build/backend/python/vall-e-x/run.sh] Fails: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/vall-e-x/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS
9:25PM INF [/build/backend/python/exllama/run.sh] Attempting to load
9:25PM INF Loading model 'codellama-7b-gguf' with backend /build/backend/python/exllama/run.sh
9:25PM INF [/build/backend/python/exllama/run.sh] Fails: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/exllama/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS

Sorry if this is a lot of info without much analysis, but I'm still tinkering with it; if anyone has a clue, please let me know.
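One thing I'm going to try next: pinning the backend with a model config file, so LocalAI stops cascading through every backend (whisper, stablediffusion, piper etc. obviously can't load a .gguf anyway). This is just a sketch based on the config format in the LocalAI docs, reusing the names from my command above:

# sketch: pin the llama-cpp backend for this model via a YAML config
cat > models/misty.yaml <<'EOF'
name: misty
backend: llama-cpp
parameters:
  model: misty.gguf
context_size: 4000
EOF
./local-ai --models-path models/ --context-size 4000 --threads 4

With a backend set, at least the failure should come from llama-cpp alone instead of being buried under a dozen unrelated backends.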

A-lx-A added the bug and unconfirmed labels on Feb 12, 2024

Aszazel commented Feb 24, 2024

Using docker-compose on Ubuntu:

@@@@@
If you are experiencing issues with the pre-compiled builds, try setting REBUILD=true
If you are still experiencing issues with the build, try setting CMAKE_ARGS and disable the instructions set as needed:
CMAKE_ARGS="-DLLAMA_F16C=OFF -DLLAMA_AVX512=OFF -DLLAMA_AVX2=OFF -DLLAMA_FMA=OFF"
see the documentation at: https://localai.io/basics/build/index.html
Note: See also #288
@@@@@
CPU info:
model name : AMD Ryzen 9 3950X 16-Core Processor
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sev sev_es
CPU: AVX found OK
CPU: AVX2 found OK
CPU: no AVX512 found
@@@@@
6:36PM DBG no galleries to load
6:36PM INF Starting LocalAI using 4 threads, with models path: /build/models
6:36PM INF LocalAI version: v2.8.2-20-gd825821 (d825821)
6:36PM WRN [startup] failed resolving model '/usr/bin/local-ai'
6:36PM INF Preloading models from /build/models

┌───────────────────────────────────────────────────┐
│                   Fiber v2.50.0                   │
│               http://127.0.0.1:8080               │
│       (bound on host 0.0.0.0 and port 8080)       │
│                                                   │
│ Handlers ........... 105  Processes ........... 1 │
│ Prefork ....... Disabled  PID ................ 14 │
└───────────────────────────────────────────────────┘

6:36PM INF Trying to load the model 'orca' with all the available backends: llama-cpp, llama-ggml, llama, gpt4all, bert-embeddings, rwkv, whisper, stablediffusion, tinydream, piper, /build/backend/python/diffusers/run.sh, /build/backend/python/autogptq/run.sh, /build/backend/python/bark/run.sh, /build/backend/python/transformers-musicgen/run.sh, /build/backend/python/sentencetransformers/run.sh, /build/backend/python/petals/run.sh, /build/backend/python/vllm/run.sh, /build/backend/python/exllama2/run.sh, /build/backend/python/exllama/run.sh, /build/backend/python/transformers/run.sh, /build/backend/python/mamba/run.sh, /build/backend/python/sentencetransformers/run.sh, /build/backend/python/vall-e-x/run.sh, /build/backend/python/coqui/run.sh
6:36PM INF [llama-cpp] Attempting to load
6:36PM INF Loading model 'orca' with backend llama-cpp
6:36PM INF [llama-cpp] Fails: could not load model: rpc error: code = Canceled desc =
6:36PM INF [llama-ggml] Attempting to load
6:36PM INF Loading model 'orca' with backend llama-ggml
6:36PM INF [llama-ggml] Fails: could not load model: rpc error: code = Unknown desc = failed loading model
6:36PM INF [llama] Attempting to load
6:36PM INF Loading model 'orca' with backend llama
6:36PM INF [llama] Fails: could not load model: rpc error: code = Canceled desc =
6:36PM INF [gpt4all] Attempting to load
6:36PM INF Loading model 'orca' with backend gpt4all
6:36PM INF [gpt4all] Fails: could not load model: rpc error: code = Unknown desc = failed loading model
6:36PM INF [bert-embeddings] Attempting to load
6:36PM INF Loading model 'orca' with backend bert-embeddings
6:36PM INF [bert-embeddings] Fails: could not load model: rpc error: code = Unknown desc = failed loading model
6:36PM INF [rwkv] Attempting to load
6:36PM INF Loading model 'orca' with backend rwkv
6:36PM INF [rwkv] Fails: could not load model: rpc error: code = Unavailable desc = error reading from server: EOF
6:36PM INF [whisper] Attempting to load
6:36PM INF Loading model 'orca' with backend whisper
6:36PM INF [whisper] Fails: could not load model: rpc error: code = Unknown desc = stat /build/models/orca: no such file or directory
6:36PM INF [stablediffusion] Attempting to load
6:36PM INF Loading model 'orca' with backend stablediffusion
6:36PM INF [stablediffusion] Fails: could not load model: rpc error: code = Unknown desc = stat /build/models/orca: no such file or directory
6:36PM INF [tinydream] Attempting to load
6:36PM INF Loading model 'orca' with backend tinydream
6:36PM INF [tinydream] Fails: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/tinydream. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS
6:36PM INF [piper] Attempting to load
6:36PM INF Loading model 'orca' with backend piper
6:37PM INF [piper] Fails: could not load model: rpc error: code = Unknown desc = unsupported model type /build/models/orca (should end with .onnx)
6:37PM INF [/build/backend/python/diffusers/run.sh] Attempting to load
6:37PM INF Loading model 'orca' with backend /build/backend/python/diffusers/run.sh
6:37PM INF [/build/backend/python/diffusers/run.sh] Fails: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/diffusers/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS
6:37PM INF [/build/backend/python/autogptq/run.sh] Attempting to load
6:37PM INF Loading model 'orca' with backend /build/backend/python/autogptq/run.sh
6:37PM INF [/build/backend/python/autogptq/run.sh] Fails: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/autogptq/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS
6:37PM INF [/build/backend/python/bark/run.sh] Attempting to load
6:37PM INF Loading model 'orca' with backend /build/backend/python/bark/run.sh
6:37PM INF [/build/backend/python/bark/run.sh] Fails: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/bark/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS
6:37PM INF [/build/backend/python/transformers-musicgen/run.sh] Attempting to load
6:37PM INF Loading model 'orca' with backend /build/backend/python/transformers-musicgen/run.sh
6:37PM INF [/build/backend/python/transformers-musicgen/run.sh] Fails: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/transformers-musicgen/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS
6:37PM INF [/build/backend/python/sentencetransformers/run.sh] Attempting to load
6:37PM INF Loading model 'orca' with backend /build/backend/python/sentencetransformers/run.sh
6:37PM INF [/build/backend/python/sentencetransformers/run.sh] Fails: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/sentencetransformers/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS
6:37PM INF [/build/backend/python/petals/run.sh] Attempting to load
6:37PM INF Loading model 'orca' with backend /build/backend/python/petals/run.sh
6:37PM INF [/build/backend/python/petals/run.sh] Fails: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/petals/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS
6:37PM INF [/build/backend/python/vllm/run.sh] Attempting to load
6:37PM INF Loading model 'orca' with backend /build/backend/python/vllm/run.sh
6:37PM INF [/build/backend/python/vllm/run.sh] Fails: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/vllm/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS
6:37PM INF [/build/backend/python/exllama2/run.sh] Attempting to load
6:37PM INF Loading model 'orca' with backend /build/backend/python/exllama2/run.sh
6:37PM INF [/build/backend/python/exllama2/run.sh] Fails: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/exllama2/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS
6:37PM INF [/build/backend/python/exllama/run.sh] Attempting to load
6:37PM INF Loading model 'orca' with backend /build/backend/python/exllama/run.sh
6:37PM INF [/build/backend/python/exllama/run.sh] Fails: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/exllama/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS
6:37PM INF [/build/backend/python/transformers/run.sh] Attempting to load
6:37PM INF Loading model 'orca' with backend /build/backend/python/transformers/run.sh
6:37PM INF [/build/backend/python/transformers/run.sh] Fails: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/transformers/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS
6:37PM INF [/build/backend/python/mamba/run.sh] Attempting to load
6:37PM INF Loading model 'orca' with backend /build/backend/python/mamba/run.sh
6:37PM INF [/build/backend/python/mamba/run.sh] Fails: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/mamba/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS
6:37PM INF [/build/backend/python/sentencetransformers/run.sh] Attempting to load
6:37PM INF Loading model 'orca' with backend /build/backend/python/sentencetransformers/run.sh
6:37PM INF [/build/backend/python/sentencetransformers/run.sh] Fails: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/sentencetransformers/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS
6:37PM INF [/build/backend/python/vall-e-x/run.sh] Attempting to load
6:37PM INF Loading model 'orca' with backend /build/backend/python/vall-e-x/run.sh
6:37PM INF [/build/backend/python/vall-e-x/run.sh] Fails: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/vall-e-x/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS
6:37PM INF [/build/backend/python/coqui/run.sh] Attempting to load
6:37PM INF Loading model 'orca' with backend /build/backend/python/coqui/run.sh
6:37PM INF [/build/backend/python/coqui/run.sh] Fails: grpc process not found: /tmp/localai/backend_data/backend-assets/grpc/build/backend/python/coqui/run.sh. some backends(stablediffusion, tts) require LocalAI compiled with GO_TAGS
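Side note on the log above: the "WRN [startup] failed resolving model '/usr/bin/local-ai'" line suggests the compose command is passing the binary path as if it were a model name, so the command/entrypoint in the compose file may be worth double-checking. For comparison, a minimal setup along the lines of the docs (the image tag and paths here are examples, not my exact setup):

# sketch: minimal compose file mounting ./models at /build/models
cat > docker-compose.yaml <<'EOF'
services:
  api:
    image: localai/localai:v2.8.0-ffmpeg-core
    ports:
      - 8080:8080
    environment:
      - DEBUG=true
    volumes:
      - ./models:/build/models
EOF
docker compose up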


A-lx-A commented Feb 25, 2024

> (quoting the same docker-compose log from @Aszazel's comment above)

I was able to run previous versions, but for some reason I couldn't compile those, so I used the binaries from the Releases page here on GitHub. Docker used to work as well, so it's a new bug for sure.
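In case it helps anyone reproducing this: the "stat /build/models/codellama-7b-gguf: no such file or directory" errors make me suspect the model file never ends up inside the container, so I'd rule that out first by mounting the host models directory explicitly (the /build/models path matches the logs; treat the rest as a sketch):

# sketch: confirm the model file exists on the host, then mount it into the container
ls -lh models/
docker run -ti -p 8080:8080 \
  -e DEBUG=true \
  -v $PWD/models:/build/models \
  localai/localai:v2.8.0-ffmpeg-core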
