LocalAI version:
Docker v1.23.0-cublas-cuda11

Environment, CPU architecture, OS, and Version:
Linux hostname 5.10.0-22-amd64 #1 SMP Debian 5.10.178-3 (2023-04-22) x86_64 GNU/Linux
NVIDIA-SMI 470.182.03 Driver Version: 470.182.03 CUDA Version: 11.4

Describe the bug
Unable to build the Docker image (older versions back to 1.18 were tried as well).
Here are the last few lines of the build output:
cd build && cp -rf CMakeFiles/llama.dir/llama.cpp.o ../llama.cpp/llama.o
cd build && cp -rf examples/CMakeFiles/common.dir/common.cpp.o ../llama.cpp/common.o
cd build && cp -rf examples/CMakeFiles/common.dir/grammar-parser.cpp.o ../llama.cpp/grammar-parser.o
g++ -I./llama.cpp -I. -I./llama.cpp/examples -I./examples -O3 -DNDEBUG -std=c++11 -fPIC -Wall -Wextra -Wpedantic -Wcast-qual -Wno-unused-function -pthread -I./llama.cpp -I./llama.cpp/examples binding.cpp -o binding.o -c
binding.cpp: In function 'int llama_predict(void*, void*, char*, bool)':
binding.cpp:533:42: warning: cast from type 'const char*' to type 'char*' casts away qualifiers [-Wcast-qual]
  533 |     if (!tokenCallback(state_pr, (char*)token_str)) {
binding.cpp:591:1: warning: label 'end' defined but not used [-Wunused-label]
  591 | end:
binding.cpp: In function 'void llama_binding_free_model(void*)':
binding.cpp:613:5: warning: possible problem detected in invocation of 'operator delete' [-Wdelete-incomplete]
  613 |     delete ctx->model;
binding.cpp:613:17: warning: invalid use of incomplete type 'struct llama_model'
  613 |     delete ctx->model;
In file included from ./llama.cpp/examples/common.h:5, from binding.cpp:1:
./llama.cpp/llama.h:66:12: note: forward declaration of 'struct llama_model'
   66 | struct llama_model;
binding.cpp:613:5: note: neither the destructor nor the class-specific 'operator delete' will be called, even if they are declared when the class is defined
  613 |     delete ctx->model;
cd build && cp -rf CMakeFiles/ggml.dir/k_quants.c.o ../llama.cpp/k_quants.o
ar src libbinding.a llama.cpp/ggml.o llama.cpp/k_quants.o llama.cpp/common.o llama.cpp/grammar-parser.o llama.cpp/llama.o binding.o
make[1]: Leaving directory '/build/go-llama'
CGO_LDFLAGS="" C_INCLUDE_PATH=/build/go-llama LIBRARY_PATH=/build/go-llama \
go build -ldflags "-X "github.com/go-skynet/LocalAI/internal.Version=v1.23.0-15-gd603a9c" -X "github.com/go-skynet/LocalAI/internal.Commit=d603a9cbb5910eb69cd0bb7458ab40dcbdcf88cd"" -tags "stablediffusion tts" -o backend-assets/grpc/llama ./cmd/grpc/llama/
# github.com/go-skynet/go-llama.cpp
binding.cpp: In function 'void llama_binding_free_model(void*)':
binding.cpp:613:5: warning: possible problem detected in invocation of 'operator delete' [-Wdelete-incomplete]
  613 |     delete ctx->model;
binding.cpp:613:17: warning: invalid use of incomplete type 'struct llama_model'
  613 |     delete ctx->model;
In file included from go-llama/llama.cpp/examples/common.h:5, from binding.cpp:1:
go-llama/llama.cpp/llama.h:66:12: note: forward declaration of 'struct llama_model'
   66 | struct llama_model;
binding.cpp:613:5: note: neither the destructor nor the class-specific 'operator delete' will be called, even if they are declared when the class is defined
  613 |     delete ctx->model;
# github.com/go-skynet/LocalAI/pkg/grpc/llm/llama
pkg/grpc/llm/llama/llama.go:32:9: undefined: llama.WithRopeFreqBase
pkg/grpc/llm/llama/llama.go:33:9: undefined: llama.WithRopeFreqScale
pkg/grpc/llm/llama/llama.go:82:24: cannot use opts.Temperature (variable of type float32) as float64 value in argument to llama.SetTemperature
pkg/grpc/llm/llama/llama.go:83:17: cannot use opts.TopP (variable of type float32) as float64 value in argument to llama.SetTopP
pkg/grpc/llm/llama/llama.go:88:25: cannot use ropeFreqBase (variable of type float32) as float64 value in argument to llama.SetRopeFreqBase
pkg/grpc/llm/llama/llama.go:89:26: cannot use ropeFreqScale (variable of type float32) as float64 value in argument to llama.SetRopeFreqScale
pkg/grpc/llm/llama/llama.go:90:32: cannot use opts.NegativePromptScale (variable of type float32) as float64 value in argument to llama.SetNegativePromptScale
pkg/grpc/llm/llama/llama.go:112:64: cannot use opts.MirostatETA (variable of type float32) as float64 value in argument to llama.SetMirostatETA
pkg/grpc/llm/llama/llama.go:116:64: cannot use opts.MirostatTAU (variable of type float32) as float64 value in argument to llama.SetMirostatTAU
pkg/grpc/llm/llama/llama.go:126:60: cannot use opts.PresencePenalty (variable of type float32) as float64 value in argument to llama.SetPenalty
pkg/grpc/llm/llama/llama.go:126:60: too many errors
make: *** [Makefile:350: backend-assets/grpc/llama] Error 1
ERROR: Service 'api' failed to build: The command '/bin/sh -c ESPEAK_DATA=/build/lib/Linux-$(uname -m)/piper_phonemize/lib/espeak-ng-data make build' returned a non-zero code: 2
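For context, the C++ messages above are only warnings; the fatal errors are the Go ones. They indicate the LocalAI sources and the go-llama.cpp revision checked out during the build are out of sync: two option helpers (llama.WithRopeFreqBase, llama.WithRopeFreqScale) don't exist in that revision, and the setters there take float64 while LocalAI passes float32. Go never converts numeric types implicitly, so each call needs an explicit conversion. A minimal sketch of that conversion, using a stand-in SetTemperature rather than the real go-llama.cpp API:

package main

import "fmt"

// opts mimics the options struct from the log, which stores float32 values.
type opts struct{ Temperature float32 }

// SetTemperature stands in for the go-llama.cpp setters, which (in the
// revision the build checked out) take float64.
func SetTemperature(t float64) { fmt.Println("temperature =", t) }

func main() {
	o := opts{Temperature: 0.7}
	// SetTemperature(o.Temperature)       // would fail: float32 vs float64
	SetTemperature(float64(o.Temperature)) // explicit conversion compiles
}

In practice this class of mismatch usually means the go-llama.cpp commit pinned in the Makefile doesn't match the LocalAI tree being built, rather than something to patch by hand.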
To Reproduce
Change the image line to image: quay.io/go-skynet/local-ai:v1.23.0-cublas-cuda11, then run docker-compose up -d.
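For reference, the relevant docker-compose.yaml fragment would look roughly like this; the service name "api" matches the build error above, but the build context, ports, and volumes are assumptions based on the stock LocalAI compose file:

version: '3.6'
services:
  api:
    # switching the tag to the CUDA 11 build is what triggers the rebuild
    image: quay.io/go-skynet/local-ai:v1.23.0-cublas-cuda11
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - 8080:8080
    volumes:
      - ./models:/models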