langchain-chroma/store.py makes LocalAI crash #636

Open
christiancadieux opened this issue Jun 20, 2023 · 2 comments
christiancadieux commented Jun 20, 2023

LocalAI version:
quay.io/go-skynet/local-ai:latest as of Jun 19, 6 PM MT

Environment, CPU architecture, OS, and Version:
Linux rdei-local-ai-5f8fc75c56-z5lfb 5.4.77-flatcar #1 SMP Wed Nov 18 17:29:43 -00 2020 x86_64 GNU/Linux

Describe the bug
LocalAI log at the crash:

1:54PM DBG Loading model bert-embeddings from bert
11:54PM DBG Model already loaded in memory: bert
11:54PM DBG Loading model bert-embeddings from bert
11:54PM DBG Model already loaded in memory: bert
GGML_ASSERT: /build/go-bert/bert.cpp/ggml/src/ggml.c:6801: ggml_bert_nelements(dst) == ggml_bert_nelements(src0)
SIGABRT: abort
PC=0x7f55b5641ce1 m=5 sigcode=18446744073709551610
signal arrived during cgo execution

goroutine 98 [syscall]:
runtime.cgocall(0x9f7870, 0xc0006be768)
	/usr/local/go/src/runtime/cgocall.go:157 +0x5c fp=0xc0006be740 sp=0xc0006be708 pc=0x47b41c
github.com/go-skynet/go-bert%2ecpp._Cfunc_bert_token_embeddings(0x7f5360002b40, 0x7f5360000d60, 0x7f53600026d0, 0x1, 0xc000ac4000)
	_cgo_gotypes.go:168 +0x4c fp=0xc0006be768 sp=0xc0006be740 pc=0x8f998c
github.com/go-skynet/go-bert%2ecpp.(*Bert).TokenEmbeddings.func1(0x7f5360002b20?, 0x4?, 0x7f5360002b40?, {0xc0006df390?, 0x1, 0x0?}, 0x0?)
	/build/go-bert/gobert.go:70 +0x9d fp=0xc0006be7c8 sp=0xc0006be768 pc=0x8fa37d
github.com/go-skynet/go-bert%2ecpp.(*Bert).TokenEmbeddings(0x0?, {0xc0006df390?, 0x1, 0x1}, {0xc0006be8a8, 0x1, 0x0?})
	/build/go-bert/gobert.go:70 +0x190 fp=0xc0006be850 sp=0xc0006be7c8 pc=0x8fa150
github.com/go-skynet/LocalAI/api.ModelEmbedding.func2()
	/build/api/prediction.go:127 +0x85 fp=0xc0006be8c0 sp=0xc0006be850 pc=0x982ee5

...

CRASH
To Reproduce

Called store.py from examples/langchain-chroma (a sketch of the general setup follows the output below):

$ python3 store.py 
Created a chunk of size 604, which is longer than the specified 300
Created a chunk of size 494, which is longer than the specified 300
Created a chunk of size 510, which is longer than the specified 300
Created a chunk of size 583, which is longer than the specified 300
Created a chunk of size 517, which is longer than the specified 300
Created a chunk of size 338, which is longer than the specified 300
Created a chunk of size 374, which is longer than the specified 300
Created a chunk of size 412, which is longer than the specified 300
Created a chunk of size 530, which is longer than the specified 300
Created a chunk of size 594, which is longer than the specified 300
Created a chunk of size 314, which is longer than the specified 300
Created a chunk of size 359, which is longer than the specified 300
Created a chunk of size 513, which is longer than the specified 300
Created a chunk of size 523, which is longer than the specified 300
Created a chunk of size 739, which is longer than the specified 300
Created a chunk of size 536, which is longer than the specified 300
Created a chunk of size 537, which is longer than the specified 300
Created a chunk of size 505, which is longer than the specified 300
Created a chunk of size 310, which is longer than the specified 300
Created a chunk of size 491, which is longer than the specified 300
Using embedded DuckDB with persistence: data will be stored in: db
Retrying langchain.embeddings.openai.embed_with_retry.<locals>._embed_with_retry in 4.0 seconds as it raised APIConnectionError: Error communicating with OpenAI: ('Connection aborted.', RemoteDisconnected('Remote end closed connection without response')).
Retrying langchain.embeddings.openai.embed_with_retry.<locals>._embed_with_retry in 4.0 seconds as it raised APIConnectionError: Error communicating with OpenAI: HTTPConnectionPool(host='127.0.0.1', port=8080): Max retries exceeded with url: /v1/engines/text-embedding-ada-002/embeddings (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fbd80d2be50>: Failed to establish a new connection: [Errno 111] Connection refused')).
[[ CRASHED CONTAINER ]]
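
For context, the ingestion script is roughly of the following shape. This is a minimal sketch only, assuming the example points langchain's OpenAI-compatible client at the LocalAI instance on 127.0.0.1:8080; the file name, chunk sizes and model defaults below are illustrative, not copied from the actual store.py.

import os

from langchain.document_loaders import TextLoader
from langchain.embeddings import OpenAIEmbeddings
from langchain.text_splitter import CharacterTextSplitter
from langchain.vectorstores import Chroma

# Point the OpenAI client at the local LocalAI instance instead of api.openai.com.
os.environ["OPENAI_API_BASE"] = "http://127.0.0.1:8080/v1"
os.environ["OPENAI_API_KEY"] = "sk-local"  # dummy key; LocalAI does not check it

# Load a document and split it into ~300-character chunks, matching the
# "longer than the specified 300" warnings in the output above.
documents = TextLoader("state_of_the_union.txt").load()
splitter = CharacterTextSplitter(chunk_size=300, chunk_overlap=70)
docs = splitter.split_documents(documents)

# Each chunk is embedded through LocalAI's bert-embeddings backend and stored
# in a persistent Chroma collection; the GGML_ASSERT crash happens while these
# embedding requests are being served.
db = Chroma.from_documents(docs, OpenAIEmbeddings(), persist_directory="db")
db.persist()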

Expected behavior

Logs

Additional context

@christiancadieux added the bug (Something isn't working) label on Jun 20, 2023

christiancadieux commented Jun 20, 2023

Can prevent LocalAI from crashing by increasing the chunk size.
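
For anyone else hitting this, the workaround amounts to enlarging the text splitter used by the script. A minimal sketch, with illustrative values only:

from langchain.text_splitter import CharacterTextSplitter

# Larger chunks mean fewer, longer texts are sent to the bert-embeddings
# backend; per the comment above, this avoided the crash.
splitter = CharacterTextSplitter(chunk_size=1000, chunk_overlap=70)
docs = splitter.split_documents(documents)  # "documents" loaded as in the sketch above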

@localai-bot (Contributor) commented

⚠️⚠️⚠️⚠️⚠️

Hi! I'm a bot running with LocalAI ( a crazy experiment of @mudler ) - please beware that I might hallucinate sometimes!

but... I can also be funny or helpful 😸, and I can generally give good tips or point out places to look in the documentation or the code, based on what you wrote in the issue.

Don't engage in conversation with me,
I don't support (yet) replying!

⚠️⚠️⚠️⚠️⚠️

The issue seems to be related to a crash occurring when running the LocalAI code. The error message indicates that the code is trying to create chunks of size larger than 300, which may be causing the crash. To reproduce the behavior, you can run the store.py script from the examples/langchain-chroma directory and specify the --debug flag to enable debug mode. The logs generated by the code may provide more information about the cause of the crash. Additionally, checking the network connectivity between the LocalAI container and the OpenAI API may also help in diagnosing the issue.

Sources:
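
One way to narrow this down independently of langchain is to send a single embedding request straight to LocalAI and check whether a long input on its own triggers the GGML_ASSERT. A hedged sketch, assuming the standard OpenAI-compatible embeddings route on port 8080 as seen in the logs (model name and input are placeholders):

import requests

# One embedding request against the local LocalAI instance.
resp = requests.post(
    "http://127.0.0.1:8080/v1/embeddings",
    json={"model": "text-embedding-ada-002", "input": "some test sentence " * 50},
    timeout=120,
)
print(resp.status_code, resp.json())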
