[ENH] Nomic Text Embed function #2182
Conversation
Note -- Nomic has a Python SDK we could also use to do this, but it just hits the same API, so it didn't seem worth adding a dependency just to hit one endpoint.
Almost there. We need a tiny bit more error handling and a couple more tests.
embeddings = self._session.post(
    self._api_url,
    headers=headers,
    json={"model": self._model_name, "texts": texts},
What is the maximum number of texts that can be sent at once? If there's a limit, let's enforce it on the client side so we don't do a round trip just to raise an error.
Important: Do not add any loop logic here. If a chunk fails, throw an exception and let users implement their own chunking and subsequent error handling.
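(For illustration, the caller-side chunking this implies could look roughly like the sketch below; the batch size of 100 is an arbitrary placeholder, not a documented Nomic limit.)

from typing import Any, List

from chromadb.utils.embedding_functions import NomicEmbeddingFunction

def embed_in_batches(ef: NomicEmbeddingFunction, texts: List[str], batch_size: int = 100) -> List[Any]:
    # Embed texts in fixed-size slices; a failed slice raises here, and the
    # caller decides whether to retry, skip, or abort the remaining slices.
    results: List[Any] = []
    for start in range(0, len(texts), batch_size):
        results.extend(ef(texts[start : start + batch_size]))
    return results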
Nomic responded:
"Yes, you will get rate limited with 429s if you hit it too fast (in practice this shouldn't happen unless you are a bot trying to DOS the API).
You can send as many as you'd like per request, they will get processed in parallel. It's recommended you break it up yourself into several requests or use the Nomic python client because you will see network latency due to the large request/response size if you send dozens of megabytes of text in a single request"
Let's leave it as-is for now. We have some ideas that we're trying to develop to make error handling a little bit more consistent across all EFs.
json={"model": self._model_name, "texts": texts}, | ||
).json() | ||
|
||
return cast(Embeddings, embeddings["embeddings"]) |
Add some error checking in case the API returns an error. You can also use resp.raise_for_status() to raise if the status is not 2xx. Without error checking we'll raise a KeyError, since the response may not contain the embeddings key.
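A minimal sketch of how that could look in __call__, assuming the attribute names from the diff above and an Authorization header built from a hypothetical self._api_key (not necessarily the final implementation in this PR; cast, Documents, and Embeddings are assumed to come from the surrounding module as in the other EFs there):

def __call__(self, texts: Documents) -> Embeddings:
    resp = self._session.post(
        self._api_url,
        headers={"Authorization": f"Bearer {self._api_key}"},  # hypothetical header construction
        json={"model": self._model_name, "texts": texts},
    )
    # Raise on any non-2xx status instead of trying to parse an error body.
    resp.raise_for_status()
    body = resp.json()
    if "embeddings" not in body:
        # Avoid a bare KeyError if the response shape is unexpected.
        raise RuntimeError(f"Unexpected response from Nomic API: {body}")
    return cast(Embeddings, body["embeddings"])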
chromadb/test/ef/test_nomic_ef.py
To learn more about the Nomic API: https://docs.nomic.ai/reference/endpoints/nomic-embed-text
Export the NOMIC_API_KEY and optionally the NOMIC_MODEL environment variables.
"""
if os.environ.get("NOMIC_API_KEY") is None:
I know this has been used in the past but let's add an annotation instead - @pytest.mark.skipif("NOMIC_API_KEY" not in os.environ, reason="NOMIC_API_KEY not set, skipping test.")
Decorators are implicit flow control, but this is a common practice for pytest so we might as well use it.
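Applied to the test below, that would read roughly as:

import os

import pytest

@pytest.mark.skipif(
    "NOMIC_API_KEY" not in os.environ,
    reason="NOMIC_API_KEY not set, skipping test.",
)
def test_nomic() -> None:
    ...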
from chromadb.utils.embedding_functions import NomicEmbeddingFunction


def test_nomic() -> None:
Let's add a couple of negative test cases:
- Empty API key
- Error response (mock server)
- Too many texts
For the mock server:
pip install pytest_httpserver>=1.0.10
import json
from unittest.mock import patch

import pytest
from pytest_httpserver import HTTPServer

from chromadb.utils.embedding_functions import NomicEmbeddingFunction

with HTTPServer() as httpserver:
    # Define the error response the mock server should return
    httpserver.expect_oneshot_request(
        "/embeddings/text", method="POST"
    ).respond_with_data(
        json.dumps({"error": "some error message"}),  # adjust to fit the Nomic API response
        status=400,
    )
    # Point the embedding function at the mock server instead of the real API
    with patch.object(
        NomicEmbeddingFunction,
        "_api_url",
        f"http://{httpserver.host}:{httpserver.port}/embeddings/text",
    ):
        nomic_instance = NomicEmbeddingFunction()
        with pytest.raises(Exception):
            nomic_instance(["test text"])
Added these tests, as well as one for a missing model name, but did not include the "too many texts" test. Nomic responded saying:
- "You can send as many as you'd like per request, they will get processed in parallel. It's recommended you break it up yourself into several requests or use the Nomic python client because you will see network latency due to the large request/response size if you send dozens of megabytes of text in a single request"
So I am not sure 1) what we want to consider too large, 2) the best way to check for that, and 3) whether you still want this.
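(If a client-side guard were ever wanted, one hypothetical option is a simple payload-size check before posting; the 1 MB threshold below is made up purely for illustration, not a documented Nomic limit.)

from typing import List

MAX_REQUEST_BYTES = 1_000_000  # hypothetical threshold, not a documented Nomic limit

def check_payload_size(texts: List[str]) -> None:
    # Approximate the request size by the UTF-8 length of the texts themselves.
    total_bytes = sum(len(t.encode("utf-8")) for t in texts)
    if total_bytes > MAX_REQUEST_BYTES:
        raise ValueError(
            f"Batch of {len(texts)} texts is about {total_bytes} bytes; "
            "split it into smaller requests."
        )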
Also -- should I add pytest_httpserver to the projects requirements_dev file and check it in as part of this PR?
Yes, please add it to the requirements_dev file.
We do have a limitation in the Chroma API regarding the maximum number of embeddings, but we generally don't tie that to the embedding functions' batch size, so you can leave it as is for now.
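(Concretely, that would be a one-line addition to the dev requirements file, sketched here with the version pin suggested earlier; the exact file name in the repo may differ.)

# requirements_dev.txt
pytest_httpserver>=1.0.10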
Looks good. Just a couple of minor nits, and this should be ready to go.
chromadb/test/ef/test_nomic_ef.py
)


@pytest.mark.skipif(
I think we can remove the decorator here, as this is a negative test. It is supposed to fail regardless of whether we're testing with the Nomic API key or not.
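For instance, a negative test along these lines never reaches the real API, so it can run without the decorator (the constructor argument and exception type are assumptions, not necessarily what this PR uses):

import pytest

from chromadb.utils.embedding_functions import NomicEmbeddingFunction

def test_nomic_missing_api_key() -> None:
    # Construction should fail regardless of whether NOMIC_API_KEY is set.
    with pytest.raises(ValueError):
        NomicEmbeddingFunction(api_key="")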
chromadb/test/ef/test_nomic_ef.py
)


@pytest.mark.skipif(
Maybe also remove this decorator and let the test run, as this is a local mock test.
I think it makes sense to add some sort of activation flags (other than API keys) for EF testing, but this requires further consideration.
Looks good. Thank you @andrewblum.
Linting issue with the
@andrewblum, the linter runs these checks, which you can also execute locally:
pre-commit run --all-files trailing-whitespace
pre-commit run --all-files mixed-line-ending
pre-commit run --all-files end-of-file-fixer
pre-commit run --all-files requirements-txt-fixer
pre-commit run --all-files check-xml
pre-commit run --all-files check-merge-conflict
pre-commit run --all-files check-case-conflict
pre-commit run --all-files check-docstring-first
pre-commit run --all-files black
pre-commit run --all-files flake8
pre-commit run --all-files prettier
pre-commit run --all-files check-yaml
Thanks for that; those all come back green for me locally, so it should be good to re-run.
👍
Our underlying impl has changed, so this PR is not landable as is. That being said, we'd still like to add this functionality, and that is now tracked in this issue.
Description of changes
Summarize the changes made by this PR.

Test plan
How are these changes tested?
Added a test that is similar to the existing test for Ollama.
pytest for python, yarn test for js, cargo test for rust

Documentation Changes
Are all docstrings for user-facing APIs updated if required? Do we need to make documentation changes in the docs repository?