
Vertex AI client #510

Merged: 6 commits merged into dev from f/vertexai-client on Sep 20, 2024

Conversation

@whiterabbit1983 (Contributor) commented on Sep 19, 2024

Important

Add Vertex AI client for text embedding, replace previous client, and optimize imports.

  • Vertex AI Client: Added vertexai.py for text embedding using Vertex AI's text-embedding-004 model. Replaces previous embedding client in embed_docs.py, gather_messages.py, and embed.py.
  • Environment Variables: Added google_project_id and vertex_location to env.py for Google Cloud configurations.
  • Import Optimizations: Removed duplicate UUID imports in multiple router files. Reorganized imports in create_agent.py, create_agent_tool.py, and other router files.
  • Configuration: Updated litellm-config.yaml to include Vertex AI model text-embedding-004.

This description was created by Ellipsis for 6b3ab54. It will automatically update as commits are pushed.
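
For readers following along, a minimal sketch of what an embedding call routed to Vertex AI through litellm could look like is shown below. The function name, environment-variable names, and model string are illustrative assumptions, not the PR's actual vertexai.py code.

    import os

    from litellm import aembedding

    # Assumed environment-variable names; the PR adds google_project_id and
    # vertex_location settings to env.py, but the exact names here are illustrative.
    GOOGLE_PROJECT_ID = os.environ.get("GOOGLE_PROJECT_ID", "")
    VERTEX_LOCATION = os.environ.get("VERTEX_LOCATION", "us-central1")


    async def vertex_embed(texts: list[str]) -> list[list[float]]:
        """Sketch: embed texts with Vertex AI's text-embedding-004 via litellm."""
        response = await aembedding(
            model="vertex_ai/text-embedding-004",  # litellm-style Vertex AI model string
            input=texts,
            vertex_project=GOOGLE_PROJECT_ID,
            vertex_location=VERTEX_LOCATION,
        )
        # Assumes an OpenAI-style response shape: one embedding entry per input text
        return [item["embedding"] for item in response.data]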

@ellipsis-dev bot left a comment

❌ Changes requested. Reviewed everything up to 87a8f6e in 54 seconds

More details
  • Looked at 763 lines of code in 51 files
  • Skipped 0 files when reviewing.
  • Skipped posting 0 drafted comments based on config settings.

Workflow ID: wflow_o725aWAmjgv1fJzQ


Want Ellipsis to fix these issues? Tag @ellipsis-dev in a comment. You can customize Ellipsis with 👍 / 👎 feedback, review rules, user-specific overrides, quiet mode, and more.

async def embed(
    inputs: list[str], dimensions: int = 1024, join_inputs: bool = True
) -> list[list[float]]:
    input = ["\n\n".join(inputs)] if join_inputs else inputs

Avoid using input as a variable name since it shadows the built-in function input. Consider renaming it to something more descriptive, like input_data.
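
For illustration, a sketch of the suggested rename; the downstream litellm call and model string are assumptions added to keep the example complete, not the file's actual contents.

    from litellm import aembedding


    async def embed(
        inputs: list[str], dimensions: int = 1024, join_inputs: bool = True
    ) -> list[list[float]]:
        # `input_data` instead of `input`, so the built-in input() is not shadowed
        input_data = ["\n\n".join(inputs)] if join_inputs else inputs
        response = await aembedding(
            model="vertex_ai/text-embedding-004",  # illustrative model string
            input=input_data,
            dimensions=dimensions,  # passed through; provider support may vary
        )
        return [item["embedding"] for item in response.data]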

@ellipsis-dev bot left a comment

❌ Changes requested. Incremental review on 44e66d0 in 44 seconds

More details
  • Looked at 441 lines of code in 14 files
  • Skipped 0 files when reviewing.
  • Skipped posting 3 drafted comments based on config settings.
1. agents-api/agents_api/clients/litellm.py:27
  • Draft comment:
    Clarify the purpose of prefixing the model name with openai/ and ensure it is necessary. Consider handling this logic more explicitly if it is required for specific cases.
  • Reason this comment was not posted:
    Confidence changes required: 50%
    The acompletion function in litellm.py has a similar issue with the model variable. The comment # FIXME: This is for litellm suggests that the model name is being prefixed with openai/ for a specific reason, but this should be clarified or handled more explicitly to avoid confusion.
2. agents-api/agents_api/models/chat/gather_messages.py:64
  • Draft comment:
    Ensure that the response from litellm.aembedding contains the expected number of elements to avoid potential errors. Consider adding error handling or validation.
  • Reason this comment was not posted:
    Decided after close inspection that this draft comment was likely wrong and/or not actionable:
    The comment is speculative and suggests adding error handling without evidence of an issue. The code change involves a function call and unpacking, which is a common pattern. Without evidence of an issue, the comment does not meet the criteria for keeping.
    I might be overlooking potential issues with the litellm.aembedding function that are not evident from the diff. However, without specific evidence, the comment remains speculative.
    The comment does not provide specific evidence of an issue, and the code pattern is common. Therefore, the comment should be removed unless there is clear evidence of a problem.
    Remove the comment as it is speculative and lacks strong evidence of an issue.
3. agents-api/agents_api/env.py:61
  • Draft comment:
    The embedding_service_base variable is removed but still referenced elsewhere. Ensure that all references to this variable are updated or removed to avoid inconsistencies.
  • Reason this comment was not posted:
    Decided after close inspection that this draft comment was likely wrong and/or not actionable:
    The comment seems to be incorrect because 'embedding_service_base' is not referenced anywhere in the new file content. The removal appears to be complete and consistent.
    I might be missing some context from other files or parts of the codebase, but based on the provided content, the comment seems incorrect.
    The task is to review this file only, and based on this file, the comment is not valid.
    Delete the comment as it is incorrect; 'embedding_service_base' is not referenced in the new file content.

Workflow ID: wflow_EqHMQESwXDb908jq



model = f"openai/{model}" # FIXME: This is for litellm

if isinstance(inputs, str):
input = [inputs]

The variable input should be renamed to inputs to avoid confusion and to match the function parameter name.
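
A hedged sketch of how the wrapper could apply both suggestions, reusing the inputs parameter instead of input, making the openai/ prefixing explicit, and sanity-checking the response length (cf. the unposted draft comments above). The names, the conditional prefixing rule, and the response shape are assumptions, not the repository's actual litellm.py.

    from litellm import aembedding as _aembedding


    async def aembedding(
        inputs: str | list[str],
        model: str,
        **kwargs,
    ) -> list[list[float]]:
        # Normalize to a list, reusing the parameter name rather than shadowing input()
        if isinstance(inputs, str):
            inputs = [inputs]

        # Only prefix models that should be routed through the OpenAI-compatible proxy
        if not model.startswith(("openai/", "vertex_ai/")):
            model = f"openai/{model}"

        response = await _aembedding(model=model, input=inputs, **kwargs)

        # Guard against a response with an unexpected number of embeddings
        if len(response.data) != len(inputs):
            raise ValueError("embedding response length does not match inputs")

        return [item["embedding"] for item in response.data]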

Signed-off-by: Diwank Singh Tomer <[email protected]>
@ellipsis-dev bot left a comment

👍 Looks good to me! Incremental review on d0f57a6 in 28 seconds

More details
  • Looked at 144 lines of code in 6 files
  • Skipped 0 files when reviewing.
  • Skipped posting 1 drafted comment based on config settings.
1. llm-proxy/litellm-config.yaml:109
  • Draft comment:
    The vertex_project and vertex_location environment variables are commented out. Ensure this is intentional and not an oversight, as the PR description mentions adding these for Google Cloud configurations.
  • Reason this comment was not posted:
    Confidence changes required: 50%
    The PR description mentions adding environment variables for Google Cloud configurations, but they are commented out in the litellm-config.yaml. This might be intentional, but it's worth pointing out in case it was overlooked.
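
For context, a typical litellm proxy entry for this model might look roughly like the sketch below (with the values uncommented); the environment-variable references are assumptions, not the repository's actual litellm-config.yaml.

    model_list:
      - model_name: text-embedding-004
        litellm_params:
          model: vertex_ai/text-embedding-004
          # os.environ/... tells the litellm proxy to read the value from the environment;
          # the variable names below are illustrative.
          vertex_project: os.environ/GOOGLE_PROJECT_ID
          vertex_location: os.environ/VERTEX_LOCATION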

Workflow ID: wflow_7JMIQgJPN1KOyXHV


You can customize Ellipsis with 👍 / 👎 feedback, review rules, user-specific overrides, quiet mode, and more.

@ellipsis-dev bot left a comment

👍 Looks good to me! Incremental review on 6b3ab54 in 37 seconds

More details
  • Looked at 105 lines of code in 3 files
  • Skipped 0 files when reviewing.
  • Skipped posting 4 drafted comments based on config settings.
1. agents-api/agents_api/clients/litellm.py:4
  • Draft comment:
    Consolidate the repeated import statements for litellm into a single import statement to improve readability.
        import litellm
        from litellm import acompletion as _acompletion, aembedding as _aembedding, get_supported_openai_params
  • Reason this comment was not posted:
    Confidence changes required: 50%
    The import statement for litellm is repeated multiple times. It can be consolidated into a single import statement to improve readability and maintainability.
2. agents-api/agents_api/clients/litellm.py:65
  • Draft comment:
    Avoid using input as a variable name to prevent shadowing the built-in input function.
        input_data = [inputs]
  • Reason this comment was not posted:
    Confidence changes required: 50%
    The input variable is being overwritten in the aembedding function, which can lead to confusion. It would be better to use a different variable name to avoid shadowing the built-in input function.
3. agents-api/agents_api/clients/litellm.py:36
  • Draft comment:
    Address the FIXME comments or create tasks to ensure they are resolved in the future.
  • Reason this comment was not posted:
    Confidence changes required: 50%
    The FIXME comments indicate areas that need attention or clarification. It's important to address these comments or create tasks to ensure they are not forgotten.
4. agents-api/tests/test_chat_routes.py:29
  • Draft comment:
    Provide explicit parameters for aembedding in the test to avoid confusion and ensure clarity.
    assert (await litellm.aembedding(inputs="test", model="test-model"))[0][0] == 1.0  # pytype: disable=missing-parameter
  • Reason this comment was not posted:
    Confidence changes required: 50%
    The test for aembedding is missing required parameters, which could lead to confusion or errors in the future. It's better to provide explicit parameters for clarity.

Workflow ID: wflow_YifHtNekmqjRrJRV



@whiterabbit1983 merged commit e2ac3ce into dev on Sep 20, 2024
4 of 7 checks passed
@whiterabbit1983 deleted the f/vertexai-client branch on September 20, 2024 at 06:15