
openai[patch]: enable Azure structured output, parallel_tool_calls=False, tool_choice=required #26599

Merged 4 commits on Sep 23, 2024

Conversation

@baskaryan (Collaborator) commented Sep 17, 2024


`response_format="json_schema"`, `tool_choice="required"`, and `parallel_tool_calls` are all supported for gpt-4o on Azure.
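For context, a minimal sketch of what these three request-level parameters look like in a chat completions call. The parameter names come from the OpenAI chat completions API; the schema itself is a made-up example:

```python
# Illustrative request parameters only -- not langchain code. The schema
# under "json_schema" is a hypothetical example payload.
request_kwargs = {
    "model": "gpt-4o",
    "response_format": {
        "type": "json_schema",
        "json_schema": {
            "name": "answer",
            "schema": {
                "type": "object",
                "properties": {"answer": {"type": "string"}},
                "required": ["answer"],
            },
        },
    },
    "tool_choice": "required",     # force the model to call some tool
    "parallel_tool_calls": False,  # at most one tool call per turn
}
```

These are the keyword arguments this PR allows AzureChatOpenAI to forward when targeting gpt-4o deployments.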

@efriis efriis added the partner label Sep 17, 2024

@efriis efriis self-assigned this Sep 17, 2024
@dosubot dosubot bot added size:L This PR changes 100-499 lines, ignoring generated files. langchain Related to the langchain package labels Sep 17, 2024
@Skar0 (Contributor) commented Sep 18, 2024

Hi @baskaryan, thank you for addressing this issue and for your work on this! I tested the changes, and they work as expected for the json_schema part 😃

I noticed that in your fix, AzureChatOpenAI._create_chat_result now passes an openai.BaseModel to BaseChatOpenAI._create_chat_result instead of the dict resulting from response.model_dump(), as it did before. While this does resolve the issue for which I proposed a fix in #26086, it requires both BaseChatOpenAI._create_chat_result and AzureChatOpenAI._create_chat_result to transform the response from an openai.BaseModel into a dict.

In #26086, I proposed modifying BaseChatOpenAI._create_chat_result to consistently work with the dict version of the response, which is always created anyway. This change eliminates the need to convert the response into a dict multiple times when calling AzureChatOpenAI._create_chat_result, and I believe it simplifies the logic within these methods and improves code clarity while complementing your changes. Of course, this assumes that my proposed changes are correct and align with your goals for the implementation of that method.

Thanks again for your work and for considering my contribution! 😉
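A hypothetical sketch of the normalization being proposed here — not the actual langchain code — using a stand-in class in place of the openai pydantic response model:

```python
# FakeResponse stands in for the openai SDK's pydantic response object;
# the real one exposes the same model_dump() method via pydantic.
class FakeResponse:
    def __init__(self, data):
        self._data = data

    def model_dump(self):
        return dict(self._data)


def create_chat_result(response):
    # Hypothetical helper: accept either the raw SDK object or an
    # already-dumped dict, and normalize to a dict exactly once.
    response_dict = (
        response if isinstance(response, dict) else response.model_dump()
    )
    return response_dict["choices"][0]["message"]["content"]


resp = FakeResponse({"choices": [{"message": {"content": "hi"}}]})
# Both call paths yield the same result after the single conversion.
assert create_chat_result(resp) == create_chat_result(resp.model_dump())
```

The point of the proposal is that downstream logic only ever sees the dict form, so no caller has to dump the response twice.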

@@ -9,27 +9,39 @@

from langchain_openai import AzureChatOpenAI

OPENAI_API_VERSION = os.environ.get("AZURE_OPENAI_API_VERSION", "")
Collaborator:
Ran OpenAI integration tests on this branch and got failures related to env vars: https://github.com/langchain-ai/langchain/actions/runs/10924401318/job/30323353395

Also getting some failures here in json mode on master, suspect unrelated. Unable to reproduce so far.

FAILED tests/integration_tests/chat_models/test_azure.py::test_json_mode_async - openai.LengthFinishReasonError: Could not parse response content as the length limit was reached - CompletionUsage(completion_tokens=10, prompt_tokens=18, total_tokens=28, completion_tokens_details=None)

Collaborator Author:

Huh, thought I'd set that env var, will double check.

@baskaryan (Collaborator Author) replied:

> Hi @baskaryan, thank you for addressing this issue and for your work on this! I tested the changes, and they work as expected for the json_schema part 😃
>
> I noticed that in your fix, AzureChatOpenAI._create_chat_result now passes an openai.BaseModel to BaseChatOpenAI._create_chat_result instead of the dict resulting from response.model_dump(), as it did before. While this does resolve the issue for which I proposed a fix in #26086, it requires both BaseChatOpenAI._create_chat_result and AzureChatOpenAI._create_chat_result to transform the response from an openai.BaseModel into a dict.
>
> In #26086, I proposed modifying BaseChatOpenAI._create_chat_result to consistently work with the dict version of the response, which is always created anyway. This change eliminates the need to convert the response into a dict multiple times when calling AzureChatOpenAI._create_chat_result and I believe it simplifies the logic and complexity within these methods. I believe that this would complement your changes and improve code clarity. Of course, this is assuming that my proposed changes are correct and align with your goals for the implementation of that method.
>
> Thanks again for your work and for considering my contribution! 😉

The issue is that we want to extract `response.choices[0].message.parsed` before `response` is converted to a dict, since `parsed` can itself be a pydantic object and we don't want it converted to a dict.
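A toy illustration of that constraint, using stand-in classes rather than langchain or openai code: `model_dump()` on a pydantic-style model recursively converts nested models, so the typed `parsed` object would be lost unless it is pulled out first.

```python
# Parsed and Message are hypothetical stand-ins for a user-supplied pydantic
# schema and the SDK's message object, respectively.
class Parsed:
    def __init__(self, answer):
        self.answer = answer

    def model_dump(self):
        return {"answer": self.answer}


class Message:
    def __init__(self, parsed):
        self.parsed = parsed

    def model_dump(self):
        # Recursive dump flattens the nested model into a plain dict.
        return {"parsed": self.parsed.model_dump()}


msg = Message(Parsed("42"))

parsed = msg.parsed        # grab the typed object first...
dumped = msg.model_dump()  # ...then dump the rest for the dict-based path

assert isinstance(parsed, Parsed)            # still the typed object
assert dumped["parsed"] == {"answer": "42"}  # only a dict copy survives here
```

Extracting `parsed` up front is what lets callers of `with_structured_output` get back an instance of their schema class instead of a plain dict.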

@ccurme ccurme removed the langchain Related to the langchain package label Sep 20, 2024
@baskaryan baskaryan merged commit e1e4f88 into master Sep 23, 2024
28 of 29 checks passed
@baskaryan baskaryan deleted the bagatur/enable_azure_feats branch September 23, 2024 05:25
Labels
partner size:L This PR changes 100-499 lines, ignoring generated files.
Projects
Status: Done
4 participants