Fix azure openai hanging problem #10153
Conversation
Signed-off-by: Serena Ruan <[email protected]>
Documentation preview for c7790d1 will be available here when this CircleCI job completes successfully. More info
else:
    _logger.warning(f"Request #{self.index} failed with {e!r}")
    status_tracker.increment_num_api_errors()
    status_tracker.complete_task(success=False)
This is the root cause of hanging
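To illustrate the failure mode, here is a minimal, hypothetical sketch (the `StatusTracker` below is illustrative, not MLflow's actual class): if a failed request never calls `complete_task`, the in-progress counter never drains, and any loop waiting on it hangs forever.

```python
# Hypothetical, simplified stand-in for the status tracker; names are
# illustrative, not MLflow's actual implementation.
class StatusTracker:
    def __init__(self, num_tasks):
        self.num_tasks_in_progress = num_tasks
        self.num_api_errors = 0

    def increment_num_api_errors(self):
        self.num_api_errors += 1

    def complete_task(self, success):
        # The consumer loop waits for this counter to reach zero.
        self.num_tasks_in_progress -= 1


def run_task(tracker, task):
    try:
        task()
        tracker.complete_task(success=True)
    except Exception:
        tracker.increment_num_api_errors()
        # Without this line, a failed request leaves the counter stuck
        # above zero and the wait loop never exits -- the hanging bug.
        tracker.complete_task(success=False)


def failing_task():
    raise RuntimeError("simulated API error")


tracker = StatusTracker(num_tasks=1)
run_task(tracker, failing_task)
# The counter drains only because failures also complete the task.
```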
Great find! :D
""" | ||
params passed at inference time should override envs. | ||
""" | ||
return {k: v for k, v in envs.items() if k not in params} if params else envs |
Will this erase env entries if the value submitted for a matching key in params is set to None? Is that intended?
Would dict unpacking + packing work here, to ensure that only valid params values replace the env variable values?
def _exclude_params_from_envs(params, envs):
"""
params passed at inference time should override envs.
"""
return {**envs, **(params or {})}
We shouldn't expect params to set a key to None. If that's the case, it overrides the envs value unless we exclude None values from params itself; otherwise {**envs, **(params or {})} still overrides the value to None. params is something users explicitly pass at inference time, so I think they won't pass a key unless they really want to set it to None.
Yeah, that was what I was getting at (checking for None values that would invalidate a config). However, if that's a user's intention, that's their intention.
For the code suggestion, and my line of thinking: if a user passes None with the code in your PR, it deletes the key instead of preserving the key: None relationship. With dict packing/unpacking, it preserves the state of params regardless of the values supplied by the user.
Here's an ugly repro:

# Original function
def _exclude_params_from_envs_original(params, envs):
    """
    params passed at inference time should override envs.
    """
    return {k: v for k, v in envs.items() if k not in params} if params else envs


# Proposed function
def _exclude_params_from_envs_proposed(params, envs):
    """
    params passed at inference time should override envs.
    """
    return {**envs, **(params or {})}


# Test cases
def test_function(func):
    # Test 1: Key in params is set to None and the same key exists in envs
    envs = {"key1": "value1", "key2": "value2"}
    params = {"key1": None}
    result = func(params, envs)
    assert result["key1"] is None, f"Test 1 failed for {func.__name__}!"

    # Test 2: Key in params is set to a non-None value and the same key exists in envs
    params = {"key1": "new_value1"}
    result = func(params, envs)
    assert result["key1"] == "new_value1", f"Test 2 failed for {func.__name__}!"

    # Test 3: Key exists only in params and not in envs
    params = {"key3": "value3"}
    result = func(params, envs)
    assert result["key3"] == "value3", f"Test 3 failed for {func.__name__}!"

    # Test 4: Key exists only in envs and not in params
    params = {}
    result = func(params, envs)
    assert result["key2"] == "value2", f"Test 4 failed for {func.__name__}!"

    print(f"All tests passed for {func.__name__}!")


test_function(_exclude_params_from_envs_proposed)
# All tests passed for _exclude_params_from_envs_proposed!

test_function(_exclude_params_from_envs_original)
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
/var/folders/cd/n8n0rm2x53l_s0xv_j_xklb00000gp/T/ipykernel_37720/2252776925.py in <cell line: 1>()
----> 1 test_function(_exclude_params_from_envs_original)
/var/folders/cd/n8n0rm2x53l_s0xv_j_xklb00000gp/T/ipykernel_37720/2499167076.py in test_function(func)
19 params = {"key1": None}
20 result = func(params, envs)
---> 21 assert result["key1"] is None, f"Test 1 failed for {func.__name__}!"
22
23
KeyError: 'key1'
Is this the behavior you are going for?
@@ -151,6 +151,13 @@ def _validate_model_params(task, model, params):
    )


def _exclude_params_from_envs(params, envs):
Can we add a parametrized test for the behavior of this override, to ensure that param entries override env variable entries as expected (checking for things like None values in param overrides, so that we have effective error handling for situations like that)?
LGTM once the small test is added! TY @serena-ruan :D
Related Issues/PRs
#xxx

What changes are proposed in this pull request?
Add several needed configs for Azure OpenAI.
Solve the hanging problem caused by not raising the error.
How is this PR tested?
Notebook: https://e2-dogfood.staging.cloud.databricks.com/?o=6051921418418893#notebook/4169768892693977/command/4169768892694002
Does this PR require documentation update?
Release Notes
Is this a user-facing change?
What component(s), interfaces, languages, and integrations does this PR affect?
Components
- area/artifacts: Artifact stores and artifact logging
- area/build: Build and test infrastructure for MLflow
- area/docs: MLflow documentation pages
- area/examples: Example code
- area/gateway: AI Gateway service, Gateway client APIs, third-party Gateway integrations
- area/model-registry: Model Registry service, APIs, and the fluent client calls for Model Registry
- area/models: MLmodel format, model serialization/deserialization, flavors
- area/recipes: Recipes, Recipe APIs, Recipe configs, Recipe Templates
- area/projects: MLproject format, project running backends
- area/scoring: MLflow Model server, model deployment tools, Spark UDFs
- area/server-infra: MLflow Tracking server backend
- area/tracking: Tracking Service, tracking client APIs, autologging

Interface
- area/uiux: Front-end, user experience, plotting, JavaScript, JavaScript dev server
- area/docker: Docker use across MLflow's components, such as MLflow Projects and MLflow Models
- area/sqlalchemy: Use of SQLAlchemy in the Tracking Service or Model Registry
- area/windows: Windows support

Language
- language/r: R APIs and clients
- language/java: Java APIs and clients
- language/new: Proposals for new client languages

Integrations
- integrations/azure: Azure and Azure ML integrations
- integrations/sagemaker: SageMaker integrations
- integrations/databricks: Databricks integrations

How should the PR be classified in the release notes? Choose one:
- rn/none: No description will be included. The PR will be mentioned only by the PR number in the "Small Bugfixes and Documentation Updates" section
- rn/breaking-change: The PR will be mentioned in the "Breaking Changes" section
- rn/feature: A new user-facing feature worth mentioning in the release notes
- rn/bug-fix: A user-facing bug fix worth mentioning in the release notes
- rn/documentation: A user-facing documentation change worth mentioning in the release notes