AS a researcher, I would like to be able to compare metric results across different types of search approaches in Azure ML / MLflow,
SO that I can choose the best type of search for my case.
Note: currently the eval step averages all metrics across all search types and logs only the mean value per metric to MLflow, but it also uploads the full detailed table to Azure ML.
DoD:
Query and eval steps will run per search type in parallel (like the Index step does with the `IndexConfig` class).
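A minimal sketch of the per-search-type fan-out described in the DoD, assuming hypothetical `SEARCH_TYPES`, `evaluate`, and `run_all` names; the placeholder scores stand in for the pipeline's real query + eval steps:

```python
from concurrent.futures import ThreadPoolExecutor
from statistics import mean

SEARCH_TYPES = ["text", "vector", "hybrid"]  # hypothetical names

def evaluate(search_type: str) -> dict:
    # Placeholder for running the query + eval steps for one search
    # type; real code would invoke the pipeline here and return
    # per-question metric scores.
    scores = {
        "text": [0.5, 0.75],
        "vector": [0.5, 1.0],
        "hybrid": [0.75, 1.0],
    }
    return {"gpt_relevance": scores[search_type]}

def run_all(search_types: list) -> dict:
    # Fan out one eval per search type, then average each metric per
    # search type so the results stay comparable side by side.
    with ThreadPoolExecutor() as pool:
        results = dict(zip(search_types, pool.map(evaluate, search_types)))
    return {
        st: {metric: mean(vals) for metric, vals in metrics.items()}
        for st, metrics in results.items()
    }
```

Each per-search-type mean could then be logged as its own MLflow metric (e.g. one metric per search type) instead of a single average over all search types.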
Closes #480
### This PR includes
- Log all hyperparameters to MLflow
- Config refactoring: lowercase for all attributes, as they are not constants (to match coding conventions)
- Print the Azure ML monitoring URL right after run creation to allow easy access to monitoring (Ctrl + left click)
- Fix issues with experiment and job names, allowing Azure ML commands to open the experiment and MLflow runs automatically, while locally we create those manually
- Remove unused experiment settings from `.env` sample file
- Hide Azure ML warnings by using `CliV2AnonymousEnvironment` as the Azure ML environment name
- Temporary workaround for the wrong JSON format produced in the Q&A Gen step by the current CI generation model version: remove all `"..."` strings from the model response
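As a sketch of the hyperparameter-logging item above, a hypothetical `flatten_config` helper (not from this repo) could turn a nested config into dot-separated keys so every value is logged as an individual MLflow param; the actual MLflow call is shown commented out:

```python
def flatten_config(config: dict, prefix: str = "") -> dict:
    # Flatten a nested config dict into dot-separated keys so each
    # hyperparameter appears as its own MLflow param.
    params = {}
    for key, value in config.items():
        name = f"{prefix}{key}"
        if isinstance(value, dict):
            params.update(flatten_config(value, f"{name}."))
        else:
            params[name] = value
    return params

# With MLflow installed, the flattened dict can be logged in one call:
# import mlflow
# mlflow.log_params(flatten_config(config))
```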
### WIP
#540
related to #529