Other parts:
Machine Learning (ML) development brings many new complexities beyond the traditional software development lifecycle. Unlike traditional software developers, ML developers want to try multiple algorithms, tools, and parameters to get the best results, and they need to track this information to reproduce their work. In addition, they need to use many distinct systems to productionize models.
To address these challenges, MLflow, an open source project, simplifies the entire ML lifecycle. MLflow introduces simple abstractions to package reproducible projects, track results, encapsulate models so they can be used with many existing tools, and share models in a central repository, accelerating the ML lifecycle for organizations of any size.
Aimed at beginner and intermediate levels, this three-part series educates data scientists and ML developers on how to leverage MLflow as a platform to track experiments, package projects for reproducible runs, use model flavors to deploy in diverse environments, and manage models in a central repository for sharing.
Understand the four main components of open source MLflow (MLflow Tracking, MLflow Projects, MLflow Models, and Model Registry) and how each component helps address challenges of the ML lifecycle; a minimal code sketch follows the list below.
- How to use MLflow Tracking to record and query experiments: code, data, config, and results
- How to use the MLflow Projects packaging format to reproduce runs
- How to use the MLflow Models general format to send models to diverse deployment tools
- How to use the Model Registry for collaborative model lifecycle management
- How to use the MLflow UI to visually compare and contrast experiment runs with different tuning parameters and to evaluate metrics
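To make these four components concrete before the session, here is a minimal sketch of how each one surfaces in the Python API. It assumes a local MLflow installation (`pip install mlflow`); the experiment and model names are illustrative placeholders, not part of the workshop material:

```python
import mlflow
import mlflow.sklearn  # the scikit-learn model "flavor"

mlflow.set_experiment("sketch-experiment")  # illustrative name

# MLflow Tracking: record parameters and metrics for a run
with mlflow.start_run() as run:
    mlflow.log_param("alpha", 0.5)    # a hyperparameter
    mlflow.log_metric("rmse", 0.78)   # an evaluation result
    # MLflow Models: log a fitted model in a reusable flavor, e.g.
    # mlflow.sklearn.log_model(model, "model")

# Query the recorded runs back as a pandas DataFrame
runs = mlflow.search_runs()
print(runs[["run_id", "params.alpha", "metrics.rmse"]])

# MLflow Projects: reproduce a packaged run straight from a Git repo
# mlflow.projects.run("https://github.com/mlflow/mlflow-example",
#                     parameters={"alpha": 0.4})

# Model Registry: promote a logged model to a shared, named registry entry
# (requires a registry-backed tracking server, such as Databricks)
# mlflow.register_model(f"runs:/{run.info.run_id}/model", "SketchModel")
```

During the workshop we will do the equivalent inside DCE notebooks, where the tracking UI and registry are built in.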
In part 1, we will cover:
- The concepts and motivation behind MLflow
- How to use Databricks Community Edition (DCE)
- A tour of the MLflow API documentation
- An introduction to the MLflow Python fluent tracking APIs
- Walking and working through three machine learning models using the MLflow APIs in DCE (a sketch of the pattern follows this list)
- Using the MLflow UI in DCE to compare experiment metrics, parameters, and runs
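To give a flavor of the hands-on portion (the actual workshop notebooks differ), here is a hedged sketch of the pattern we will follow: train a simple scikit-learn model across a few hyperparameter values and log each run so the MLflow UI can compare them. The dataset and parameter grid below are placeholders:

```python
import numpy as np
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_diabetes
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

mlflow.set_experiment("sketch-ridge-runs")  # illustrative name

# One MLflow run per hyperparameter value, so the UI can compare them
for alpha in (0.1, 0.5, 1.0):
    with mlflow.start_run():
        model = Ridge(alpha=alpha).fit(X_train, y_train)
        rmse = np.sqrt(mean_squared_error(y_test, model.predict(X_test)))
        mlflow.log_param("alpha", alpha)
        mlflow.log_metric("rmse", rmse)
        mlflow.sklearn.log_model(model, "model")
```

Locally you would then run `mlflow ui` and open http://localhost:5000 to sort the three runs by `rmse`; in DCE the same comparison view is available from the notebook's experiment sidebar.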
- Before the session, please pre-register for Databricks Community Edition
- Knowledge of Python 3 and programming in general
- Preferably a UNIX-based, fully charged laptop with 8-16 GB of RAM and a Chrome or Firefox browser
- Familiarity with GitHub and git, and an account on GitHub
- Some knowledge of Machine Learning concepts, libraries, and frameworks
- scikit-learn
- pandas and NumPy
- matplotlib
- [optional for part-1] PyCharm/IntelliJ or choice of syntax-based Python editor
- [optional for part-1] pip/pip3 or conda and Python 3 installed
- Loads of virtual laughter, curiosity, and a sense of humor ... :-)
Familiarity with git is important so that you can easily get all the material during the tutorial and workshop, and continue to work on it in your free time after the session is over.
git clone git@github.com:dmatrix/mlflow-workshop-part-1.git or git clone https://github.com/dmatrix/mlflow-workshop-part-1.git
This tutorial will refer to the following documentation:
We will walk through this during the session, but please sign up for Databricks Community Edition before the session:
git clone git@github.com:dmatrix/mlflow-workshop-part-1.git
- Use this URL to log into the Databricks Community Edition
- Create an ML Runtime 6.5 cluster
- In the browser:
- (1) Go to the GitHub notebooks subdirectory
- (2) Download the MLFlow-CE.dbc file to your laptop
- Import the MLFlow-CE.dbc file into Databricks Community Edition
Let's go!
Cheers,
Jules