Kubernetes operator for ML model monitoring on top of KFServing, using Kafka, Spark and the Model Monitoring framework.
- Install the Model Monitoring operator by choosing one of the versions available in the `install` folder:

  ```shell
  kubectl create -f install/v1beta1/model-monitoring.yaml
  ```
- Define and apply a Model Monitor resource (see `config/samples/monitoring_v1beta1_modelmonitor.yaml` for an example):

  ```shell
  kubectl apply -f model-monitor.yaml
  ```
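A Model Monitor manifest might look like the following sketch. Every field name below (API group, model reference, statistics, detectors, sinks) is an illustrative assumption, not the operator's actual schema; the sample manifest under `config/samples/` is the authoritative reference.

```yaml
# Hypothetical ModelMonitor sketch: all fields below are illustrative
# assumptions. Check config/samples/monitoring_v1beta1_modelmonitor.yaml
# for the real schema defined by the CRD.
apiVersion: monitoring.example.com/v1beta1   # assumed API group/version
kind: ModelMonitor                           # kind inferred from the sample filename
metadata:
  name: mnist-monitor
spec:
  model:
    name: mnist               # name of the served model (illustrative field)
  analysis:
    stats:                    # statistics to compute over inference logs (illustrative)
      - name: mean
      - name: stddev
    outliers:
      - name: descriptive     # outlier detector (illustrative)
    drift:
      - name: kl              # drift detector (illustrative)
  sinks:
    - kafka:
        topic: mnist-analysis # where analysis results are written (illustrative)
```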
- Serve your model with KFServing, specifying the following logger URL: `http://<model-monitor-name>-inferencelogger.<namespace>`
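With KFServing's v1beta1 API, the logger URL is set on the predictor spec. A sketch, in which the model name, namespace and storage URI are placeholders:

```yaml
apiVersion: serving.kubeflow.org/v1beta1
kind: InferenceService
metadata:
  name: mnist                  # placeholder model name
spec:
  predictor:
    logger:
      mode: all                # forward both requests and responses
      # pattern: http://<model-monitor-name>-inferencelogger.<namespace>
      url: http://mnist-monitor-inferencelogger.default
    sklearn:
      storageUri: gs://my-bucket/models/mnist   # placeholder model location
```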
- Check the inference analysis of your model by visiting the sinks specified in your Model Monitor.
The operator deploys an Inference Logger that forwards enriched inference logs to Kafka, then deploys a Spark job that consumes the corresponding Kafka topics and analyses the logs using a custom implementation of the Model Monitoring framework.
For the available statistics, outlier detectors and drift detectors, see the documentation of the framework.