Model monitoring for a frictionless view of ML model performance on Azure
M-AInA is a unified AI/ML model monitoring template that provides a consolidated view of monitoring metrics for models deployed on the Azure platform.
M-AInA offers a new way to monitor your AI/ML models, providing insights around drift management, ground-truth evaluation, model explanations, and what-if analysis. The outcome is full visibility into model performance, easing the process of managing any misbehavior in the models.
M-AInA interoperates easily with Azure cloud services. It provides built-in functionality with pre-defined KPIs to help monitor model performance and drift using key Azure components such as Azure Monitor and Azure Machine Learning.
The key axes that provide a frictionless view of AI/ML models with M-AInA are:
Model Drift: Evaluates model performance over a period and compares model predictions against ground truth.
Explainable AI: Insights covering model explanations and the importance of features for a particular model, helping business teams interpret model behavior.
Data Drift: Provides a view of data comparison across multiple versions and highlights the features contributing to the drift.
Service Health: Offers a detailed view of key performance metrics and includes automated triggers and alerts (e.g., model retraining).
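To illustrate the data-drift axis, a drift score can be computed between a baseline feature distribution and the distribution currently seen in production. The sketch below uses the Population Stability Index (PSI) as one common choice of metric; the metric, bucketing scheme, and thresholds are illustrative assumptions, not a description of M-AInA's internals:

```python
import numpy as np

def population_stability_index(baseline, current, buckets=10):
    """Compute PSI between two 1-D numeric samples.

    Common rule of thumb: PSI < 0.1 means no significant drift,
    0.1-0.25 moderate drift, > 0.25 major drift.
    """
    # Interior bucket edges come from the baseline's quantiles,
    # so each baseline bucket holds roughly equal mass.
    edges = np.quantile(baseline, np.linspace(0, 1, buckets + 1))[1:-1]

    # Assign each observation to a bucket and convert counts to shares.
    base_idx = np.searchsorted(edges, baseline, side="right")
    curr_idx = np.searchsorted(edges, current, side="right")
    base_pct = np.bincount(base_idx, minlength=buckets) / len(baseline)
    curr_pct = np.bincount(curr_idx, minlength=buckets) / len(current)

    # Floor the shares to avoid log(0) for empty buckets.
    base_pct = np.clip(base_pct, 1e-6, None)
    curr_pct = np.clip(curr_pct, 1e-6, None)

    return float(np.sum((curr_pct - base_pct) * np.log(curr_pct / base_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 5000)
stable = rng.normal(0.0, 1.0, 5000)    # same distribution: low PSI
shifted = rng.normal(0.8, 1.0, 5000)   # simulated drifted feature: high PSI

print(population_stability_index(baseline, stable))
print(population_stability_index(baseline, shifted))
```

A score like this, computed per feature on a schedule, is the kind of signal that can feed the automated triggers and alerts mentioned above.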
The framework today covers drift management for structured data and for classification, regression, and forecasting models. Within the scope of the PoC, model monitoring would be set up for a selected model registered in Azure Machine Learning.
Note: The scope of the PoC includes setting up the model monitoring framework and end-to-end monitoring for one model. Model drift evaluation requires the availability of ground truth.
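To make the ground-truth requirement concrete, the sketch below shows why it is needed: model drift is scored by comparing predictions against arriving ground-truth labels in fixed windows, with low-accuracy windows raising an alert (for example, to trigger retraining). The window size, metric, and threshold here are hypothetical illustrations, not M-AInA's actual configuration:

```python
from dataclasses import dataclass

@dataclass
class DriftAlert:
    window_start: int  # index of the first record in the flagged window
    accuracy: float    # accuracy observed in that window

def rolling_accuracy_alerts(predictions, ground_truth, window=100, threshold=0.85):
    """Score predictions against ground truth in fixed-size windows
    and flag every window whose accuracy falls below the threshold."""
    alerts = []
    for start in range(0, len(predictions), window):
        p = predictions[start:start + window]
        g = ground_truth[start:start + window]
        acc = sum(int(a == b) for a, b in zip(p, g)) / len(p)
        if acc < threshold:
            alerts.append(DriftAlert(window_start=start, accuracy=acc))
    return alerts

# Toy example: the model keeps predicting 1, but the true labels flip
# to 0 in the second window, so only that window triggers an alert.
preds = [1] * 200
truth = [1] * 100 + [0] * 100
print(rolling_accuracy_alerts(preds, truth, window=100, threshold=0.85))
```

Without ground truth, only data drift and service health can be monitored; this accuracy-based signal is unavailable.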
To help exemplify the outcome of M-AInA, our 6-week PoC engagement plan covers the monitoring setup of one model:
Week 2 and 3:
Week 4, 5 and 6: