- Consulting services
ML Ops Framework Setup: 12-Week Implementation
Automated ML model management (MLOps) to generate higher ROI on data science investments and increase business users' confidence in analytical insights
Objective: Set up an end-to-end MLOps pipeline for up to 6 ML models and enable clients with a clear MLOps framework to onboard newer ML models to production going forward.
Key Challenges Addressed:
Outcome:
Implementation Plan
The break-up of the implementation plan is as below (a sketch of the drift calculation referenced here follows this list):
• Week 1-2: Discovery, to understand the business and ML models, data sources, and downstream applications.
• Week 3-6: Integration of model pipelines and drift calculation for two models, and setup of the model testing framework.
• Week 7-9: Model drift calculation for three models and activation of the visual provenance graph. The CI/CD pipeline for model deployment using Azure DevOps is also created during this time.
• Week 10-12: Drift calculation and visual provenance graph for all models, centralized model monitoring, documentation, and an MLOps roadmap for the future.
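To make the drift calculation step concrete, the following is a minimal sketch, not the delivered implementation: it compares a feature's training distribution against recent scoring data using a two-sample Kolmogorov-Smirnov test. The function name, column choice, and the 0.05 threshold are illustrative assumptions.

```python
# Minimal per-feature drift check (illustrative sketch only; the threshold
# and function name are assumptions, not the delivered implementation).
import numpy as np
from scipy.stats import ks_2samp


def feature_drift(train_values: np.ndarray, live_values: np.ndarray,
                  p_threshold: float = 0.05) -> dict:
    """Flag drift for one numeric feature using a two-sample KS test."""
    statistic, p_value = ks_2samp(train_values, live_values)
    return {
        "ks_statistic": float(statistic),
        "p_value": float(p_value),
        # Low p-value rejects "same distribution" => drift suspected.
        "drift_detected": p_value < p_threshold,
    }


if __name__ == "__main__":
    rng = np.random.default_rng(42)
    baseline = rng.normal(loc=0.0, scale=1.0, size=5_000)  # training sample
    recent = rng.normal(loc=0.5, scale=1.0, size=5_000)    # recent scoring data
    print(feature_drift(baseline, recent))
```

In practice a check like this would run per feature (and per model) on a schedule, with the results fed into the centralized model monitoring described below.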
This implementation uses the following native Azure components (a sketch of how the monitoring pieces fit together follows this list):
• Azure Git: Allows changes to the repository to be made in a controlled way, coordinating work between many people without accidentally overwriting or corrupting files.
• App Services: The monitoring web app and the Python backend are hosted on Azure Linux App Services. Both apps can be scaled automatically or manually on demand.
• Microsoft Azure Data Factory: Used to fetch the status information of Data Factory pipeline runs for tracking.
• Databricks Workspace: The MLflow component of Databricks is used to fetch the data logged by notebooks during execution.
• Cosmos DB: Its flexible schema (NoSQL) helps accommodate the changing nature of the data and evolving requirements.
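The sketch below illustrates one way the Databricks/MLflow and Cosmos DB pieces could work together in the monitoring backend: run metrics logged to MLflow are read back and upserted into a Cosmos DB container for the monitoring web app to query. The experiment path, database, container, model name, and environment-variable names are assumptions for illustration, not the actual configuration.

```python
# Illustrative sketch (assumed names/config): pull run metrics from MLflow on
# Databricks and upsert them into Cosmos DB for the monitoring web app.
import os
import pandas as pd
import mlflow
from azure.cosmos import CosmosClient

# Point MLflow at the Databricks workspace (expects DATABRICKS_HOST and
# DATABRICKS_TOKEN in the environment).
mlflow.set_tracking_uri("databricks")
runs = mlflow.search_runs(experiment_names=["/Shared/churn-model"])  # pandas DataFrame

# Cosmos DB container used as the flexible-schema store for monitoring records
# (container partition key assumed to be /model).
cosmos = CosmosClient(os.environ["COSMOS_URL"], credential=os.environ["COSMOS_KEY"])
container = cosmos.get_database_client("mlops").get_container_client("model_runs")

for _, run in runs.iterrows():
    container.upsert_item({
        "id": run["run_id"],        # Cosmos DB documents require an 'id' field
        "model": "churn-model",
        "status": run["status"],
        "metrics": {
            col.replace("metrics.", ""): float(run[col])
            for col in runs.columns
            if col.startswith("metrics.") and pd.notna(run[col])
        },
    })
```

A small backend like this could be hosted on the Linux App Service mentioned above, with the monitoring web app reading the aggregated records from Cosmos DB.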