The Logicalis MLOps Platform accelerates your AI development process
An MLOps platform provides data scientists and software engineers with a collaborative environment that facilitates iterative data exploration, real-time co-working capabilities for experiment tracking, feature engineering, and model management, as well as controlled model transitioning, deployment, and monitoring.
AI development now moves at a fast pace. Once an enterprise has built an AI model, it faces the challenges of model deployment, monitoring, and maintenance. This is precisely the problem addressed by MLOps, a concept that has risen to prominence in recent years: MLOps holds that AI development should incorporate DevOps practices so that models can be continuously integrated, delivered, and deployed.
Logicalis sees the market potential of MLOps. Working with Microsoft Azure and InfuseAI, Logicalis has become the first enterprise dedicated to MLOps maintenance consulting. We publish this offering on the Microsoft Azure Marketplace, linking the entire AI lifecycle to its maintenance through a simple deployment and monitoring system. Logicalis's consulting service spans the full AI lifecycle through development on PrimeHub.
PrimeHub is a platform for AI development and management delivered as part of the PaaS service. Through PrimeHub, an enterprise can more efficiently set up computing resources for different frameworks, data-processing pipelines, and authorization for team management. The enterprise can therefore focus on AI development with a lower barrier to entry and lower management costs.
Logicalis deploys PrimeHub on the Azure platform so that data science and engineering teams can use PrimeHub effectively and strengthen their collaboration through the platform. The consulting team also provides services tailored to each individual data science team and system administration team.
For the data science team, the platform provides a model deployment framework. The system currently supports a variety of frameworks such as TensorFlow, PyTorch, and scikit-learn, so the team can run a model and package it as an API service within minutes, without additional support. Alternatively, data scientists can supply their own Dockerfile for pre-deployment processing and then deploy through PrimeHub Deploy. They can also check the system logs for API deployment status and monitor computing resources in real time, allowing them to find the best computing configuration. This reduces turnaround from the one or two working days it took in the past to minutes.
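To illustrate the idea of packaging a model behind an API, the sketch below wraps any object exposing a `predict()` method (the convention used by scikit-learn estimators) in a JSON request/response handler. This is a hypothetical, minimal example of the pattern, not PrimeHub's actual deployment API; the names `DummyModel` and `handle_predict` are illustrative.

```python
import json


class DummyModel:
    """Stand-in for a trained model (e.g. a scikit-learn estimator).

    Illustrative only: real deployments would load a serialized model.
    """

    def predict(self, rows):
        # Trivial "model": sum the features of each input row.
        return [sum(row) for row in rows]


def handle_predict(model, request_body: str) -> str:
    """Turn a JSON payload {"instances": [...]} into a prediction response.

    This is the core of what a model-serving wrapper does: parse the
    request, call the model, and serialize the result back to JSON.
    """
    payload = json.loads(request_body)
    predictions = model.predict(payload["instances"])
    return json.dumps({"predictions": predictions})
```

In a real service this handler would sit behind an HTTP server and the model would be loaded once at startup; the request/response contract shown here is the part that stays the same across frameworks.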
For system administrators, the platform provides a deployment overview of each API, a real-time estimation service, elastic load balancing, an authorization mechanism, API deployment logs, and more. It also provides real-time monitoring with forecasting, including Queries Per Second (QPS) and latency, enabling more accurate resource management and system-growth estimation.
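The QPS and latency figures mentioned above can be computed from a sliding window of recent requests. The sketch below is a minimal, assumed implementation of that idea (not PrimeHub's built-in monitoring); the class name `RollingMetrics` is hypothetical.

```python
from collections import deque


class RollingMetrics:
    """Estimate QPS and average latency over a sliding time window."""

    def __init__(self, window_seconds=60.0):
        self.window = window_seconds
        self.samples = deque()  # (timestamp, latency_seconds) pairs

    def record(self, latency, now):
        """Record one completed request with its latency."""
        self.samples.append((now, latency))
        self._evict(now)

    def _evict(self, now):
        # Drop samples that have fallen outside the window.
        while self.samples and now - self.samples[0][0] > self.window:
            self.samples.popleft()

    def qps(self, now):
        """Queries per second over the current window."""
        self._evict(now)
        return len(self.samples) / self.window

    def avg_latency(self, now):
        """Mean latency of requests still inside the window."""
        self._evict(now)
        if not self.samples:
            return 0.0
        return sum(lat for _, lat in self.samples) / len(self.samples)
```

A production monitor would feed these numbers to a dashboard (e.g. Grafana) and to a forecasting model for capacity planning; the windowed aggregation shown here is the underlying measurement step.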
Beyond its basic interface functions, the system also integrates common AI development IDEs, languages, algorithms, and libraries, as well as common data stores such as Hadoop, CSV, S3, and SQL. PrimeHub Deploy integrates with CI/CD tools such as GitHub and Jenkins, BI tools such as Tableau and Power BI, and deployment-monitoring tools such as Seldon and Grafana.
The PrimeHub series also supports Multi-AZ deployment and runs on Kubernetes, public cloud providers, OpenShift, and other platforms, where it manages and schedules NVIDIA GPU computing resources to provide the PaaS service.
Logicalis provides consulting services focused on model deployment and monitoring management. Whereas PrimeHub addresses the development team's needs, the consulting service leans toward satisfying the engineering team's requirements: improving the efficiency of computing-resource usage and data-read speed, and keeping the forecasting and monitoring service running without interruption.
Scope of work for the assessment: