
OpenVINO™ Optimized Container with ONNX RT

Intel
Develop once and deploy everywhere — in the cloud or at the edge

The OpenVINO™ Optimized Container with ONNX RT enables high-performance deep learning inference workloads to run on Intel® architecture. With OpenVINO™ and ONNX Runtime paired together, developers can deploy ONNX models on Intel® hardware while improving cost, power, and development efficiency. Because the cloud-to-edge flow is validated, developers can take cloud-trained or pre-trained AI models and applications and deploy them at the edge to solve industry use cases, bridging the gap between cloud-developed AI solutions and edge devices such as Intel® CPUs, GPUs, VPUs, and FPGAs. OpenVINO™ not only provides heterogeneous hardware flexibility through a single inference engine for deep learning, but also offers additional tools such as the Deep Learning Workbench and Reference Implementations.

Benefits:

  • High-performance, fully tested and optimized AI Container
  • Deploy anywhere – In the Cloud, On-Prem or at the Edge
  • Pre-validated with ONNX Model Zoo
  • Works with Azure ML and other Azure Services
  • Use it to improve the scoring latency and efficiency of models deployed with Azure services

To pull the image, run docker pull with the base image and one of the tags below (an example follows the tag list).

Base Image: mcr.microsoft.com/azureml/onnxruntime

Default Tags:

  • CPU - :latest-openvino-cpu
  • GPU - :latest-openvino-gpu
  • MYRIAD - :latest-openvino-myriad
  • VAD-M - :latest-openvino-vadm
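
For example, the following shell commands pull the CPU- and GPU-targeted images by combining the base image with the tags listed above:

```sh
# Pull the OpenVINO-optimized ONNX Runtime image for CPU targets
docker pull mcr.microsoft.com/azureml/onnxruntime:latest-openvino-cpu

# Or pull the variant targeting Intel GPUs
docker pull mcr.microsoft.com/azureml/onnxruntime:latest-openvino-gpu
```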

Note: While the above tags are the defaults, developers can also dynamically switch the target hardware at runtime, as sketched below. Learn more about building the image from the Dockerfile here.
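
As an illustrative sketch (not the container's official documentation), the ONNX Runtime Python API lets you select the OpenVINO Execution Provider and pass a device_type provider option to pick the target hardware; the model path and device string below are placeholders:

```python
import onnxruntime as ort

# Hypothetical model path; any ONNX model works here.
model_path = "model.onnx"

# Select the OpenVINO Execution Provider and choose the target device.
# Typical device_type values include "CPU_FP32", "GPU_FP16", and
# "MYRIAD_FP16" (check the OpenVINO EP docs for the list your
# onnxruntime build supports).
session = ort.InferenceSession(
    model_path,
    providers=["OpenVINOExecutionProvider"],
    provider_options=[{"device_type": "CPU_FP32"}],
)

print(session.get_providers())  # confirm the active execution providers
```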

Get started right away on Intel® hardware for free using Intel® DevCloud. Along with several other examples, developers can try the Clean Room Worker Safety Jupyter notebook, which uses a Tiny YOLO V2 ONNX model for object detection (a minimal inference sketch follows). Developers can also acquire developer kits from partners to jump-start prototyping, testing, and deploying a solution with pre-integrated hardware and software tools. Learn more about the kits here.
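
As a minimal sketch of the kind of inference that notebook performs (the input name is read from the model, the 1x3x416x416 shape follows the ONNX Model Zoo Tiny YOLO V2 convention, and the model path is a placeholder):

```python
import numpy as np
import onnxruntime as ort

# Placeholder path to the Tiny YOLO V2 model from the ONNX Model Zoo.
session = ort.InferenceSession(
    "tiny-yolov2.onnx",
    providers=["OpenVINOExecutionProvider", "CPUExecutionProvider"],
)

# Tiny YOLO V2 expects a 1x3x416x416 float32 image tensor; random data
# stands in here for a real preprocessed frame.
input_name = session.get_inputs()[0].name
image = np.random.rand(1, 3, 416, 416).astype(np.float32)

# The raw output is a 13x13 detection grid that must be post-processed
# (anchor decoding, confidence thresholding, NMS) into bounding boxes.
(grid,) = session.run(None, {input_name: image})
print(grid.shape)
```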

By downloading and using this container and the included software, you agree to the terms and conditions of the Intel® License Agreement.
