The OpenVINO™ Optimized Image with ONNX Runtime enables high-performance deep learning inference workloads to be deployed on Intel® architecture. Paired together, the two let developers deploy ONNX models on any Intel® hardware while improving cost, power, and development efficiency. With a validated cloud-to-edge flow, developers can deploy cloud-trained AI models and applications at the edge to solve industry use cases, bridging the gap between cloud-developed AI solutions and edge devices such as Intel® CPUs, GPUs, VPUs, and FPGAs. OpenVINO™ not only provides heterogeneous hardware flexibility through a single inference engine for deep learning, but also offers additional tools such as the Deep Learning Workbench and Reference Implementations.
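As an illustration, the minimal sketch below runs an ONNX model through ONNX Runtime's OpenVINO Execution Provider inside the container. The model path, the dummy-input construction, and the CPU fallback provider are assumptions for the example, not part of this image's documentation.

```python
import numpy as np
import onnxruntime as ort

# Prefer the OpenVINO Execution Provider; fall back to the default
# CPU provider if OpenVINO cannot handle a node. "model.onnx" is a
# placeholder for your own model file.
session = ort.InferenceSession(
    "model.onnx",
    providers=["OpenVINOExecutionProvider", "CPUExecutionProvider"],
)

# Build a dummy input matching the model's first input; dynamic
# dimensions (strings or None) are replaced with 1. Assumes a
# float32 input tensor.
inp = session.get_inputs()[0]
shape = [d if isinstance(d, int) else 1 for d in inp.shape]
dummy = np.random.rand(*shape).astype(np.float32)

# Run inference and print the output shapes.
outputs = session.run(None, {inp.name: dummy})
print([o.shape for o in outputs])
```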
To pull the image, use docker pull with the base image and tags below.
Base Image: mcr.microsoft.com/azureml/onnxruntime
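For example, with `<tag>` as a placeholder for one of the listed tags:

```sh
docker pull mcr.microsoft.com/azureml/onnxruntime:<tag>
```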
Get started right away on Intel® hardware for free using Intel® DevCloud. Along with several examples, developers can start with the Clean Room Worker Safety Jupyter notebook, which uses a Tiny YOLO v2 ONNX model for object detection. Developers can also acquire developer kits from partners to jump-start development with hardware and software tools for prototyping, testing, and deploying a solution. Learn more about the kits here.
By downloading and using this container and the included software, you agree to the terms and conditions of the Intel® License Agreement.