
OpenVINO™ Execution Provider for ONNX Runtime

Intel

You are just a docker pull away from faster inference of your ONNX models on Intel® hardware.

OpenVINO™ Execution Provider for ONNX Runtime is a product designed for ONNX Runtime developers who want to get started with the OpenVINO™ toolkit in their inferencing applications. It delivers OpenVINO™ inline optimizations that enhance inferencing performance with minimal code modifications.

This docker image provides a development environment for ONNX Runtime applications written using the Python API.
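As a minimal sketch of what such an application looks like inside the container (the model path "model.onnx" is a placeholder, and the input shape is assumed to be fully static), enabling the OpenVINO™ Execution Provider is a one-line change to the session setup:

```python
import numpy as np
import onnxruntime as ort

# Load the model with the OpenVINO Execution Provider; ONNX Runtime falls back to
# its default CPU provider for any operator the OpenVINO EP does not support.
session = ort.InferenceSession("model.onnx", providers=["OpenVINOExecutionProvider"])

# Build a random input from the model's first input metadata (shape assumed static here).
input_meta = session.get_inputs()[0]
dummy_input = np.random.rand(*input_meta.shape).astype(np.float32)

outputs = session.run(None, {input_meta.name: dummy_input})
print(outputs[0].shape)
```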

This docker image can be used to accelerate Deep Learning inference applications written using the ONNX Runtime API on supported Intel® hardware.

To select a particular hardware option as the target for inference, use the device_type ONNX Runtime configuration option from the list of available configuration options.

The default hardware target for this docker image is the Intel® CPU. To choose other targets, use the configuration option above.
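For example, the following sketch selects a non-default target through provider_options; the model path and the "GPU_FP16" value are illustrative, so consult the OpenVINO™ Execution Provider documentation for the device_type values supported by your release:

```python
import onnxruntime as ort

# device_type values such as "CPU_FP32" or "GPU_FP16" are illustrative examples;
# check the OpenVINO Execution Provider documentation for the values your release accepts.
session = ort.InferenceSession(
    "model.onnx",  # placeholder model path
    providers=["OpenVINOExecutionProvider"],
    provider_options=[{"device_type": "GPU_FP16"}],
)
```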

Alternatively, to build a docker image with a different hardware target as the default, use this Dockerfile and pass the argument --build-arg DEVICE=<device_choice> to the docker build command. The device choice must be one of the allowed values of the device_type configuration option for the OpenVINO™ Execution Provider.