OpenVINO™ Execution Provider for ONNX Runtime
Intel
You are just a docker pull away from faster inferencing of your ONNX Models on Intel® Hardware.
OpenVINO™ Execution Provider for ONNX Runtime is a product designed for ONNX Runtime developers who want to get started with the OpenVINO™ toolkit in their inferencing applications. This product delivers OpenVINO™ inline optimizations that enhance inferencing performance with minimal code modifications.
This docker image provides a development environment for ONNX Runtime applications written using the Python API.
This docker image can be used to accelerate Deep Learning inference applications written using the ONNX Runtime API on the following Intel® hardware:
- Intel® CPU
- Intel® Integrated GPU
- Intel® Movidius™ Vision Processing Unit (VPU)
- Heterogeneous Mode
- Multi-device Mode
To select a particular hardware option as the target for inference, use the device_type ONNX Runtime configuration option from the list of configuration options.
The default hardware target for this docker image is the Intel® CPU. To choose other targets, use the configuration option above.
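For example, with the Python API the target can be set through the provider options when creating an inference session. This is only a minimal sketch: the model path is a placeholder, and the device_type value shown is an assumption; use a value that is actually listed as allowed for your release of the OpenVINO™ Execution Provider.

```python
import onnxruntime as ort

# Illustrative device value; substitute an allowed device_type value
# for your OpenVINO(TM) Execution Provider release (e.g. a CPU or GPU option).
device = "GPU_FP32"

# "model.onnx" is a placeholder path to your own ONNX model.
session = ort.InferenceSession(
    "model.onnx",
    providers=["OpenVINOExecutionProvider"],
    provider_options=[{"device_type": device}],
)

# Confirm which execution providers are active for this session.
print("Active providers:", session.get_providers())
```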
Alternatively, to build a docker image with a different hardware target as the default, use this Dockerfile and pass the argument --build-arg DEVICE=<device_choice> to the docker build command. The device choice must be one of the allowed values for the device_type configuration option of the OpenVINO™ Execution Provider.
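As a sketch, assuming the Dockerfile is in the current directory, the image tag is your own choice, and the device value shown is allowed in your release, the build command might look like:

```
docker build . --build-arg DEVICE=GPU_FP32 -t onnxruntime-openvino:gpu
```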