The AI Extension module for OpenVINO™ Model Server is a high-performance Edge module for serving machine learning models. Developers can send video frames and receive inference results from the OpenVINO™ Model Server. Powered by the OpenVINO™ toolkit, it enables developers to build, optimize, and deploy deep learning inference workloads for maximum performance across Intel® architectures. The AI Extension module works with streaming analytics platforms such as Live Video Analytics on IoT Edge via a REST API, allowing easy deployment of algorithms and AI models supported by the OpenVINO™ toolkit.
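As a rough illustration of this request/response flow, the sketch below posts a single JPEG-encoded frame to the module's REST interface and prints the returned JSON inference result. The host, port, endpoint path, and content type shown are assumptions for illustration only, not the module's documented API; consult the deployment documentation for the actual values.

```python
# Minimal client sketch, assuming the AI Extension module is reachable at
# localhost:5000 and exposes a hypothetical detection endpoint that accepts
# a JPEG payload and returns JSON inference results.
import json
import requests

def infer_frame(jpeg_path: str,
                url: str = "http://localhost:5000/vehicleDetection") -> dict:
    """POST one encoded video frame and return the parsed JSON result."""
    with open(jpeg_path, "rb") as f:
        frame_bytes = f.read()
    response = requests.post(
        url,
        data=frame_bytes,
        headers={"Content-Type": "image/jpeg"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    # Example usage with a locally saved frame extracted from a video stream.
    result = infer_frame("sample_frame.jpg")
    print(json.dumps(result, indent=2))
```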
Note: To build for other operating systems and target devices, use the Dockerfiles for Clear Linux, Ubuntu, and CentOS (CPU, iGPU, VPU).
It is easy to get started on Intel® hardware and experience the Intel® Distribution of OpenVINO™ toolkit with a free sign-up for Intel® DevCloud for the Edge. Developers can then select and purchase from several Intel®-based accelerator/developer kits and install software stacks to further develop their applications. Advanced solution developers can purchase industry-specific RFP Ready Kits.
By downloading and using this container and the included software, you agree to the terms and conditions under the License Agreement tab.
Intel is committed to respecting human rights and avoiding complicity in human rights abuses. See Intel’s Global Human Rights Principles. Intel’s products and software are intended only to be used in applications that do not cause or contribute to a violation of an internationally recognized human right.