TinyYOLOv3 HTTP Extension for AVA on Edge, Inc.

Tiny YOLOv3 model inferencing based on ONNX

This module is derived from the "IoT Edge module extensions to be used with Azure Video Analyzer" repository and adapted to run on IoT Edge.

While the IoT Edge module is running, you can use it in conjunction with Azure Video Analyzer: it acts as an inferencing server for the HTTP extension node in Azure Video Analyzer on Edge. The throughput of the TinyYOLOv3 inferencing server depends on the available CPU power.

This module can be used with the Azure cloud-native application WeDX Flow to simplify module management.

Minimum hardware requirements: Linux OS (x64 or arm64), 1 GB of RAM, 800 MB of storage

Features

  • Inferencing server for AVA HTTP extension node
  • The default port of the module (Nginx proxy) is 80, and the binding port is 8150
  • Telemetry of inference results
  • Video stream of inference results - {SERVER IP}:8150/stream/video

Direct methods

  • Not available

Environment variables

  • Not available

Desired properties

  • Send telemetry of inference results (default: true)
    • "SendTelemetry": true / false
  • Stream video of inference results (default: true)
    • "ViewVideoStream": true / false
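
For example, these desired properties can be set in the module twin. The sketch below shows only the relevant fragment of the twin document; the property names come from the list above:

```json
{
  "properties": {
    "desired": {
      "SendTelemetry": true,
      "ViewVideoStream": false
    }
  }
}
```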

AVA (Azure Video Analyzer) Integration

  • Pipeline Topology sample (HttpExtension)
    • "url": "rtsp://{ModuleName}/store"
    • "mode": "preserveAspectRatio"
    • "width": "416"
    • "height": "416"
    • "@type": "#Microsoft.VideoAnalyzer.ImageFormatBmp"
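
Putting the values above together, the HttpExtension node in a pipeline topology might look like the following sketch. The surrounding topology (inputs, credentials, other nodes) is omitted, the node name is arbitrary, and {ModuleName} is a placeholder for the deployed module name:

```json
{
  "@type": "#Microsoft.VideoAnalyzer.HttpExtension",
  "name": "httpExtension",
  "endpoint": {
    "@type": "#Microsoft.VideoAnalyzer.UnsecuredEndpoint",
    "url": "rtsp://{ModuleName}/store"
  },
  "image": {
    "scale": {
      "mode": "preserveAspectRatio",
      "width": "416",
      "height": "416"
    },
    "format": {
      "@type": "#Microsoft.VideoAnalyzer.ImageFormatBmp"
    }
  }
}
```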

HTTP endpoints

  • /score
    • Getting a list of detected objects
  • /score?stream={id}
    • Viewing the output video with inferencing overlays in a browser
  • /annotate
    • Seeing the bounding boxes overlaid on the image
  • /score-debug
    • Getting the list of detected objects and also generating an annotated image
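
As a sketch of how a client might call the /score endpoint, the helper below builds a POST request carrying a BMP image. It assumes the Nginx proxy port 80 and the BMP input format noted above; `build_score_request` is a hypothetical name, not part of the module:

```python
import json
import urllib.request


def build_score_request(host: str, image_bytes: bytes) -> urllib.request.Request:
    """Build a POST request sending a BMP image to the /score endpoint.

    Assumptions: the module's Nginx proxy listens on port 80, and the
    server accepts an image/bmp request body (as in the AVA topology above).
    """
    return urllib.request.Request(
        url=f"http://{host}/score",
        data=image_bytes,
        headers={"Content-Type": "image/bmp"},
        method="POST",
    )


# Sending the request (requires a running module):
# with urllib.request.urlopen(build_score_request("192.168.0.10", bmp_bytes)) as resp:
#     detections = json.loads(resp.read())
```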