Lobe model HTTP Extension for AVA on Edge, Inc.

Lobe export model inferencing based on ONNX

This module is adapted from the "ONNX Sample" so that it can run as an IoT Edge module.

When running as an IoT Edge module, it can be used in conjunction with Azure Video Analyzer, serving as an inferencing server for the HTTP extension node in Azure Video Analyzer on Edge. Inferencing performance depends on the available CPU power.

This module can be used with WeDX Flow, an Azure cloud-native application, to simplify module management.

Minimum hardware requirements: Linux (x64 or arm64), 1 GB of RAM, 850 MB of storage


Features

  • Inferencing server for the AVA HTTP extension node (classification inference type)
  • The module's default port (Nginx proxy) is 80; the binding port is 8170
  • Telemetry of inference results
  • Video stream of inference results: {SERVER IP}:8170/stream/video
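As a quick check, the video-stream URL above can be assembled and opened in a browser. The address below is only a placeholder, not a value defined by this module:

```python
# Build the inference video-stream URL from the feature list above.
# SERVER_IP is a placeholder; replace it with your edge device's address.
SERVER_IP = "192.168.0.10"  # assumption, not a value from this module
STREAM_URL = f"http://{SERVER_IP}:8170/stream/video"

# To view it, open STREAM_URL in a browser.
print(STREAM_URL)
```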

Getting Started

  • Export the ONNX model from the Lobe app
  • Check that the two required files are in the export directory
    • model.onnx
    • signature.json
  • Copy the two files to the Lobe model directory (default: /var/lib/lobemodel)
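The steps above can be sanity-checked with a small script before copying the files. This is a sketch under the assumption that signature.json carries its label list under a "classes" key; the exact layout may vary between Lobe versions.

```python
import json
from pathlib import Path

def check_export(export_dir: str) -> list:
    """Verify a Lobe export directory and return the labels from signature.json."""
    export = Path(export_dir)
    for name in ("model.onnx", "signature.json"):
        if not (export / name).exists():
            raise FileNotFoundError(f"missing {name} in {export_dir}")
    with open(export / "signature.json") as f:
        signature = json.load(f)
    # "classes" / "Label" is an assumption about the signature layout
    return signature.get("classes", {}).get("Label", [])
```

Once the check passes, copy both files to /var/lib/lobemodel (or to the directory configured via LobeModelDirectory).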

Direct methods

  • Not available

Environment variables

  • Not available

Desired properties

  • ONNX model directory (default: /var/lib/lobemodel)
    • "LobeModelDirectory": "/var/lib/lobemodel"
  • Send telemetry of inference results (default: true)
    • "SendTelemetry": true / false
  • Stream video of inference results (default: true)
    • "ViewVideoStream": true / false
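The properties above map onto a module twin desired-properties patch. The following is only a sketch of the JSON shape, using the defaults listed; applying it (via the Azure portal or the IoT Hub service SDK) is outside this module:

```python
import json

# Desired properties for this module, using the defaults listed above.
desired_properties = {
    "LobeModelDirectory": "/var/lib/lobemodel",
    "SendTelemetry": True,    # false disables inference telemetry
    "ViewVideoStream": True,  # false disables the video stream
}

# Shape of the twin patch; apply it via the Azure portal or IoT Hub SDK.
patch = json.dumps({"properties": {"desired": desired_properties}}, indent=2)
print(patch)
```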

AVA(Azure Video Analyzer) Integration

  • Pipeline Topology sample (HttpExtension)
    • "url": "http://{ModuleName}/score"
    • "mode": "preserveAspectRatio"
    • "width": "416"
    • "height": "416"
    • "@type": "#Microsoft.VideoAnalyzer.ImageFormatBmp"
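Putting the settings above together, an HttpExtension node might look like the sketch below. The node name and the endpoint type are assumptions; consult the AVA pipeline topology schema for the authoritative shape.

```python
import json

# Sketch of an AVA HttpExtension node using the settings listed above.
# "lobeHttpExtension" and the UnsecuredEndpoint type are assumptions.
http_extension_node = {
    "@type": "#Microsoft.VideoAnalyzer.HttpExtension",
    "name": "lobeHttpExtension",
    "endpoint": {
        "@type": "#Microsoft.VideoAnalyzer.UnsecuredEndpoint",
        "url": "http://{ModuleName}/score",
    },
    "image": {
        "scale": {
            "mode": "preserveAspectRatio",
            "width": "416",
            "height": "416",
        },
        "format": {"@type": "#Microsoft.VideoAnalyzer.ImageFormatBmp"},
    },
}
print(json.dumps(http_extension_node, indent=2))
```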


HTTP endpoints

  • /score
    • Getting a list of detected objects
  • /annotate
    • Seeing the bounding boxes overlaid on the image
  • /score-debug
    • Getting the list of detected objects and also generating an annotated image
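A minimal client sketch for these endpoints, assuming the module is reachable on its bound port 8170 (use port 80 when going through the Nginx proxy). BASE is a placeholder address:

```python
import urllib.request

BASE = "http://localhost:8170"  # placeholder; point at the module host

def infer(image_bytes: bytes, endpoint: str = "/score") -> bytes:
    """POST an image to /score, /annotate, or /score-debug and return the body."""
    req = urllib.request.Request(
        BASE + endpoint,
        data=image_bytes,
        # assumption: BMP input, matching ImageFormatBmp in the topology sample
        headers={"Content-Type": "image/bmp"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()
```

For example, infer(image_bytes, "/score-debug") would return the detected objects while the server also writes an annotated image.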