Dace IT℠ with Sense Traffic Pulse™ Intelligent Traffic Management (ITM) is designed to detect and track bikes, vehicles, and pedestrians, and to estimate a safety metric for an intersection. Object tracking recognizes the same object across successive frames, which makes it possible to estimate object trajectories and speeds.

Requirements: OpenVINO™ Toolkit 2020.4, Ubuntu* 18.04
This reference implementation also detects collisions and near misses. A real-time dashboard visualizes the intelligence extracted from the traffic intersection along with the annotated video stream.
This collected intelligence can be used to adjust traffic lights to optimize traffic flow through the intersection, or to improve intersection safety: collision detection can trigger emergency services notifications, such as 911 calls, reducing emergency response times.
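The collision and near-miss logic is not spelled out above; a minimal sketch of one plausible rule is shown below, assuming two overlapping ROIs count as a collision and centres closer than a placeholder pixel threshold count as a near miss (both the rule and the threshold are illustrative assumptions, not the application's actual criteria).

```python
def centre(roi):
    """Centre point of an (x, y, w, h) bounding box."""
    x, y, w, h = roi
    return (x + w / 2, y + h / 2)

def classify_event(roi_a, roi_b, near_miss_px=40):
    """Hypothetical rule: overlapping ROIs -> collision; centres closer than
    near_miss_px -> near miss; otherwise no event."""
    ax, ay, aw, ah = roi_a
    bx, by, bw, bh = roi_b
    overlap = ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah
    if overlap:
        return "collision"
    (cax, cay), (cbx, cby) = centre(roi_a), centre(roi_b)
    if ((cax - cbx) ** 2 + (cay - cby) ** 2) ** 0.5 < near_miss_px:
        return "near miss"
    return "none"
```

In a real deployment the decision would also use the tracked trajectories and speeds, not just a single frame's boxes.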
How It Works
The application uses the DL Streamer included in the Intel® Distribution of OpenVINO™ toolkit. Initially, the pipeline is executed with the provided input video feed and models. The DL Streamer preprocesses the input and performs inference according to the pipeline settings. The inference results are parsed using a callback function and fed to the detection, tracking, and collision functions. The sections below explain the flow and features in detail.
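A pipeline of this kind is typically described as a gst-launch style string using DL Streamer elements such as `gvadetect` and `gvawatermark`. The sketch below assembles such a string; the file paths, model names, and element chain are illustrative placeholders, not the application's exact pipeline.

```python
def build_pipeline(video_path: str, model: str, model_proc: str,
                   device: str = "CPU") -> str:
    """Assemble a gst-launch style DL Streamer pipeline description.

    gvadetect runs inference with the given model and model-proc file on the
    chosen target device; gvawatermark overlays the detections on the frames.
    """
    return (
        f"filesrc location={video_path} ! decodebin ! videoconvert "
        f"! gvadetect model={model} model-proc={model_proc} device={device} "
        "! gvawatermark ! videoconvert ! autovideosink"
    )

print(build_pipeline("intersection.mp4",
                     "person-vehicle-bike-detection.xml",
                     "person-vehicle-bike-detection.json"))
```

At runtime the application would pass a string like this to GStreamer and attach a callback (e.g. a pad probe) to read the inference results from each buffer.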
The application also has multi-channel support, so multiple input video feeds can be used. Each camera feed can be accessed through its geographic coordinates on the MapUI.
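One simple way to model that mapping is a registry keyed by channel id that stores each feed's coordinates and stream URI; the sketch below is an assumed illustration, not the application's actual MapUI backend.

```python
import math

# Hypothetical multi-channel registry: channel id -> coordinates and stream URI.
FEEDS = {}

def register_feed(channel_id, lat, lon, uri):
    """Register a camera feed with its geographic coordinates."""
    FEEDS[channel_id] = {"lat": lat, "lon": lon, "uri": uri}

def feed_near(lat, lon):
    """Return the channel id of the registered feed closest to (lat, lon)."""
    return min(FEEDS, key=lambda c: math.hypot(FEEDS[c]["lat"] - lat,
                                               FEEDS[c]["lon"] - lon))
```

A map UI click would then translate to a `feed_near` lookup to open the matching video stream.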
Detection is performed by the gvadetect plugin element in the DL Streamer pipeline. The pipeline is configured with a detection model, a model-proc file, a target device, and an input stream. Object Regions of Interest (ROIs) are obtained through DL Streamer callback functions and appended to a result vector, which is used for further processing.
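The callback-to-result-vector step can be pictured as below; the `ROI` fields, the callback name, and the confidence cutoff are assumptions for illustration only.

```python
from dataclasses import dataclass

@dataclass
class ROI:
    label: str        # e.g. "vehicle", "pedestrian", "bike"
    x: int
    y: int
    w: int
    h: int
    confidence: float

results = []  # the result vector consumed by the tracking/collision stages

def on_inference(detections, min_confidence=0.5):
    """Per-frame callback: keep detections above a confidence cutoff."""
    for d in detections:
        if d.confidence >= min_confidence:
            results.append(d)
```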
Once a detection is obtained, the Region of Interest (ROI) results are added to the tracking system, which begins tracking the object over successive frames. The results are updated every frame to give the tracker the new information:
If there is a new object, add it to the tracking system.
If the tracker lost an object, add it again.
If the ROI of an object moved away from the original object, reset the tracker information for that object.
If a detected object is about to exit the scene, remove it.
If a tracked object is lost (its detection missed because of obstacles or missed by the model), remove it.
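The feedback rules above can be sketched as a per-frame update function. This is a simplified illustration: the IoU drift threshold, the frame-edge test, and the assumption that detections arrive already matched to object ids are all placeholders, not the application's actual tracker.

```python
def iou(a, b):
    """Intersection-over-union of two (x, y, w, h) boxes."""
    ix = max(0, min(a[0] + a[2], b[0] + b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[1] + a[3], b[1] + b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union else 0.0

def near_edge(roi, width, height, margin=10):
    """True if the ROI is close to the frame border (about to exit)."""
    x, y, w, h = roi
    return x < margin or y < margin or x + w > width - margin or y + h > height - margin

def update_tracker(tracked, detections, width, height):
    """Apply the feedback rules. Both dicts map object id -> (x, y, w, h)."""
    for oid, roi in detections.items():
        if oid not in tracked:
            tracked[oid] = roi          # new (or previously lost) object: track it
        elif iou(tracked[oid], roi) < 0.3:
            tracked[oid] = roi          # ROI drifted away: reset tracker info
        else:
            tracked[oid] = roi          # normal per-frame update
        if near_edge(roi, width, height):
            del tracked[oid]            # about to exit the scene: remove
    for oid in list(tracked):
        if oid not in detections:
            del tracked[oid]            # lost with no new detection: remove
    return tracked
```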
With object tracking enabled, every object's current and past positions can be retrieved. Object locations are averaged over a sliding window of width 5 to filter noise from the detection models.
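The sliding-window averaging described above can be sketched as follows, assuming the smoothed value is the mean of the last five observed centre positions (the class and method names are illustrative).

```python
from collections import deque

class SmoothedPosition:
    """Average an object's position over a sliding window (width 5 by default)
    to filter noise from the detection models."""

    def __init__(self, width=5):
        self.history = deque(maxlen=width)  # oldest samples drop out automatically

    def update(self, x, y):
        """Record a new position and return the window-averaged position."""
        self.history.append((x, y))
        n = len(self.history)
        return (sum(p[0] for p in self.history) / n,
                sum(p[1] for p in self.history) / n)
```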