Some parameters of the Situation Analysis detection tools can be configured in bulk. To configure them, do as follows:
Select the Object tracker object.
By default, the video stream's metadata is recorded in the database. To disable recording, select No in the Record objects tracking list (1).
Attention!
Metadata is obtained through video decompression and analysis, which places a high load on the Server and limits the number of video cameras it can process.
If a video camera supports multistreaming, select the stream on which detection should run (2). Selecting a low-quality video stream reduces the load on the Server.
Attention!
To display object trajectories properly, make sure that all video streams from a multistreaming camera have the same aspect ratio.
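If you are unsure whether the streams match, a quick arithmetic check is to compare each stream's width-to-height ratio as an exact fraction. This is an illustrative sketch (the `same_aspect_ratio` helper and the sample resolutions are hypothetical, not part of the product):

```python
from fractions import Fraction

def same_aspect_ratio(resolutions):
    """Return True if all (width, height) pairs share one aspect ratio."""
    ratios = {Fraction(w, h) for w, h in resolutions}
    return len(ratios) == 1

print(same_aspect_ratio([(1920, 1080), (1280, 720)]))  # True: both are 16:9
print(same_aspect_ratio([(1920, 1080), (640, 480)]))   # False: 16:9 vs 4:3
```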
If you require automatic adjustment of the sensitivity of the scene analytics detection tools, select Yes in the Auto sensitivity list (3).
Note
Enabling this option is recommended if the lighting fluctuates significantly in the course of the video camera's operation (for example, in outdoor conditions).
To reduce the false alarm rate from a fish-eye camera, position it properly (4). For other devices, this parameter is not valid.
Analyzed frames are scaled down to the resolution specified in the Frame size change field (8; 1280 pixels on the longer side by default). This is how it works:
If the longer side of the source image exceeds the value specified in the Frame size change field, the resolution is halved.
If the resulting resolution no longer exceeds the specified value, it is used for analysis.
If the resulting resolution still exceeds the specified limit, it is halved again, and so on.
Note
For example, the source image resolution is 2048 × 1536, and the limit is set to 1000.
In this case, the resolution is halved twice (down to 512 × 384), because after the first halving the longer side still exceeds the limit (1024 > 1000).
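The halving rule above can be sketched in a few lines of Python. The `downscale_resolution` helper is hypothetical and only mirrors the arithmetic described in this section; it is not part of the product:

```python
def downscale_resolution(width, height, limit=1280):
    """Halve the frame resolution until the longer side
    no longer exceeds the limit (the rule described above)."""
    while max(width, height) > limit:
        width //= 2
        height //= 2
    return width, height

# The example from the note: a 2048x1536 source with a limit of 1000
print(downscale_resolution(2048, 1536, limit=1000))  # (512, 384)
```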
If necessary, configure the neural network filter. It processes the tracker's results and filters out false alarms on complex video images (foliage, glare, etc.).
Enable the filter by selecting Yes (1).
Select the processor for the neural network: the CPU, one of the GPUs, or an Intel NCS (2).
Attention!
It may take several minutes to launch the algorithm on an NVIDIA GPU after you apply the settings.
Attention!
A neural network filter can analyze either moving objects only or abandoned objects only. Two neural networks cannot operate simultaneously.
The general parameters of the situation analysis detection tools are now set.