See also: Video requirements for scene analytics detection tools; Video stream and scene requirements for neural tracker operation.
To configure the neural tracker, do the following:
To reduce the number of false positives from a fish-eye camera, position the camera properly (3). For other devices, this parameter is not applicable.
Set the number of frames per second for the neural network to process (6). The higher the value, the more accurate the tracking, but the higher the CPU load.
6 FPS or more is recommended. For fast-moving objects (running people, vehicles), set the frame rate to 12 FPS or above (see Examples of configuring neural tracker for solving typical tasks).
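As a generic illustration (not part of the product), the relationship between the camera frame rate and the detector's processing rate can be pictured as uniform frame skipping; the function name and the uniform-skipping policy are assumptions for this sketch:

```python
# Hypothetical sketch: which frame indices a detector analyzing
# `detector_fps` frames per second would pick from a stream captured
# at `camera_fps`. Uniform skipping is an assumption, not the
# product's documented behavior.

def frames_to_process(camera_fps: int, detector_fps: int, duration_s: int) -> list[int]:
    if detector_fps >= camera_fps:
        return list(range(camera_fps * duration_s))  # analyze every frame
    stride = camera_fps / detector_fps               # e.g. 25 / 6 ~ 4.17
    return [round(k * stride) for k in range(detector_fps * duration_s)]
```

With a 25 FPS camera and the recommended 6 FPS setting, roughly every fourth frame is analyzed; at 12 FPS, roughly every second frame — which is why fast-moving objects need the higher rate.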
You can use the neural filter to select only certain tracks. For example, the neural tracker detects all freight trucks, while the neural filter passes only those recordings that contain trucks with an open cargo door. To set up the neural filter, do the following:
To use the neural filter, select Yes in the corresponding field (9).
In the Neurofilter mode field, select the processor to be used for the neural network operation (11, see General Information on Configuring Detection).
Select the processor for the neural network: the CPU, one of the NVIDIA GPUs, or one of the Intel GPUs (12, see Hardware requirements for neural analytics operation, General Information on Configuring Detection).
We recommend using a GPU. It may take several minutes to launch the algorithm on an NVIDIA GPU after you apply the settings. You can use caching to speed up future launches (see Configuring the acceleration of GPU-based neuroanalytics). If the neural tracker is running on a GPU, object tracks may lag behind the objects. If this happens, set the camera buffer size to 1000 milliseconds (see The Video Camera Object).
In the Object type field (13), select the type of object to recognize, or in the Neural network file field (8), select the neural network file.
To train your neural network, contact AxxonSoft (see Data collection requirements for neural network training). A neural network trained for a particular scene detects only objects of a certain type (e.g. person, cyclist, motorcyclist). If no neural network file is specified, a default file is used; it is chosen based on the selected object type (13) and the processor selected for the neural network operation (4).
To ensure the correct operation of the neural network on Linux OS, the corresponding file should be located in the /opt/AxxonSoft/DetectorPack/NeuroSDK directory.
To enable the search for similar persons, in the Similitude search field (14), select Yes. It increases the CPU load.
The Similitude search works only on tracks of people.
If you don't need to detect static objects, select Yes in the Hide stationary objects field (17). This parameter reduces the number of false positives when detecting moving objects.
If necessary, enable the Model quantization option (18). It reduces the consumption of GPU processing power.
AxxonSoft conducted a study in which a neural network model was trained with quantization to identify the characteristics of detected objects. The study showed that model quantization can either increase or decrease the recognition rate; this is due to the generalization of the mathematical model. The difference in detection stays within ±1.5%, and the difference in object identification within ±2%.
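Quantization itself is a general technique: weights stored as floats are mapped to low-bit integers and scaled back, which is the source of the small accuracy shifts described above. The following sketch of symmetric 8-bit quantization is generic and is not AxxonSoft's implementation:

```python
# Generic symmetric 8-bit quantization sketch (illustrative only).
# Each weight is mapped to an integer in [-127, 127] and restored via
# a per-tensor scale; the restored value deviates by at most scale/2.

def quantize_dequantize(weights, bits=8):
    qmax = 2 ** (bits - 1) - 1                       # 127 for int8
    scale = max(abs(w) for w in weights) / qmax      # per-tensor scale
    quantized = [round(w / scale) for w in weights]  # int representation
    return [q * scale for q in quantized]            # restored floats

weights = [0.52, -1.27, 0.003, 0.98]
restored = quantize_dequantize(weights)
```

Small weights such as 0.003 round to zero at this precision, which is the kind of rounding that can nudge recognition slightly up or down.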
Model quantization is only applicable to NVIDIA GPUs.
The first launch of a detection tool with quantization enabled may take longer than a standard launch. If GPU caching is used, subsequent launches of the detection tool with quantization will run without delay.
In the Track retention time field, set the time interval in seconds after which an object track is considered lost (20). This helps when objects in the scene temporarily overlap each other; for example, a larger vehicle may completely block a smaller one from view.
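The retention behavior can be pictured as a per-track timeout: a track that receives no new detections within the retention interval is declared lost, so a briefly occluded object keeps its track. This sketch uses hypothetical names and is not the product's internals:

```python
import time

# Hypothetical sketch of a track-retention timeout: a track that is not
# matched to a detection for longer than `retention_s` is considered lost.

class Track:
    def __init__(self, track_id, retention_s=0.7):
        self.track_id = track_id
        self.retention_s = retention_s
        self.last_seen = time.monotonic()

    def update(self):
        """Call when the object is matched to a new detection."""
        self.last_seen = time.monotonic()

    def is_lost(self, now=None):
        now = time.monotonic() if now is None else now
        return now - self.last_seen > self.retention_s
```

If a truck hides a car for half a second but the retention time is 0.7 s, the car's track survives the occlusion instead of being restarted with a new ID.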
By default, the entire FOV is the detection area. If you need to narrow down the area to be analyzed, you can set one or several detection areas in the preview window.
The procedure of setting areas is identical to the primary tracker's (see Setting General Zones for Scene analytics detection tools). The only difference is that the neural tracker areas are processed while the primary tracker areas are ignored.
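Restricting analysis to a polygonal detection area comes down to a point-in-polygon test on each detection's anchor point. A generic ray-casting version (illustrative, not the product's implementation) looks like this:

```python
# Generic ray-casting point-in-polygon test (illustrative sketch).
# `polygon` is a list of (x, y) vertices; a horizontal ray is cast to
# the right and crossings with polygon edges are counted.

def point_in_polygon(x, y, polygon):
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):                       # edge straddles the ray
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside                    # ray crossed an edge
    return inside
```

A detector restricted to a polygonal area would simply discard detections whose anchor point fails this test.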
The next step is to create and configure the necessary detection tools based on the neural tracker. The configuration procedure is the same as for the primary tracker (see Setting up Tracker-based Scene Analytics detection tools).
To trigger a Motion in Area detection tool based on the neural tracker, an object must move by at least 25% of its width or height in the FOV.
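The 25% displacement rule can be illustrated with a simple bounding-box comparison; the function name and the (x, y, w, h) box format are assumptions made for this sketch:

```python
# Hypothetical sketch of the 25% displacement rule: an event fires only
# when an object has moved at least a quarter of its own width or height.
# Bounding boxes are (x, y, w, h) tuples in pixels (an assumed format).

def has_moved_enough(prev_box, curr_box, fraction=0.25):
    px, py, w, h = prev_box
    cx, cy, _, _ = curr_box
    return abs(cx - px) >= fraction * w or abs(cy - py) >= fraction * h
```

A 100x50 px object must therefore shift by 25 px horizontally or 12.5 px vertically before the rule is satisfied; smaller jitter is ignored.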
The abandoned objects detection tool works only with the primary tracker. |