To configure the neural tracker, do the following:
To reduce the false alarm rate from a fish-eye camera, position the camera properly (3). For other devices, this parameter is not applicable.
Set the frame rate at which frames are fed to the neural network (6). Object positions on the remaining frames are interpolated. The higher the value, the more accurate the tracking, but the higher the CPU load.
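The frame rate setting above can be pictured as follows. This is an illustrative sketch only, not AxxonSoft's implementation: the neural network processes every Nth frame, and positions on the skipped frames are filled in by linear interpolation (the function name and the linear scheme are assumptions for illustration).

```python
# Illustrative sketch: the network infers object positions only on
# selected frames; positions on the frames in between are interpolated.

def interpolate_positions(p0, p1, steps):
    """Linearly interpolate (x, y) positions for the frames between
    two consecutive neural network inferences.

    p0, p1 -- positions on the two inferenced frames
    steps  -- number of frame intervals between the two inferences
    """
    return [
        (
            p0[0] + (p1[0] - p0[0]) * i / steps,
            p0[1] + (p1[1] - p0[1]) * i / steps,
        )
        for i in range(1, steps)
    ]

# A camera streams 25 FPS but the network runs at 5 FPS: the object's
# position on each of the 4 skipped frames is interpolated.
between = interpolate_positions((0.0, 0.0), (10.0, 5.0), steps=5)
```

A higher network frame rate means fewer interpolated positions per second, so fast-moving objects are tracked more faithfully at the cost of more inference work.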
Attention!
6 FPS or more is recommended. For fast-moving objects (running people, vehicles), set the frame rate to 12 FPS or above (see Examples of configuring neural tracker for solving typical tasks).
You can use the neural filter to sort out video recordings featuring selected objects and their trajectories. For example, the neural tracker detects all freight trucks, and the neural filter keeps only the recordings that contain trucks with an open cargo door. To set up the neural filter, do the following:
To use the neural filter, select Yes in the corresponding field (9).
In the Neurofilter mode field, select the processor to be used for the neural network computations (11).
Select the processor for the neural network: the CPU or one of the GPUs (12).
Attention!
We recommend the GPU.
It may take several minutes to launch the algorithm on an NVIDIA GPU after you apply the settings. You can use caching to speed up future launches (see Configuring the acceleration of GPU-based neuroanalytics).
If the neural tracker runs on a GPU, object trajectories on video tracks may lag behind the objects. If this happens, set the camera buffer size to 1000 milliseconds (see The Video Camera Object).
In the Object type field (13), select the type of object to recognize, or, in the Neural network file field (8), select the neural network file.
Attention!
To train your neural network, contact AxxonSoft (see Data collection requirements for neural network training).
A neural network trained for a particular scene performs well if you need to detect only objects of a certain type (e.g., person, cyclist, motorcyclist).
If no neural network file is specified, a default file is used; it is chosen according to the selected object type (13) and the processor selected for the neural network (4).
Note
For correct neural network operation under Linux, place the corresponding file in the /opt/AxxonSoft/AxxonOne/ directory.
If you do not need to detect stationary objects, select Yes in the Hide stationary objects field (15). This reduces the false alarm rate when detecting moving objects.
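Hiding stationary objects can be sketched as follows. This is a hypothetical illustration (the function and the tolerance value are assumptions, not the product's logic): a track whose recent positions all stay within a small tolerance is treated as stationary and suppressed from the output.

```python
# Hypothetical sketch: suppress a track whose position has not moved
# beyond a small tolerance over its recent history.

def is_stationary(positions, tolerance=2.0):
    """True if every position stays within `tolerance` pixels of the
    first recorded position."""
    x0, y0 = positions[0]
    return all(
        abs(x - x0) <= tolerance and abs(y - y0) <= tolerance
        for x, y in positions
    )

parked = [(100, 50), (101, 50), (100, 51)]   # barely moves -> hidden
moving = [(100, 50), (110, 50), (125, 52)]   # clearly moves -> kept
```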
In the Track retention time field, set the time interval in seconds after which an object's track is considered lost (16). This helps when objects in the scene temporarily obscure one another; for example, a larger vehicle may completely block a smaller one from view.
By default, the entire FoV is a detection zone. If you need to narrow down the area to be analyzed, you can set one or several detection zones.
Note
The procedure of setting zones is identical to the primary tracker's (see Setting General Zones for Scene Analytics). The only difference is that the neural tracker's zones are processed while the primary tracker's are ignored.
The next step is to create and configure the necessary detection tools. The configuration procedure is the same as for the primary tracker.
Attention!
To trigger a Motion in area detection tool based on the neural tracker, an object must move by at least 25% of its width or height in the FoV.
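The 25% displacement condition above can be expressed as a simple check. The threshold comes from the text; the function and parameter names are hypothetical:

```python
# Illustrative check: Motion in area triggers only once the object has
# moved at least a quarter of its own width or height.

def displaced_enough(start, current, width, height, fraction=0.25):
    """True if the object moved at least `fraction` of its own
    width horizontally or of its own height vertically."""
    dx = abs(current[0] - start[0])
    dy = abs(current[1] - start[1])
    return dx >= fraction * width or dy >= fraction * height

# A 100x40 px object that moved 30 px horizontally (30% of width)
# triggers; one that moved only 10x5 px does not.
triggers = displaced_enough((0, 0), (30, 0), width=100, height=40)
no_trigger = displaced_enough((0, 0), (10, 5), width=100, height=40)
```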
Attention!
The Abandoned objects detection tool works only with the primary tracker.