Configuring the detection tool
To configure the Neural tracker Scene Analytics detection tool, do the following:

- Go to the Detection Tools tab.
- Below the required camera, click Create… → Category: Trackers → Neural tracker.

By default, the detection tool is enabled and set to detect moving people.

If necessary, you can change the detection tool parameters. The list of parameters is given in the table:
Parameter | Value | Description |
---|---|---|
Object features | ||
Record objects tracking | Yes, No | By default, metadata are recorded in the database. To disable metadata recording, select the No value |
Video stream | Main stream | If the camera supports multistreaming, select the stream for which detection is needed |
Other | ||
Enable | Yes, No | By default, the detection tool is enabled. To disable it, select the No value |
Name | Neural tracker | Enter the detection tool name or leave the default name |
Decoder mode | Auto, CPU, GPU, HuaweiNPU | Select a processing resource for decoding video streams. When you select GPU, a stand-alone graphics card takes priority (decoding with Nvidia NVDEC chips). If there is no appropriate GPU, decoding uses the Intel Quick Sync Video technology. Otherwise, CPU resources are used for decoding |
Neural filter mode | CPU, Nvidia GPU 0, Nvidia GPU 1, Nvidia GPU 2, Nvidia GPU 3, Intel NCS (not supported), Intel HDDL (not supported), Intel GPU, Huawei NPU | Select a processing resource for neural network operation (see Hardware requirements for neural analytics operation, Selecting Nvidia GPU when configuring detectors) |
Number of frames processed per second | 6 | Specify the number of frames per second for the neural network to process. The higher the value, the more accurate the tracking, but the load on the CPU is also higher. The value must be in the range [0.016, 100]. For an illustration of how this rate relates to the camera frame rate, see the frame-rate sketch after the table |
Type | Neural tracker | Name of the detection tool type (non-editable field) |
Advanced settings | |
Camera position | Wall, Ceiling | To sort out false events from the detection tool when using a fisheye camera, select the correct device location. For other devices, this parameter is irrelevant |
Hide moving objects | Yes, No | By default, the parameter is disabled. If you don't need to detect moving objects, select the Yes value. An object is considered static if it doesn't change its position by more than 10% of its width or height during its track lifetime |
Hide static objects | Yes, No | By default, the parameter is disabled. If you don't need to detect static objects, select the Yes value. This parameter lowers the number of false events from the detection tool when detecting moving objects. An object is considered static if it has not moved by more than 10% of its width or height during the whole time of its track existence. If a stationary object starts moving, the detection tool will trigger, and the object will no longer be considered stationary |
Minimum number of detection triggers | 6 | Specify the minimum number of detection triggers for the Neural tracker to display the object's track. The higher the value, the longer the time interval between the detection of an object and the display of its track on the screen. Low values of this parameter can lead to false events from the detection tool. The value must be in the range [2, 100] |
Model quantization | Yes, No | By default, the parameter is disabled. The parameter is applicable only to standard neural networks running on an Nvidia GPU. It allows you to reduce the consumption of computing power. The neural network is selected automatically depending on the value selected in the Detection neural network parameter. To quantize the model, select the Yes value. Attention! In an AxxonSoft study, a neural network model was trained to identify the characteristics of the detected object with quantization enabled. The results show that model quantization can lead to either an increase or a decrease in the recognition rate; this is due to the generalization of the mathematical model. The difference in detection is within ±1.5%, and the difference in object identification is within ±2%. The first launch of a detection tool with quantization enabled may take longer than a standard launch |
Neural network file | | If you use a custom neural network, select the corresponding file |
Scanning window | Yes, No | By default, the parameter is disabled. To enable the scanning mode, select the Yes value (see Configuring the scanning mode) |
Scanning window height | 0 | The height and width of the scanning window are determined according to the actual size of the frame and the required number of windows. For example, the real frame size is 1920×1080 pixels. To divide the frame into four equal windows, set the width of the scanning window to 960 pixels and the height to 540 pixels |
Scanning window step height | 0 | The scanning step determines the relative offset of the windows. If the step is equal to the height and width of the scanning window respectively, the windows line up one after another. Reducing the height or width of the scanning step increases the number of windows because they overlap each other with an offset. This increases the detection accuracy, but also increases the CPU load (see the scanning-window sketch after the table) |
Scanning window step width | 0 | The scanning step determines the relative offset of the windows (see Scanning window step height) |
Scanning window width | 0 | The height and width of the scanning window are determined according to the actual size of the frame and the required number of windows. For example, the real frame size is 1920×1080 pixels. To divide the frame into four equal windows, set the width of the scanning window to 960 pixels and the height to 540 pixels |
Selected object classes | | If necessary, specify the class of the detected object. If you want to display tracks of several classes, specify them separated by a comma with a space, for example: 1, 10. Classes of the embedded neural networks: 1—Human/Human (top-down view), 10—Vehicle |
Similitude search | Yes, No | By default, the parameter is disabled. To enable the search for similar persons, select the Yes value. Enabling the parameter increases the processor load. The Similitude search works only on tracks of people |
Time of processing similitude track (sec) | 0 | Specify the time in seconds for the algorithm to process the track when searching for similar persons. The value must be in the range [0, 3600] |
Time period of excluding static objects | 0 | Specify the time in seconds after which the track of a static object is hidden. If the value is 0, the track of the static object isn't hidden. The value must be in the range [0, 86 400] |
Track retention time | 0.7 | Specify the time in seconds after which the object track is considered lost. This helps if objects in the scene temporarily overlap each other, for example, when a larger vehicle completely blocks a smaller one from view. The value must be in the range [0.3, 1000] |
Basic settings | |
Detection threshold | 30 | Specify the detection threshold for objects, in percent. If the recognition probability falls below the specified value, the data will be ignored. The higher the value, the higher the detection quality, but some events from the detection tool may be missed. The value must be in the range [0.05, 100] |
Neural tracker mode | CPU, Nvidia GPU 0, Nvidia GPU 1, Nvidia GPU 2, Nvidia GPU 3, Intel NCS (not supported), Intel HDDL (not supported), Intel GPU, Huawei NPU | Select the processor for the neural network operation (see Hardware requirements for neural analytics operation, Selecting Nvidia GPU when configuring detectors). We recommend using the GPU. It may take several minutes to launch the algorithm on an NVIDIA GPU after you apply the settings. You can use caching to speed up future launches (see Configuring the acceleration of GPU-based neuroanalytics). If the Neural tracker is running on GPU, object tracks may lag behind the objects. If this happens, set the camera buffer size to 1000 milliseconds (see The Video Camera Object) |
Detection neural network | Person, Person (top-down view), Person (top-down view Nano), Person (top-down view Medium), Person (top-down view Large), Vehicle, Person and vehicle (Nano), Person and vehicle (Medium), Person and vehicle (Large) | Select the detection neural network from the list. Neural networks are named taking into account the objects they detect. The names can include the size of the neural network (Nano, Medium, Large), which indicates the amount of consumed resources. The larger the neural network, the higher the accuracy of object recognition |
Neural network filter | |
Neural filter | Yes, No | By default, the parameter is disabled. To sort out parts of tracks, select the Yes value. For example, the Neural tracker detects all freight trucks, and the Neural filter sorts out only the tracks that contain trucks with an open cargo door |
Neural filter file | | Select a neural network file |
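The scanning-window example in the table (a 1920×1080 frame split into four 960×540 windows) follows directly from the window size and step. The Python sketch below is only an illustration of that arithmetic, not the product's internal algorithm; the function name and the assumption that windows slide from the top-left corner in fixed steps are ours.

```python
# Illustration only: how scanning window size and step determine the number of
# windows a frame is split into. The slide-from-top-left assumption is ours.

def count_scanning_windows(frame_w, frame_h, win_w, win_h, step_w, step_h):
    """Count window positions when the frame is tiled with a sliding window."""
    cols = len(range(0, frame_w - win_w + 1, step_w))
    rows = len(range(0, frame_h - win_h + 1, step_h))
    return cols * rows

# 1920x1080 frame, 960x540 windows, step equal to the window size:
# the windows line up one after another -> 2 x 2 = 4 windows.
print(count_scanning_windows(1920, 1080, 960, 540, 960, 540))  # 4

# Reducing the step makes the windows overlap with an offset:
# more windows, higher accuracy, higher CPU load.
print(count_scanning_windows(1920, 1080, 960, 540, 480, 270))  # 3 x 3 = 9
```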
By default, the entire frame is a detection area. If necessary, in the preview window, set detection areas with the help of anchor points (see Configuring a detection area).
Info: For convenience of configuration, you can "freeze" the frame by clicking the corresponding button; to cancel the action, click this button again. The detection area is displayed by default; to hide it, click the corresponding button, and to cancel the action, click it again. The procedure for setting areas is identical to the primary tracker's (see Setting General Zones for Scene analytics detection tools). The only difference is that the neural tracker areas are processed while the primary tracker areas are ignored.
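For the Number of frames processed per second parameter from the table above, the detection tool effectively analyzes only a subset of the camera's frames. The sketch below is a generic illustration of such decimation under our own assumptions (even spacing, a hypothetical frames_to_process helper); it is not the product's actual scheduler.

```python
# Generic illustration of frame decimation: which frames would be analyzed when
# the camera delivers more frames per second than the configured processing rate.
# The helper below is hypothetical and not part of the product.

def frames_to_process(camera_fps: float, processed_fps: float, duration_s: float):
    """Return indices of frames that would be handed to the neural network."""
    total = int(camera_fps * duration_s)
    interval = camera_fps / processed_fps  # e.g. 25 / 6 ≈ 4.17 frames between picks
    selected, next_pick = [], 0.0
    for i in range(total):
        if i >= next_pick:
            selected.append(i)
            next_pick += interval
    return selected

# A 25 fps camera with the default value of 6: roughly every 4th frame is analyzed.
print(len(frames_to_process(25, 6, 1.0)))  # 6
```

Raising the configured rate shortens the interval between analyzed frames, which is why tracking accuracy and CPU load grow together.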
To save the parameters of the detection tool, click the Apply button. To cancel the changes, click the Cancel button.
If necessary, you can create and configure sub-detectors on the basis of the Neural tracker (see Standard sub-detectors).
Note: To get an event from the Motion in area sub-detector on the basis of the Neural tracker, an object must be displaced by at least 25% of its width or height in the frame.
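Both the static-object rule from the table (10% of the object's width or height over the track lifetime) and the Motion in area rule above (25% displacement) are size-relative thresholds. The sketch below only illustrates that idea; the Box type, the is_displaced helper, and measuring displacement from the bounding-box corner are our assumptions, not the product's implementation.

```python
# Illustration of a size-relative displacement threshold (the Box/is_displaced
# names and corner-based measurement are assumptions, not the product API).

from dataclasses import dataclass

@dataclass
class Box:
    x: float  # top-left corner, pixels
    y: float
    w: float  # width, pixels
    h: float  # height, pixels

def is_displaced(first: Box, last: Box, fraction: float) -> bool:
    """True if the object moved by at least `fraction` of its width or height."""
    dx = abs(last.x - first.x)
    dy = abs(last.y - first.y)
    return dx >= fraction * first.w or dy >= fraction * first.h

start, end = Box(100, 100, 80, 200), Box(115, 105, 80, 200)
print(is_displaced(start, end, 0.10))  # True: moved 15 px, more than 10% of its 80 px width
print(is_displaced(start, end, 0.25))  # False: below the 25% needed for Motion in area
```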
Example of configuring Neural tracker for solving typical tasks
Parameter | Task: detection of moving people | Task: detection of moving vehicles |
---|---|---|
Other | ||
Number of frames processed per second | 6 | 12 |
Neural network filter | ||
Neural filter | No | No |
Basic settings | ||
Detection threshold | 30 | 30 |
Advanced settings | ||
Minimum number of detection triggers | 6 | 6 |
Camera position | Wall | Wall |
Hide static objects | Yes | Yes |
Neural network file | Path to the *.ann neural network file. You can also select the value in the Detection neural network parameter. In this case, this field must be left blank | Path to the *.ann neural network file. You can also select the value in the Detection neural network parameter. In this case, this field must be left blank |