Before configuring, check the video stream and scene requirements (see Video requirements for scene analytics detection tools, Video stream and scene requirements for Neurotracker operation).
To configure the Scene Analytics detection tools based on Neurotracker, do the following:
Below the required camera, click Create… → Category: Trackers → Neurotracker.
By default, the detection tool is enabled and set to detect people.
If necessary, change the detection tool parameters listed in the table:
Parameter | Value | Description
---|---|---
Object features | |
Record objects tracking | Yes<br>No | By default, metadata is recorded to the database. To disable metadata recording, select No
Video stream | Main stream<br>Second stream | If the camera supports multistreaming, select the stream to be used for detection
Other | |
Enable | Yes<br>No | By default, the detection tool is enabled. To disable it, select No
Name | Neurotracker | Enter a name for the detection tool or keep the default name
Decoder mode | Auto<br>CPU<br>GPU<br>HuaweiNPU | Select the processing resource for decoding video streams. If you select GPU, a discrete graphics card takes priority (decoding with NVIDIA NVDEC chips). If no suitable GPU is available, the Intel Quick Sync Video technology is used; otherwise, decoding falls back to the CPU
Neurofilter mode | CPU<br>Nvidia GPU 0<br>Nvidia GPU 1<br>Nvidia GPU 2<br>Nvidia GPU 3<br>Intel NCS (not supported)<br>Intel HDDL (not supported)<br>Intel GPU<br>Huawei NPU | Select the processing resource for the neural network operation (see Hardware requirements for neural analytics operation, General information on configuring detection)
Number of frames processed per second | 6 | Specify the number of frames per second for the neural network to process. The higher the value, the more accurate the tracking, but the higher the CPU load. The value must be in the range [0.016; 100]
Type | Neurotracker | Name of the detection tool type (non-editable field)
Advanced settings | |
Camera position | Wall<br>Ceiling | To eliminate false positives when using a fisheye camera, select the correct mounting position of the device. For other devices, this parameter is irrelevant
Hide moving objects | Yes<br>No | If you don't need to detect moving objects, select Yes. An object is considered static if it doesn't move by more than 10% of its width or height during the lifetime of its track
Hide static objects | Yes<br>No | If you don't need to detect static objects, select Yes. This parameter reduces the number of false positives when detecting moving objects. An object is considered static if it doesn't move by more than 10% of its width or height during the lifetime of its track
Minimum number of detection triggers | 6 | Specify the minimum number of detection triggers required for the neurotracker to display the object's track. The higher the value, the longer the delay between the detection of an object and the display of its track on the screen. Low values of this parameter may lead to false positives. The value must be in the range [2; 100]
Model quantization | Yes<br>No | To quantize the neural network, select Yes. Quantization reduces the consumption of GPU processing power
Neural network file | | If you use a custom neural network, select the corresponding file
Scanning window | Yes<br>No | To enable the scanning mode, select Yes (see Configuring the Scanning mode)
Scanning window height | 0 | The height and width of the scanning window are determined by the actual frame size and the required number of windows. For example, if the actual frame size is 1920×1080 pixels and you want to divide the frame into four equal windows, set the scanning window width to 960 pixels and its height to 540 pixels
Scanning window step height | 0 | The scanning step determines the relative offset of the windows. If the step equals the height and width of the scanning window respectively, the windows line up one after another. Reducing the height or width of the step increases the number of windows, because they overlap each other with an offset. This increases the detection accuracy, but also increases the CPU load
Scanning window step width | 0 | See the description of the Scanning window step height parameter
Scanning window width | 0 | See the description of the Scanning window height parameter
Selected object class | | If necessary, specify the class of the detected object. To display tracks of several classes, list them separated by a comma and a space, for example: 1, 10
Similitude search | Yes<br>No | To enable the search for similar persons, select Yes. Enabling this parameter increases the processor load
Time of processing similitude track (sec) | 0 | Specify the time, in the range [0; 3600] seconds, for the algorithm to process a track when searching for similar persons
Time period of excluding static objects | 0 | Specify the time in seconds after which the track of a static object is hidden. If the value is 0, the track of the static object isn't hidden. The value must be in the range [0; 86400]
Track retention time | 0.7 | Specify the time in seconds after which an object's track is considered lost. This helps when objects in the scene temporarily overlap each other; for example, a larger vehicle may completely block a smaller one from view. The value must be in the range [0.3; 1000]
Basic settings | |
Detection threshold | 30 | Specify the detection threshold for objects as a percentage. If the recognition probability is below the specified value, the data is ignored. The higher the value, the higher the accuracy, but some triggers may be missed. The value must be in the range [0.05; 100]
Neurotracker mode | CPU<br>Nvidia GPU 0<br>Nvidia GPU 1<br>Nvidia GPU 2<br>Nvidia GPU 3<br>Intel NCS (not supported)<br>Intel HDDL (not supported)<br>Intel GPU<br>Huawei NPU | Select the processor for the neural network operation (see Hardware requirements for neural analytics operation, General information on configuring detection)
Object type | Person<br>Person (top-down view)<br>Vehicle<br>Person and vehicle (Nano): low accuracy, low processor load<br>Person and vehicle (Medium): medium accuracy, medium processor load<br>Person and vehicle (Large): high accuracy, high processor load | Select the type of object to be recognized
Neural network filter | |
Neurofilter | Yes<br>No | To use the neurofilter to sort out certain tracks, select Yes. For example, the neurotracker detects all freight trucks, while the neurofilter keeps only the tracks of trucks with an open cargo door
Neurofilter file | | Select a neural network file
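The scanning-window sizing described in the table can be sketched as a small calculation. The following Python snippet is illustrative only (it is not part of the product; the function name and the sliding-window formula are assumptions based on the parameter descriptions above):

```python
def scanning_windows(frame_w, frame_h, win_w, win_h, step_w=None, step_h=None):
    """Estimate how many scanning windows cover a frame.

    If the step equals the window size, the windows line up one after
    another; a smaller step makes them overlap with an offset, which
    increases the window count (and, per the table, the CPU load).
    """
    step_w = step_w or win_w  # default: no overlap
    step_h = step_h or win_h
    cols = (frame_w - win_w) // step_w + 1
    rows = (frame_h - win_h) // step_h + 1
    return cols * rows

# The example from the table: a 1920x1080 frame divided into four
# equal windows of 960x540 pixels (step defaults to the window size).
print(scanning_windows(1920, 1080, 960, 540))            # 4
# Halving the scanning step overlaps the windows: 3 x 3 = 9.
print(scanning_windows(1920, 1080, 960, 540, 480, 270))  # 9
```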
If necessary, in the preview window, set detection areas with the help of anchor points (the same as with the excluded areas of the Scene analytics detection tools, see Setting General Zones for Scene analytics detection tools). By default, the whole frame is a detection area.
For convenience of configuration, you can "freeze" the frame using the corresponding button in the preview window. The detection area is displayed by default; to hide it, click the corresponding button.
To save the parameters of the detection tool, click the Apply button. To cancel the changes, click the Cancel button.
The next step is to create and configure the necessary detection tools based on the neurotracker. The configuration procedure is the same as for the basic tracker (see Setting up Tracker-based Scene Analytics detection tools).