Configuring the detector
To configure the Neural tracker, do the following:
- Go to the Detectors tab.
- Below the required camera, click Create… → Category: Trackers → Neural tracker.
By default, the detector is enabled and set to detect moving people.
If necessary, you can change the detector parameters. The list of parameters is given in the table:
Parameter | Value | Description
---|---|---
Object features | |
Record objects tracking | Yes, No | By default, metadata are recorded into the database. To disable metadata recording, select the No value
Video stream | Main stream, Other | If the camera supports multistreaming, select the stream for which detection is needed
Other | |
Enable | Yes, No | By default, the detector is enabled. To disable, select the No value
Name | Neural tracker | Enter the detector name or leave the default name
Decoder mode | Auto, CPU, GPU, HuaweiNPU | Select a processing resource for decoding video streams. When you select a GPU, a stand-alone graphics card takes priority (when decoding with Nvidia NVDEC chips). If there is no appropriate GPU, the decoding uses the Intel Quick Sync Video technology. Otherwise, CPU resources are used for decoding
Neural filter mode | CPU, Nvidia GPU 0, Nvidia GPU 1, Nvidia GPU 2, Nvidia GPU 3, Intel NCS (not supported), Intel HDDL (not supported), Intel GPU, Huawei NPU | Select a processing resource for neural network operation (see Hardware requirements for neural analytics operation, Selecting Nvidia GPU when configuring detectors)
Number of frames processed per second | 6 | Specify the number of frames for the neural network to process per second. The higher the value, the more accurate the tracking, but the higher the CPU load. The value must be in the range [0.016, 100]
Type | Neural tracker | Name of the detector type (non-editable field)
Color detection (starting with Detector Pack 3.14) | Yes, No | By default, color detection is enabled. This parameter collects the data about the object's color. These data are necessary for further search in the archive by color
Advanced settings | |
Camera position | Wall, Ceiling | To sort out false events from the detector when using a fisheye camera, select the correct device location. For other devices, this parameter is irrelevant
Scanning mode | Yes, No | By default, the parameter is disabled. To use the scanning mode, select the Yes value (see Configuring the scanning mode)
Hide moving objects | Yes, No | By default, the parameter is disabled. If you don't need to detect moving objects, select the Yes value. An object is considered static if it doesn't change its position more than 10% of its width or height during its track lifetime
Hide static objects | Yes, No | By default, the parameter is enabled. If you need to detect static objects, select the No value. This parameter lowers the number of false events from the detector when detecting moving objects. An object is considered static if it doesn't change its position more than 10% of its width or height during its track lifetime
Minimum number of detection triggers | 6 | Specify the minimum number of detection triggers for the Neural tracker to display the object's track. The higher the value, the longer the time interval between the detection of an object and the display of its track on the screen. Low values of this parameter can lead to false events from the detector. The value must be in the range [2, 100]
Model quantization | Yes, No | By default, the parameter is disabled. The parameter is applicable only to standard neural networks for Nvidia GPUs. It allows you to reduce the consumption of computation power. The neural network is selected automatically depending on the value selected in the Detection neural network parameter. To quantize the model, select the Yes value
Neural network file | | Select a neural network file. If you use a custom neural network, select the corresponding file. You must place the neural network file locally, that is, on the same server where you install Axxon One. You cannot specify a network path to the file in Windows OS
Scanning window height | 0 | The height and width of the scanning window are determined according to the actual size of the frame and the required number of windows. For example, the real frame size is 1920×1080 pixels. To divide the frame into four equal windows, set the width of the scanning window to 960 pixels and the height to 540 pixels
Scanning window width | 0 | The height and width of the scanning window are determined according to the actual size of the frame and the required number of windows. For example, the real frame size is 1920×1080 pixels. To divide the frame into four equal windows, set the width of the scanning window to 960 pixels and the height to 540 pixels
Scanning window step height | 0 | The scanning step determines the relative offset of the windows. If the step is equal to the height and width of the scanning window, respectively, the segments are lined up one after another. Reducing the height or width of the scanning step increases the number of windows due to their overlapping each other with an offset. This increases the detection accuracy but can also increase the load on the CPU
Scanning window step width | 0 | The scanning step determines the relative offset of the windows. If the step is equal to the height and width of the scanning window, respectively, the segments are lined up one after another. Reducing the height or width of the scanning step increases the number of windows due to their overlapping each other with an offset. This increases the detection accuracy but can also increase the load on the CPU
Note | ||
---|---|---|
| ||
The height and width of the scanning step must not be greater than the height and width of the scanning window—the detector will not operate with such settings. |
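The window and step arithmetic above can be sketched as follows. This is a hypothetical helper, not part of the product; it only reproduces the documented rules that a step equal to the window size tiles the frame edge to edge, a smaller step overlaps the windows, and a step larger than the window is invalid:

```python
def scanning_windows(frame_w, frame_h, win_w, win_h, step_w=None, step_h=None):
    """Return the top-left corners of scanning windows for a frame.

    A step equal to the window size tiles the frame without overlap;
    a smaller step makes the windows overlap with an offset, which
    raises accuracy and CPU load. A step larger than the window is
    rejected, since the detector does not operate with such settings.
    """
    step_w = win_w if step_w is None else step_w
    step_h = win_h if step_h is None else step_h
    if step_w > win_w or step_h > win_h:
        raise ValueError("scanning step must not exceed the scanning window")
    return [(x, y)
            for y in range(0, frame_h - win_h + 1, step_h)
            for x in range(0, frame_w - win_w + 1, step_w)]

# The documented example: a 1920×1080 frame with a 960×540 window
# splits into four equal windows.
corners = scanning_windows(1920, 1080, 960, 540)
assert corners == [(0, 0), (960, 0), (0, 540), (960, 540)]
```

Halving the step width to 480 while keeping the same window produces six overlapping windows instead of four, which illustrates the accuracy/CPU trade-off in the table.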
If necessary, specify the class of the detected object. If you want to display tracks of several classes, specify them separated by a comma and a space. For example: 1, 10.
The numerical values of classes for the embedded neural networks are: 1—Human/Human (top-down view), 10—Vehicle.
- If you leave the field blank, the tracks of all available classes from the neural network are displayed (Detection neural network, Neural network file).
- If you specify a class/classes from the neural network, the tracks of the specified class/classes are displayed (Detection neural network, Neural network file).
- If you specify a class/classes from the neural network and a class/classes missing from the neural network, the tracks of the class/classes from the neural network are displayed (Detection neural network, Neural network file).
- If you specify a class/classes missing from the neural network, the tracks of all available classes from the neural network are displayed (Detection neural network, Neural network file).
Note | ||
---|---|---|
Starting with Detector Pack 3.10.2, if you specify a class/classes missing from the neural network, the tracks aren't displayed (Detection neural network, Neural network file). |
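The class-matching rules above can be modeled with a short sketch. This is a hypothetical helper, not the product's internal logic; the class IDs come from the table, and the `strict` flag models the Detector Pack 3.10.2+ behavior in which unknown-only input displays no tracks instead of all of them:

```python
# Embedded classes from the documentation: 1 is Human, 10 is Vehicle.
EMBEDDED_CLASSES = {1: "Human/Human (top-down view)", 10: "Vehicle"}

def classes_to_display(field, known=EMBEDDED_CLASSES, strict=False):
    """Return the class IDs whose tracks are displayed.

    field  -- the comma-separated string entered in the detector settings
    strict -- True models Detector Pack 3.10.2+: unknown-only input
              displays no tracks instead of falling back to all classes
    """
    if not field.strip():
        return sorted(known)                  # blank field: all classes
    requested = {int(part) for part in field.split(",")}
    matched = sorted(c for c in requested if c in known)
    if matched:
        return matched                        # unknown extras are ignored
    return [] if strict else sorted(known)    # unknown-only input

assert classes_to_display("1, 10") == [1, 10]
assert classes_to_display("") == [1, 10]      # blank: all classes
assert classes_to_display("1, 99") == [1]     # unknown extra ignored
assert classes_to_display("99") == [1, 10]    # pre-3.10.2 fallback
assert classes_to_display("99", strict=True) == []
```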
Sensitivity of excluding static objects (starting with Detector Pack 3.14)
Specify the sensitivity of excluding static objects. The higher the value, the less sensitive to motion the algorithm becomes. The value must be in the range [0, 100]
By default, the parameter is disabled. To enable the search for similar persons, select the Yes value. If you enable the parameter, it increases the load on the CPU.
Note | ||
---|---|---|
| ||
The Similitude search works only on tracks of people. |
Specify the time in seconds for the algorithm to process the track to search for similar persons. The value must be in the range [0, 3600]
Specify the Detection threshold for objects in percent. The system ignores the data if the recognition probability falls below the specified value. The higher the value, the higher the detection quality, but some events from the detector may not be considered. The value must be in the range [0.05, 100]
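The thresholding rule can be sketched as a simple filter (hypothetical detection data; the actual field only sets the cut-off value, and everything below it is ignored):

```python
def apply_threshold(detections, threshold_percent):
    """Keep only detections whose recognition probability (in percent)
    meets the configured Detection threshold; the rest are ignored."""
    return [d for d in detections if d["probability"] >= threshold_percent]

# Hypothetical detector output: three candidate objects.
detections = [
    {"object": "person", "probability": 82.0},
    {"object": "person", "probability": 28.5},
    {"object": "vehicle", "probability": 55.0},
]

# With the threshold at 30%, the low-confidence person is ignored.
assert apply_threshold(detections, 30) == [detections[0], detections[2]]
```

Raising the threshold trades recall for precision, which is the documented trade-off: higher quality, but some events may not be considered.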
Select the processor for the neural network operation (see Hardware requirements for neural analytics operation, Selecting Nvidia GPU when configuring detectors)
Select the detection neural network from the list. By default, the Person detection neural network is selected. Neural networks are named taking into account the objects they detect. The names can include the size of the neural network (Nano, Medium, Large), which indicates the amount of consumed resources. The larger the neural network, the higher the accuracy of object recognition
By default, the parameter is disabled. To sort out parts of tracks, select the Yes value.
For example:
The Neural tracker detects all freight trucks, and the Neural filter sorts out only the tracks that contain trucks with cargo doors open
Select a neural network file. You must place the neural network file locally, that is, on the same server where you install Axxon One. You cannot specify a network path to the file in Windows OS.
By default, the entire frame is a detection area. If necessary, in the preview window, you can set detection areas with the help of anchor points (see Configuring a detection area).
Info | ||
---|---|---|
| ||
For convenience of configuration, you can "freeze" the frame. Click the button. To cancel the action, click this button again. The detection area is displayed by default. To hide it, click the button. To cancel the action, click this button again. |
To save the parameters of the detector, click the Apply button. To cancel the changes, click the Cancel button.
Configuring the Neural tracker is complete. If necessary, you can create and configure the necessary sub-detectors on the basis of the Neural tracker (see Standard sub-detectors).
Note | ||
---|---|---|
| ||
To get an event from the Motion in area sub-detector on the basis of the Neural tracker, an object must be displaced by at least 25% of its width or height in the frame. |
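Both displacement rules in this section, the 10% rule for treating an object as static and the 25% rule for the Motion in area sub-detector, compare how far an object has moved against a fraction of its own bounding-box size. A sketch with hypothetical coordinates:

```python
def displaced_enough(track, fraction):
    """True if the object moved more than `fraction` of its width or
    height between the first and last positions of its track.

    Each track entry is (x, y, width, height) of the bounding box.
    """
    x0, y0, w, h = track[0]
    x1, y1, _, _ = track[-1]
    return abs(x1 - x0) > fraction * w or abs(y1 - y0) > fraction * h

# A 40×80 px object that drifts 4 px: within 10% of its width,
# so it is considered static (and produces no Motion in area event).
drift = [(100, 100, 40, 80), (104, 100, 40, 80)]
assert not displaced_enough(drift, 0.10)

# The same object moving 25 px down: more than 25% of its height,
# enough for a Motion in area event.
walk = [(100, 100, 40, 80), (100, 125, 40, 80)]
assert displaced_enough(walk, 0.25)
```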
Example of configuring the Neural tracker for solving typical tasks
Parameter | Task: detection of moving people | Task: detection of moving vehicles |
---|---|---|
Other | ||
Number of frames processed per second | 6 | 12 |
Neural network filter | ||
Neural filter | No | No |
Basic settings | ||
Detection threshold | 30 | 30 |
Advanced settings | ||
Minimum number of detection triggers | 6 | 6 |
Camera position | Wall | Wall |
Hide static objects | Yes | Yes |
Neural network file | Path to the *.ann neural network file. You can also select the value in the Detection neural network parameter. In this case, this field must be left blank | Path to the *.ann neural network file. You can also select the value in the Detection neural network parameter. In this case, this field must be left blank |
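The two typical configurations above, written out as plain dictionaries for comparison. This is an illustrative representation only; Axxon One stores detector settings internally, not in this form:

```python
# Parameters shared by both typical tasks, per the table above.
BASE = {
    "Neural filter": "No",
    "Detection threshold": 30,
    "Minimum number of detection triggers": 6,
    "Camera position": "Wall",
    "Hide static objects": "Yes",
}

# The tasks differ only in the frame-processing rate: vehicles move
# faster than people, so they are sampled at twice the rate.
moving_people = {**BASE, "Number of frames processed per second": 6}
moving_vehicles = {**BASE, "Number of frames processed per second": 12}

assert moving_vehicles["Number of frames processed per second"] == 12
```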