...
Parameter | Value | Description
---|---|---
Object features | |
Record objects tracking | Yes / No | By default, metadata are recorded into the database. To disable metadata recording, select the No value
Video stream | Main stream / Other | If the camera supports multistreaming, select the stream for which detection is needed
Other | |
Enable | Yes / No | By default, the detector is enabled. To disable it, select the No value
Name | Neural tracker | Enter the detector name or leave the default name
Decoder mode | Auto / CPU / GPU / HuaweiNPU | Select a processing resource for decoding video streams. When you select GPU, a stand-alone graphics card takes priority (decoding with Nvidia NVDEC chips). If there is no appropriate GPU, decoding uses the Intel Quick Sync Video technology. Otherwise, CPU resources are used for decoding
Number of frames processed per second | 6 | Specify the number of frames for the neural network to process per second. The higher the value, the more accurate the tracking, but the higher the load on the CPU. The value must be in the range [0.016, 100]
Type | Neural tracker | Name of the detector type (non-editable field)
Advanced settings | |
Camera position | Wall / Ceiling | To sort out false events from the detector when using a fisheye camera, select the correct device location. For other devices, this parameter is irrelevant
Color detection (starting with Detector Pack 3.14) | Yes / No | By default, the parameter is enabled. If you set the No value, color detection becomes unavailable when searching in the archive. Disabling the color detection reduces the load on the CPU, including when the detector runs on GPU
Hide moving objects | Yes / No | By default, the parameter is disabled. If you don't need to detect moving objects, select the Yes value. An object is considered static if it doesn't change its position by more than 10% of its width or height during its track lifetime
Hide static objects | Yes / No | Starting with Detector Pack 3.14, the parameter is disabled by default. If you need to hide static objects, select the Yes value. This parameter lowers the number of false events from the detector when detecting moving objects. An object is considered static if it hasn't moved by more than 10% of its width or height during the whole time of its track existence
Minimum number of detection triggers | 6 | Specify the minimum number of detection triggers for the Neural tracker to display the object's track. The higher the value, the longer the time interval between the detection of an object and the display of its track on the screen. Low values of this parameter can lead to false events from the detector
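The interplay between Number of frames processed per second, Minimum number of detection triggers, and the 10% static-object criterion is easiest to see as simple arithmetic. The following Python sketch is purely illustrative: the function names and the simplified displacement check are assumptions for the example, not Axxon One code.

```python
# Illustrative only: these helpers mirror the arithmetic described in the table,
# not the actual Axxon One implementation.

def track_display_delay(min_triggers: int = 6, fps: float = 6.0) -> float:
    """Approximate delay (seconds) between the first detection of an object and
    the moment its track appears, given the minimum number of detection triggers
    and the number of frames processed per second."""
    return min_triggers / fps

def is_static(dx: float, dy: float, width: float, height: float) -> bool:
    """An object counts as static if it has not moved by more than 10% of its
    bounding-box width or height over the lifetime of its track."""
    return dx <= 0.1 * width and dy <= 0.1 * height

print(track_display_delay())        # 1.0 second with the default values (6 / 6)
print(is_static(5, 3, 80, 200))     # True: movement stays within 10% of the box size
```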
By default, the parameter is disabled. It applies only to standard neural networks for Nvidia GPUs and allows you to reduce the consumption of computing power. The neural network is selected automatically, depending on the value of the Detection neural network parameter. To quantize the model, select the Yes value
Note: AxxonSoft conducted a study in which a neural network model was trained with quantization to identify the characteristics of the detected object. The study showed that model quantization can either increase or decrease the recognition rate; this is due to the generalization of the mathematical model. The difference in detection is within ±1.5%, and the difference in object identification is within ±2%.
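For readers unfamiliar with quantization, the sketch below shows, in generic terms, what mapping float32 weights to int8 looks like and where the small accuracy shift mentioned in the note comes from. It is a minimal NumPy illustration of the technique, not AxxonSoft's training or inference pipeline.

```python
import numpy as np

# Generic post-training weight quantization (float32 -> int8), for illustration only.
rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.2, size=1000).astype(np.float32)

scale = np.abs(weights).max() / 127.0                              # symmetric quantization scale
q_weights = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
dequantized = q_weights.astype(np.float32) * scale

# The rounding error introduced here is what may shift recognition quality
# slightly in either direction, as described in the note above.
print("max absolute error:", np.abs(weights - dequantized).max())
```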
If you use a custom neural network, select the corresponding file.
The scanning step determines the relative offset of the windows. If the step is equal to the height and width of the scanning window, the segments are lined up one after another. Reducing the height or width of the scanning step increases the number of windows, because they overlap each other with an offset. This increases the detection accuracy but can also increase the load on the CPU.

Note: The height and width of the scanning step mustn't be greater than the height and width of the scanning window; the detector doesn't operate with such settings.
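The effect of the scanning step on the number of windows can be checked with simple arithmetic. This sketch uses example frame and window sizes; none of the numbers or names come from the product.

```python
import math

def window_count(frame_w, frame_h, win_w, win_h, step_w, step_h):
    """Number of window positions needed to cover the frame when windows are
    placed every (step_w, step_h) pixels."""
    cols = math.ceil((frame_w - win_w) / step_w) + 1
    rows = math.ceil((frame_h - win_h) / step_h) + 1
    return cols * rows

# Step equal to the window size: windows are lined up one after another.
print(window_count(1920, 1080, 640, 360, 640, 360))   # 9 windows
# Halving the step makes the windows overlap and increases their number.
print(window_count(1920, 1080, 640, 360, 320, 180))   # 25 windows
```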
If necessary, specify the class of the detected object. If you want to display tracks of several classes, specify them separated by a comma with a space. For example, 1, 10
The numerical values of classes for the embedded neural networks: 1—Human/Human (top-down view), 10—Vehicle
- If you leave the field blank, the tracks of all available classes from the neural network are displayed (Detection neural network, Neural network file)
- If you specify a class/classes from the neural network, the tracks of the specified class/classes are displayed (Detection neural network, Neural network file)
- If you specify a class/classes from the neural network and a class/classes missing from the neural network, the tracks of a class/classes from the neural network are displayed (Detection neural network, Neural network file)
- If you specify a class/classes missing from the neural network, the tracks of all available classes from the neural network are displayed (Detection neural network, Neural network file)

Note: Starting with Detector Pack 3.10.2, if you specify a class/classes missing from the neural network, the tracks aren't displayed (Detection neural network, Neural network file).
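The rules above can be summarized as a small filter over the comma-separated class list. This sketch assumes the Detector Pack 3.10.2+ behavior; the helper name and parsing are invented for illustration, not taken from Axxon One.

```python
# Embedded class IDs mentioned above: 1 = Human/Human (top-down view), 10 = Vehicle.
EMBEDDED_CLASSES = {1: "Human/Human (top-down view)", 10: "Vehicle"}

def classes_to_display(field: str, network_classes: set[int]) -> set[int]:
    """Return the class IDs whose tracks are shown for a given field value,
    following the Detector Pack 3.10.2+ rules."""
    if not field.strip():                       # blank field: all available classes
        return set(network_classes)
    requested = {int(part) for part in field.split(",")}
    # Only classes known to the neural network are shown; if every requested
    # class is missing, nothing is displayed.
    return requested & network_classes

print(classes_to_display("", {1, 10}))        # {1, 10}
print(classes_to_display("1, 10", {1, 10}))   # {1, 10}
print(classes_to_display("1, 99", {1, 10}))   # {1}
print(classes_to_display("99", {1, 10}))      # set() — no tracks displayed
```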
By default, the parameter is disabled. To enable the search for similar persons, select the Yes value. If you enable the parameter, it increases the load on the CPU
Note: The Similitude search works only on tracks of people.
Select the processor for the neural network operation (see Hardware requirements for neural analytics operation, Selecting Nvidia GPU when configuring detectors)
By default, the parameter is disabled. To sort out parts of tracks, select the Yes value.
For example: the Neural tracker detects all freight trucks, and the Neural filter sorts out only the tracks that contain trucks with cargo doors open.
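Conceptually, the Neural filter acts as a second stage on top of the tracker's output. The sketch below models that with invented data structures; it is not the product's internal logic.

```python
from dataclasses import dataclass

# Illustrative two-stage filtering, mirroring the Neural tracker + Neural filter
# example above. The attributes are invented for the sketch.

@dataclass
class Track:
    object_id: int
    object_class: str        # what the Neural tracker detected
    cargo_doors_open: bool   # attribute the Neural filter would judge from the image

tracks = [
    Track(1, "freight truck", cargo_doors_open=True),
    Track(2, "freight truck", cargo_doors_open=False),
    Track(3, "car", cargo_doors_open=False),
]

# Stage 1: the tracker yields every freight truck.
trucks = [t for t in tracks if t.object_class == "freight truck"]
# Stage 2: the neural filter keeps only trucks with open cargo doors.
filtered = [t for t in trucks if t.cargo_doors_open]

print([t.object_id for t in filtered])   # [1]
```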
Select a neural network file. You must place the neural network file locally, that is, on the same server where you install Axxon One. You cannot specify a file located on a network path in Windows OS
Starting with Detector Pack 3.14, you can add the DISABLE_CALC_HSV system variable to control whether the object's color is determined (see Appendix 9. Creating system variable). You can set the following values for the variable (see also the sketch after this list):
- 0—color detection is enabled. The system will collect data about the object's color. This data is necessary for further search in the archive by color.
- 1—color detection is disabled. Disabling color determination reduces the load on the CPU, including when the detector runs on GPU.
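As an illustration of the 0/1 semantics only: Axxon One reads the system variable itself, but a process could interpret DISABLE_CALC_HSV along these lines.

```python
import os

# Hypothetical sketch of the 0/1 semantics described above; not Axxon One code.
disable_calc_hsv = os.environ.get("DISABLE_CALC_HSV", "0")

if disable_calc_hsv == "1":
    # Color detection is off: no color data is collected, CPU load is lower,
    # but search in the archive by color becomes unavailable.
    collect_color_data = False
else:
    # Default (0): the object's color is determined and stored for archive search.
    collect_color_data = True

print("collect color data:", collect_color_data)
```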
By default, the entire frame is a detection area. If necessary, you can set detection areas (see Configuring a detection area).
To save the parameters of the detector, click the Apply button. To cancel the changes, click the Cancel button.
Configuring the Neural tracker is complete. If necessary, you can create and configure the necessary sub-detectors on the basis of the neural tracker (see Standard sub-detectors).
...