...
- Go to the Detectors tab. Below the required camera, click Create… → Category: Trackers → Neural tracker.
Parameter | Value | Description |
---|---|---
Object features |
Record objects tracking | Yes | By default, metadata are recorded into the database. To disable metadata recording, select the No value |
No |
Video stream | Main stream | If the camera supports multistreaming, select the stream for which detection is needed |
Other |
Enable | Yes | By default, the detector is enabled. To disable it, select the No value |
No |
Name | Neural tracker | Enter the detector name or leave the default name |
Decoder mode | Auto | Select a processing resource for decoding video streams. When you select a GPU, a stand-alone graphics card takes priority (when decoding with Nvidia NVDEC chips). If there is no appropriate GPU, the decoding uses the Intel Quick Sync Video technology. Otherwise, CPU resources are used for decoding |
|
CPU |
GPU |
Huawei NPU |
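The Auto decoder mode described above resolves to a concrete resource by a fixed priority: a stand-alone Nvidia GPU (NVDEC) first, then Intel Quick Sync Video, then the CPU. The function and flag names below are illustrative only, not part of the Axxon One API; this is a minimal sketch of the fallback order:

```python
# Hypothetical sketch of the Auto decoder-mode fallback described above.
# The names are illustrative assumptions, not Axxon One identifiers.
def resolve_decoder(has_nvidia_nvdec: bool, has_intel_quick_sync: bool) -> str:
    if has_nvidia_nvdec:
        return "GPU (Nvidia NVDEC)"       # stand-alone graphics card takes priority
    if has_intel_quick_sync:
        return "Intel Quick Sync Video"   # used when no appropriate GPU exists
    return "CPU"                          # final fallback
```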
Neurofilter mode | CPU | Select a processing resource for neural network operation (see Hardware requirements for neural analytics operation, Selecting Nvidia GPU when configuring detectors)
Note |
---|
| - We recommend using the GPU. It may take several minutes to launch the algorithm on Nvidia GPU after you apply the settings. You can use caching to speed up future launches (see Optimizing the operation of neural analytics on GPU in Windows OS).
- Starting with Detector Pack 3.11, Intel HDDL and Intel NCS aren’t supported.
- Starting with Detector Pack 3.12, the parameter is removed from the detector settings, and the Neural filter runs on the same processor as the Neural tracker. If you selected a different processor in the Neurofilter mode parameter before the Detector Pack update, the detector works without the Neural filter after the update.
|
|
Nvidia GPU 0 |
Nvidia GPU 1 |
Nvidia GPU 2 |
Nvidia GPU 3 |
Intel NCS (not supported) |
Intel HDDL (not supported) |
Intel GPU |
Huawei NPU |
Number of frames processed per second | 6 | Specify the number of frames for the neural network to process per second. The higher the value, the more accurate the tracking, but the load on the CPU is also higher. The value must be in the range [0.016, 100]
Note |
---|
| We recommend the value of at least 6 FPS. For fast-moving objects (running individuals, vehicles), you must set the frame rate at 12 FPS or above. |
|
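Because the detector analyzes only the configured number of frames per second, frames are effectively subsampled from the camera stream. The arithmetic below is an illustrative sketch only (not Axxon One code) of how many source frames fall between two processed frames:

```python
# Illustrative arithmetic only (not Axxon One code): approximate stride
# between processed frames for a given detector frame rate.
def frame_stride(camera_fps: float, detector_fps: float) -> int:
    if not (0.016 <= detector_fps <= 100):  # documented valid range
        raise ValueError("detector FPS must be in [0.016, 100]")
    # Process roughly every Nth frame; never less often than every frame.
    return max(1, round(camera_fps / detector_fps))

# At 25 camera FPS, the recommended 6 FPS analyzes about every 4th frame,
# and the 12 FPS recommended for fast-moving objects about every 2nd frame.
```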
Type | Neural tracker | Name of the detector type (non-editable field) |
Advanced settings
|
Camera position | Wall | To sort out false events from the detector when using a fisheye camera, select the correct device location. For other devices, this parameter is irrelevant
|
Ceiling |
Hide moving objects | Yes | By default, the parameter is disabled. If you don't need to detect moving objects, select the Yes value. An object is considered static if it doesn't change its position by more than 10% of its width or height during its track lifetime
Note |
---|
| If a static object starts moving, the detector creates a track, and the object is no longer considered static. |
|
No |
Hide static objects | Yes | Starting with Detector Pack 3.14, the parameter is disabled by default. If you need to hide static objects, select the Yes value. This parameter lowers the number of false events from the detector when detecting moving objects. An object is considered static if it hasn't moved by more than 10% of its width or height during the whole time of its track existence
Note |
---|
| - If a static object starts moving, the detector creates a track, and the object is no longer considered static.
- Disabling the parameter reduces the load on the CPU. |
|
No |
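The static-object rule above (position never shifts by more than 10% of the object's width or height over the track lifetime) can be sketched as a simple predicate. This is an illustrative assumption about the criterion, not the product's implementation:

```python
# Illustrative check (not Axxon One code) of the "static object" rule:
# a track is static if its positions never spread by more than 10% of
# the object's width horizontally or height vertically.
def is_static(positions, width, height, threshold=0.10):
    xs = [p[0] for p in positions]
    ys = [p[1] for p in positions]
    return (max(xs) - min(xs)) <= threshold * width and \
           (max(ys) - min(ys)) <= threshold * height
```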
Minimum number of detection triggers | 6 | Specify the minimum number of detection triggers for the Neural tracker to display the object's track. The higher the value, the longer the time interval between the detection of an object and the display of its track on the screen. Low values of this parameter can lead to false events from the detector. The value must be in the range [2, 100] |
Model quantization | Yes | By default, the parameter is disabled. The parameter is applicable only to standard neural networks for Nvidia GPUs. It allows you to reduce the consumption of computation power. The neural network is selected automatically, depending on the value selected in the Detection neural network parameter. To quantize the model, select the Yes value
Note |
---|
| - AxxonSoft conducted a study in which a neural network model was trained to identify the characteristics of the detected object with quantization. The study showed that model quantization can lead to either an increase or a decrease in the recognition percentage. This is due to the generalization of the mathematical model. The difference in detection ranges within ±1.5%, and the difference in object identification ranges within ±2%.
- Launching the detector with the Model quantization parameter enabled can take longer than a standard launch. |
|
No |
Neural network file | | If you use a custom neural network, select the corresponding file.
Note |
---|
| - To train your neural network, contact AxxonSoft.
- If the file is not specified, the default neural network is used, which is selected automatically, depending on the selected value in the Detection neural network parameter and the selected processor for the neural network operation.
- If you use a custom neural network, enter a path to the file. The selected detection neural network is ignored when you use a custom neural network.
- You cannot specify the network file in Windows OS. You must place the neural network file locally, that is, on the same server where you install Axxon One.
- For correct neural network operation on Linux OS, place the corresponding file locally in the /opt/AxxonSoft/DetectorPack/NeuroSDK directory or in the network folder with the corresponding access rights. |
|
Scanning mode | Yes | To use the scanning window parameters below, enable the scanning mode |
No |
Scanning window height | 0 | The height and width of the scanning window are determined according to the actual size of the frame and the required number of windows. For example, the real frame size is 1920×1080 pixels. To divide the frame into four equal windows, set the width of the scanning window to 960 pixels and the height to 540 pixels |
Scanning window step height | 0 | The scanning step determines the relative offset of the windows. If the step is equal to the height and width of the scanning window, respectively, the segments are lined up one after another. Reducing the height or width of the scanning step increases the number of windows due to their overlapping each other with an offset. This increases the detection accuracy but can also increase the load on the CPU
Note |
---|
| The height and width of the scanning step mustn't be greater than the height and width of the scanning window, since the detector doesn't operate with such settings. |
|
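The 1920×1080 frame-splitting example above can be checked numerically. The sketch below is illustrative arithmetic only (not Axxon One code): it counts window positions along one axis for a given frame, window, and step size, honoring the rule that the step must not exceed the window:

```python
# Illustrative arithmetic (not Axxon One code) for the scanning mode:
# number of window positions along one axis.
def windows_along_axis(frame: int, window: int, step: int) -> int:
    if step > window:
        raise ValueError("step must not exceed the window size")
    # The last window starts at frame - window; positions advance by `step`.
    return (frame - window) // step + 1

# The documented example: 960x540 windows with step equal to the window
# size divide a 1920x1080 frame into a 2x2 grid; halving the step adds
# overlapping windows (3 per axis instead of 2).
cols = windows_along_axis(1920, 960, 960)  # 2
rows = windows_along_axis(1080, 540, 540)  # 2
```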
Scanning window width | 0 | The height and width of the scanning window are determined according to the actual size of the frame and the required number of windows. For example, the real frame size is 1920×1080 pixels. To divide the frame into four equal windows, set the width of the scanning window to 960 pixels and the height to 540 pixels |
Scanning window step width | 0 | The scanning step determines the relative offset of the windows. If the step is equal to the height and width of the scanning window, respectively, the segments are lined up one after another. Reducing the height or width of the scanning step increases the number of windows due to their overlapping each other with an offset. This increases the detection accuracy but can also increase the load on the CPU
Note |
---|
| The height and width of the scanning step mustn't be greater than the height and width of the scanning window, since the detector doesn't operate with such settings. |
|
Selected object classes | | If necessary, specify the class of the detected object. If you want to display tracks of several classes, specify them separated by a comma with a space. For example: 1, 10.
The numerical values of classes for the embedded neural networks: 1—Human/Human (top-down view), 10—Vehicle.
- If you leave the field blank, the tracks of all available classes from the neural network are displayed (Detection neural network, Neural network file).
- If you specify a class/classes from the neural network, the tracks of the specified class/classes are displayed (Detection neural network, Neural network file).
- If you specify a class/classes from the neural network and a class/classes missing from the neural network, the tracks of a class/classes from the neural network are displayed (Detection neural network, Neural network file).
- If you specify a class/classes missing from the neural network, the tracks of all available classes from the neural network are displayed (Detection neural network, Neural network file).
Info |
---|
| Starting with Detector Pack 3.10.2, if you specify a class/classes missing from the neural network, the tracks aren't displayed (Detection neural network, Neural network file). |
|
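The class field above is a comma-plus-space separated list of numeric IDs (for example, 1, 10), with a blank field meaning "all classes". A sketch of how such a field can be parsed and applied, as an illustration only (not product code):

```python
# Illustrative parsing/filtering (not Axxon One code) for the
# "Selected object classes" field. Embedded classes: 1 = Human,
# 10 = Vehicle. A blank field means all classes are displayed.
def selected_classes(field: str):
    return {int(c) for c in field.split(",")} if field.strip() else None

def keep_track(track_class: int, field: str) -> bool:
    wanted = selected_classes(field)
    return wanted is None or track_class in wanted
```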
Sensitivity of excluding static objects (starting with Detector Pack 3.14) | 25 | Specify the level of sensitivity of excluding static objects. The higher the value, the less sensitive to motion the algorithm becomes |
Similitude search | Yes | By default, the parameter is disabled. To enable the search for similar persons, select the Yes value. If you enable the parameter, it increases the load on the CPU |
No |
Time of processing similitude track (sec) | 0 | Specify the time in seconds for the algorithm to process the track to search for similar persons. The value must be in the range [0, 3600] |
Time period of excluding static objects | 0 | Specify the time in seconds after which the track of the static object is hidden. If the value of the parameter is 0, the track of the static object isn't hidden. The value must be in the range [0, 86 400] |
Track lifespan (starting with Detector Pack 3.14)
| Yes | By default, the parameter is disabled. If you want to display the track lifespan for an object in seconds, select the Yes value
|
No |
Track retention time | 0.7 | Specify the time in seconds after which the object track is considered lost. This helps if objects in the scene temporarily overlap each other. For example, when a larger vehicle completely blocks the smaller one from view. The value must be in the range [0.3, 1000] |
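Track retention keeps a lost track alive for the configured time, so a briefly occluded object resumes the same track instead of starting a new one. An illustrative timing check under that reading (not the product's implementation):

```python
# Illustrative check (not Axxon One code) of the Track retention time rule:
# a track survives an occlusion only if the object reappears within the
# configured retention window (0.7 s default, valid range [0.3, 1000]).
def track_survives(last_seen_s: float, reappears_s: float,
                   retention_s: float = 0.7) -> bool:
    if not (0.3 <= retention_s <= 1000):
        raise ValueError("retention time must be in [0.3, 1000]")
    return (reappears_s - last_seen_s) <= retention_s
```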
Basic settings
|
Detection threshold | 30 | Specify the detection threshold for objects in percent. If the recognition probability falls below the specified value, the data are ignored. The higher the value, the higher the detection quality, but some events from the detector may not be considered. The value must be in the range [0.05, 100] |
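Detections whose recognition probability falls below the threshold percentage are discarded before tracking. A minimal sketch of that cutoff, illustrative only (not Axxon One code):

```python
# Illustrative cutoff (not Axxon One code) for the Detection threshold row:
# recognition probabilities are percentages; detections below the
# configured threshold are ignored.
def passes_threshold(probability_pct: float, threshold_pct: float = 30) -> bool:
    if not (0.05 <= threshold_pct <= 100):
        raise ValueError("threshold must be in [0.05, 100]")
    return probability_pct >= threshold_pct
```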
Neural tracker mode
| CPU | Select the processor for the neural network operation (see Hardware requirements for neural analytics operation, Selecting Nvidia GPU when configuring detectors)
Note |
---|
| - We recommend using the GPU. It can take several minutes to launch the algorithm on an Nvidia GPU after you apply the settings. You can use caching to speed up future launches (see Optimizing the operation of neural analytics on GPU in Windows OS).
- If the neural tracker is running on the GPU, object tracks can lag behind the objects in the Surveillance window. If this happens, set the camera buffer size to 1000 milliseconds (see Camera).
- Starting with Detector Pack 3.11, Intel HDDL and Intel NCS aren’t supported.
- Starting with Detector Pack 3.14, Intel Multi-GPU and Intel GPU 0-3 are supported.
|
|
Nvidia GPU 0 |
Nvidia GPU 1 |
Nvidia GPU 2 |
Nvidia GPU 3 |
Intel NCS (not supported) |
Intel HDDL (not supported) |
Intel GPU |
Intel Multi-GPU |
Intel GPU 0 |
Intel GPU 1 |
Intel GPU 2 |
Intel GPU 3 |
Huawei NPU |
Detection neural network
| Person | Select the detection neural network from the list. By default, the Person detection neural network is selected. Neural networks are named taking into account the objects they detect. The names can include the size of the neural network (Nano, Medium, Large), which indicates the amount of consumed resources. The larger the neural network, the higher the accuracy of object recognition
|
Person (top-down view) |
Person (top-down view Nano) |
Person (top-down view Medium) |
Person (top-down view Large) |
Vehicle |
Person and vehicle (Nano) |
Person and vehicle (Medium) |
Person and vehicle (Large) |
Neural network filter
|
Neural filter
| Yes | By default, the parameter is disabled. To sort out parts of tracks, select the Yes value. For example: The Neural tracker detects all freight trucks, and the Neural filter sorts out only the tracks that contain trucks with cargo doors open |
No |
Neural filter file | | Select a neural network file. You must place the neural network file locally, that is, on the same server where you install Axxon One. You cannot specify the network file in Windows OS
Note |
---|
| - Starting with Detector Pack 3.12, the neural network file of the neural filter must match the processor type specified in the Neural tracker mode parameter.
- If you use a standard neural network (training wasn't performed in operating conditions), we guarantee an overall accuracy of 80-95% and a percentage of false positives of 5-20%. The standard neural networks are located in the C:\Program Files\Common Files\AxxonSoft\DetectorPack\NeuroSDK directory.
|
|
...