To configure the Object tracker detection tool, do the following:
- Go to the Detection Tools tab.
- Below the required camera, click Create… → Category: Trackers → Object tracker.
By default, the detection tool is enabled and configured to detect moving objects in the frame and build their tracks.
Some parameters are set for all sub-detectors of the Object tracker simultaneously (see Recommendations for configuring the Object tracker and its sub-detectors).
If necessary, you can change the detection tool parameters. The list of parameters is given in the table:
Parameter | Value | Description |
---|---|---|
Object features | ||
Record objects tracking | Yes | By default, metadata is recorded into the database. To disable metadata recording, select the No value. Attention! To obtain metadata, the video is decompressed and analyzed, which results in a heavy load on the server and limits the number of video cameras that can be used on it. |
No | ||
Video stream | Main stream | If the camera supports multistreaming, select the stream for which detection is needed. Selecting a low quality video stream allows you to reduce the server load. Attention! To ensure the correct display of streams on a multi-stream camera, all video streams must have the same frame aspect ratio. |
Other | ||
Enable | Yes | By default, the parameter is enabled. To disable, select the No value |
No | ||
Name | Object tracker | Enter the detection tool name or leave the default name |
Decoder mode | Auto | Select the processing resource for decoding video streams. When a GPU is selected, a discrete graphics card takes priority (decoding with Nvidia NVDEC chips). If no suitable GPU is available, decoding uses the Intel Quick Sync Video technology. Otherwise, CPU resources are used for decoding |
CPU | ||
GPU | ||
HuaweiNPU | ||
Type | Object tracker | Name of the detection tool type (non-editable field) |
Neural network filter | ||
Enable filter | Yes | By default, the neural network filter is disabled. To enable a neural network filter to filter out parts of tracks, set the value to Yes (see Hardware requirements for neural analytics operation). For example, a neural network filter can process the results of the tracker and filter out false positives on a complex video image (foliage, glare, etc.). Attention! A neural network filter can be used either only for the analysis of moving objects or only for the analysis of abandoned objects. You cannot use two neural network filters simultaneously. |
No | ||
Moving object filter mode | CPU | Select the processor for the neural network operation (see Hardware requirements for neural analytics operation, Selecting Nvidia GPU when configuring detectors) |
Nvidia GPU 0 | ||
Nvidia GPU 1 | ||
Nvidia GPU 2 | ||
Nvidia GPU 3 | ||
Intel GPU | ||
Huawei NPU | ||
Abandoned object filter mode | CPU | |
Nvidia GPU 0 | ||
Nvidia GPU 1 | ||
Nvidia GPU 2 | ||
Nvidia GPU 3 | ||
Intel GPU | ||
Huawei NPU | ||
Moving object filter file | | Select the required neural network. To obtain a neural network, contact AxxonSoft technical support. If the neural network file is not selected or selected incorrectly, the filter will not work |
Abandoned object filter file | ||
Basic settings | ||
Long-time abandoned object detection | Yes | By default, the parameter is disabled. To enable, select the Yes value. Note Enabling this parameter together with the Enable filter parameter can reduce the number of false positives during detection. |
No | ||
Abandoned object detection | Yes | By default, the parameter is disabled. To enable, select the Yes value. Note Objects abandoned for 10 seconds or longer will be detected. |
No | ||
Max. object height | 100 | Set the maximum height and width of the detected object as a percentage of the frame size. The value must be in the range [0.05, 100]; a worked pixel conversion example is given after the table. Attention! If the Object calibration parameter is enabled in the tracker settings, the maximum height and width of objects are set in decimeters and not as a percentage of the frame size. |
Max. object width | 100 | |
Alarm on object's max. idle time in area | 60 | Specify the idle time in seconds. If an object remains idle in the area for longer than the specified time, it will be detected. The value must be in the range [15, 1800]. Note This parameter is used only for the Long-time abandoned object detection. It is recommended to select the parameter value starting from 15. |
Min. object height | 2 | Set the minimum height and width of the detected object as a percentage of the frame size. The value must be in the range [0.05, 100]. Attention! If the Object calibration parameter is enabled in the tracker settings, the minimum height and width of objects are set in decimeters and not as a percentage of the frame size. |
Min. object width | 2 | |
Motion detection sensitivity | 25 | Set the sensitivity of the motion sub-detectors as a percentage. The higher the sensitivity, the more subtle the changes in the frame that will be detected. The value must be in the range [0, 100] |
Abandoned object detection sensitivity | 9 | Set the sensitivity for abandoned object detection and long-time abandoned object detection in the range [1, 100]. Note This parameter depends on the lighting conditions and must be chosen empirically. It is recommended to select the parameter value starting from 20. |
Advanced settings | ||
Auto sensitivity | Yes | By default, the parameter is enabled. To disable automatic adjustment of the sensitivity of the Object tracker sub-detectors, select the No value. Note It is recommended to enable this parameter if the lighting changes significantly in the camera FOV (for example, if the camera operates outdoors). |
No | ||
Leveling rod height | 20 | Set the height of the calibration object in decimeters. The value must be in the range [1, 100] |
Frame size change | 1280 | During the analysis, the frame is compressed to the specified size (by default, 1280 pixels on the larger side). The following algorithm is used: if the larger side of the source frame exceeds the specified value, the frame is divided in half; if the resulting size still exceeds the specified value, it is divided in half again, and so on (see the sketch after the table). Note For example, if the original video resolution is 2048x1536 and the specified value is 1000, the original resolution will be divided in half twice (down to 512x384), because after the first division the larger side of the frame is still greater than the specified value (1024 > 1000). If a higher resolution stream is used for detection and there are detection errors, it is recommended to reduce compression. |
Object calibration | Yes | By default, the parameter is disabled (see Configuring perspective). To estimate the real size of an object based on a simplified calibration system, select the Yes value |
No | ||
Camera position | Wall | If you use a fish-eye camera, select the actual mounting location of the device to filter out false events. The default value is Wall. For all other devices, this parameter is not applicable |
Ceiling | ||
Antishaker | Yes | By default, the parameter is disabled. To reduce camera shake, select the Yes value. It is recommended to use this parameter only when there is significant camera shake |
No |
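The Frame size change behavior described in the table can be illustrated with a short sketch. This is a minimal illustration based only on the worked example above (the frame's larger side is repeatedly halved until it no longer exceeds the specified value); the function name and the Python code are illustrative and are not part of Axxon One.

```python
def downscaled_resolution(width: int, height: int, frame_size_change: int = 1280) -> tuple[int, int]:
    """Illustrative sketch of the halving algorithm described for the
    Frame size change parameter (an assumption based on the example in
    the table, not Axxon One source code)."""
    # Halve the frame until its larger side no longer exceeds the limit.
    while max(width, height) > frame_size_change:
        width //= 2
        height //= 2
    return width, height

# Example from the table: a 2048x1536 stream with the value set to 1000
# is halved twice, because 1024 > 1000 after the first halving.
print(downscaled_resolution(2048, 1536, 1000))  # (512, 384)

# With the default value of 1280, the same stream is halved only once.
print(downscaled_resolution(2048, 1536))        # (1024, 768)
```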
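Similarly, the Min./Max. object height and width limits are percentages of the frame size when Object calibration is disabled. The sketch below converts those percentages to pixels for a given analyzed frame resolution; the function and the assumption that each percentage applies to the corresponding frame dimension are illustrative only.

```python
def size_limits_px(frame_w: int, frame_h: int,
                   min_pct: float = 2.0, max_pct: float = 100.0) -> dict:
    """Illustrative conversion of the percentage-based object size limits
    to pixels (assumes each percentage applies to the matching frame
    dimension; not Axxon One source code)."""
    return {
        "min_width_px": frame_w * min_pct / 100,
        "min_height_px": frame_h * min_pct / 100,
        "max_width_px": frame_w * max_pct / 100,
        "max_height_px": frame_h * max_pct / 100,
    }

# For a frame analyzed at 1024x768 with the default limits (2% / 100%),
# objects smaller than roughly 20x15 pixels would fall below the minimum.
print(size_limits_px(1024, 768))
```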
By default, the entire frame is a detection area. If necessary, in the preview window, set:
- one or more detection areas (see Configuring a detection area),
- one or more skip areas (see Configuring a skip area).
Note
- For convenience of configuration, you can "freeze" the frame: click the corresponding button. To cancel the action, click this button again.
- The detection area is displayed by default. To hide it, click the corresponding button. To cancel the action, click this button again.
To save the parameters of the detection tool, click the Apply button. To cancel the changes, click the Cancel button.
The Object tracker detection tool is configured. General parameters will be set for all its sub-detectors. If necessary, you can create and configure the necessary sub-detectors on the basis of the Object tracker (see Abandoned object, Standard sub-detectors).