

To configure the Human tracker VL, do the following:

  1. Go to the Detection Tools tab.

  2. Below the required camera, click Create… → Category: Trackers → Human tracker VL.

By default, the detection tool is enabled and set to detect people in the frame and generate events.

If necessary, you can change the detection tool parameters. The list of parameters is given below:

Object features

  • Video stream (default: Main stream)
    If the camera supports multistreaming, select the stream for which detection is needed.
    Attention! To ensure the correct display of streams on a multi-stream camera, all video streams must have the same frame aspect ratio.

  • Record objects tracking (Yes / No; default: Yes)
    The metadata of the video stream is recorded to the database by default. To disable the parameter, select the No value.
    Attention! To obtain metadata, the video is decompressed and analyzed, which results in a heavy load on the Server and limits the number of video cameras that can be used on it.

Other

  • Enable (Yes / No; default: Yes)
    The Human tracker VL detection tool is enabled by default. To disable the detection tool, select the No value.

  • Name (default: Human tracker VL)
    Enter the detection tool name or leave the default name.

  • Decoder mode (Auto / CPU / GPU / HuaweiNPU; default: Auto)
    Select a processing resource for decoding video streams. When you select GPU, a stand-alone graphics card takes priority (decoding with Nvidia NVDEC chips). If there is no appropriate GPU, decoding uses the Intel Quick Sync Video technology. Otherwise, CPU resources are used for decoding. A sketch of this fallback order is given after the list.

  • Type (Human tracker VL)
    Name of the detection tool type (non-editable field).

Advanced settings

  • Frame size change (640 / 960 / 1280 / 1920 / 2560 / 3840 / 5120 / 7680 / 15360; default: 640)
    Select the resolution to which the video will be compressed before analysis.

  • Number of frames without detections (default: 18)
    Specify the number of frames without object detection. If no object is detected in the designated area, the neural network algorithm continues to process the specified number of frames before it considers the track lost and terminates it. The value must be in the range [1, 10 000].

  • Number of frames between detections (default: 7)
    Specify the number of frames between detections. The lower the value, the higher the probability that the neural network algorithm detects a new object as soon as it appears in the designated area. The value must be in the range [1, 10 000]. The track lifecycle sketch after the list shows how this parameter interacts with Number of frames without detections and Minimum number of detections.

  • Minimum number of detections (default: 1)
    Specify the minimum number of detections after which a track is considered a detected object. The value must be in the range [1, 10 000].

Basic settings

  • Minimum threshold of object authenticity (default: 60)
    Specify the minimum threshold of object authenticity in percent, at which an object is considered detected. The higher the value, the fewer objects are detected, but the more reliable the detected objects are. The value must be in the range [1, 100].

  • Mode (CPU / Nvidia GPU 0 / Nvidia GPU 1 / Nvidia GPU 2 / Nvidia GPU 3 / Huawei NPU; default: CPU)
    Select a processor for the detection tool operation (see Selecting Nvidia GPU when configuring detectors).
    Attention! If you specify a processing resource other than the CPU, this device carries most of the computing load. However, the CPU is also used to run the detection tool.
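The fallback order described for the Decoder mode parameter can be pictured with a short Python sketch. This is a conceptual illustration only, not Axxon One code: the helper name pick_decoder and its arguments are hypothetical, and only the GPU value is modeled, because the list above spells out the fallback chain for that value only.

```python
# Conceptual sketch of the documented decoding fallback; not Axxon One code.
# The helper name pick_decoder and its arguments are hypothetical.

def pick_decoder(mode: str, has_nvdec: bool, has_quick_sync: bool) -> str:
    """Model the fallback chain described for Decoder mode = GPU."""
    if mode != "GPU":
        # Auto, CPU and HuaweiNPU are not modeled here.
        return mode
    if has_nvdec:
        return "Nvidia NVDEC"            # a stand-alone graphics card takes priority
    if has_quick_sync:
        return "Intel Quick Sync Video"  # used when there is no appropriate GPU
    return "CPU"                         # otherwise decoding falls back to the CPU


print(pick_decoder("GPU", has_nvdec=False, has_quick_sync=True))
# -> Intel Quick Sync Video
```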

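The frame-count parameters and the authenticity threshold can be read as a simple track lifecycle: the detector runs periodically, a track needs a minimum number of detections to count as an object, and a track that misses detections long enough is terminated. The following sketch is a generic illustration of the descriptions in the list above, not Axxon One internals; all names in it (TrackerParams, Track, update_track, run) are hypothetical.

```python
# Generic illustration of how the parameters above could interact; not Axxon One
# internals. All names are hypothetical.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class TrackerParams:
    frames_between_detections: int = 7   # the detector runs on every N-th frame
    frames_without_detections: int = 18  # missed cycles before a track is dropped
    min_detections: int = 1              # detections needed to confirm a track
    min_authenticity: int = 60           # confidence threshold, percent

@dataclass
class Track:
    hits: int = 0    # successful detections so far
    misses: int = 0  # consecutive cycles without a detection

def update_track(track: Track, confidence: Optional[int], p: TrackerParams) -> str:
    """Advance a track by one detection cycle and return its state."""
    if confidence is not None and confidence >= p.min_authenticity:
        track.hits += 1
        track.misses = 0
    else:
        track.misses += 1
    if track.misses > p.frames_without_detections:
        return "terminated"  # the track is considered lost
    if track.hits >= p.min_detections:
        return "confirmed"   # the track counts as a detected object
    return "pending"

def run(confidences: List[Optional[int]], p: TrackerParams) -> List[str]:
    """Feed per-frame confidences through one track, running the detector
    only on every frames_between_detections-th frame."""
    track, states = Track(), []
    for i, conf in enumerate(confidences):
        if i % p.frames_between_detections == 0:
            states.append(update_track(track, conf, p))
    return states

# With the defaults, a single detection above 60% confirms the track at once.
print(run([75] + [None] * 13, TrackerParams()))  # ['confirmed', 'confirmed']
```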
If necessary, in the preview window, set the area of the frame in which you want to detect objects. You can specify the area by moving the anchor points (see Configuring the Detection Zone). A small sketch after the note below illustrates how such an area filters detections.

Note

  • For convenience of configuration, you can "freeze" the frame. Click the button. To cancel the action, click this button again.
  • To hide the detection area, click the button. To cancel the action, click this button again.
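The effect of a detection area can be pictured with a minimal point-in-polygon check: an object is treated as inside the zone if, say, its reference point falls within the outline you drew (the exact criterion used by the Server is not specified here). The sketch below is an illustration only; Axxon One applies the zone internally, and the coordinates are hypothetical, normalized frame coordinates.

```python
# Illustration only: a minimal point-in-polygon test showing how a detection zone
# conceptually limits detection to objects inside it. Axxon One applies the zone
# internally; the coordinates below are hypothetical and normalized to the frame.
from typing import List, Tuple

def point_in_polygon(x: float, y: float, polygon: List[Tuple[float, float]]) -> bool:
    """Ray-casting test: True if the point (x, y) lies inside the polygon."""
    inside = False
    j = len(polygon) - 1
    for i, (xi, yi) in enumerate(polygon):
        xj, yj = polygon[j]
        if (yi > y) != (yj > y) and x < (xj - xi) * (y - yi) / (yj - yi) + xi:
            inside = not inside
        j = i
    return inside

zone = [(0.1, 0.1), (0.9, 0.1), (0.9, 0.9), (0.1, 0.9)]  # anchor points of the area
print(point_in_polygon(0.5, 0.5, zone))  # True: the object's reference point is inside
```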

To save the parameters of the detection tool, click the Apply button. To cancel the changes, click the Cancel button.

The Human tracker VL is now configured. If necessary, you can create and configure sub-tools based on the Human tracker VL (see Standard detection sub-tools).
