Documentation for Axxon One 2.0
To create the detection tool, do the following:
- Go to the Detection Tools tab.
- Below the required camera, click Create… → Category: Retail → Crowd Estimation VA.
By default, the detection tool is enabled and set to count crowds of people.
If necessary, you can change the detection tool parameters listed in the table:
Parameter | Value | Description |
---|---|---|
Object features | | |
Record mask to archive | Yes / No | To record the sensitivity scale of the detection tool to the archive (see Displaying information from a detection tool (mask)), select Yes |
Video stream | Main stream | If the camera supports multistreaming, select the stream in which detection is needed |
Other | | |
Enable | Yes / No | The detection tool is enabled by default. To disable it, select No |
Name | Crowd Estimation VA | Enter a name for the detection tool or keep the default one |
Decode key frames | Yes / No | Enabled by default: only key frames are decoded. To disable the parameter, select No. This parameter reduces the load on the Server, but the detection quality decreases. We recommend enabling it on "blind" Servers (without video image display) on which detection is performed. Decoding isn't relevant for the MJPEG codec, as each frame is a key frame. Attention! The Number of frames processed per second and Decode key frames parameters are interrelated. If a local Client isn't connected to the Server, the following rules apply to remote Clients: If a local Client is connected to the Server, the detection tool always works according to the set period. After the local Client disconnects, the above rules apply again |
Decoder mode | Auto / CPU / GPU / HuaweiNPU | Select the processing resource for decoding video. When you select GPU, a discrete graphics card takes priority (decoding with NVIDIA NVDEC chips). If no suitable GPU is available, the Intel Quick Sync Video technology is used; otherwise, decoding falls back to the CPU |
Mode | CPU / Nvidia GPU 0 | Select the processor for the operation: CPU or NVIDIA GPU (see General information on configuring detection). Attention! |
Number of alarm objects | 1 | Specify the number of objects that triggers the detection. The value must be in the range [0; 10000] |
Number of frames processed per second | 0.25 | Specify the number of frames per second for the detection tool to process. The value must be in the range [0.016; 100] |
Trigger upon count | Greater than or equal to threshold value / Less than or equal to threshold value | Select when to generate a trigger. Crowd Estimation VA generates a trigger relative to the threshold set in the Number of alarm objects field |
Type | Crowd Estimation VA | Name of the detection tool type (non-editable field) |
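The interplay between Number of alarm objects, Number of frames processed per second, and Trigger upon count can be illustrated with a short sketch. This is a hypothetical model for explanation only, not Axxon One code; all names are illustrative.

```python
# Hypothetical model of the Crowd Estimation VA trigger logic.
# All identifiers are illustrative; this is not Axxon One code.
from dataclasses import dataclass

@dataclass
class CrowdEstimationConfig:
    number_of_alarm_objects: int = 1    # threshold, range [0; 10000]
    frames_per_second: float = 0.25     # processing rate, range [0.016; 100]
    trigger_when_greater: bool = True   # "Greater than or equal to threshold value"

def frame_interval_seconds(cfg: CrowdEstimationConfig) -> float:
    """Seconds between processed frames: 0.25 fps means one frame every 4 s."""
    return 1.0 / cfg.frames_per_second

def should_trigger(cfg: CrowdEstimationConfig, estimated_count: int) -> bool:
    """Generate a trigger relative to the configured threshold."""
    if cfg.trigger_when_greater:
        return estimated_count >= cfg.number_of_alarm_objects
    return estimated_count <= cfg.number_of_alarm_objects

cfg = CrowdEstimationConfig(number_of_alarm_objects=10)
print(frame_interval_seconds(cfg))  # 4.0
print(should_trigger(cfg, 12))      # True
print(should_trigger(cfg, 7))       # False
```

Note how a low processing rate such as 0.25 fps trades responsiveness for Server load: the crowd count is re-estimated only once every four seconds.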
If necessary, set detection areas in the preview window using the anchor points (in the same way as the excluded areas of the Scene analytics detection tools, see Setting General Zones for Scene analytics detection tools). By default, the entire frame is the detection area.
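To illustrate how a polygonal detection area restricts counting to part of the frame, the sketch below filters detections with a standard ray-casting point-in-polygon test. This is a generic illustration under assumed normalized frame coordinates, not a description of Axxon One's internal implementation.

```python
# Illustrative ray-casting point-in-polygon test for a detection area.
# Not Axxon One code; coordinates are assumed to be normalized (0..1).

def point_in_polygon(x, y, polygon):
    """Return True if point (x, y) lies inside the polygon of anchor points."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Count crossings of a horizontal ray from (x, y) with each edge.
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# By default the whole frame is the detection area; here counting is
# restricted to the left half of the frame.
area = [(0.0, 0.0), (0.5, 0.0), (0.5, 1.0), (0.0, 1.0)]
detections = [(0.2, 0.3), (0.8, 0.5), (0.4, 0.9)]
count = sum(point_in_polygon(x, y, area) for (x, y) in detections)
print(count)  # 2
```

Only objects whose reference point falls inside the configured area would contribute to the count compared against the alarm threshold.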
To save the parameters of the detection tool, click the Apply button.
To cancel the changes, click the Cancel button.
You can display the sensor and the number of objects in the monitored area in the Surveillance window on the layout (see Displaying the number of detected objects).