Tip: see Video stream and scene requirements for the Crowd estimation VA.
To create the Crowd estimation VA detector, do the following:
- Go to the Detectors tab.
- Below the required camera, click Create… → Category: Retail → Crowd estimation VA.
By default, the detector is enabled and set to count crowds of people.
If necessary, you can change the detector parameters given in the table:
| Parameter | Value | Description |
|---|---|---|
| **Object features** | | |
| Record mask to archive | Yes<br>No | By default, recording of the mask to the archive is disabled. To record the sensitivity scale of the detector to the archive (see Displaying information from a detector (mask)), select the Yes value |
| Video stream | Main stream | If the camera supports multistreaming, select the stream for which detection is needed |
| **Other** | | |
| Enable | Yes<br>No | By default, the detector is enabled. To disable it, select the No value |
| Name | Crowd estimation VA | Enter the detector name or leave the default name |
| Decode key frames | Yes<br>No | By default, the parameter is enabled, and only key frames are decoded. To disable the parameter, select the No value in the corresponding field. Decoding only key frames reduces the load on the Server, but the detection quality decreases. We recommend enabling this parameter for "blind" Servers (without video image display) on which you want to perform detection. For the MJPEG codec, key frame decoding isn't relevant, as every frame is considered a key frame |
| Decoder mode | Auto<br>CPU<br>GPU<br>HuaweiNPU | Select a processing resource for decoding video. When you select GPU, a stand-alone graphics card takes priority (when decoding with Nvidia NVDEC chips). If there is no appropriate GPU, decoding will use the Intel Quick Sync Video technology. Otherwise, CPU resources will be used for decoding |
| Mode | CPU<br>Nvidia GPU 0<br>Nvidia GPU 1<br>Nvidia GPU 2<br>Nvidia GPU 3 | Select a processor for the detector operation (see Selecting Nvidia GPU when configuring detectors) |
| Number of alarm objects | 1 | Specify the number of objects at which an event occurs. The value must be in the range [0, 10000] |
| Number of frames processed per second | 0.017 | Specify the number of frames per second for the detector to process. The value must be in the range [0.016, 100] |
| Trigger upon count | Greater than or equal to threshold value<br>Less than or equal to threshold value<br>Change in readings | Select when to generate events. The Crowd estimation VA generates events relative to the threshold value set in the Number of alarm objects field |
| Type | Crowd estimation VA | Name of the detector type (non-editable field) |

Note: The Number of frames processed per second and Decode key frames parameters are interrelated. If a local Client isn't connected to the Server, special rules apply to remote Clients. If a local Client is connected to the Server, the detector always works according to the set period; after the local Client disconnects, those rules apply again.
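The Number of frames processed per second value is simply the reciprocal of the interval between processed frames. The helper below is an illustrative sketch (not part of the product) showing how the default value of 0.017 maps to roughly one processed frame per minute:

```python
def frame_period_seconds(fps: float) -> float:
    """Return the interval between processed frames for a given rate.

    The detector accepts values in the range [0.016, 100] for the
    Number of frames processed per second parameter.
    """
    if not 0.016 <= fps <= 100:
        raise ValueError("rate must be in [0.016, 100]")
    return 1.0 / fps

# The default of 0.017 fps means one frame is processed
# roughly every 59 seconds.
print(round(frame_period_seconds(0.017)))  # → 59
```

Raising the rate increases detection responsiveness at the cost of extra Server load, which is why the default is deliberately low for crowd counting.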
By default, the entire frame is the detection area. If necessary, you can specify detection areas in the preview window using anchor points, in the same way as the excluded areas of the Scene analytics detectors (see Configuring a detection area).
To save the detector parameters, click the Apply button. To cancel the changes, click the Cancel button.
You can display the sensor and the number of objects in the monitored area in the Surveillance window on the layout (see Displaying the number of detected objects).