Video stream and scene requirements for the Neural classifier
To configure the Object Presence Detection tool, do the following:

By default, the detection tool is enabled and set to detect objects in the frame. If necessary, you can change the detection tool parameters. The list of parameters is given in the table below:
| Parameter | Value | Description |
|---|---|---|
| **Object features** | | |
| Record mask to archive | Yes<br>No | By default, the sensitivity scale of the detection tool is recorded to the archive (see Displaying information from a detection tool (mask)). To disable the parameter, select the No value |
| Video stream | Main stream | If the camera supports multistreaming, select the stream for which detection is needed. Selecting a low-quality video stream reduces the load on the Server |
| **Other** | | |
| Enable | Yes<br>No | The detection tool is enabled by default. To disable the detection tool, select the No value |
| Name | Object Presence Detection | Enter the detection tool name or leave the default name |
| Decoder mode | Auto<br>CPU<br>GPU<br>HuaweiNPU | Select a processing resource for decoding video streams. When you select GPU, a stand-alone graphics card takes priority (when decoding with NVIDIA NVDEC chips). If there is no appropriate GPU, the decoding uses the Intel Quick Sync Video technology. Otherwise, CPU resources are used for decoding |
| Number of frames processed per second | 0.1 | Specify the number of frames per second that the detection tool processes. The value must be in the range [0.016; 100] |
| Selected object classes | | If necessary, specify the classes of detected objects. To display tracks of several classes, separate them with a comma and a space, for example: 1, 10 |
| Type | Object Presence Detection | Name of the detection tool type (non-editable field) |
| **Advanced settings** | | |
| Neural network file | | Specify the path to the neural network file |
| Number of measurements in a row to trigger detection | 5 | Specify the minimum number of consecutive frames on which the detection tool must detect the object to generate an event. The value must be in the range [5; 20] |
| Scanning mode | Yes<br>No | The parameter is disabled by default. To detect objects without changing the frame size, select the Yes value. To work in the scanning mode, the neural network must support the scanning mode |
| **Basic settings** | | |
| Mode | CPU<br>Nvidia GPU 0<br>Nvidia GPU 1<br>Nvidia GPU 2<br>Nvidia GPU 3<br>Intel GPU<br>Huawei NPU | Select a processor for the neural network operation (see Hardware requirements for neural analytics operation, Selecting Nvidia GPU when configuring detectors) |
| Sensitivity | 33 | Specify the sensitivity of the detection tool empirically. The value must be in the range [1; 99]. The preview window displays the sensitivity scale of the detection tool that corresponds to this parameter. If the scale is green, the object isn't detected. If the scale is yellow, the object is detected, but not confidently enough to generate an event. If the scale is red, the object is detected, and the detection tool generates an event if the scale stays red throughout the sampling period (50 seconds by default) |
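The Number of frames processed per second and Number of measurements in a row to trigger detection parameters together determine roughly how long an object must stay in the frame before an event is generated. The sketch below is illustrative only (the function name and logic are assumptions for the estimate, not part of the product):

```python
def seconds_until_event(fps: float, measurements_in_a_row: int) -> float:
    """Estimate how long an object must remain detected before an event fires.

    fps: the "Number of frames processed per second" setting, range [0.016; 100].
    measurements_in_a_row: the "Number of measurements in a row to trigger
    detection" setting, range [5; 20].
    """
    if not (0.016 <= fps <= 100):
        raise ValueError("fps must be in the range [0.016; 100]")
    if not (5 <= measurements_in_a_row <= 20):
        raise ValueError("measurements must be in the range [5; 20]")
    # With N consecutive detections required and one frame processed
    # every 1/fps seconds, the object must be present for about N/fps seconds.
    return measurements_in_a_row / fps

# With the default values (0.1 frames per second, 5 measurements), an object
# must stay in the frame for roughly 50 seconds before an event is generated.
print(seconds_until_event(0.1, 5))  # 50.0
```

This explains why a very low frame rate combined with a high measurement count can make the detection tool appear unresponsive to short-lived objects.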
By default, the entire frame is the detection area. In the preview window, you can specify the detection areas using the anchor points (see Configuring a detection area).

Select the detection area shape (polygon or rectangle) experimentally: for some neural networks, the quality of detection is better with a rectangle, for others, with a polygon.
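A detection area defined by anchor points is a polygon, and deciding whether a detected object falls inside it comes down to a standard point-in-polygon test. The sketch below shows the common ray-casting approach; it is an illustration of the concept, not the product's actual implementation:

```python
from typing import List, Tuple

Point = Tuple[float, float]

def point_in_polygon(point: Point, polygon: List[Point]) -> bool:
    """Ray-casting test: a point is inside a polygon if a horizontal ray
    from the point crosses the polygon's edges an odd number of times."""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does this edge straddle the ray's y-level,
        # and does the crossing lie to the right of the point?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# A rectangular detection area covering the left half of a normalized frame.
area = [(0.0, 0.0), (0.5, 0.0), (0.5, 1.0), (0.0, 1.0)]
print(point_in_polygon((0.25, 0.5), area))  # True
print(point_in_polygon((0.75, 0.5), area))  # False
```

The same test works for any simple polygon, which is why an arbitrary anchor-point area is no harder to evaluate than a rectangle.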
To save the parameters of the detection tool, click the Apply button. To cancel the changes, click the Cancel button.