To configure the Abandoned object detector VI, do the following:
- Go to the Detectors tab.
- Below the required camera, click Create… → Category: Trackers → Abandoned object detector VI.

By default, the detector is enabled and set to detect abandoned objects.
If necessary, you can change the detector parameters. The list of parameters is given in the table:
Parameter | Value | Description
---|---|---
Object features | |
Record objects tracking | Yes / No | The metadata of the video stream is recorded to the database by default. To disable the parameter, select the No value
Video stream | Main stream / Other | If the camera supports multistreaming, select the stream for which detection is needed
Enable | Yes / No | By default, the detector is enabled. To disable, select the No value
Name | Abandoned object detector VI | Enter the detector name or leave the default name
Background accumulation | Yes / No | By default, scene background accumulation is enabled. To disable, select the No value
Background override | Yes / No | By default, background override is disabled. Background override is used for PTZ cameras. For fixed cameras, the parameter must be disabled
Background training period (sec) (starting with Detector Pack 3.14) | 25 | Specify the time period in seconds during which the background is analyzed. The value must be in the range [-1, 10000000]
Decoder mode | Auto / CPU / GPU / HuaweiNPU | Select a processing resource for decoding video streams. When you select a GPU, a stand-alone graphics card takes priority (when decoding with NVIDIA NVDEC chips). If there is no appropriate GPU, the decoding will use the Intel Quick Sync Video technology. Otherwise, CPU resources are used for decoding
Detection area sensitivity | Medium level / Low level | Select the detection area sensitivity. The Medium level of sensitivity is selected by default. Additional neural network processing is applied in this area
Detection mode | CPU / Nvidia GPU 0 / Nvidia GPU 1 / Nvidia GPU 2 / Nvidia GPU 3 | Select the processor for the detector operation (see Selecting Nvidia GPU when configuring detectors)
Detection sensitivity (%) | 70 | Specify the sensitivity level of the detector as a percentage. The value must be in the range [0, 100]
Error tolerance (starting with Detector Pack 3.14) | 128 | Specify the tolerance level. The lower the value, the higher the recognition accuracy, but the longer the processing time. The value must be in the range [32, 256]
Event intersection time (sec) | 0.01 | Specify the time interval in seconds during which events aren't processed after successful detection when the event results match. The parameter determines the level of intersection between new and previous events. If the specified value is exceeded, it is considered that these events refer to the same real object. The parameter is necessary to reduce the total number of events, since the analysis is performed continuously and the duration of real events can be several seconds. You must select the required value empirically. The value must be in the range [0, 1]
False detection filtering algorithm (starting with Detector Pack 3.14) | Accurate / Quick | Select the false detection filtering algorithm
False detection filtering level (starting with Detector Pack 3.14) | 8 | Specify the level of filtering for monotonous and single-color triggers caused by changes in lighting. The value must be in the range [0, 100]
Foreground sensitivity | 70 | Specify the sensitivity of primary detection of objects in the foreground. The value must be in the range [0, 100]
Frame picking period (starting with Detector Pack 3.14) | 67 | Specify the time period in seconds between selection of frames for analysis. The value must be in the range [1, 10000000]
Frame size change | 640 | Specify the size to which the video is compressed before analysis. The value must be in the range [480, 960]
Highlight and shadow filter | 0.15 | Specify the value of highlight and shadow filtering. The value must be in the range [0, 10]
Ignore time (sec) | 0 | Specify the time interval in seconds during which events aren't processed after successful detection. The parameter is necessary to reduce the total number of events, since the analysis is performed continuously and the duration of real events can be several seconds. You must select the required value empirically. The value must be in the range [0, 9999]
Maximum size of detected objects | 100 | Specify the maximum size of an object as a percentage to get an event from the detector. If objects are larger than specified, there are no events from the detector. The value must be in the range [0, 100]
Minimum size of detected objects | 0 | Specify the minimum size of an object as a percentage to get an event from the detector. If objects are smaller than specified, there are no events from the detector. The value must be in the range [0, 100]
Monotony filter | 0.151 | Specify the value of monotonous/single-color events filtering (for example, spots on the floor). The value must be in the range [0, 10]
Number of execution threads | 4 | Specify the number of execution threads. The value must be in the range [0, 100]
Number of frames processed per second | 12 | Specify the number of frames processed per second. The value must be in the range [0.016, 100]
Object filter | Yes / No | By default, the neural network object filter is disabled. To enable, select the Yes value
People flow intensity (%) | 99 | Specify the value as a percentage that is responsible for the detector's ability to detect objects when the camera FOV is heavily overlapped by people walking by. The value must be in the range [0, 100]
Person filter | Yes / No | By default, objects near people aren't ignored. To ignore objects near people, select the Yes value
Saving background | Yes / No | By default, saving the scene background is enabled. To disable, select the No value
Scene training period (sec) (starting with Detector Pack 3.14) | -1 | Specify the time period during which the scene is analyzed. With the -1 value, the system automatically selects the time period. The value must be in the range [-1, 10000000]
Sensitivity of slight changes in background (%) | 95 | Controls the sensitivity to object contrast. Specify the value as a percentage that is responsible for the detector's ability to distinguish between objects that are less visible (that blend into the background). The higher the value, the less noticeable the object that can be detected. The value must be in the range [0, 100]
State saving interval | 0 | Specify the state saving interval in seconds. The value must be in the range [0, 9999]
Time for detection (sec) | 120 | Specify the time in seconds after which the object is considered abandoned. The value must be in the range [0, 9999]
Type | Abandoned object detector VI | Name of the detector's type (non-editable field)
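
Most numeric parameters above have a documented valid range. Purely as an illustration of those limits (the detector itself is configured only through the interface described above), the following minimal Python sketch checks a set of candidate values against the documented ranges. The parameter names, defaults, and limits are taken from the table; the `validate` helper itself is a hypothetical aid, not part of the product.

```python
# Illustrative only: the detector is configured in the product UI.
# Defaults and ranges are copied from the table in this section;
# the validation helper is a hypothetical aid, not a product API.

RANGES = {
    # name: (minimum, maximum, documented default)
    "Detection sensitivity (%)": (0, 100, 70),
    "Error tolerance": (32, 256, 128),
    "Event intersection time (sec)": (0, 1, 0.01),
    "Ignore time (sec)": (0, 9999, 0),
    "Maximum size of detected objects": (0, 100, 100),
    "Minimum size of detected objects": (0, 100, 0),
    "Number of frames processed per second": (0.016, 100, 12),
    "Time for detection (sec)": (0, 9999, 120),
}

def validate(settings: dict) -> list[str]:
    """Return messages for values that fall outside their documented range."""
    problems = []
    for name, value in settings.items():
        low, high, _default = RANGES[name]
        if not (low <= value <= high):
            problems.append(f"{name}={value} is outside [{low}, {high}]")
    return problems

if __name__ == "__main__":
    # Example: one value in range, one out of range.
    print(validate({
        "Detection sensitivity (%)": 70,
        "Time for detection (sec)": 100000,
    }))
```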
In the preview window, you can specify the detection area in which the objects must be detected. You can set this area by using the anchor points (see Configuring a detection area).
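
The detector evaluates object positions against the configured area automatically; the sketch below only illustrates the underlying idea of testing a point against a polygon defined by anchor points, using a standard ray-casting check. The `point_in_area` function and the normalized coordinates are assumptions for illustration, not the detector's actual implementation.

```python
# Illustrative sketch: how a detection area defined by anchor points can be
# tested against an object's position (ray-casting point-in-polygon check).
# This is not the detector's actual implementation.

def point_in_area(x: float, y: float, anchors: list[tuple[float, float]]) -> bool:
    """Return True if (x, y) lies inside the polygon given by anchor points."""
    inside = False
    n = len(anchors)
    for i in range(n):
        x1, y1 = anchors[i]
        x2, y2 = anchors[(i + 1) % n]
        # Count crossings of a horizontal ray going right from (x, y).
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Example: a rectangular area covering the left half of the frame
# (normalized coordinates), and an object centered at (0.3, 0.5).
area = [(0.0, 0.0), (0.5, 0.0), (0.5, 1.0), (0.0, 1.0)]
print(point_in_area(0.3, 0.5, area))  # True
```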
To save the parameters of the detector, click the Apply button. To cancel the changes, click the Cancel button.
Configuring the Abandoned object detector VI is complete.