Before configuring the detector, see the following requirements and licensing topics:
- Hardware requirements for neural analytics operation
- Video stream and scene requirements for the Abandoned object detector VI
- Image requirements for the Abandoned object detector VI
- Licensing of the software module for the Abandoned object detector VI in Windows OS
- Licensing of the software module for the Abandoned object detector VI in Linux OS
To configure the Abandoned object detector VI, do the following:
- Go to the Detectors tab.
- Below the required camera, click Create… → Category: Trackers → Abandoned object detector VI.
By default, the detector is enabled and set to detect abandoned objects.
If necessary, you can change the detector parameters. They are listed in the table below:
Parameter | Value | Description |
---|---|---|
Object features | | |
Video stream | Main stream | If the camera supports multistreaming, select the stream for which detection is needed |
Other | | |
Enable | Yes<br>No | By default, the detector is enabled. To disable, select the No value |
Name | Abandoned object detector VI | Enter the detector name or leave the default name |
Background accumulation | Yes<br>No | By default, scene background accumulation is enabled. To disable, select the No value |
Background training period (sec) (starting with Detector Pack 3.14) | 25 | Specify the time period in seconds during which the background is analyzed. The value must be in the range [-1, 10000000] |
Decoder mode | Auto<br>CPU<br>GPU<br>HuaweiNPU | Select a processing resource for decoding video streams. When you select a GPU, a stand-alone graphics card takes priority (when decoding with NVIDIA NVDEC chips). If there is no appropriate GPU, the decoding will use the Intel Quick Sync Video technology. Otherwise, CPU resources are used for decoding |
Detection area sensitivity | Medium level<br>Low level | Select the detection area sensitivity. The Medium level of sensitivity is selected by default. Additional neural network processing is applied in this area |
Detection mode | CPU<br>Nvidia GPU 0<br>Nvidia GPU 1<br>Nvidia GPU 2<br>Nvidia GPU 3 | Select the processor for the detector operation (see Selecting Nvidia GPU when configuring detectors) |
Detection sensitivity (%) | 70 | Specify the sensitivity level of the detector as a percentage. The value must be in the range [0, 100] |
Error tolerance (starting with Detector Pack 3.14) | 128 | Specify the tolerance level. The lower the value, the higher the recognition accuracy, but the longer the processing time. The value must be in the range [32, 256] |
Event intersection time (sec) | 0.01 | Specify the time interval in seconds during which events aren't processed after successful detection when the event results match. The parameter determines the level of intersection between new and previous events. If the specified value is exceeded, it is considered that these events refer to the same real object. The parameter is necessary to reduce the total number of events, since the analysis is performed continuously and the duration of real events can be several seconds. You must select the required value empirically. The value must be in the range [0, 1] |
False detection filtering algorithm (starting with Detector Pack 3.14) | Accurate<br>Quick | Select the false detection filtering algorithm |
False detection filtering level (starting with Detector Pack 3.14) | 8 | Specify the level of filtering for monotonous and single-color triggers caused by changes in lighting. The value must be in the range [0, 100] |
Foreground sensitivity | 70 | Specify the sensitivity of primary detection of objects in the foreground. The value must be in the range [0, 100] |
Frame picking period (starting with Detector Pack 3.14) | 67 | Specify the time period in seconds between selecting frames for analysis. The value must be in the range [1, 10000000] |
Frame size change | 640 | Specify the size to which the video is compressed before analysis. The value must be in the range [480, 960] |
Ignore time (sec) | 0 | Specify the time interval in seconds during which events aren't processed after successful detection. The parameter is necessary to reduce the total number of events, since the analysis is performed continuously and the duration of real events can be several seconds. You must select the required value empirically. The value must be in the range [0, 9999] |
Maximum size of detected objects | 100 | Specify the maximum size of an object as a percentage to get an event from the detector. If objects are larger than specified, there are no events from the detector. The value must be in the range [0, 100] |
Minimum size of detected objects | 0 | Specify the minimum size of an object as a percentage to get an event from the detector. If objects are smaller than specified, there are no events from the detector. The value must be in the range [0, 100] |
Monotony filter | 1 | Specify the value of monotonous/single-color events filtering (for example, spots on the floor). The value must be in the range [0, 10] |
Number of execution threads | 4 | Specify the number of execution threads. The value must be in the range [0, 100] |
Number of frames processed per second | 12 | Specify the number of frames processed per second. The value must be in the range [0.016, 100]. Attention! The detector operates optimally with the default value for most use cases. Changing this value may reduce the detector's performance quality. |
Object filter | Yes<br>No | By default, the neural network object filter is disabled. To enable, select the Yes value |
People flow intensity (%) | 99 | Specify, as a percentage, the detector's ability to detect objects when the camera FOV is heavily obstructed by people walking by. The value must be in the range [0, 100] |
Person filter | Yes<br>No | By default, objects near people aren't ignored. To ignore objects near people, select the Yes value |
Saving background | Yes<br>No | By default, saving the scene background is enabled. To disable, select the No value. Attention! Background files are automatically saved to the directory C:\ProgramData\VideoIntellect\. The size of this folder can quickly increase, potentially filling up all available space on the C drive. It isn't possible to set a limit on the maximum volume of this directory, so it's important to periodically check its contents and clear it if necessary (an example size check is given after this table) |
Scene training period (sec) (starting with Detector Pack 3.14) | -1 | Specify the time period during which the scene is analyzed. With the -1 value, the system automatically selects the time period. The value must be in the range [-1, 10000000] |
Sensitivity of slight changes in background (%) | 95 | Controls the sensitivity to object contrast. Specify, as a percentage, the detector's ability to distinguish objects that are less visible (that blend into the background). The higher the value, the less noticeable an object can be and still be detected. The value must be in the range [0, 100] |
State saving interval | 0 | Specify the state saving interval in seconds. The value must be in the range [0, 9999] |
Time for detection (sec) | 120 | Specify the time in seconds after which the object is considered abandoned. The value must be in the range [0, 9999] |
Type | Abandoned object detector VI | Name of the detector's type (non-editable field) |
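The size and event-suppression parameters above (Minimum/Maximum size of detected objects, Ignore time) can be easier to reason about with a small sketch. The following Python snippet is an illustration only, not the product's algorithm; in particular, comparing the larger of the object's width/height percentages against the size range is an assumption made for the example:

```python
# Illustrative sketch only: it is NOT the product's internal algorithm.
# It shows how the "Ignore time" and "Minimum/Maximum size of detected objects"
# parameters from the table above can be understood: events are dropped while
# the ignore window is active, and objects outside the size range produce no events.

from dataclasses import dataclass

@dataclass
class DetectedObject:
    timestamp: float      # seconds
    width_pct: float      # object width as % of the analyzed frame, 0..100
    height_pct: float     # object height as % of the analyzed frame, 0..100

class EventFilter:
    def __init__(self, min_size_pct=0.0, max_size_pct=100.0, ignore_time_sec=0.0):
        self.min_size_pct = min_size_pct
        self.max_size_pct = max_size_pct
        self.ignore_time_sec = ignore_time_sec
        self._last_event_time = None

    def accept(self, obj: DetectedObject) -> bool:
        size = max(obj.width_pct, obj.height_pct)   # assumption: the larger dimension is compared
        if not (self.min_size_pct <= size <= self.max_size_pct):
            return False                            # object outside the configured size range
        if (self._last_event_time is not None
                and obj.timestamp - self._last_event_time < self.ignore_time_sec):
            return False                            # still inside the ignore window
        self._last_event_time = obj.timestamp
        return True

# Example: with the defaults Min=0, Max=100, Ignore time=0 every detection becomes an event.
f = EventFilter(min_size_pct=5, max_size_pct=80, ignore_time_sec=30)
print(f.accept(DetectedObject(timestamp=0.0, width_pct=10, height_pct=12)))   # True
print(f.accept(DetectedObject(timestamp=10.0, width_pct=10, height_pct=12)))  # False, within 30 s
```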
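Because background files are written to C:\ProgramData\VideoIntellect\ with no built-in size limit (see the Saving background row), it can help to check that directory periodically. A minimal sketch of such a check, assuming Python is available on the server; the 5 GB threshold is an arbitrary example value:

```python
# Hedged example: a simple size check for the background directory mentioned in the
# "Saving background" row. Adjust or replace with your own maintenance tooling.
import os

BACKGROUND_DIR = r"C:\ProgramData\VideoIntellect"   # path from the table above
LIMIT_GB = 5                                        # arbitrary example threshold

def dir_size_bytes(path: str) -> int:
    """Sum the sizes of all files under path (skips files that disappear mid-walk)."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            try:
                total += os.path.getsize(os.path.join(root, name))
            except OSError:
                pass
    return total

if __name__ == "__main__":
    size_gb = dir_size_bytes(BACKGROUND_DIR) / 1024 ** 3
    print(f"{BACKGROUND_DIR}: {size_gb:.2f} GB used")
    if size_gb > LIMIT_GB:
        print("Warning: consider clearing old background files to free disk space.")
```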
In the preview window, specify the area in which the objects must be detected. You can set this area by using the anchor points (see Configuring a detection area).
Note
- For convenience of configuration, you can "freeze" the frame. Click the button. To cancel the action, click this button again.
- The detection area is displayed by default. To hide it, click the button (see Configuring a detection area). To cancel the action, click this button again.
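Conceptually, the detection area set with anchor points is a polygon, and only objects inside it are reported. The following Python snippet is a generic point-in-polygon check given for illustration only; it is not Axxon One code, and the normalized coordinates are an assumption made for the example:

```python
# Conceptual sketch only: shows how a polygonal detection area defined by anchor
# points can be tested against an object's position. Not Axxon One code.

def point_in_polygon(x: float, y: float, polygon: list[tuple[float, float]]) -> bool:
    """Ray-casting test: returns True if (x, y) lies inside the polygon."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Count crossings of a horizontal ray going to the right from (x, y).
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Example: anchor points of a rectangular detection area in normalized coordinates.
area = [(0.1, 0.1), (0.9, 0.1), (0.9, 0.9), (0.1, 0.9)]
print(point_in_polygon(0.5, 0.5, area))   # True, object center is inside the area
print(point_in_polygon(0.95, 0.5, area))  # False, outside the area
```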
To save the parameters of the detector, click the Apply button. To cancel the changes, click the Cancel button.
Configuring the Abandoned object detector VI is complete.