Configuring the detection tool
To configure Neurocounter, do the following:
- Go to the Detection Tools tab.
- Below the required camera, click Create… → Category: Retail → Neurocounter.
By default, the detection tool is enabled and set to count the number of objects in a specified area using a neural network.
If necessary, you can change the detection tool parameters. The list of parameters is given in the table:
Parameter | Value | Description |
---|---|---|
Object features | | |
Record mask to archive | Yes<br>No | By default, the recording of the mask to the archive is disabled. To record the sensitivity scale of the detection tool to the archive (see Displaying information from a detection tool (mask)), select the Yes value |
Video stream | Main stream | If the camera supports multistreaming, select the stream for which detection is needed |
Other | | |
Enable | Yes<br>No | By default, the detection tool is enabled. To disable it, select the No value |
Name | Neurocounter | Enter the detection tool name or leave the default name |
Decoder mode | Auto<br>CPU<br>GPU<br>HuaweiNPU | Select a processing resource for decoding video streams. When you select GPU, a stand-alone graphics card takes priority (decoding with NVIDIA NVDEC chips). If there is no appropriate GPU, decoding uses the Intel Quick Sync Video technology. Otherwise, the CPU is used for decoding |
Number of frames processed per second | 1 | Specify the number of frames per second for the detection tool to process. The value must be within the allowed range. Note: the default values (three measurements in a row and 1 FPS) mean that Neurocounter analyzes one frame per second; if Neurocounter detects the specified number of objects (or more) on three frames, an event from the detection tool is generated |
Type | Neurocounter | Name of the detection tool type (non-editable field) |
Advanced settings | | |
Detected objects | Yes<br>No | By default, detected objects aren't highlighted in the preview window. If you want to highlight detected objects, select the Yes value |
Neural network file | | If you use a custom neural network, select the corresponding file. Note: to train your neural network, contact AxxonSoft (see Data collection requirements for neural network training). A neural network trained for a particular scene detects only objects of a certain type (for example, a person, a cyclist, a motorcyclist, and so on). If the neural network file is not specified, the default file is used; it is selected automatically depending on the selected object type (Object type) and the selected processor for the neural network operation (Mode). If you use a custom neural network, enter the path to the file; the selected object type is ignored in this case. To ensure the correct operation of the neural network on Linux OS, the corresponding file must be located in the /opt/AxxonSoft/DetectorPack/NeuroSDK directory |
Number of measurements in a row to trigger detection | 3 | Specify the minimum number of frames on which the detection tool must detect a violation for the detection tool to trigger. The value must be in the range [1, 20] |
Object class | | If necessary, specify the class of the detected object. If you want to display tracks of several classes, specify them separated by a comma and a space, for example: 1, 10. The numerical values of classes for the embedded neural networks are: 1 (Human/Human top view), 10 (Vehicle). If you leave the field blank, the tracks of all classes available in the neural network are displayed (Object type, Neural network file). If you specify a class/classes from the neural network, only the tracks of the specified class/classes are displayed (Object type, Neural network file). If you specify both a class/classes from the neural network and a class/classes missing from the neural network, only the tracks of the class/classes from the neural network are displayed (Object type, Neural network file). If you specify only a class/classes missing from the neural network, the tracks of all classes available in the neural network are displayed (Object type, Neural network file); starting with Detector Pack 3.10.2, no tracks are displayed in this case |
Scanning window | Yes<br>No | If detection of small objects or objects far away from the camera is ineffective, you can use the scanning mode. The scanning mode doesn't provide absolute detection accuracy, but it can improve detection performance. To enable the scanning mode, select the Yes value (see Configuring the Scanning mode) |
Scanning window height | 0 | The height and width of the scanning window are determined by the actual frame size and the required number of windows. For example, if the real frame size is 1920×1080 pixels and you want to divide the frame into four equal windows, set the scanning window width to 960 pixels and the height to 540 pixels (see the sizing sketch after this table) |
Scanning window step height | 0 | The scanning step determines the relative offset of the windows. If the step is equal to the height and width of the scanning window respectively, the windows line up one after another. Reducing the height or width of the scanning step increases the number of windows, because they overlap each other with an offset; this improves detection accuracy but also increases the CPU load. Note: the height and width of the scanning step must not be greater than the height and width of the scanning window, otherwise the detection tool will not operate |
Scanning window step width | 0 | |
Scanning window width | 0 | The height and width of the scanning window are determined by the actual frame size and the required number of windows. For example, if the real frame size is 1920×1080 pixels and you want to divide the frame into four equal windows, set the scanning window width to 960 pixels and the height to 540 pixels |
Basic settings | | |
Detection threshold | 30 | Specify the detection threshold for objects in percent. If the recognition probability falls below the specified value, the data is ignored. The higher the value, the higher the accuracy, but some events from the detection tool may not be taken into account. The value must be in the range [0.05, 100] |
Mode | CPU<br>Nvidia GPU 0<br>Nvidia GPU 1<br>Nvidia GPU 2<br>Nvidia GPU 3<br>Intel NCS (not supported)<br>Intel HDDL (not supported)<br>Intel GPU<br>Huawei NPU | Select the processor for the neural network operation: CPU, one of the NVIDIA GPUs, or one of the Intel GPUs (see Selecting Nvidia GPU when configuring detection tools). Note: if you specify a processing resource other than the CPU, that device carries most of the computing load, but the CPU is also used to run the detection tool. It may take several minutes to launch the algorithm on an NVIDIA GPU after you apply the settings; you can use caching to speed up future launches. Starting with Detector Pack 3.11, Intel HDDL and Intel NCS aren't supported |
Number of alarm objects | 5 | Specify the number of objects at which an event occurs. The value must be in the range [0, 100] |
Object type | Person<br>Person (top-down view)<br>Vehicle<br>Person and vehicle (Nano): low accuracy, low processor load<br>Person and vehicle (Medium): medium accuracy, medium processor load<br>Person and vehicle (Large): high accuracy, high processor load | Select the type of object to count |
Trigger to | Greater than or equal to threshold value<br>Less than or equal to threshold value<br>Change in readings | Select when you want to generate an event. The detection tool generates events based on the threshold value set in the Number of alarm objects field |
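The scanning window sizes in the table above follow simple arithmetic: divide the frame dimensions by the desired number of windows per axis, and keep the step no larger than the window size. The sketch below illustrates this calculation with a hypothetical helper function; it is not part of the product, only a way to double-check the numbers you enter.

```python
# Hypothetical helper (illustration only): compute scanning window size
# and step for tiling a frame into cols x rows windows.

def scanning_windows(frame_w, frame_h, cols, rows, overlap=0.0):
    """Return (window_w, window_h, step_w, step_h).

    overlap is the fraction by which neighbouring windows overlap;
    overlap=0.0 reproduces the documentation example (windows line up
    one after another), larger values shrink the step and add windows.
    """
    window_w = frame_w // cols
    window_h = frame_h // rows
    # The step must never exceed the window size, otherwise the
    # detection tool will not operate (see the Note in the table).
    step_w = max(1, int(window_w * (1.0 - overlap)))
    step_h = max(1, int(window_h * (1.0 - overlap)))
    return window_w, window_h, step_w, step_h

# Documentation example: a 1920x1080 frame split into four equal windows.
print(scanning_windows(1920, 1080, cols=2, rows=2))                # (960, 540, 960, 540)
# 20% overlap: smaller step, more windows, better accuracy, higher CPU load.
print(scanning_windows(1920, 1080, cols=2, rows=2, overlap=0.2))   # (960, 540, 768, 432)
```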
By default, the entire frame is the detection area. In the preview window, you can specify the detection areas using the anchor points (see Configuring the Detection Zone).
Info: for convenience of configuration, you can "freeze" the frame by clicking the corresponding button; to cancel the action, click this button again. The detection area is displayed by default; to hide it, click the corresponding button, and click it again to cancel the action.
To save the parameters of the detection tool, click the Apply button. To cancel the changes, click the Cancel button.
It is possible to display the sensor and the number of objects in the monitored area in the Surveillance window on the layout (see Displaying the number of detected objects).
Example of configuring Neurocounter for solving typical tasks
By default, Neurocounter is set to detect objects moving at a speed of less than 0.3 m/s:
Parameter | Value |
---|---|
Other | |
Number of frames processed per second | 1 |
Advanced settings | |
Number of measurements in a row to trigger detection | 3 |
Neural network file | Path to the *.ann neural network file. You can also use the default neural network; in this case, leave this field blank |
Basic settings | |
Detection threshold | 30 |
To solve tasks in which the object speed differs from 0.3 m/s, increase the Number of frames processed per second and/or decrease the Number of measurements in a row to trigger detection. Select the values empirically depending on the task conditions; a rough estimation sketch is given below.
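As a rough aid for choosing these two parameters, note that the tool needs about Number of measurements in a row to trigger detection divided by Number of frames processed per second seconds of continuous presence before it can generate an event. The sketch below turns this into a back-of-the-envelope estimate; the relationship and the helper functions are assumptions made for illustration, not an official formula, so use the result only as a starting point for empirical tuning.

```python
# Back-of-the-envelope estimate (assumption, not an official formula):
# an object must remain countable in the detection area for roughly
# measurements_in_a_row / fps seconds before an event can be generated.

def min_dwell_time(fps, measurements_in_a_row):
    """Seconds an object must stay in the area before the tool can trigger."""
    return measurements_in_a_row / fps

def min_fps(measurements_in_a_row, object_speed, zone_length):
    """FPS needed so an object crossing a zone of zone_length metres
    at object_speed m/s is observed on enough consecutive frames."""
    dwell = zone_length / object_speed   # time the object spends in the zone
    return measurements_in_a_row / dwell

# Defaults: 1 FPS and 3 measurements => the object must stay ~3 s in the area.
print(min_dwell_time(fps=1, measurements_in_a_row=3))                        # 3.0

# A faster object (1.5 m/s) crossing a 3 m zone needs a higher FPS
# or fewer measurements in a row.
print(min_fps(measurements_in_a_row=3, object_speed=1.5, zone_length=3.0))   # 1.5
```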