The Neurocounter module can be configured on the settings panel of the Neurocounter object, which is created on the basis of the Camera object on the Hardware tab of the System settings dialog window.
Configuring the detection tool
The Neurocounter module is configured as follows:
- Go to the Neurocounter object settings panel.
- Set the Show objects on image checkbox (1) if it is necessary to highlight the detected objects with a frame in the debug window (see Start the debug window).
- In the Number of frames for analysis and output field (2), specify the number of frames to be processed to determine the number of objects on them.
- In the Frames processed per second [0.016, 100] field (3), set the number of frames processed per second by the detection tool.
- From the Send event drop-down list (4), select the condition on which an event with the number of detected objects will be generated:
- If threshold is exceeded - is triggered if the number of detected objects in the image is greater than or equal to the value specified in the Alarm objects count field.
- If threshold is not reached - is triggered if the number of detected objects in the image is less than or equal to the value specified in the Alarm objects count field.
- On count change - is triggered every time the number of detected objects changes.
- By period - is triggered by a time period:
- In the Event periodicity field (5), set the time after which the event with the number of detected objects will be generated.
- From the Time interval drop-down list (6), select the time unit of the counter period: seconds, minutes, hours, or days.
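The four event-generation conditions above can be summarized in a short sketch. This is a hypothetical illustration with invented names, not part of the product; the module implements this logic internally.

```python
# Hypothetical sketch of the Neurocounter event-generation conditions.
# Function and condition names are invented for illustration only.

def should_send_event(condition, count, prev_count, threshold,
                      elapsed_s=None, period_s=None):
    """Return True if an event with the detected-object count is generated."""
    if condition == "if_threshold_exceeded":
        return count >= threshold            # greater than or equal to the threshold
    if condition == "if_threshold_not_reached":
        return count <= threshold            # less than or equal to the threshold
    if condition == "on_count_change":
        return count != prev_count           # every change in the object count
    if condition == "by_period":
        return elapsed_s is not None and elapsed_s >= period_s
    raise ValueError(f"unknown condition: {condition}")
```

Note that the two threshold conditions are inclusive, so with a threshold of 5, a count of exactly 5 satisfies both of them.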
- In the Recognition threshold [0, 100] field, enter the neural counter sensitivity, an integer value from 0 to 100.
Note: The neural counter sensitivity is determined experimentally. The lower the sensitivity, the more false triggerings there might be. The higher the sensitivity, the fewer false triggerings there might be; however, some useful tracks might be skipped.
- In the Time in seconds between processed frames field (7), specify the time interval in seconds between the analyzed frames. This parameter is related to the Number of frames for analysis and output parameter.
Note: The default values (Number of frames for analysis and output: 3 frames and Time in seconds between processed frames: 1 second) mean that the neural counter will analyze 3 frames, one frame per second. After processing 3 frames, depending on the condition of the event generation, either an event with the number of detected objects will be generated, or processing of the next 3 frames will start.
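The relationship between these two parameters can be illustrated with a small sketch (hypothetical helpers with invented names, for clarity only): the counter samples N frames, one every T seconds, and then evaluates its event condition over that batch.

```python
# Hypothetical illustration of how "Number of frames for analysis and output"
# (frames_n) and "Time in seconds between processed frames" (interval_s)
# interact. Names are invented for this sketch.

def analysis_schedule(frames_n, interval_s):
    """Timestamps (in seconds) of the frames analyzed in one batch."""
    return [i * interval_s for i in range(frames_n)]

def batch_duration(frames_n, interval_s):
    """Total time covered by one analysis batch."""
    return (frames_n - 1) * interval_s

# With the default values (3 frames, 1 second apart),
# the batch covers frames at 0.0, 1.0 and 2.0 seconds:
print(analysis_schedule(3, 1.0))
```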
- In the Alarm objects count field (8), set the threshold number of detected objects in the area of interest. It is used in the If threshold is exceeded and If threshold is not reached conditions.
- If a unique neural network is prepared for use, in the Tracking model field, click the button (9), and select the file in the standard Windows Explorer window that opens. If the field is left blank, the default neural networks will be used for detection. They are selected automatically depending on the selected object type (11) and device (10).
- If the path to the neural network was not specified in the previous step, from the Device drop-down list (10), select the device on which the neural network will operate. Auto: the device is selected automatically, with GPU given the highest priority, followed by Intel GPU, then CPU.
- If the path to the neural network was not specified in the previous step, from the Object type drop-down list (11), select the object type:
- Human—the camera is directed at a person at an angle of 100-160°.
- Human (top-down view)—the camera is directed at a person from above at a slight angle.
- Vehicle—the camera is directed at a vehicle at an angle of 100-160°.
- Person and vehicle (Nano)—person and vehicle recognition, small neural network size.
- Person and vehicle (Medium)—person and vehicle recognition, medium neural network size.
- Person and vehicle (Large)—person and vehicle recognition, large neural network size.
Note: Neural networks are named taking into account the objects they detect. The names can include the size of the neural network (Nano, Medium, Large), which indicates the amount of consumed resources. The larger the neural network, the higher the accuracy of object recognition.
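The Auto option in the Device drop-down list picks hardware by a simple priority order, which can be sketched as follows (a hypothetical helper with invented names, not the product's actual code):

```python
# Hypothetical sketch of the "Auto" device selection described above:
# GPU has the highest priority, then Intel GPU, then CPU.

PRIORITY = ["GPU", "Intel GPU", "CPU"]

def auto_select_device(available):
    """Pick the highest-priority device from the set of available ones."""
    for device in PRIORITY:
        if device in available:
            return device
    raise RuntimeError("no supported device available")

print(auto_select_device({"Intel GPU", "CPU"}))  # Intel GPU
```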
Selecting the area of interest
- Click the Setup button (12). The Detection settings window will open.
- Click the Stop video button (1) to capture the video image.
- Click the Area of interest button (2).
- On the captured video image (3), set the anchor points of the area you want to analyze by sequentially clicking the left mouse button. Only one area can be added; if you try to add a second area, the first one will be deleted. After adding an area, the rest of the video image is darkened.
- Click the OK button (4).
- To apply the changes to the Neurocounter module, click the Apply button (13).
The configuration of the Neurocounter module is complete.