The Neurocounter module can be configured on the settings panel of the Neurocounter object created on the basis of the Camera object on the Hardware tab of the System settings dialog window.
Configuring the detection tool
The Neurocounter module is configured as follows:
- Go to the Neurocounter object settings panel.
- Set the Show objects on image checkbox (1) if the detected objects should be framed on the image in the debug window (see Start the debug window).
- In the Number of frames for analysis and output field (2), specify the number of frames to be processed to determine the number of objects in them.
- In the Frames processed per second [0.016, 100] field (3), set the number of frames per second that the detection tool will process.
- From the Send event drop-down list (4), select the condition by which an event with the number of detected objects will be generated (illustrated in the sketch after this procedure):
- If threshold exceeded - is triggered if the number of detected objects in the image is greater than or equal to the value specified in the Alarm objects count field.
- If threshold not reached - is triggered if the number of detected objects in the image is less than or equal to the value specified in the Alarm objects count field.
- On count change - is triggered every time the number of detected objects changes.
- By period - is triggered by a time period:
- In the Event periodicity field (5), set the time after which the event with the number of detected objects will be generated.
- From the Time interval drop-down list (6), select the time unit of the counter period: seconds, minutes, hours, or days.
- In the Alarm objects count field (7), set the threshold number of detected objects in the area of interest. It is used in the If threshold exceeded and If threshold not reached conditions.
- In the Recognition threshold [0, 100] field (8), enter the neural counter sensitivity, an integer value from 0 to 100.
Note: The neural counter sensitivity is determined experimentally. The lower the sensitivity, the more false triggerings there might be; the higher the sensitivity, the fewer false triggerings there might be, but some useful tracks might be skipped.
- If a unique neural network is prepared for use, click the button (9) in the Tracking model field and select the neural network file in the standard Windows Explorer window that opens. If the field is left blank, the default neural networks will be used for detection; they are selected automatically depending on the selected object type (11) and device (10).
- From the Device drop-down list (10), select the device on which the neural network will operate. If Auto is selected, the device is chosen automatically: GPU gets the highest priority, followed by Intel GPU, then CPU.
- From the Object type drop-down list (11), select the object type if the path to a unique neural network was not specified:
- Human—the camera is directed at a person at the angle of 100-160°.
- Human (top-down view)—the camera is directed at a person from above at a slight angle.
- Vehicle—the camera is directed at a vehicle at the angle of 100-160°.
- Person and vehicle (Nano)—person and vehicle recognition, small neural network size.
- Person and vehicle (Medium)—person and vehicle recognition, medium neural network size.
- Person and vehicle (Large)—person and vehicle recognition, large neural network size.
Note: Neural networks are named taking into account the objects they detect. The names can include the size of the neural network (Nano, Medium, Large), which indicates the amount of consumed resources. The larger the neural network, the higher the accuracy of object recognition.
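The interplay of these settings can be shown with a short, purely illustrative sketch. For example, with Frames processed per second set to 2 and Number of frames for analysis and output set to 10, the counter needs roughly 10 / 2 = 5 seconds of video for each count it outputs. The Python sketch below models only the Send event conditions described above; the class, the method names, and the logic are assumptions made for illustration and are not taken from the Neurocounter implementation.

```python
import time

# Hypothetical illustration of the "Send event" conditions; not product code.
class CountEventEmitter:
    def __init__(self, condition, alarm_count=5, period_seconds=60):
        self.condition = condition            # "exceeded", "not_reached", "on_change" or "by_period"
        self.alarm_count = alarm_count        # Alarm objects count (7)
        self.period_seconds = period_seconds  # Event periodicity (5) converted to seconds via Time interval (6)
        self.last_count = None                # previous count, for the "On count change" condition
        self.last_event_time = 0.0            # time of the last periodic event

    def should_emit(self, detected_count, now=None):
        """Return True if an event with the current object count should be generated."""
        now = time.time() if now is None else now
        if self.condition == "exceeded":      # If threshold exceeded
            return detected_count >= self.alarm_count
        if self.condition == "not_reached":   # If threshold not reached
            return detected_count <= self.alarm_count
        if self.condition == "on_change":     # On count change
            changed = detected_count != self.last_count
            self.last_count = detected_count
            return changed
        if self.condition == "by_period":     # By period
            if now - self.last_event_time >= self.period_seconds:
                self.last_event_time = now
                return True
            return False
        return False

# Example: generate an event whenever 5 or more objects are detected.
emitter = CountEventEmitter(condition="exceeded", alarm_count=5)
print(emitter.should_emit(detected_count=7))  # True
print(emitter.should_emit(detected_count=3))  # False
```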
Selecting the area of interest
- Click the Settings button (12). The Detection settings window will open.
- Click the Stop video button (1) to capture the video image.
- Click the Area of interest button (2).
- On the captured video image (3), set the anchor points of the area whose situation you want to analyze by sequentially clicking the left mouse button. Only one area can be added; if you try to add a second area, the first area will be deleted. After the area is added, the rest of the video image will be darkened (see the illustrative sketch after this procedure).
- Click the OK button (4).
- To apply the changes to the Neurocounter module, click the Apply button (13).
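As a rough illustration of how an area of interest restricts counting, the sketch below checks whether a detection's anchor point lies inside the user-drawn polygon using a ray-casting test. The function, the variable names, and the approach are assumptions made for illustration only and do not describe the product's internal algorithm.

```python
def point_in_polygon(x, y, polygon):
    """Ray-casting test: return True if point (x, y) lies inside the polygon.

    `polygon` is a list of (x, y) anchor points, such as the ones set by
    clicking on the captured video image. Illustrative only.
    """
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Count how many polygon edges a horizontal ray from (x, y) crosses.
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Hypothetical usage: count only detections whose anchor point is inside the area.
area = [(100, 100), (500, 100), (500, 400), (100, 400)]   # anchor points of the area
detections = [(150, 200), (600, 250), (320, 380)]          # detected object anchor points
count = sum(point_in_polygon(x, y, area) for x, y in detections)
print(count)  # 2 of the 3 detections fall inside the area
```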
Configuring the Neurocounter module is complete.