Configuration of the Neurocounter module includes configuring the detection tool and selecting the area of interest. The Neurocounter module can be configured on the settings panel of the Neurocounter object created on the basis of the Camera object on the Hardware tab of the System settings dialog window.

The Neurocounter module is configured as follows:

Configuring the detection tool

  1. Go to the settings panel of the Neurocounter object.
  2. Set the Show objects on image checkbox (1) to frame the detected objects on the image in the debug window (see Start the debug window).
  3. From the Camera position drop-down list, select:
    • Wall—objects are detected only if their lower part gets into the area of interest specified in the detection tool settings.
    • Ceiling—objects are detected even if their lower part doesn't get into the area of interest specified in the detection tool settings.
  4. In the Number of frames for analysis and output field (2), specify the number of frames to be processed to determine the number of objects on them.
  5. In the Frames processed per second [0.016, 100] field (3), specify the number of frames processed per second by the neural network in the range from 0.016 to 100. For all other frames, interpolation is performed: intermediate values are calculated from the available discrete set of known values. The greater the value of this parameter, the more accurate the detection tool operation, but the higher the load on the processor.
  6. From the Send event drop-down list (4), select the condition by which an event with the number of detected objects will be generated (a sketch of these conditions follows this procedure):
    • If threshold exceeded is triggered if the number of detected objects in the image is greater than or equal to the value specified in the Alarm objects count field.
    • If threshold not reached is triggered if the number of detected objects in the image is less than or equal to the value specified in the Alarm objects count field.
    • On count change is triggered every time the number of detected objects changes.
    • By period is triggered by a time period:
      1. In the Event periodicity field (5), specify the time after which the event with the number of detected objects will be generated.
      2. From the Time interval drop-down list (6), select the time unit of the counter period: seconds, minutes, hours, or days.
  7. In the Alarm objects count field (7), specify the threshold number of detected objects in the area of interest. It is used in the If threshold exceeded and If threshold not reached conditions. The default value is 5.
  8. In the Recognition threshold [0, 100] field (8), enter the neurocounter sensitivity as an integer value from 0 to 100. The default value is 30.

    Note: The neurocounter sensitivity is determined experimentally. The lower the sensitivity, the higher the probability of false alarms. The higher the sensitivity, the lower the probability of false alarms; however, some useful tracks can be skipped (see Example of configuring Neurocounter for solving typical task).


  9. Set the Scanning mode checkbox to detect small objects. If you enable this mode, the load on the system increases, so we recommend specifying a small number of frames processed per second in the Frames processed per second [0.016, 100] field. By default, the checkbox is clear. For more information on the scanning mode, see Configuring the Scanning mode.
  10. By default, the standard neural network is initialized according to the object selected in the Object type drop-down list and the device selected in the Device drop-down list. The standard neural networks for different processor types are selected automatically. If you use a custom neural network, click the button (9) to the right of the Tracking model field and specify the path to the neural network file in the standard Windows Explorer window. If the field is left blank, the default neural networks are used for detection.
    Attention! To train a neural network, contact the AxxonSoft technical support (see Data collection requirements for neural network training). A neural network trained for a specific scene allows you to detect objects of a certain type only (for example, a person, cyclist, motorcyclist, and so on).

  11. Set the Model quantization checkbox to enable model quantization. By default, the checkbox is clear. This parameter allows you to reduce the consumption of the GPU processing power.
    Note:
    1. AxxonSoft conducted a study in which a neural network model was trained to identify the characteristics of the detected object. The study showed that model quantization can lead to either an increase or a decrease in the recognition rate. This is due to the generalization of the mathematical model. The difference in detection is within ±1.5%, and the difference in object identification is within ±2%.
    2. Model quantization is only applicable to NVIDIA GPUs.
    3. The first launch of a detection tool with quantization enabled may take longer than a standard launch.
    4. If GPU caching is used, the next launch of a detection tool with quantization will run without delay.
  12. If necessary, specify the class of the detected object in the Target classes field. If you want to display tracks of several classes, specify them separated by a comma with a space. For example: 1, 10.
    The numerical values of classes for the embedded neural networks: 1—Human/Human (top view), 10—Vehicle.
    Note:
    1. If you specify both classes that are present in the neural network and classes that are missing from it, only the tracks of the classes present in the neural network will be displayed.
    2. If you specify only a class/classes missing from the neural network, tracks won't be displayed.
  13. If the path to the custom neural network was not specified at step 10, from the Device drop-down list (10), select the device on which the neural network will operate: CPU, one of NVIDIA GPUs, or one of Intel GPUs. Auto (default value)—the device is selected automatically: NVIDIA GPU gets the highest priority, followed by Intel GPU, then CPU.
    Attention!
    1. We recommend using the GPU.
    2. It may take several minutes to launch the algorithm on NVIDIA GPU after you apply the settings. You can use caching to speed up future launches (see Optimizing the operation of neural analytics on GPU).
  14. From the Object type drop-down list (11), select the object type if the path to the neural network was not specified at step 10:
    • Human—the camera is directed at a person at an angle of 100-160°;
    • Human (top-down view)—the camera is directed at a person from above at a slight angle;
    • Vehicle—the camera is directed at a vehicle at an angle of 100-160°;
    • Person and vehicle (Nano)—person and vehicle recognition, small neural network size;
    • Person and vehicle (Medium)—person and vehicle recognition, medium neural network size;
    • Person and vehicle (Large)—person and vehicle recognition, large neural network size.
      Note: Neural networks are named taking into account the objects they detect. The names can include the size of the neural network (Nano, Medium, Large), which indicates the amount of consumed resources. The larger the neural network, the higher the accuracy of object recognition.

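The logic behind steps 5-7 can be summarized in code. The following is a minimal, illustrative sketch, not AxxonSoft code: the class and function names, the event mode strings, and the linear interpolation are assumptions made only to clarify when an event is generated and how counts for skipped frames can be estimated.

    import time

    class SendEventCondition:
        """Illustrative model of the Send event conditions (step 6).
        All names here are hypothetical, not part of the product API."""

        def __init__(self, mode, alarm_count=5, period_seconds=60):
            self.mode = mode                  # "exceeded" | "not_reached" | "on_change" | "by_period"
            self.alarm_count = alarm_count    # Alarm objects count (7), default 5
            self.period = period_seconds      # Event periodicity (5) in Time interval (6) units
            self.last_count = None
            self.last_emit = time.monotonic()

        def should_emit(self, detected_count):
            """Return True if an event with the object count should be generated."""
            emit = False
            if self.mode == "exceeded":        # If threshold exceeded
                emit = detected_count >= self.alarm_count
            elif self.mode == "not_reached":   # If threshold not reached
                emit = detected_count <= self.alarm_count
            elif self.mode == "on_change":     # On count change
                emit = detected_count != self.last_count
            elif self.mode == "by_period":     # By period
                now = time.monotonic()
                if now - self.last_emit >= self.period:
                    emit, self.last_emit = True, now
            self.last_count = detected_count
            return emit

    def interpolated_count(frame, prev_frame, prev_count, next_frame, next_count):
        """Estimate the count for a frame the network skipped (step 5):
        linear interpolation between the two nearest processed frames."""
        t = (frame - prev_frame) / (next_frame - prev_frame)
        return round(prev_count + t * (next_count - prev_count))

For example, with the default Alarm objects count of 5 and the If threshold exceeded condition, an event is generated on every processed frame where five or more objects are detected.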
Selecting the area of interest

  1. Click the Settings button (12). The Detection settings window opens.
  2. Click the Stop video button (1) to pause the playback and capture the video frame.
  3. Click the Area of interest button (2) to specify the area of interest. The button will be highlighted in blue.
  4. On the captured frame, set the anchor points of the area in which the objects will be detected by sequentially clicking the left mouse button (3). The rest of the frame will be faded. If you don't specify the area of interest, the entire frame is analyzed. A sketch at the end of this page illustrates how the area can be used.
    Note:
    1. You can add only one area of interest. If you try to add a second area, the first one will be deleted.
    2. To delete an area, click the button to the right of the Area of interest button.

  5. Click the OK button (4) to close the Detection settings window and return to the settings panel of the Neurocounter object.
  6. Click the Apply button (13) to save the changes.

Configuring the Neurocounter module is complete.
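
For clarity, the following sketch shows how the area of interest can interact with the Camera position setting (step 3 of the detection tool configuration). This is an illustrative assumption, not AxxonSoft code: the ray-casting test and the choice of the reference point are shown only to explain why Wall checks the lower part of the object while Ceiling does not.

    def point_in_polygon(x, y, polygon):
        # Ray-casting test: is the point (x, y) inside the polygon given
        # as a list of (x, y) anchor points?
        inside = False
        n = len(polygon)
        for i in range(n):
            x1, y1 = polygon[i]
            x2, y2 = polygon[(i + 1) % n]
            if (y1 > y) != (y2 > y) and x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
                inside = not inside
        return inside

    def object_in_area(bbox, polygon, camera_position):
        # bbox = (left, top, right, bottom) of a detected object, in pixels.
        left, top, right, bottom = bbox
        if camera_position == "Wall":
            # Wall: the lower part of the object must get into the area.
            return point_in_polygon((left + right) / 2, bottom, polygon)
        # Ceiling: the object is counted even if its lower part is outside;
        # here the bounding-box center is assumed as the reference point.
        return point_in_polygon((left + right) / 2, (top + bottom) / 2, polygon)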