Documentation for DetectorPack PSIM 1.0.1.


Attention!

The Neurotracker program module works only in Axxon PSIM of version 1.0.1 and higher.

The Neurotracker program module registers object tracks in the camera FOV during recording using the neural network and saves them to the VMDA metadata storage (see Creating and configuring VMDA metadata storage).
The configuration of the Neurotracker program module includes the main and additional settings of the detector, the selection of the area of interest, and the neurofilter configuration.

You can configure the Neurotracker program module on the settings panel of the Neurotracker object that is created on the basis of the Camera object on the Hardware tab of the System settings window.

Main settings of the detector

You can configure the main settings of the detector on the Main settings tab on the settings panel of the Neurotracker object.

  1. Set the Generate event on appearance/disappearance of the track checkbox to generate an event when an object (track) appears in the frame and disappears from the frame.

    Note

    The track appearance/disappearance events are generated only in the debug window (see Start the debug window). They aren't displayed in the Event viewer.

  2. Set the Show objects on image checkbox to highlight the detected object with a frame when viewing live video.
  3. Set the Save tracks to show in archive checkbox to highlight the detected object with a frame when viewing the archive.

    Note

    This parameter doesn't affect the VMDA search and is used only for visualization. The titles database is used for this parameter.

  4. Set the Model quantization checkbox to enable model quantization. By default, the checkbox is cleared. This parameter allows you to reduce GPU processing power consumption.

    Note

    1. AxxonSoft conducted a study in which a neural network model was trained to identify the characteristics of the detected object. The study showed that model quantization can either increase or decrease the recognition rate; this is due to the generalization of the mathematical model. The difference in detection is within ±1.5%, and the difference in object identification is within ±2%.
    2. Model quantization is only applicable for NVIDIA GPUs.
    3. The first launch of the detector with quantization enabled can take longer than the standard launch.
    4. If GPU caching is used, the next launch of the detector with quantization enabled runs without delay.
  5. From the Object type drop-down list, select the object type for analysis:
    • Human—the camera is pointed at the person at an angle of 100-160°;
    • Human (top-down view)—the camera is pointed at the person from above at a slight angle;
    • Vehicle—the camera is pointed at the vehicle at an angle of 100-160°;
    • Person and vehicle (Nano)—detects people and vehicles, small network size;
    • Person and vehicle (Medium)—detects people and vehicles, medium network size;
    • Person and vehicle (Large)—detects people and vehicles, large network size.

      Note

      Neural networks are named taking into account the objects they detect. The names can include the size of the neural network (Nano, Medium, Large), which indicates the amount of consumed resources. The larger the neural network, the higher the accuracy of the object recognition.

  6. By default, the standard neural network is initialized according to the object type selected in step 5 and the device selected in step 7. You don't need to select standard networks for different processor types manually, as this is done automatically. If you have a unique neural network to use, click the button to the right of the Tracking model field and specify its file in the standard Windows Explorer window that opens.

    Attention!

    To train the neural network, contact AxxonSoft technical support (see Data collection requirements for neural network training). The use of the trained neural network for a particular scene allows you to detect only objects of a certain type (for example, a person, a cyclist, a motorcyclist, and so on).

  7. From the Device drop-down list, select the device on which the neural network will operate: the CPU, one of the NVIDIA GPUs, or one of the Intel GPUs. Auto (default value)—the device is selected automatically: the NVIDIA GPU takes the highest priority, then the Intel GPU, and then the CPU.

    Attention!

    1. We recommend using the GPU.
    2. It can take several minutes to launch the algorithm on the NVIDIA GPU after you apply the settings. You can use caching to speed up future launches (see Optimizing the operation of neural analytics on GPU).
  8. From the Process drop-down list, select which objects must be processed by the neural network:
    • All objects—moving and stationary objects;
    • Only moving objects—an object is considered moving if, during the entire lifetime of its track, it shifted by more than 10% of its width or height. This parameter can reduce the number of false positives;
    • Only stationary objects—an object is considered stationary if, during the entire lifetime of its track, it shifted by no more than 10% of its width or height. If a stationary object starts moving, the detector generates an event, and the object is no longer considered stationary.

      Note

      The selection of only moving objects and only stationary objects isn't mutually exclusive, as some tracks cannot be determined as either moving or stationary. First, the neural network detects all objects, and after that, the detector filters out unnecessary tracks in accordance with the selected value of the Process setting.

  9. From the Camera position drop-down list, select:
    • Wall—objects are detected only if their lower part gets into the area of interest specified in the detector settings.
    • Ceiling—objects are detected even if their lower part doesn't get into the area of interest specified in the detector settings.
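The moving/stationary rule from the Process setting in step 8 can be sketched in code. This is an illustrative sketch under stated assumptions, not the product's implementation; the track and box representations are hypothetical.

```python
# Hypothetical sketch of the "Only moving objects" / "Only stationary objects"
# filtering rule. A track counts as moving if, over its whole lifetime, its
# position shifted by more than 10% of the object's width or height.

def is_moving(points, width, height):
    """points: list of (x, y) track positions over the track's lifetime."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    dx = max(xs) - min(xs)
    dy = max(ys) - min(ys)
    return dx > 0.1 * width or dy > 0.1 * height

def filter_tracks(tracks, mode):
    """mode mirrors the Process setting: 'all', 'moving', or 'stationary'.
    The network detects all objects first; unwanted tracks are filtered out."""
    if mode == "all":
        return tracks
    keep_moving = (mode == "moving")
    return [t for t in tracks
            if is_moving(t["points"], t["w"], t["h"]) == keep_moving]
```

For example, a 20x20-pixel object whose track shifted by 5 pixels (more than 10% of its size) would be kept by the "moving" filter and dropped by the "stationary" one.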

Selecting the area of interest

  1. Click the Settings button. As a result, the detector settings window opens.
  2. In the Detection settings window, click the Stop video button (1) to pause the playback and capture the frame of the video image.
  3. Click the Area of interest button to specify the area of interest. The button is highlighted in blue.

  4. On the captured frame of the video image, use the mouse to sequentially set the anchor points of the area in which the objects are detected. The rest of the frame is faded. There can be only one area of interest. To delete an area, click the  button. If you don't specify the area of interest, the entire frame is analyzed.
  5. Click the OK button (2) to close the Detection settings window and return to the settings panel of the detector.
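Combined with the Wall camera position from the main settings, the area of interest can be pictured as a point-in-polygon test on the bottom-center of each object's bounding box. This is a hypothetical illustration; the polygon and box formats are assumptions, not the product's API.

```python
# Minimal sketch: does an object's lower part fall inside the area of
# interest? The polygon is the list of anchor points set with the mouse.

def point_in_polygon(x, y, polygon):
    """Ray-casting test: polygon is a list of (x, y) anchor points."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            # x-coordinate where the edge crosses the horizontal ray at y
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def matches_wall_mode(box, polygon):
    """box = (left, top, width, height); test the bottom-center point,
    mirroring the Wall rule (the object's lower part must be in the area)."""
    left, top, w, h = box
    return point_in_polygon(left + w / 2, top + h, polygon)
```

If no area of interest is specified, the entire frame is analyzed, which in this sketch corresponds to the test always returning true.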

Additional settings

  1. Go to the Additional settings tab on the settings panel of the neurotracker.

  2. In the Recognition threshold [0, 100] field, specify the neurotracker sensitivity—an integer in the range from 0 to 100.

    Note

    The neurotracker sensitivity is determined experimentally. The lower the sensitivity, the higher the probability of false alarms. The higher the sensitivity, the lower the probability of false alarms; however, some useful tracks can be skipped (see Examples of configuring neural tracker for solving typical tasks).

  3. In the Frames processed per second [0.016, 100] field, specify the number of frames per second that the neural network processes, in the range from 0.016 to 100. For all other frames, interpolation is performed—intermediate values are computed from the available set of known values. The greater the value of the parameter, the more accurate the tracking, but the higher the load on the processor.

    Note

    The recommended value is at least 6 FPS. For fast moving objects (running person, vehicle)—at least 12 FPS (see Examples of configuring neural tracker for solving typical tasks).

  4. In the Minimum number of triggering [2, 100] field, specify the minimum number of neurotracker triggerings required to display the object track. The higher the value of this parameter, the longer it takes from the moment the object is detected to the moment its track is displayed. A low value of this parameter can lead to false positives. The default value is 6. The value range is from 2 to 100. If you enter a number outside this range, it is automatically adjusted to the nearest limit.
  5. In the Track hold time (s) field, specify the time in seconds after which the object track is considered lost in the range from 0.3 to 1000. This parameter is useful in situations when one object in the frame temporarily overlaps another. For example, when a large vehicle completely overlaps a small one.

    Note

    If an object (track) is close to the frame boundary, then approximately half of the time specified in the Track hold time (s) field must elapse from the moment the object disappears from the frame until its track is deleted.

  6. Set the Scanning mode checkbox to detect small objects. If you enable this mode, the load on the system increases. That is why, in step 3, we recommend specifying a small number of frames processed per second. By default, the checkbox is cleared. For more information on the scanning mode, see Configuring the Scanning mode.
  7. If necessary, specify the class of the detected object in the Target classes field. If you want to display tracks of several classes, specify them separated by a comma and a space. For example: 1, 10.
    The numerical values of classes for the embedded neural networks: 1—Human/Human (top view), 10—Vehicle.

    Note

    1. If you leave the field blank, the tracks of all classes available in the neural network (Object type/Neural network file) are displayed.
    2. If you specify a class/classes present in the neural network, the tracks of the specified class/classes are displayed.
    3. If you specify both classes present in and classes missing from the neural network, the tracks of the classes present in the neural network are displayed.
    4. If you specify only a class/classes missing from the neural network, the tracks of all classes available in the neural network are displayed.
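The Target classes rules in the note above can be summarized in a short sketch. Only the class values 1 (Human) and 10 (Vehicle) come from the text; the parsing logic and names are illustrative assumptions, not the product's implementation.

```python
# Hypothetical sketch of the Target classes filtering rules.

NETWORK_CLASSES = {1, 10}  # classes the embedded neural network can produce

def allowed_classes(target_field):
    """Parse the Target classes field (e.g. '1, 10') and apply the rules:
    keep only classes the network knows; if nothing valid remains
    (blank field, or only classes missing from the network), show all."""
    if not target_field.strip():
        return set(NETWORK_CLASSES)
    requested = {int(c) for c in target_field.split(",") if c.strip()}
    valid = requested & NETWORK_CLASSES
    return valid if valid else set(NETWORK_CLASSES)
```

So, for example, a field value of "1, 7" would display only Human tracks, because class 7 is missing from the network and is ignored.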

Neurofilter

You can use the neurofilter to sort out some of the tracks. For example, the neurotracker detects all freight trucks, and the neurofilter leaves only those tracks that correspond to trucks with cargo doors open. To configure the neurofilter, do the following:

  1. Go to the Neurofilter tab on the settings panel of the neurotracker.

  2. Set the Enable filtering checkbox to enable the neurofilter. By default, the checkbox is cleared.
  3. By default, the standard neural network is initialized according to the device selected in step 4. You don't need to select standard networks for different processor types manually, as this is done automatically. If you have a unique neural network to use, click the button to the right of the Tracking model field and specify its file in the standard Windows Explorer window that opens.

    Attention!

    To train the neural network, contact AxxonSoft technical support (see Data collection requirements for neural network training). The use of the trained neural network for a particular scene allows you to detect only objects of a certain type (for example, a person, a cyclist, a motorcyclist, and so on).

  4. From the Device drop-down list, select the device on which the neural network will operate: the CPU, one of the NVIDIA GPUs, or one of the Intel GPUs. Auto (default value)—the device is selected automatically: the NVIDIA GPU gets the highest priority, followed by the Intel GPU, then the CPU.

    Note

    1. The device for the neurofilter must match the device specified for the neurotracker in step 7 of the main settings. If you select Auto, the neurofilter runs on the same processor as the neurotracker, according to the priority.
    2. It can take several minutes to launch the algorithm on the NVIDIA GPU after you apply the settings.
  5. Click the Apply button to save the changes.  
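The Auto device priority described in step 4 (and in step 7 of the main settings) can be sketched as a simple ordered lookup. The device names here are illustrative strings, not the product's identifiers.

```python
# Hypothetical sketch of the Auto device selection priority:
# NVIDIA GPU first, then Intel GPU, then CPU.

PRIORITY = ("NVIDIA GPU", "Intel GPU", "CPU")

def auto_select_device(available):
    """Return the highest-priority available device, falling back to CPU."""
    for device in PRIORITY:
        if device in available:
            return device
    return "CPU"
```

With Auto selected for both the neurotracker and the neurofilter, both run on the same device, since the same priority order is applied to both.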

Note

If necessary, create and configure the Neurotracker VMDA detectors on the basis of the Neurotracker object. The procedure of creating and configuring the Neurotracker VMDA detectors is similar to that for the VMDA detectors of the regular tracker. The only difference is that you must create the Neurotracker VMDA detectors on the basis of the Neurotracker object, not the Tracker object (see Creating and configuring the VMDA detection). Also, when you select the Staying in the area for more than 10 sec detector type, the time the object stays in the zone, after which the Neurotracker VMDA detectors generate an event, is configured using the LongInZoneTimeout2 registry key, not LongInZoneTimeout. The alarm generation mode is set for any type of VMDA detector in the same way as for the VMDA detector of the regular tracker, using the VMDA.oneAlarmPerTrack registry key (see Registry keys reference guide).

Configuring the Neurotracker program module is complete.

If events are periodically received from several objects, then for convenience, we recommend creating and configuring the neurotracker track counters (see Configuring the neurotracker track counter).

