Documentation for DetectorPack PSIM 1.0.1.



The Neurotracker module uses a neural network to register object tracks in the camera's FOV during recording and saves them to the VMDA metadata storage (see Creating and configuring VMDA metadata storage).

Configuring the Neurotracker module includes the main and additional settings of the detection tool, selection of the area of interest, and configuration of the neurofilter.

You can configure the Neurotracker module on the settings panel of the Neurotracker object that is created on the basis of the Camera object on the Hardware tab of the System settings dialog window.

Main settings of the detection tool

You can configure the main settings of the detection tool on the Main settings tab on the settings panel of the Neurotracker object.

  1. Set the Generate event on appearance/disappearance of the track checkbox to generate an event when an object (track) appears in or disappears from the frame.

    Note

    The track appearance/disappearance events are generated only in the debug window (see Start the debug window). They are not displayed in the Event viewer.

  2. Set the Show objects on image checkbox to highlight the detected object with a frame when viewing live video.
  3. Set the Save tracks to show in archive checkbox to highlight the detected object with a frame when viewing the archive.

    Note

    This parameter does not affect the VMDA search; it is used only for visualization and relies on the titles database.

  4. Set the Model quantization checkbox to enable model quantization. By default, the checkbox is clear. This parameter allows you to reduce GPU processing power consumption.

    Note

    1. AxxonSoft conducted a study in which a neural network model was trained to identify the characteristics of the detected object. The study showed that model quantization can either increase or decrease the recognition rate; this is due to the generalization of the mathematical model. The difference in detection is within ±1.5%, and the difference in object identification is within ±2%.
    2. Model quantization is only applicable for NVIDIA GPUs.
    3. The first launch of a detection tool with quantization enabled may take longer than a standard launch.
    4. If GPU caching is used, subsequent launches of a detection tool with quantization enabled will run without delay.
  5. From the Object type drop-down list, select the object type for analysis:
    • Human—the camera is directed at a person at an angle of 100-160°;
    • Human (top-down view)—the camera is directed at a person from above at a slight angle;
    • Vehicle—the camera is directed at a vehicle at an angle of 100-160°;
    • Person and vehicle (Nano)—person and vehicle recognition, small neural network size;
    • Person and vehicle (Medium)—person and vehicle recognition, medium neural network size;
    • Person and vehicle (Large)—person and vehicle recognition, large neural network size.

      Note

      Neural networks are named after the objects they detect. The names can include the size of the neural network (Nano, Medium, Large), which indicates the amount of consumed resources. The larger the neural network, the higher the accuracy of object recognition.

  6. By default, the standard (default) neural network is initialized according to the object selected in the Object type drop-down list and the device selected in the Device drop-down list. The standard neural networks for different processor types are selected automatically. If you use a custom neural network, click the button to the right of the Tracking model field and specify the path to the neural network file in the standard Windows Explorer window.

    Attention!

    To train a neural network, contact the AxxonSoft technical support (see Data collection requirements for neural network training). A neural network trained for a specific scene allows you to detect objects of a certain type only (for example, a person, cyclist, motorcyclist, and so on).

  7. From the Device drop-down list, select the device on which the neural network will operate: CPU, one of NVIDIA GPUs, or one of Intel GPUs. Auto (default value)—the device is selected automatically: NVIDIA GPU gets the highest priority, followed by Intel GPU, then CPU.

    Attention!

    1. We recommend using the GPU.
    2. It may take several minutes to launch the algorithm on NVIDIA GPU after you apply the settings. You can use caching to speed up future launches (see Optimizing the operation of neural analytics on GPU).
  8. From the Process drop-down list, select which objects must be processed by the neural network:
    • All objects—moving and stationary objects;
    • Only moving objects—an object is considered moving if, over the entire lifetime of its track, it has shifted by more than 10% of its width or height (see the sketch after this list). Using this parameter can reduce the number of false positives;
    • Only stationary objects—an object is considered stationary if during the entire lifetime of its track, it has shifted by no more than 10% of its width or height. If a stationary object starts moving, the detection tool triggers and the object is no longer considered stationary.
  9. From the Camera position drop-down list, select:
    1. Wall—objects are detected only if their lower part gets into the area of interest specified in the detection tool settings.
    2. Ceiling—objects are detected even if their lower part doesn't get into the area of interest specified in the detection tool settings.
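
The moving/stationary rule from step 8 can be illustrated with a short sketch. The following Python code is a minimal illustration, assuming axis-aligned bounding boxes; the Box type and is_moving function are hypothetical names, not part of the DetectorPack API.

    from dataclasses import dataclass

    @dataclass
    class Box:
        x: float  # top-left corner, pixels
        y: float
        w: float  # width, pixels
        h: float  # height, pixels

    def is_moving(track: list[Box], threshold: float = 0.10) -> bool:
        # A track counts as moving if it has shifted by more than
        # 10% of its width or height (the rule from step 8).
        first, last = track[0], track[-1]
        dx = abs(last.x - first.x)
        dy = abs(last.y - first.y)
        return dx > threshold * first.w or dy > threshold * first.h

    # A person walking right: shifted 30 px, more than 10% of the 40 px width
    walk = [Box(100, 200, 40, 120), Box(130, 200, 40, 120)]
    print(is_moving(walk))  # True

Comparing only the first and last positions is a simplification for the sketch; the rule as documented applies over the entire lifetime of the track.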

Selecting the area of interest

  1. Click the Settings button. The Detection settings window opens.
  2. Click the Stop video button (1) to pause the playback and capture the frame.
  3. Click the Area of interest button (2) to specify the area of interest. The button will be highlighted in blue.

  4. On the captured frame, sequentially set the anchor points of the area (1) in which the objects will be detected; the rest of the frame will be faded. You can add only one area of interest. To delete an area, click the button. If you don't specify the area of interest, the entire frame is analyzed (see the sketch after this procedure).
  5. Click the OK button (2) to close the Detection settings window and return to the settings panel of the Neurotracker object.
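
For the Wall camera position (step 9 of the main settings), an object is detected only if its lower part falls inside the area of interest. Below is a minimal sketch of how such a check can work, assuming the area is the polygon of anchor points set in step 4 and using the classic even-odd ray-casting test; the function names are illustrative, not the product's actual implementation.

    def point_in_polygon(px, py, polygon):
        # Even-odd rule: cast a horizontal ray from (px, py) to the
        # right and count how many polygon edges it crosses.
        inside = False
        n = len(polygon)
        for i in range(n):
            x1, y1 = polygon[i]
            x2, y2 = polygon[(i + 1) % n]
            if (y1 > py) != (y2 > py):  # the edge spans the ray's height
                x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
                if px < x_cross:
                    inside = not inside
        return inside

    # Area of interest defined by anchor points (a rectangle here),
    # checked against the lower midpoint of an object's bounding box.
    area = [(0.0, 0.0), (640.0, 0.0), (640.0, 360.0), (0.0, 360.0)]
    lower_mid = (320.0, 300.0)  # box x + w / 2, box y + h
    print(point_in_polygon(*lower_mid, area))  # True: inside the area

Here the lower midpoint of the bounding box stands in for the object's "lower part"; for the Ceiling camera position, any part of the box falling inside the area would suffice.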

Additional settings

  1. Go to the Additional settings tab on the settings panel of the Neurotracker object.

  2. In the Recognition threshold [0, 100] field, specify the neurotracker sensitivity—an integer value in the range from 0 to 100.

    Note

    The neurotracker sensitivity is determined experimentally. The lower the sensitivity, the higher the probability of false alarms. The higher the sensitivity, the lower the probability of false alarms; however, some useful tracks can be skipped (see Examples of configuring neural tracker for solving typical tasks).

  3. In the Frames processed per second [0.016, 100] field, specify the number of frames per second that the neural network processes, in the range from 0.016 to 100. For all other frames, interpolation is performed: intermediate values are calculated from the available discrete set of known values. The greater the value of the parameter, the more accurate the operation of the detection tool, but the higher the load on the processor.

    Note

    The recommended value is at least 6 FPS. For fast moving objects (running person, vehicle)—at least 12 FPS (see Examples of configuring neural tracker for solving typical tasks).

  4. In the Minimum number of triggering [2, 100] field, specify the minimum number of neurotracker triggers required to display the object track. The higher the value, the longer it takes from the moment the object is detected to the display of its track; a low value can lead to false positives. The default value is 6; the value range is 2-100. A value outside this range is automatically adjusted to the nearest limit.
  5. In the Track hold time (s) field, specify the time in seconds, in the range from 0.3 to 1000, after which the object track is considered lost. This parameter is useful in situations where one object in the frame temporarily overlaps another. For example, when a large vehicle completely overlaps a small one.

    Note

    If an object (track) is close to the frame boundary, then approximately half the time specified in the Track hold time (s) field must elapse from the moment the object disappears from the frame until its track is deleted.

  6. Set the Scanning mode checkbox to detect small objects. This mode increases the load on the system, so we recommend specifying a small number of frames processed per second in the Frames processed per second [0.016, 100] field. By default, the checkbox is clear. For more information on the scanning mode, see Configuring the Scanning mode.
  7. If necessary, specify the class of the detected object in the Target classes field. If you want to display tracks of several classes, specify them separated by a comma and a space, for example: 1, 10 (see the sketch after the note below).
    The numerical values of classes for the embedded neural networks are: 1—Human/Human (top-down view), 10—Vehicle.

    Note

    1. If you leave the field blank, the tracks of all classes available in the neural network (Object type or Neural network file) will be displayed.
    2. If you specify a class/classes available in the neural network, the tracks of the specified class/classes will be displayed.
    3. If you specify both a class/classes available in the neural network and a class/classes missing from it, only the tracks of the class/classes available in the neural network will be displayed.
    4. If you specify only a class/classes missing from the neural network, the tracks of all classes available in the neural network will be displayed.
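
The Target classes fallback rules from step 7 and the note above can be summarized in a short sketch. The following Python code is illustrative only; the function names and the set-based representation are assumptions, not the DetectorPack API.

    def parse_target_classes(field: str) -> set[int]:
        # Parse a Target classes value such as "1, 10".
        return {int(tok) for tok in field.split(",") if tok.strip()}

    def classes_to_display(field: str, network_classes: set[int]) -> set[int]:
        # Documented fallback: show only the requested classes that
        # the network knows; if none of the requested classes exist
        # in the network (or the field is blank), show every class.
        requested = parse_target_classes(field)
        known = requested & network_classes
        return known if known else network_classes

    embedded = {1, 10}  # 1 = Human, 10 = Vehicle
    print(classes_to_display("1, 10", embedded))  # {1, 10}
    print(classes_to_display("", embedded))       # {1, 10}: blank shows all
    print(classes_to_display("7", embedded))      # {1, 10}: unknown class, show all
    print(classes_to_display("1, 7", embedded))   # {1}: only the known class kept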

Neurofilter

You can use the neurofilter to filter out some of the tracks. For example, the neurotracker detects all freight trucks, and the neurofilter keeps only the tracks of trucks with an open cargo door (a minimal sketch of this idea follows the procedure below). To configure the neurofilter, do the following:

  1. Go to the Neurofilter tab on the settings panel of the Neurotracker object.

  2. Set the Enable filtering checkbox to enable the neurofilter. By default, the checkbox is clear.
  3. By default, the standard (default) neural network is initialized according to the device selected in the Device drop-down list. The standard neural networks for different processor types are selected automatically. If you use a custom neural network, click the button to the right of the Tracking model field and specify the path to the neural network file in the standard Windows Explorer window.

    Attention!

    To train a neural network, contact the AxxonSoft technical support (see Data collection requirements for neural network training). A neural network trained for a specific scene allows you to detect objects of a certain type only (for example, a person, cyclist, motorcyclist, and so on).

  4. From the Device drop-down list, select the device on which the neural network will operate: CPU, one of NVIDIA GPUs, or one of Intel GPUs. Auto (default value)—the device is selected automatically: NVIDIA GPU gets the highest priority, followed by Intel GPU, then CPU.

    Attention!

    1. The device for the neurofilter must match the device specified for the neurotracker in the Device drop-down list of the main settings. If you select Auto, the neurofilter will run on the same processor as the neurotracker, according to the priority.
    2. It may take several minutes to launch the algorithm on NVIDIA GPU after you apply the settings.
  5. Click the Apply button to save the changes.

    Note

    1. If necessary, create and configure the NeuroTracker VMDA detection tools on the basis of the Neurotracker object. The procedure is similar to creating and configuring the VMDA detection tools for a regular tracker; the only difference is that the NeuroTracker VMDA detection tools must be created on the basis of the Neurotracker object, not the Tracker object (see Creating and configuring the VMDA detection).
    2. If you select the Staying in the area for more than 10 sec detector type, the time the object must stay in the zone before the NeuroTracker VMDA detection tools are triggered is configured using the LongInZoneTimeout2 registry key, not LongInZoneTimeout.
    3. The alarm generation mode for any type of VMDA detection tools is configured in the same way as for the VMDA detection tools of a regular tracker, using the VMDA.oneAlarmPerTrack registry key (see Registry keys reference guide).
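
Conceptually, the neurofilter is a second-stage classifier applied to the tracks that the neurotracker has already produced. The following Python code is a minimal sketch of this idea, assuming a classifier that returns a confidence score per track; all names are hypothetical, not part of the product.

    from typing import Callable

    def neurofilter(tracks: list[dict],
                    classify: Callable[[dict], float],
                    threshold: float = 0.5) -> list[dict]:
        # Keep only the tracks the classifier scores above the
        # threshold, e.g. trucks whose cargo door is open.
        return [t for t in tracks if classify(t) >= threshold]

    # Toy stand-in: pretend the network returned a precomputed score.
    tracks = [{"id": 1, "door_open_score": 0.9},
              {"id": 2, "door_open_score": 0.2}]
    kept = neurofilter(tracks, lambda t: t["door_open_score"])
    print([t["id"] for t in kept])  # [1]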

Configuration of the Neurotracker module is complete.

If events are periodically received from several objects, you can create and configure neurotracker track counters for convenience (see Configuring the neurotracker track counter).
