...

  1. Set the Generate event on appearance/disappearance of the track checkbox to generate an event when an object (track) appears in the frame and disappears from the frame.

    Note

    The track appearance/disappearance events are generated only in the debug window (see Start the debug window). They aren't displayed in the event viewer.

  2. Set the Show objects on image checkbox to highlight the detected object with a frame when viewing live video.
  3. Set the Save tracks to show in archive checkbox to highlight the detected object with a frame when viewing the archive.

    Note

    This parameter doesn't affect the VMDA search and is used only for visualization. It relies on the titles database.

  4. Set the Model quantization checkbox to enable model quantization. By default, the checkbox is cleared. This parameter reduces the consumption of GPU processing power.
    Note
    1. AxxonSoft conducted a study in which a neural network model was trained to identify the characteristics of the detected object. The study showed that model quantization can either increase or decrease the recognition rate, due to the generalization of the mathematical model. The difference in detection is within ±1.5%, and the difference in object identification is within ±2%.
    2. Model quantization is only applicable to NVIDIA GPUs.
    3. The first launch of the detector with quantization enabled can take longer than the standard launch.
    4. If GPU caching is used, subsequent launches of the detector with quantization run without delay.
  5. From the Object type drop-down list, select the object type for analysis:
    • Human—the camera is pointed at a person at an angle of 100-160°;
    • Human (top-down view)—the camera is pointed at a person from above at a slight angle;
    • People view from above (Nano)—the camera is pointed at a person from above at a slight angle; small network size;
    • People view from above (Medium)—the camera is pointed at a person from above at a slight angle; average network size;
    • People view from above (Large)—the camera is pointed at a person from above at a slight angle; large network size;
    • Vehicle—the camera is pointed at a vehicle at an angle of 100-160°;
    • Person and vehicle (Nano)—detects people and vehicles; small network size;
    • Person and vehicle (Medium)—detects people and vehicles; average network size;
    • Person and vehicle (Large)—detects people and vehicles; large network size.
      Note

      Neural networks are named according to the objects they detect. The names can include the size of the neural network (Nano, Medium, Large), which indicates the amount of consumed resources. The larger the neural network, the higher the accuracy of object recognition, but the greater the load on the CPU.

  6. By default, the standard neural network is initialized according to the object type selected in step 5 and the device selected in step 7. You don't need to select standard networks for different processor types manually, as this is done automatically. If you have a custom neural network, click the button to the right of the Tracking model field and specify its file in the standard Windows Explorer window that opens.
    Attention!

    To train the neural network, contact AxxonSoft technical support (see Data collection requirements for neural network training). The use of the trained neural network for a particular scene allows you to detect only objects of a certain type (for example, a person, a cyclist, a motorcyclist, and so on).

  7. From the Device drop-down list, select the device on which the neural network will operate: the CPU, one of the NVIDIA GPUs, or one of the Intel GPUs. Auto (default value)—the device is selected automatically: the NVIDIA GPU takes the highest priority, followed by the Intel GPU, and then the CPU.
    Attention!
    1. We recommend using the GPU.
    2. It can take several minutes to launch the algorithm on the NVIDIA GPU after you apply the settings. You can use caching to speed up future launches (see Optimizing the operation of neural analytics on GPU).
    3. In the Detector Pack 2.0 subsystem, Intel HDDL support is removed. Thus, when you update from version 1.0, the Not supported option is automatically selected instead of this device option, and detectors won't operate. To resume detector operation, select the required device from the list.
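The Auto device priority described in step 7 can be sketched as follows. This is a minimal illustration under assumed names; the device kinds and the `select_device` function are not part of the product's API.

```python
# Hypothetical sketch of Auto device selection: NVIDIA GPUs take the
# highest priority, then Intel GPUs, then the CPU. Device records here
# are illustrative, not the product's actual identifiers.

def select_device(available):
    """Pick the inference device the way Auto mode is described to."""
    for kind in ("nvidia_gpu", "intel_gpu", "cpu"):
        for device in available:
            if device["kind"] == kind:
                return device["name"]
    raise RuntimeError("no suitable device found")
```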
  8. From the Process drop-down list, select which objects must be processed by the neural network:
    • All objects—moving and stationary objects;
    • Only moving objects—an object is considered to be moving if, during the entire lifetime of its track, it shifted by more than 10% of its width or height. If you use this parameter, you can reduce the number of false positives;
    • Only stationary objects—an object is considered stationary if, during the entire lifetime of its track, it shifted by no more than 10% of its width or height. If the stationary object starts moving, the detector generates an event, and the object is no longer considered stationary.
      Note

      The Only moving objects and Only stationary objects options aren't mutually exclusive, as some tracks cannot be classified as either moving or stationary. First, the neural network detects all objects, and after that, the detector filters out unnecessary tracks in accordance with the selected value of the Process setting.
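The 10% displacement rule from step 8 can be sketched in code. This is an illustrative approximation under assumed names and a simplified track representation, not the detector's internal logic.

```python
# Illustrative sketch of the moving/stationary rule: a track counts as
# moving if, over its lifetime, it shifted by more than 10% of its
# width or height.

def is_moving(track, ratio=0.10):
    """track: list of (x, y, w, h) bounding boxes over the track lifetime."""
    x0, y0, w0, h0 = track[0]
    for x, y, w, h in track[1:]:
        if abs(x - x0) > ratio * w0 or abs(y - y0) > ratio * h0:
            return True
    return False

def keep_track(track, mode):
    """Filter a track according to the Process setting."""
    if mode == "All objects":
        return True
    if mode == "Only moving objects":
        return is_moving(track)
    return not is_moving(track)  # Only stationary objects
```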

  9. From the Camera position drop-down list, select:
    1. Wall—objects are detected only if their lower part gets into the area of interest specified in the detector settings.
    2. Ceiling—objects are detected even if their lower part doesn't get into the area of interest specified in the detector settings.

...

  1. Go to the Additional settings tab on the settings panel of the neurotracker.

  2. In the Recognition threshold [0, 100] field, specify the neurotracker sensitivity—an integer in the range from 0 to 100.

    Note

    The neurotracker sensitivity is determined experimentally. The lower the sensitivity, the higher the probability of false alarms. The higher the sensitivity, the lower the probability of false alarms, however, some useful tracks can be skipped (see Examples of configuring neural tracker for solving typical tasks).
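The effect of the recognition threshold can be shown with a minimal sketch; the detection tuples and function name are assumptions for illustration only.

```python
# A minimal sketch of threshold filtering: only detections whose
# confidence (0..100) is at or above the threshold are kept. A lower
# threshold keeps more detections, raising the chance of false alarms.

def filter_detections(detections, threshold):
    """detections: list of (label, confidence) pairs."""
    return [d for d in detections if d[1] >= threshold]
```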

  3. In the Frames processed per second [0.016, 100] field, specify the number of frames per second that the neural network processes, in the range from 0.016 to 100. For all other frames, interpolation is performed—intermediate values are computed from the available discrete set of known values. The greater the value of this parameter, the more accurate the tracking, but the higher the load on the processor.
    Note

    The recommended value is at least 6 FPS. For fast moving objects (running person, vehicle)—at least 12 FPS (see Examples of configuring neural tracker for solving typical tasks).
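The interpolation mentioned in step 3 can be sketched as plain linear interpolation between two processed frames. This is a simplified illustration; the actual interpolation scheme used by the tracker is not specified here.

```python
# Hedged sketch: the network processes only some frames; a coordinate
# for an in-between frame is interpolated linearly from the two nearest
# processed frames.

def interpolate_position(t, t0, p0, t1, p1):
    """Interpolate coordinate p at time t between processed frames at t0 and t1."""
    if t1 == t0:
        return p0
    alpha = (t - t0) / (t1 - t0)
    return p0 + alpha * (p1 - p0)
```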

  4. In the Minimum number of triggering [2, 100] field, specify the minimum number of neurotracker triggerings required to display the object track. The higher the value, the longer it takes from the moment the object is detected until its track is displayed. A low value can lead to false positives. The default value is 6; the value range is from 2 to 100. A number greater than the maximum or less than the minimum of this range is automatically adjusted to the maximum or minimum value, respectively.
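The clamping and display behavior described in step 4 can be sketched as follows; the function names are illustrative, not part of the product.

```python
# Sketch of the "minimum number of triggerings" behavior: a track is
# shown only once the detector has confirmed it the configured number
# of times; out-of-range settings are clamped to the nearest bound.

def clamp_triggerings(value, lo=2, hi=100):
    """An entered number outside [lo, hi] is adjusted to the bound."""
    return max(lo, min(hi, value))

def should_display(triggerings, minimum=6):
    """Display the track once enough triggerings have accumulated."""
    return triggerings >= clamp_triggerings(minimum)
```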
  5. In the Track hold time (s) field, specify the time in seconds after which the object track is considered lost in the range from 0.3 to 1000. This parameter is useful in situations when one object in the frame temporarily overlaps another. For example, when a large vehicle completely overlaps a small one.

    Note

    If an object (track) is close to the frame boundary, then approximately half of the time specified in the Track hold time (s) field must elapse from the moment the object disappears from the frame until its track is deleted.
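The hold-time behavior, including the halved timeout near the frame boundary, can be sketched as below. This is an assumption-laden illustration; the real implementation's timing details are not documented here.

```python
# Sketch of track retention: a track is considered lost once it hasn't
# been updated for the hold time; near the frame boundary, roughly half
# the hold time applies, per the note above.

def is_track_lost(last_seen, now, hold_time, near_boundary=False):
    """All times in seconds; hold_time corresponds to Track hold time (s)."""
    timeout = hold_time / 2 if near_boundary else hold_time
    return (now - last_seen) >= timeout
```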

  6. Set the Scanning mode checkbox to detect small objects. If you enable this mode, the load on the system increases. That is why, in step 3, we recommend specifying a small number of frames processed per second. By default, the checkbox is cleared. For more information on the scanning mode, see Configuring the Scanning mode.
  7. If necessary, specify the class of the detected object in the Target classes field. If you want to display tracks of several classes, separate them with a comma and a space. For example: 1, 10.
    The numerical values of classes for the embedded neural networks: 1—Human/Human (top-down view), 10—Vehicle.
    Note
    1. If you leave the field blank, the tracks of all classes available in the neural network are displayed (see the Object type and Neural network file parameters).
    2. If you specify a class/classes present in the neural network, the tracks of the specified class/classes are displayed.
    3. If you specify both a class/classes present in the neural network and a class/classes missing from it, the tracks of the class/classes present in the network are displayed.
    4. If you specify only a class/classes missing from the neural network, the tracks of all classes available in the neural network are displayed.
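The four filtering rules above can be condensed into one sketch. The parsing and the fallback-to-all-classes behavior follow the note; the function itself is an illustration, not the product's parser.

```python
# Illustrative sketch of Target classes filtering: classes present in
# the network are kept; an empty field, or a list with no classes the
# network knows, means every available class is displayed. IDs 1
# (human) and 10 (vehicle) follow the embedded networks.

def classes_to_display(target_field, network_classes):
    """target_field: e.g. "1, 10" or ""; network_classes: set of ints."""
    requested = {int(c) for c in target_field.split(",") if c.strip()}
    matched = requested & set(network_classes)
    # Rules 1 and 4: no valid selection falls back to all classes.
    return matched if matched else set(network_classes)
```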

...

  1. Go to the Neurofilter tab on the settings panel of the neurotracker.

  2. Set the Enable filtering checkbox to enable the neurofilter. By default, the checkbox is cleared.
  3. By default, the standard neural network is initialized according to the device selected in step 4. You don't need to select standard networks for different processor types manually, as this is done automatically. If you have a custom neural network, click the button to the right of the Tracking model field and specify its file in the standard Windows Explorer window that opens.
    Attention!

    To train the neural network, contact AxxonSoft technical support (see Data collection requirements for neural network training). The use of a neural network trained for a particular scene allows you to detect only objects of a certain type (for example, a person, a cyclist, a motorcyclist, and so on).

  4. From the Device drop-down list, select the one on which the neural network will operate: the CPU, one of the NVIDIA GPUs, or one of the Intel GPUs.
    Note
    1. The device for the neurofilter must match the device specified for the neurotracker in step 7 of the main settings.
    2. It can take several minutes to launch the algorithm on the NVIDIA GPU after you apply the settings.
  5. Click the Apply button to save the changes.

    Note

    If necessary, create and configure the Neurotracker VMDA detectors on the basis of the Neurotracker object. The procedure is similar to creating and configuring the VMDA detectors for the regular tracker; the only difference is that you must create the Neurotracker VMDA detectors on the basis of the Neurotracker object, not the Tracker object (see Creating and configuring the VMDA detection).

    Also, when you select the Staying in the area for more than 10 sec detector type, the time the object stays in the zone, after which the Neurotracker VMDA detectors generate an event, is configured using the LongInZoneTimeout2 registry key, not LongInZoneTimeout. The alarm generation mode is set for any type of VMDA detector in the same way as for the VMDA detector for the regular tracker, using the VMDA.oneAlarmPerTrack registry key (see Registry keys reference guide).

...