...

  1. Go to the Main settings tab.
  2. Set the Generate event on appearance/disappearance of the track checkbox (1) if an event must be generated when a track appears or disappears.
  3. Set the Show objects on image checkbox (2) if detected objects must be highlighted with a frame when viewing live video.
  4. Set the Save tracks to show in archive checkbox (3) if detected objects must be highlighted with a frame when viewing the archive.

    Note

    This parameter does not affect VMDA search and is used only for visualization. This parameter uses the titles database.


  5. Select the required neural network file with the tracking model (4).

  6. In the Detection threshold: 0..100 field (5), specify the object detection threshold in the range from 0 to 100.

    Note

    The object detection threshold is determined experimentally. The lower the threshold, the more false triggers there may be; the higher the threshold, the fewer false triggers, but some useful tracks may be missed (this trade-off is illustrated in a sketch after these steps).


  7. In the Frames processed per second [0.016, 100] field (6), set the number of frames per second that will be processed by the neural network. All other frames will be interpolated. The higher the specified value, the more accurate the tracking, but the higher the CPU load (illustrated in a sketch after these steps).
  8. In the Device drop-down list (7), select the device on which the neural network will operate.
  9. In the Minimum number of triggering field (8), specify the minimum number of neurotracker triggers required to display the object track. The higher the value, the longer it takes from the moment the object is detected until its track is displayed; however, a value that is too low can lead to false positives (illustrated in a sketch after these steps).
  10. In the Track hold time (s) field (9), specify the time in seconds after which the object track is considered lost. This parameter is useful when one object in the frame temporarily overlaps another, for example, when a larger car completely hides a smaller one (illustrated in a sketch after these steps).
  11. From the Process drop-down list (10), select which objects should be processed by the neural network:
    • All objects — moving and stationary objects.
    • Only moving objects — an object is considered moving if, during the entire lifetime of its track, it has shifted by more than 10% of its width or height. Using this option may reduce the number of false positives.
    • Only stationary objects — an object is considered stationary if, during the entire lifetime of its track, it has not shifted by more than 10% of its width or height (this criterion is illustrated in a sketch after these steps).
  12. Set the monitoring area on the video image:

    1. Click the Setup button (11). The Detection settings window will open.

    2. Click the Stop video button (1) to capture a video frame.
    3. Click the Area of interest button (2).
    4. Set the area in which objects will be detected (3).
    5. Click OK (4).
  13. You can use the neural filter to sort out video recordings that feature selected objects and their trajectories. For example, the neural tracker detects all freight trucks, and the neural filter keeps only those tracks that correspond to trucks with an open cargo door (this idea is illustrated in a sketch after these steps). To set up a neural filter, do the following:

    1. Go to the Neurofilter tab.
    2. Set the Enable filtering checkbox (1).

    3. Select the required neural network file for the neural filter (2).

    4. From the Device drop-down list (3), select the device on which the neural filter will operate.

  14. Click the Apply button (12).

    Note

    If necessary, create and configure the NeuroTracker VMDA detection tools on the basis of the Neurotracker object. The procedure is similar to creating and configuring the VMDA detection tools for a regular tracker; the only difference is that the NeuroTracker VMDA detection tools must be created on the basis of the Neurotracker object, not the Tracker object (for details, see Creating and configuring VMDA detection). Also, if you select the Staying in the area for more than 10 sec detector type, the time the object must stay in the zone before the NeuroTracker VMDA detection tools are triggered is configured with the LongInZoneTimeout2 registry key, not LongInZoneTimeout (see Registry keys reference guide).


...