Documentation for Detector Pack 2.8.




Attention!

The Neurotracker module works only in Intellect version 4.11.0 or higher.

The Neurotracker module is configured on the Neurotracker object settings panel. This object is created on the basis of the Camera object on the Hardware tab of the System settings dialog window.

Main settings

The Neurotracker module is configured as follows:

  1. Go to the Main settings tab.
  2. Set the Generate event on appearance/disappearance of the track checkbox (1) to generate an event when a track appears or disappears.

    Note

    The track appearance/disappearance events are generated only in the debug window (see Enabling the Debug window). They are not displayed in the Events protocol.

  3. Set the Show objects on image checkbox (2) to highlight detected objects with a frame when viewing live video.
  4. Set the Save tracks to show in archive checkbox (3) to highlight detected objects with a frame when viewing the archive.

    Note

    This parameter does not affect the VMDA search and is used only for visualization. It relies on the titles database.

  5. From the Object type drop-down list (4), select the object type if the path to the neural network file is not set (see step 6):
    • Human—the camera is directed at a person at an angle of 100-160°;
    • Human (top-down view)—the camera is directed at a person from above at a slight angle;
    • Vehicle—the camera is directed at a vehicle at an angle of 100-160°;
    • Person and vehicle (Nano)—person and vehicle recognition, small neural network;
    • Person and vehicle (Medium)—person and vehicle recognition, medium-sized neural network;
    • Person and vehicle (Large)—person and vehicle recognition, large neural network.

      Note

      Neural networks are named taking into account the objects they detect. The names can include the size of the neural network (Nano, Medium, Large), which indicates the amount of consumed resources. The larger the neural network, the higher the accuracy of object recognition.

  6. If a custom neural network is prepared for use, then in the Tracking model field, click the button (5) and select the file in the standard Windows Explorer window that opens. If the field is left blank, the default neural networks are used for detection; they are selected automatically depending on the selected object type (4) and device (8).
  7. In the Recognition threshold [0, 100] field (6), specify the neural tracker sensitivity—an integer value in the range from 0 to 100.

    Note

    The object detection threshold is determined experimentally. The lower the threshold, the more false triggers there may be; the higher the threshold, the fewer false triggers, but some useful tracks may be missed. See Examples of configuring neural tracker for solving typical tasks.

  8. In the Frames processed per second [0.016, 100] field (7), set the number of frames per second in the range from 0.016 to 100 that will be processed by the neural network. All other frames will be interpolated. The higher the specified value, the more accurate the tracking, but the higher the CPU load.
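The frame sampling described in this step can be sketched as follows. This is an illustrative model only, not the module's actual implementation; the function name and the uniform selection of frames are assumptions:

```python
def processed_frame_indices(stream_fps, processed_fps, duration_s):
    """Illustrative sketch: which frames would go to the neural network.

    The module only exposes the 'Frames processed per second' setting;
    this uniform selection logic is an assumption for illustration.
    Frames between the selected ones are interpolated.
    """
    step = max(1, round(stream_fps / processed_fps))
    total = int(stream_fps * duration_s)
    return list(range(0, total, step))

# On a 25 fps stream with 12.5 frames processed per second,
# every second frame is analyzed by the neural network:
print(processed_frame_indices(25, 12.5, 1))  # [0, 2, 4, ..., 24]
```

Raising the setting toward the stream frame rate shrinks the step to 1, so every frame is processed, which is what increases the CPU load.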
  9. In the Device drop-down list (8), select the device on which the neural network will operate. Auto—the device is selected automatically: GPU gets the highest priority, followed by Intel GPU, then CPU.
  10. In the Minimum number of triggering field (9), specify the minimum number of neurotracker triggers required to display the object track. The higher the value, the longer it takes from the moment the object is detected until its track is displayed; a low value, however, can lead to false positives. The default value is 6; the valid range is 1-10. A value outside this range is automatically adjusted to the nearest limit.
  11. In the Track hold time (s) field (10), specify the time in seconds after which the object track is considered lost. This parameter is useful in situations where one object in the frame temporarily overlaps another. For example, when a large car completely overlaps a small one.

    Note

    If the object track is close to the frame boundary, its track is deleted after approximately half the time specified in the Track hold time (s) field has elapsed since the object disappeared from the frame.

  12. From the Process drop-down list (11), select which objects should be processed by the neural network:
    • All objects—moving and stationary objects;
    • Only moving objects—an object is considered to be moving if during the entire lifetime of its track, it has shifted by more than 10% of its width or height. Using this parameter may reduce the number of false positives;
    • Only stationary objects—an object is considered stationary if during the entire lifetime of its track, it has shifted by no more than 10% of its width or height.
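The 10% displacement rule behind the Process options above can be sketched as follows. The names and the center-point representation are illustrative assumptions, not the module's actual code:

```python
def is_moving(positions, width, height, threshold=0.10):
    """Return True if the track shifted by more than `threshold` of the
    object's bounding-box width or height at any point in its lifetime.

    positions: (x, y) centers of the object over the track's lifetime.
    Illustrative sketch of the rule described above.
    """
    x0, y0 = positions[0]
    return any(abs(x - x0) > threshold * width or
               abs(y - y0) > threshold * height
               for x, y in positions[1:])

# A 40x40 px object that drifted 5 px horizontally counts as moving,
# because 5 px exceeds 10% of its 40 px width:
print(is_moving([(100, 100), (103, 100), (105, 100)], 40, 40))  # True
```

An object that never exceeds the threshold during its whole track lifetime would fall under Only stationary objects.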

Setting the area of interest

  1. Click the Settings button (12). The Detection settings window will open.
  2. Click the Stop video button (1) to capture a video frame.
  3. Click the Area of interest button (2).
  4. On the captured video frame (3), set the anchor points of the area in which you want to analyze the situation (1) by sequentially clicking the left mouse button. Only one area can be added; if you try to add a second area, the first one is deleted. After the area is added, the rest of the video image is darkened.
  5. Click the OK button (2).

Neurofilter settings

You can use the neural filter to sort out some of the tracks. For example, the neural tracker detects all freight trucks, while the neural filter keeps only the tracks of trucks with an open cargo door. To configure the neural filter, do the following:

    1. Go to the Neurofilter tab.
    2. Set the Enable filtering checkbox (1).

    3. Select the required neural network file for the neural filter (2). If the network path is not set, then the default network is used depending on the selected device (3). If the network path is specified, the neural filter is created with the specified network.

    4. From the Device drop-down list (3), select the device on which the neural network for the neural filter will operate. Auto—the device is selected automatically: GPU gets the highest priority, followed by Intel GPU, then CPU.

    5. Click the Apply button (13).

    Note

    If necessary, create and configure the NeuroTracker VMDA detection tools on the basis of the Neurotracker object. The procedure is similar to creating and configuring the VMDA detection tools for a regular tracker; the only difference is that the NeuroTracker VMDA detection tools must be created on the basis of the Neurotracker object, not the Tracker object (see Creating and configuring the VMDA detection). Also, if you select the Staying in the area for more than 10 sec detector type, the time the object must stay in the zone before the NeuroTracker VMDA detection tools are triggered is configured using the LongInZoneTimeout2 registry key, not LongInZoneTimeout. The alarm generation mode for any type of VMDA detection tools is configured in the same way as for a regular tracker, using the VMDA.oneAlarmPerTrack registry key (see Registry keys reference guide).
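    Registry keys like these are typically set with Registry Editor or imported from a .reg file. The sketch below is hypothetical: the key path and value types are assumptions for illustration only, so take the actual location and value types from the Registry keys reference guide before applying anything.

```
Windows Registry Editor Version 5.00

; Hypothetical path -- verify against the Registry keys reference guide
[HKEY_LOCAL_MACHINE\SOFTWARE\ITV\INTELLECT]
; Dwell time, in seconds, for the "Staying in the area for more than 10 sec"
; detector type when used with the NeuroTracker VMDA detection tools
"LongInZoneTimeout2"="30"
; Generate one alarm per track for VMDA detection tools
"VMDA.oneAlarmPerTrack"="1"
```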

The Neurotracker software module configuration is complete.

If events are periodically received from several objects, then for convenience, you can create and configure neurotracker track counters (see Configuring the neurotracker track counter).
