Documentation for Axxon PSIM 1.0.0-1.0.1.




The Tracker object registers the tracks of objects in the camera FOV during recording and saves them to the VMDA metadata storage.

To configure the Tracker object, do the following:

  1. Adjust the basic settings.
  2. Optional: add a mask to disable analysis in one or more parts of the camera FOV.
  3. Optional: set detected object sizes or configure the perspective settings (see Configuring perspective).

Basic settings

To adjust the basic settings of the Tracker object, do the following:

  1. Go to the Hardware tab in the System settings dialog box.
  2. In the objects tree on the Hardware tab, select the Camera object that corresponds to the camera whose video you want to analyze.
  3. Create the Tracker object based on the Camera object. The settings panel for the Tracker object appears on the right in the Hardware tab.
  4. Set the Show objects on image checkbox (1) if you want the objects to be framed in the surveillance window. The frames are displayed on live video only, e.g. in the Video surveillance monitor.

    Note

    To display the object's tracker ID, set the DrawDetectorNumbers string parameter to 1 in the HKEY_LOCAL_MACHINE\SOFTWARE\AxxonSoft\PSIM\Video registry key (HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\AxxonSoft\PSIM\Video on a 64-bit system). A scripted example of setting these parameters is given after this list.

    The frame color is controlled by the DrawDetectorColors parameter in the same registry key:

    • When the value is 1, the frame color is the average color of the framed area.
    • When the value is 0, the frames are white.
  5. To allow the VMDA detectors to monitor objects abandoned in the camera FOV, set the Abandoned objects detection checkbox (2). The video requirements for this feature are described in Video requirements to be met for abandoned object detection tool of the Tracker object operation, and the configuration of VMDA detection tools is described in Creating and configuring the VMDA detection. Customization specifics:

    1. For the Abandoned objects detection to operate, it is necessary to additionally configure the Abandoned object detection type (see Motion in the Area detection tool configuration).

    2. By default, when the Abandoned objects detection option is enabled, only abandoned and disappeared objects are framed on the video image when viewing live and archive video. To make the VMDA detector detect only disappeared objects or only abandoned objects, change the values of the VMDA.filterGivenOrTaken and VMDA.determineGivenTaken registry keys (for details, see Registry keys reference guide; for more information about working with the registry, see Working with Windows OS registry).
    3. Framing of abandoned and disappeared objects on the video image is available for standard video and converted fisheye video.

      Note

      If there is no need to monitor objects abandoned in the camera FOV, disable the Abandoned objects detection option to reduce the Server load. Disabling this option also disables the detectors configured to monitor abandoned objects.

  6. If the camera is installed on a moving object, set the Camera shake removal checkbox (3) to stabilize the image and reduce detection errors.

    Attention!

     When the Camera shake removal option is active, the Server load increases.
  7. If the video in the preview window is too distorted on the Tracker mask, Detection parameters, and Perspective tabs, set the Show actual video aspect ratio checkbox (4). In this case, the video may not occupy the entire area of the preview window.
  8. Go to the Basic settings tab (5).
  9. In the Metadata sources table, set the checkboxes next to the objects used for metadata creation (6):
    1. Internal source. The resources of the Tracker object are used as metadata source.
    2. Embedded detector. Metadata comes from the detectors embedded in the camera (see Embedded detectors).

      Note

      Track recording from the embedded detectors must be supported by the device, and this functionality must be integrated into Axxon PSIM.

    3. VideoIntellect detector. Metadata comes from VideoIntellect (see VideoIntellect embedded detector).
  10. Set the Sensitivity parameter by moving the slider to the required position (7). This value corresponds to the minimum averaged brightness of a moving object at which the detector triggers on its motion rather than on video signal noise (snow, rain, etc.).

    Note

    If the slider is in the leftmost position, the value of the Sensitivity parameter is selected automatically.
  11. Set the Waiting for loss slider to the position corresponding to the time during which a stopped object is still considered active and continues to be tracked by the detector (8). If the object remains motionless for longer than the set value, it is considered lost.

    Note

    If a lost object starts moving again, it is considered a new object.

    Note

    Fine-tuning of the abandoned objects detection tool is performed using the registry keys (see Registry keys reference guide).

  12. In the Objects in frame, no more than field, specify the maximum number of detected objects in the frame (9). If the number of objects is equal to or exceeds the specified value, the MD_LIMIT event is generated (see the CAM Camera section of the Guide for creating scripts (programming)); a sketch of this trigger rule is given after this list. If the parameter is not set or is equal to 0, this event is not generated.
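
The registry parameters from step 4 can also be set from a script. The following is a minimal sketch using Python's standard winreg module; the parameter names and the registry path are taken from the note in step 4, while the use of Python and the assumption that the parameters are string (REG_SZ) values are illustrative only.

    import winreg

    # Sketch only: show tracker IDs on object frames and use white frames.
    # Path for a 64-bit system, as given in step 4; on a 32-bit system use
    # SOFTWARE\AxxonSoft\PSIM\Video instead.
    KEY_PATH = r"SOFTWARE\Wow6432Node\AxxonSoft\PSIM\Video"

    # Writing to HKEY_LOCAL_MACHINE requires administrator rights.
    with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                            winreg.KEY_SET_VALUE) as key:
        # DrawDetectorNumbers = 1 -> the tracker ID is displayed on each frame
        winreg.SetValueEx(key, "DrawDetectorNumbers", 0, winreg.REG_SZ, "1")
        # DrawDetectorColors: 1 -> frame color is the average color of the area,
        #                     0 -> frames are white
        winreg.SetValueEx(key, "DrawDetectorColors", 0, winreg.REG_SZ, "0")

The VMDA.filterGivenOrTaken and VMDA.determineGivenTaken keys from step 5 can be changed in the same way; their location and allowed values are listed in the Registry keys reference guide.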
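
The MD_LIMIT trigger rule from step 12 can be summarized as a small function. This is only an illustration of the documented behavior, not the product's API; the function name and signature are hypothetical.

    def md_limit_triggered(objects_in_frame: int, max_objects: int) -> bool:
        """Hypothetical helper illustrating the documented MD_LIMIT rule."""
        if max_objects <= 0:
            # Parameter not set or equal to 0: the event is never generated.
            return False
        # The event is generated when the object count equals or exceeds the limit.
        return objects_in_frame >= max_objects

    # Example: with the limit set to 5, six detected objects generate MD_LIMIT.
    assert md_limit_triggered(6, 5) is True
    assert md_limit_triggered(4, 5) is False
    assert md_limit_triggered(3, 0) is False   # limit not set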

Tracker mask

To disable analysis in a part of the camera FOV, go to the Tracker mask tab (1), click the Set mask button (2), and specify the area in the video preview area. You can disable analysis in several areas, i.e. set several masks. To set an additional mask, click the Set mask button again and specify another masked area in the video preview area.

To zoom in on the video preview field for more precise selection of the masked area, click the Set mask button, then right-click the video preview area while holding down the Shift key. The video opens in a separate window, which can be resized by dragging its borders. After setting the mask in this window, close it by clicking the close button in the upper right corner.

Detected object sizes

To set the minimum and maximum sizes of the detected object, do the following:

Note

If the perspective configuration is enabled, the minimum and maximum object size parameters are ignored. The Width, m and Height, m parameters on the Perspective tab (see Configuring perspective) are used instead.

  1. Go to the Detection parameters tab (1).
  2. To stop the playback in the video preview area, click the Stop button (2).

    Note

    To resume the playback, click the Resume playback button.

  3. In the Minimum size group, specify the minimum size of the detected object as a percentage of the total image area (3), or click the Configure button and specify the size in the video preview area (4).

    Note

    The range of values of the Minimum size parameter is from 0 to 30% of the frame size.
  4. In the Maximum size group, specify the maximum size of the detected object as a percentage of the total image area (5), or click the Configure button and specify the size in the video preview area (6). A sketch showing how a percentage of the image area maps to pixel dimensions is given after this list.

    Note

    The maximum size must be greater than the minimum size and not greater than 100%. If the maximum size is equal to the minimum size, no detection is performed.
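
Because the minimum and maximum sizes are specified as a percentage of the total image area, the corresponding pixel dimensions depend on the frame resolution. The sketch below only illustrates this arithmetic for a square object; the 1920x1080 resolution and the helper function are assumptions made for the example.

    import math

    def size_percent_to_pixels(area_percent: float, frame_w: int, frame_h: int):
        """Hypothetical helper: side length (in pixels) of a square object
        occupying area_percent of the frame area."""
        area_px = frame_w * frame_h * area_percent / 100.0
        side = int(math.sqrt(area_px))
        return side, side

    # Example for a 1920x1080 frame: a 1% minimum size corresponds to a square
    # of roughly 144 x 144 pixels (1920 * 1080 * 0.01 = 20736 px^2).
    print(size_percent_to_pixels(1.0, 1920, 1080))   # (144, 144)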

Note

To save the changes, click the Apply button.