Documentation for Axxon PSIM 1.0.0-1.0.1.



The forensic search in the archive is a search for video records using video image metadata. The search is performed by the parameters of an object that appears in the camera's field of view (FOV), for example, its direction of movement.

Note.

Since metadata is recorded to the VMDA metadata storage at the same time as the archive is being recorded, the forensic search cannot be performed in an archive imported from edge storage. The process of importing an archive from edge storage is described in the Configuring the access to the archive in edge storage section.

Such criteria as Line crossing and Motion in the area, which are available in the video surveillance window, are used for the forensic search. For more information about these search criteria, see Search by line crossing and Search by motion in the area.
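Conceptually, a Line crossing search reduces to checking whether any step of an object's stored trajectory intersects the user-drawn line. The sketch below is purely illustrative of that geometric check; the function names and data layout are assumptions, not the Axxon PSIM API.

```python
def cross(o, a, b):
    """2D cross product of vectors OA and OB."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def segments_intersect(p1, p2, q1, q2):
    """True if segment p1-p2 strictly crosses segment q1-q2 (general position)."""
    d1 = cross(q1, q2, p1)
    d2 = cross(q1, q2, p2)
    d3 = cross(p1, p2, q1)
    d4 = cross(p1, p2, q2)
    return (d1 * d2 < 0) and (d3 * d4 < 0)

def crosses_line(trajectory, line_a, line_b):
    """True if any step of the trajectory crosses the search line.

    trajectory: list of (x, y) points recorded for one tracked object.
    """
    return any(
        segments_intersect(trajectory[i], trajectory[i + 1], line_a, line_b)
        for i in range(len(trajectory) - 1)
    )
```

A search by line crossing would then evaluate this predicate against each stored trajectory in the requested time interval.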

To perform the forensic search in Axxon PSIM, it is necessary to create and configure the following objects:

  1. The VMDA metadata storage (see Creating and configuring VMDA metadata storage).
  2. The Tracker (see Creating and configuring the Tracker object).
  3. Optional: the Neurotracker object, which is part of the DetectorPack subsystem, for forensic search by metadata received from the Neurotracker. For details on configuring and operating this module, see the DetectorPack User Guide in the AxxonSoft documentation repository.

One VMDA metadata storage can be used with several Axxon PSIM Servers with Trackers, depending on the resources available on the metadata storage Server. Refer to the AxxonSoft Platform Calculator for a more precise calculation of the required number of Servers.

It is possible to create VMDA detection tools on the basis of the Tracker object (see Creating and configuring the VMDA detection). If VMDA detection tools are configured, their triggering can initiate video recording to the archive. When recording is enabled, the Tracker object records all trajectories of detected objects, regardless of whether VMDA detection tools are created on its basis.

The Tracker object and the VMDA detection tools also support extended object classification and object classification using neural filters. The extended object classification supports the following object types:

  • a human;
  • a group of humans (with the option to count the number of people in the group);
  • a vehicle;
  • noise;
  • an object carried into the area;
  • an object carried out of the area;
  • other.
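Once trajectories are stored with an extended class label, a forensic search by object type amounts to filtering the track metadata by class within a time window. The record layout and function below are illustrative assumptions for clarity, not the VMDA storage schema.

```python
from dataclasses import dataclass

@dataclass
class TrackRecord:
    """One stored metadata sample for a tracked object (illustrative)."""
    track_id: int
    object_class: str  # e.g. "human", "group", "vehicle", "noise", "other"
    timestamp: float   # seconds since some epoch

def search_by_class(records, wanted, time_from, time_to):
    """Return the ids of tracks of the wanted class within the time window."""
    return sorted({
        r.track_id
        for r in records
        if r.object_class == wanted and time_from <= r.timestamp <= time_to
    })

# Usage: find all "human" tracks recorded between t=0 and t=20.
records = [
    TrackRecord(1, "human", 10.0),
    TrackRecord(2, "vehicle", 12.0),
    TrackRecord(1, "human", 15.0),
]
human_tracks = search_by_class(records, "human", 0.0, 20.0)
```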

This classification method is configured and used via scripts and registry keys. For details, see the CAM_VMDA_DETECTOR VMDA detection section of the Programming Guide.

Neural filters make it possible to classify any type of object and to cut off false detector alarms more accurately. Neural filters are configured individually for each use case (for details, see Configuring the neural filter).
