Metadata is information that describes object-related content in the camera FOV.
In Axxon Next, metadata can be obtained in two ways: by analyzing video on the Server, or by receiving ready-made metadata from cameras with embedded analytics.
To extract metadata from video, the Server has to decompress and analyze the video stream, which increases its workload and thus limits the number of available camera channels.
The following tools are used for server-side analysis and metadata generation (an illustrative sketch of the resulting records follows this list):
Object tracker and Neural Tracker. These generate metadata containing the following information about moving objects in FOV: object type, position, size and color, motion speed and direction, etc.
VMD. VMD generates less accurate metadata; it does not detect object type or color.
Face detection tools. Face detection metadata contains facial bounding boxes and their positions, as well as facial vectors.
Automatic Number Plate Recognition (LPR/ANPR) tools. ANPR metadata contains license plate bounding boxes and their positions, as well as vehicle registration numbers.
Pose detection tools. Their metadata contains information on the position and pose (skeleton) of every person in FOV.
Equipment detection tool (PPE). Equipment detection metadata contains information about the position of all the people in FOV.
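The records below are a minimal Python sketch of what such per-detector metadata might look like. All class and field names (BoundingBox, TrackedObject, face_vector, and so on) are illustrative assumptions that only mirror the fields listed above; they are not the actual Axxon Next metadata schema.

```python
# Illustrative sketch only: hypothetical field names, not the Axxon Next schema.
from dataclasses import dataclass, field
from typing import List, Optional, Tuple


@dataclass
class BoundingBox:
    """Object rectangle within the frame (normalized coordinates assumed)."""
    x: float
    y: float
    width: float
    height: float


@dataclass
class TrackedObject:
    """Object tracker / Neural Tracker record: type, position, size, color, motion."""
    object_type: str                    # e.g. "human", "vehicle"
    box: BoundingBox                    # position and size in FOV
    color: Optional[str] = None         # dominant color, if detected
    speed: Optional[float] = None       # motion speed
    direction: Optional[float] = None   # motion direction, degrees


@dataclass
class FaceRecord:
    """Face detection record: facial bounding box plus a facial vector."""
    box: BoundingBox
    face_vector: List[float] = field(default_factory=list)


@dataclass
class PlateRecord:
    """ANPR record: license plate bounding box and registration number."""
    box: BoundingBox
    plate_number: str = ""


@dataclass
class PoseRecord:
    """Pose detection record: skeleton key points for one person."""
    box: BoundingBox
    keypoints: List[Tuple[float, float]] = field(default_factory=list)


# Example: a single tracked object as the Object tracker might describe it.
person = TrackedObject(
    object_type="human",
    box=BoundingBox(x=0.4, y=0.2, width=0.1, height=0.3),
    speed=1.2,
)
```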
The metadata is used for the following system options:
If a camera uses several sources of metadata, the required source is selected automatically, except for MomentQuest. Face and license plate searches use only metadata from the corresponding detection tools.
Metadata is stored in the object trajectory database, located in the vmda_db\VMDA_DB.0\vmda_schema subfolder of the local Server directory selected during Axxon Next installation (see Installation).
If necessary, you can place metadata on any available network storage (see Configuring storage of the system log and metadata).
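As a small illustration, the sketch below joins a Server directory with the vmda_db\VMDA_DB.0\vmda_schema subfolder named above and checks that it exists. The Server directory used here is a hypothetical placeholder; substitute the directory chosen during installation.

```python
# Sketch only: SERVER_DIR is a hypothetical placeholder, not a documented default.
from pathlib import Path

SERVER_DIR = Path(r"C:\AxxonNextMetadata")  # assumed location; adjust to your setup
METADATA_DIR = SERVER_DIR / "vmda_db" / "VMDA_DB.0" / "vmda_schema"

if METADATA_DIR.is_dir():
    print(f"Object trajectory database found: {METADATA_DIR}")
else:
    print(f"Metadata folder not found at {METADATA_DIR}; check the Server storage settings.")
```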