...
- Arrange cameras so that their fields of view are directed downwards onto the surface (floor, ground) on which objects are moving. This ensures stable and accurate tracking.
- For each of these cameras, configure one of the following detectors: Object tracker, Neural tracker, Human tracker VL, or Motion detector. You cannot use other detectors. This feature requires metadata (see Metadata database).
Attention! For Tag&Track Lite to work based on the Motion detector, set the Object tracking parameter of the detector to Yes (see Configuring the Motion detector).
- For Tag&Track Lite to work in archive mode (see Using Tag&Track Lite in the archive mode), set the Record objects tracking parameter of the detector to Yes (see Configuring the Object tracker, Configuring the Neural tracker, Configuring the Human tracker VL, Configuring the Motion detector).
- Enable Object tracking in the surveillance window of each selected camera on the layout.
- Place all necessary cameras on the map (see Adding video cameras).
- Specify the exact location of each camera on the map (see Configuring a camera in standard map viewing mode). Configure the intersection areas of the fields of view for all adjacent cameras. Intersection areas are essential for tracking objects. The size of an intersection area must be at least three times the size of a typical tracked object (for example, a person).
- Configure Immersion mode for each camera (see Configuring cameras in immersion mode). You must specify four connections, matching points in the video image (for example, the four corners of an intersection) with their exact locations on the map. This is a key step for the feature to work; the sketch after this list illustrates the underlying geometry.
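The four connections in immersion mode effectively define a planar mapping between the camera image and the map. The sketch below is not part of the product and does not use its API; it only illustrates, with OpenCV and invented coordinates, how four image-to-map point correspondences determine where a tracked object appears on the map.

```python
# Minimal sketch of the geometry behind the four "connections":
# four matched points define a planar perspective transform (homography).
# All coordinates are hypothetical example values.
import numpy as np
import cv2

# Four points in the video image (pixels), e.g. the corners of an intersection.
image_points = np.float32([
    [210, 640],   # bottom-left corner in the frame
    [1080, 655],  # bottom-right corner
    [830, 270],   # top-right corner
    [340, 262],   # top-left corner
])

# The same four points on the map (map units, e.g. meters on the site plan).
map_points = np.float32([
    [12.0, 34.0],
    [22.0, 34.5],
    [21.5, 48.0],
    [12.5, 47.8],
])

# Homography mapping image coordinates onto the map plane.
H = cv2.getPerspectiveTransform(image_points, map_points)

# Project an arbitrary image point (e.g. the foot point of a tracked person)
# onto the map; perspectiveTransform expects an (N, 1, 2) array.
foot_point = np.float32([[[640, 500]]])
map_position = cv2.perspectiveTransform(foot_point, H)
print("Estimated map position:", map_position.ravel())
```

This is also why the camera placement steps above matter: the mapping assumes the tracked objects move on a single plane (the floor or ground) that is visible from above in the camera's field of view.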
...