The Tracker object registers object tracks in the camera FOV during recording and saves them to the VMDA metadata storage.
To configure the Tracker object, do the following:
Select the Show objects on image checkbox (1) if the objects need to be framed in the surveillance window. The frames are displayed on both live and archive video, e.g. in the Video surveillance monitor, Event Viewer, Operator protocol, etc.
To display the object's tracker ID, set the DrawDetectorNumbers string parameter to 1 in the HKEY_LOCAL_MACHINE\SOFTWARE\AxxonSoft\INTELLECT\Video registry key (HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\AxxonSoft\INTELLECT\Video on 64-bit systems). The frame color is adjusted by the DrawDetectorColors parameter in the same registry key.
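If the value needs to be set programmatically (for example, when preparing several Servers), a minimal sketch using Python's standard winreg module is shown below. It only mirrors the instruction above and assumes the Intellect registry branch already exists; writing to HKEY_LOCAL_MACHINE requires administrator rights.

```python
# Minimal sketch: enable the tracker ID overlay by setting DrawDetectorNumbers to "1".
# Assumes the Intellect registry branch already exists; run with administrator rights.
import platform
import winreg

# Pick the branch for 32-bit vs 64-bit Windows, as described above.
SUBKEY = (r"SOFTWARE\Wow6432Node\AxxonSoft\INTELLECT\Video"
          if platform.machine().endswith("64")
          else r"SOFTWARE\AxxonSoft\INTELLECT\Video")

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, SUBKEY, 0, winreg.KEY_SET_VALUE) as key:
    # DrawDetectorNumbers is a string parameter; "1" enables drawing of tracker IDs.
    winreg.SetValueEx(key, "DrawDetectorNumbers", 0, winreg.REG_SZ, "1")
```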
To allow the VMDA detectors to monitor objects abandoned in the camera FOV, select the Abandoned objects detection checkbox (2). For the video requirements this feature must meet, see Video requirements to be met for abandoned object detection tool of the Tracker object operation.
If there is no need to monitor objects abandoned in the camera FOV, disable the Abandoned objects detection option to reduce the Server load. Disabling this option also disables the detectors configured to monitor abandoned objects. |
By default, if the Abandoned objects detection option is enabled, only the abandoned and disappeared objects are framed on the video image when viewing live and archive video. To make the VMDA detector detect only disappeared objects or only abandoned objects, change the values of the VMDA.filterGivenOrTaken and VMDA.determineGivenTaken registry keys (for details, see Registry keys reference guide; for more information about working with the registry, see Working with Windows OS registry). A sketch for inspecting these keys is given after the note below.
The function of framing the abandoned and disappeared objects on the video image is available for standard video and converted fisheye video. |
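As an illustration only, the snippet below reads the current values of these two keys with Python's winreg module. The registry path is an assumption (the same SOFTWARE\AxxonSoft\INTELLECT\Video branch mentioned above); the authoritative key location and the values that select the required behavior are given in the Registry keys reference guide.

```python
# Illustration only: read the current values of the two keys named above.
# The path is an assumption (the same branch as for DrawDetectorNumbers);
# see the Registry keys reference guide for the authoritative location and values.
import winreg

SUBKEY = r"SOFTWARE\AxxonSoft\INTELLECT\Video"  # SOFTWARE\Wow6432Node\... on 64-bit systems

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, SUBKEY) as key:
    for name in ("VMDA.filterGivenOrTaken", "VMDA.determineGivenTaken"):
        try:
            value, value_type = winreg.QueryValueEx(key, name)
            print(f"{name} = {value!r} (type {value_type})")
        except FileNotFoundError:
            print(f"{name} is not set")
```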
If the camera is installed on a moving object, select the Camera shake removal checkbox (3) to stabilize the image and reduce detection errors.
When the Camera shake removal option is active, the Server load increases. |
Embedded detector. Metadata comes from the detectors embedded in the camera (see Embedded detectors).
Recording of tracks from embedded detectors must be supported by the device. Furthermore, this functionality must be integrated into Intellect. |
Set the Sensitivity parameter by moving the slider to the required position (7). This value corresponds to the minimum averaged brightness of a moving object at which the detector triggers on its motion rather than on video signal noise (snow, rain, etc.). A conceptual sketch follows the note below.
If the slider is in the leftmost position, the value of the Sensitivity parameter is selected automatically. |
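The following conceptual sketch (not Intellect's actual algorithm) illustrates the idea described above: a moving object triggers the detector only if its averaged brightness reaches the sensitivity threshold, so faint fluctuations such as rain or snow noise are ignored.

```python
# Conceptual sketch (not Intellect's implementation): the sensitivity value acts as the
# minimum averaged brightness a moving object must have for the detector to trigger,
# so faint fluctuations such as rain or snow noise do not produce false triggers.
def object_triggers(object_pixel_brightness, sensitivity):
    """object_pixel_brightness: 0-255 brightness values of the moving object's pixels."""
    averaged_brightness = sum(object_pixel_brightness) / len(object_pixel_brightness)
    return averaged_brightness >= sensitivity

faint_noise = [12, 8, 15, 10, 9, 14]    # dim fluctuations, e.g. rain or snow
bright_object = [180, 175, 190, 170]    # a person walking through the FOV
print(object_triggers(faint_noise, sensitivity=60))    # False: treated as noise
print(object_triggers(bright_object, sensitivity=60))  # True: real motion
```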
Set the Waiting for loss slider to the position corresponding to the time during which a motionless object is still considered active and tracked by the detector (8). If the object stays motionless longer than the set value, it is considered lost.
If a lost object starts moving again, it is considered a new object, as illustrated in the sketch below. |
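The sketch below (an illustration only, not Intellect's code) shows this timeout logic: a track stays active while the object keeps moving, is marked lost once the object has been motionless longer than the Waiting for loss value, and any subsequent motion starts a new track.

```python
# Conceptual sketch (not Intellect's implementation) of the "Waiting for loss" logic:
# an object that stays motionless longer than the timeout is marked lost, and if it
# moves again afterwards it is registered as a new object with a new track ID.
import itertools

_track_ids = itertools.count(1)

class Track:
    def __init__(self, now):
        self.track_id = next(_track_ids)
        self.last_motion_time = now
        self.lost = False

    def update(self, now, is_moving, waiting_for_loss_sec):
        if self.lost and is_moving:
            # A lost object that starts moving again becomes a new object.
            return Track(now)
        if is_moving:
            self.last_motion_time = now
        elif now - self.last_motion_time > waiting_for_loss_sec:
            self.lost = True
        return self

# Example: motionless longer than the timeout -> lost; motion afterwards -> new track.
track = Track(now=0)
track = track.update(now=5, is_moving=False, waiting_for_loss_sec=3)  # marked lost
track = track.update(now=6, is_moving=True, waiting_for_loss_sec=3)   # new track
print(track.track_id)  # 2
```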
Fine-tuning of the abandoned objects detection tool is performed using the registry keys (see Registry keys reference guide). |
To exclude an area of the camera FOV from analysis, go to the Tracker mask tab (1), click the Set mask button (2), and specify the area in the video preview area. Analysis can be disabled in several areas, i.e. several masks can be set. To set an additional mask, click the Set mask button again and specify an additional masked area in the video preview area.
To zoom in on the video preview area for more precise selection of the masked area, click the Set mask button, then right-click the video preview area while holding down the Shift key. The video opens in a separate window, which can be resized by dragging its borders. After setting the mask in this window, close it by clicking the button in the upper right corner.
To set the minimum and maximum sizes of the detected object, do the following:
If the perspective configuration is enabled, the maximum and minimum object size parameters for detection are ignored; the Width, m and Height, m parameters in the Perspective tab (see Configuring perspective) are used instead. |
To stop the playback in the video preview area, click the Stop button (2).
To resume the playback, click the Resume playback button. |
In the Minimum size group, specify the minimum size of the detected object as a percentage of the total image area (3), or click the Configure button and specify the size in the video preview area (4).
The range of values of the Minimum size parameter is from 0 to 30% relative to the frame size. |
In the Maximum size group, specify the maximum size of the detected object as a percentage of the total image area (5), or click the Configure button and specify the size in the video preview area (6). The size arithmetic is illustrated in the sketch after the note below.
Maximum size should be greater than the minimum size and not greater than 100%. If the maximum size is equal to the minimum size, no detection is performed. |
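For orientation, the helper below shows the arithmetic behind these settings (an illustration, not part of the product): it converts an object's pixel dimensions into a percentage of the total frame area and checks it against the configured minimum and maximum.

```python
# Illustration of the size arithmetic (not part of the product): an object's size is
# expressed as a percentage of the total frame area and compared with the configured
# minimum (0-30%) and maximum (greater than the minimum, at most 100%) detection sizes.
def object_area_percent(obj_w_px, obj_h_px, frame_w_px, frame_h_px):
    return 100.0 * (obj_w_px * obj_h_px) / (frame_w_px * frame_h_px)

def within_size_limits(area_percent, min_percent, max_percent):
    if not (0 <= min_percent <= 30) or not (min_percent < max_percent <= 100):
        raise ValueError("invalid size limits")
    return min_percent <= area_percent <= max_percent

# Example: a 200x150 px object in a 1920x1080 frame covers about 1.45% of the frame.
area = object_area_percent(200, 150, 1920, 1080)
print(round(area, 2), within_size_limits(area, min_percent=1, max_percent=25))
```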
To save the changes, click the Apply button. |