To configure the Meta-detector, do the following:

  1. Go to the Detectors tab.

  2. Below the required camera, click Create… → Category: Retail → Meta-detector.

The detector is enabled by default. In order for the detector to generate events, you must specify text queries in English in the Text query parameters. You can specify up to three text queries.

If necessary, you can change the detector parameters. The list of parameters is given below:

Object features

  • Video stream (default: Main stream). If the camera supports multistreaming, select the stream for which detection is needed.

Other

  • Name (default: Meta-detector). Enter the detector name or leave the default name.
  • Enable (Yes/No; default: Yes). The detector is enabled by default. To disable the detector, select No.
  • Type (Meta-detector). The name of the detector type (non-editable field).
  • Number of frames processed per second (default: 1). Specify the number of frames per second for the detector to process. The value must be in the range [0.016, 100].
  • Decoder mode (Auto/CPU/GPU/HuaweiNPU; default: Auto). Select a processing resource for decoding video streams. When you select GPU, a stand-alone graphics card takes priority (decoding with Nvidia NVDEC chips). If there is no suitable GPU, Intel Quick Sync Video technology is used; otherwise, decoding falls back to the CPU.

Basic settings

  • Detection mode (CPU/Nvidia GPU 0/Nvidia GPU 1/Nvidia GPU 2/Nvidia GPU 3/Intel GPU; default: CPU). Select a processor for the neural network operation (see Hardware requirements for neural analytics operation and Selecting Nvidia GPU when configuring detectors).
  • Detection threshold (default: 20). Specify the detection threshold for objects, in percent. If the recognition probability falls below the specified value, the data is ignored. The higher the value, the higher the recognition accuracy, but some events from the detector may be missed. The value must be in the range [0.05, 100].
  • Text query 1. Enter a text query in English, for example: Human and dog, Running human, Fallen human. See the query guidelines below this list.
  • Text query 2. An additional query in the same format as Text query 1.
  • Text query 3. An additional query in the same format as Text query 1.

Attention!

Optimal query structure (illustrated in the first sketch below):

  • Object type (human, vehicle, and so on);
  • Key attribute of the object (color, size, and so on);
  • Surroundings (road, building, room, and so on);
  • Additional scene specifics (angle, position in the frame).

Examples: Woman wearing a black dress, White car pedestrian crossing top view.

We recommend using latest-generation AI chatbots, such as ChatGPT-4o, that can:

  • Generate optimized queries based on the syntax of the BLIP-2 model, which generates text descriptions from an image (see the second sketch below).
  • Work with web links.
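The recommended structure maps onto a simple string join. The compose_query helper below is purely illustrative and not part of the product; it only shows how the four recommended parts combine into a query:

    def compose_query(object_type: str, attribute: str = "",
                      surroundings: str = "", scene: str = "") -> str:
        """Join the recommended query parts, skipping any that are empty."""
        parts = [object_type, attribute, surroundings, scene]
        return " ".join(p for p in parts if p)

    print(compose_query("woman", attribute="wearing a black dress"))
    # woman wearing a black dress
    print(compose_query("white car", surroundings="pedestrian crossing", scene="top view"))
    # white car pedestrian crossing top view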

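Because queries follow the phrasing of BLIP-2 captions, captioning a few exported frames can help you match that phrasing. Below is a minimal sketch using the Hugging Face transformers implementation of BLIP-2; the checkpoint name is one public example, and frame.jpg stands for any frame exported from the camera:

    from PIL import Image
    from transformers import Blip2Processor, Blip2ForConditionalGeneration

    # Load a public BLIP-2 checkpoint (several sizes are available).
    processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
    model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b")

    image = Image.open("frame.jpg")  # a frame exported from the camera
    inputs = processor(images=image, return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=30)
    caption = processor.batch_decode(out, skip_special_tokens=True)[0].strip()
    print(caption)  # e.g. "a man walking a dog on a road"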
Advanced settings

  • Number of measurements in a row to trigger detection (default: 15). Set the minimum number of consecutive frames on which the detector must detect a match between a text query and the image for an event to be generated. The value must be in the range [10, 100]. A sketch of how this interacts with the frame rate and threshold settings is given below.
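As a rough model of how these settings combine: at the default 1 frame per second, 15 measurements in a row means an event can be generated no sooner than about 15 seconds after a match first appears. The sketch below is an assumption about the triggering logic, not the product's actual implementation:

    def should_trigger(confidences_pct: list[float],
                       threshold_pct: float = 20.0,
                       measurements: int = 15) -> bool:
        """True when the last `measurements` processed frames all matched a
        text query with a confidence at or above `threshold_pct` percent."""
        recent = confidences_pct[-measurements:]
        return len(recent) == measurements and all(c >= threshold_pct for c in recent)

    print(should_trigger([90.0] * 15))           # True
    print(should_trigger([90.0] * 14 + [10.0]))  # False: one frame fell below 20%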

Meta-search settings (starting with Detector Pack 3.14)

  • Search by queries (Yes/No; default: No). The parameter is disabled by default. Select Yes to save the description of a frame whenever it differs from the previous frame, so that you can later search the saved frames by text queries.
  • Ignore similar (Yes/No; default: Yes). The parameter is enabled by default: frames that are similar to previous frames are skipped. To disable ignoring of similar frames, select No.
  • Difference threshold (%) (default: 25). Specify how much, in percent, a frame must differ from the previous frame for the frame to be saved. The higher the value, the more frames are skipped without being saved as similar. The value must be in the range [0, 100]. With the default value of 25, frames that differ by 25% or less are skipped and are not saved. A sketch of this check is given below.
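For the Difference threshold, here is a minimal sketch of the skip-or-save decision, assuming a mean absolute pixel difference as the metric (the product's actual comparison method is not documented here):

    import numpy as np

    def differs_enough(frame: np.ndarray, prev: np.ndarray,
                       threshold_pct: float = 25.0) -> bool:
        """Save `frame` only if its mean absolute pixel difference from
        `prev` exceeds threshold_pct percent of the full 0-255 range."""
        diff_pct = np.abs(frame.astype(int) - prev.astype(int)).mean() / 255 * 100
        return diff_pct > threshold_pct

    rng = np.random.default_rng(0)
    prev = rng.integers(0, 256, (480, 640))
    print(differs_enough(prev.copy(), prev))  # False: identical frames are skipped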

To save the parameters of the detector, click the Apply button. To cancel the changes, click the Cancel button.

The Meta-detector is configured.
