Tip

Hardware requirements for the Face detector VL and its sub-detectors

Hardware requirements for neural analytics operation

Video stream and scene requirements for the Face detector VL and its sub-detectors

Image requirements for the Face detector VL and its sub-detectors

Checking in lists of faces

Examples of macros used when working with lists of faces

To configure the Face detector VL, do the following:

  1. Go to the Detectors tab.
  2. Below the required camera, click Create… → Category: Face → Face detector VL.

By default, the detector is enabled and set to detect faces.

If necessary, you can change the detector parameters. The list of parameters is given in the table:

Parameter | Value | Description
Object features
Check in lists | Yes, No | The Check in lists parameter is disabled by default. If you want to use this detector to check in lists of faces, select the Yes value (see Checking in lists of faces)
Record objects tracking | Yes, No | The metadata of the video stream is recorded to the database by default. To disable the parameter, select the No value
Video stream | Main stream, Second stream | If the camera supports multistreaming, select the stream for which detection is needed. For the correct operation of the Face detector VL, we recommend using a high-quality video stream
Other
Enable | Yes, No | The detector is enabled by default. To disable the detector, select the No value
Name | Face detector VL | Enter the detector name or leave the default name

Decode key frames | Yes, No | The Decode key frames parameter is disabled by default. Using this option reduces the load on the Server, but at the same time the quality of detection is reduced. To decode only the key frames, select the Yes value. We recommend enabling this parameter for "blind" (without video image display) Servers on which you want to perform detection. For the MJPEG codec, decoding isn't relevant, as each frame is considered a key frame

Decoder mode | Auto, CPU, GPU, HuaweiNPU | Select a processor for decoding video streams. When you select GPU, a stand-alone graphics card is used:

  • Auto: GPU takes priority (decoding with Nvidia NVDEC chips). If there is no appropriate GPU, the decoding will use the Intel Quick Sync Video technology. Otherwise, CPU resources will be used for decoding;
  • CPU: CPU is used for decoding;
  • GPU: GPU is used for decoding (decoding with Nvidia NVDEC chips);
  • HuaweiNPU: Huawei NPU is used for decoding
Frame size change | 640 | The analyzed frames are scaled down to a specified resolution (640 pixels on the longer side) by default. The value must be in the range [640, 10 000]. The following algorithm is used:

  • If the longer side of the source image exceeds the value specified in the Frame size change field, it is divided by two.
  • If the resulting resolution falls below the specified value, the algorithm stops and this resolution will be used further.
  • If the resulting resolution still exceeds the specified value, it is divided by two until it is less than the specified resolution.
Note. For example, the source image resolution is 2048×1536, and the specified value is set to 1000. In this case, the source resolution will be halved two times (to 512×384), as after the first division the number of pixels on the longer side still exceeds the specified value (1024 > 1000). If detection is performed on a higher resolution stream and detection errors occur, we recommend reducing the compression.
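The halving rule above can be sketched in a few lines of Python. This is a minimal illustration of the algorithm described in this row, not code from the product:

```python
def scale_longer_side(source: int, frame_size_change: int = 640) -> int:
    """Halve the longer side of the frame until it no longer exceeds the limit."""
    size = source
    while size > frame_size_change:
        size //= 2  # divide by two, as described in the algorithm
    return size

# The example from the note: 2048 with a limit of 1000 is halved twice to 512.
print(scale_longer_side(2048, 1000))
```

With the default limit of 640, a 2048-pixel side is likewise halved twice to 512, which is already below the limit.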

Type | Face detector VL | Name of the detector type (non-editable field)

Use camera transform | Yes, No | The Use camera transform parameter is disabled by default. If you use a XingYun bispherical lens (see Configuring fisheye cameras), by default the detector receives the image of two spheres of 180° each to analyze. In this case, the recognition quality can deteriorate. To send the dewarped image to the detector, select the Yes value. This parameter is also valid for other transformations

Advanced settings

Attention! Advanced configuration of the detector must be performed only with the assistance of AxxonSoft technical experts.

Analyze face rotation angle | Yes, No | The Analyze face rotation angle parameter is disabled by default. If you want to detect the face rotation angle, select the Yes value. This parameter allows you to filter out results that have a rotation and tilt angle greater than the specified values in a search for a specific face (see Search for similar face)

Note. The Analyze face rotation angle parameter affects the filtering of face detection events in the Event Board (see Configuring an Event Board, Working with Event Boards).

When you use the following filters:

  • Triggered detection—all detected faces are displayed regardless of the face angle settings, even if the Analyze face rotation angle parameter is enabled.
  • Triggered specified detection—only faces that are within allowable rotation angles specified in the settings of the detector are displayed.
Face recognition algorithm | Algorithm 1, Algorithm 2, Algorithm 3 | Select the face recognition algorithm:

  • Algorithm 1—the recognition speed depends on the background and the number of faces in the frame. Works slower than Algorithm 3.
  • Algorithm 2—high speed, low accuracy. The recognition speed depends on the number of faces in the frame.
  • Algorithm 3—medium speed, high accuracy. The recognition speed depends on the resolution of the image. Optimal for most scenes

Face rotation pitch (°) | 45 | Specify the allowable face pitch angle in degrees. You must select the required value empirically. The value must be in the range [0, 90]
Face rotation roll (°) | 45 | Specify the allowable face roll angle in degrees. You must select the required value empirically. The value must be in the range [0, 90]

Face rotation yaw from (°) | -45 | Specify the minimum allowable angle of face rotation to the right or left. You must select the required value empirically. The value must be in the range [-90, 90]
Face rotation yaw to (°) | 45 | Specify the maximum allowable angle of face rotation to the right or left. You must select the required value empirically. The value must be in the range [-90, 90]
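Taken together, the four rotation parameters act as a pose filter on detected faces. The following sketch shows how such a check could look; the function name is illustrative and the defaults mirror the table, this is not the product's API:

```python
def within_rotation_limits(pitch: float, roll: float, yaw: float,
                           max_pitch: float = 45.0, max_roll: float = 45.0,
                           yaw_from: float = -45.0, yaw_to: float = 45.0) -> bool:
    """Return True if the face pose is within the configured rotation limits."""
    return (abs(pitch) <= max_pitch          # Face rotation pitch
            and abs(roll) <= max_roll        # Face rotation roll
            and yaw_from <= yaw <= yaw_to)   # Face rotation yaw from/to

print(within_rotation_limits(pitch=10, roll=5, yaw=-30))  # within all limits
print(within_rotation_limits(pitch=60, roll=5, yaw=0))    # pitch exceeds 45
```

Faces failing this check are the ones filtered out of a search for a specific face when Analyze face rotation angle is enabled.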

Minimum image quality | 30 | Specify the minimum image quality for face detection. You must select the required value empirically. The value must be in the range [1, 100]

Minimum number of detections | 1 | Specify the minimum number of detections after which a track is considered a detected face. You must select the required value empirically. The value must be in the range [1, 10 000]

Number of frames between detections | 3 | Specify the number of frames between detections. The lower the value, the higher the probability that TrackEngine will detect a new face as soon as it appears in the selected area. The value must be in the range [1, 10 000]

Note. TrackEngine doesn't perform face recognition. It tracks the position of one person's face in a sequence of frames, choosing the best frame and preparing the necessary data for external systems. TrackEngine is based on face detection and analysis methods provided by the FaceEngine library.

Number of frames without detections | 18 | Specify the number of frames without detections. If face detection isn't performed in the selected area, TrackEngine will continue processing the specified number of frames before it considers the track lost. You must select the required value empirically. The value must be in the range [1, 10 000]
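The interplay of Minimum number of detections and Number of frames without detections can be illustrated with a small per-frame state sketch. This is an assumption-level model of the behavior described above, not TrackEngine itself:

```python
from dataclasses import dataclass

@dataclass
class Track:
    detections: int = 0      # frames in which the face was detected
    frames_missed: int = 0   # consecutive frames without a detection

def update(track: Track, detected: bool,
           min_detections: int = 1, frames_without_detections: int = 18):
    """Advance a track by one frame; return (confirmed, lost) flags."""
    if detected:
        track.detections += 1
        track.frames_missed = 0  # a detection resets the miss counter
    else:
        track.frames_missed += 1
    confirmed = track.detections >= min_detections
    lost = track.frames_missed > frames_without_detections
    return confirmed, lost

t = Track()
print(update(t, detected=True))  # confirmed after the first detection
```

With the defaults, a track is confirmed as a detected face after one detection and considered lost once more than 18 consecutive frames pass without one.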

Send face images | Yes, No | The Send face images parameter is disabled by default. If you want to send face images to Axxon PSIM, select the Yes value

Track timeout | 2 | Specify the maximum time period in seconds after which an event is sent. You must select the required value empirically. The value must be in the range [1, 10 000]

Basic settings
Biometric data | Yes, No | The Biometric data parameter is disabled by default. When you search for faces (see Face search) by an attached photo or a track, as well as when you check in lists of faces, there will be no results. If you want to keep biometric data, select the Yes value. In this case, when you search for faces (see Face search), the search results will contain faces that are similar to the attached photo or track based on a specified minimum similarity level (percentage). For the correct check in lists of faces, this parameter must also be enabled
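The minimum similarity level mentioned above acts as a simple threshold on search results. A hedged sketch of such a filter; the field names and result structure are illustrative, not the product's data model:

```python
def filter_by_similarity(results, min_similarity=60.0):
    """Keep only results whose similarity percentage meets the minimum level."""
    return [r for r in results if r["similarity"] >= min_similarity]

hits = [{"face_id": 1, "similarity": 87.5},
        {"face_id": 2, "similarity": 42.0}]
print(filter_by_similarity(hits))  # only the face with similarity 87.5 remains
```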
Face attributes recognition | Yes, No | The Face attributes recognition parameter is disabled by default. If you want to save the gender and age information for each captured face to the database, select the Yes value

Note. The average error in age recognition is 5 years.
Medical mask detection | Yes, No | The Medical mask detection parameter is disabled by default. If you use the mask detector, select the Yes value (see Mask detector VL)
Minimum face size | 10 | Specify the minimum size of the captured faces as a percentage of the frame size. You must select the required value empirically. The value must be in the range [1, 100]

Minimum threshold of face authenticity | 60 | Specify the minimum level of face recognition accuracy for the creation of a track. You must select the required value empirically. We recommend specifying a value of at least 90. The higher the value, the fewer faces are detected, while the recognition accuracy increases. The value must be in the range [1, 100]

Mode | CPU, Nvidia GPU 0, Nvidia GPU 1, Nvidia GPU 2, Nvidia GPU 3, Huawei NPU | Select a processor for the detector operation—CPU or Nvidia GPU (see General information on configuring detection). It can take several minutes to launch the algorithm on Nvidia GPU after you apply the settings
Attention! You can enable advanced logging for the SDK using the VL_SDK_VERBOSE_LOGGING=1 system variable (see Appendix 9. Creating system variable).

If necessary, in the preview window, set the rectangular area of the frame in which you want to detect faces. You can specify the area by moving the anchor points.

Note. For convenience of configuration, you can "freeze" the frame: click the corresponding button in the preview window. To cancel the action, click this button again.

The detection area is displayed by default. To hide it, click the corresponding button. To cancel the action, click this button again.

To save the parameters of the detector, click the Apply button. To cancel the changes, click the Cancel button.

Configuration of the Face detector VL is complete. If necessary, you can create and configure sub-detectors based on the Face detector VL metadata (see Metadata database):

  1. Line crossing—detector generates an event when a person moves across a line in the specified area of the frame and their face is detected.
  2. Entrance in area—detector generates an event when a person appears in the specified area of the frame and their face is detected.
  3. Loitering in area—detector generates an event when a person stays in the specified area of the frame for a long time and their face is detected.
  4. Mask detector VL—detector generates an event when it captures a face with or without a mask.