Documentation for Axxon One 2.0.


To configure the Face Detection VL, do the following:

  1. Go to the Detection Tools tab.

  2. Below the required camera, click Create… → Category: Face → Face Detection VL.

By default, the detection tool is enabled and set to detect faces.

If necessary, you can change the detection tool parameters. The parameters are described below:

Object features

Check in lists (Yes / No)

The Check in lists parameter is disabled by default. If you want to use this detection tool to check faces in lists, select the Yes value (see Configuring face recognition based on the created lists).
Record objects tracking (Yes / No)

The metadata of the video stream is recorded to the database by default. To disable the parameter, select the No value.
Video stream (Main stream / Second stream / Other)

If the camera supports multistreaming, select the stream for which detection is needed. For the correct operation of the Face Detection VL, we recommend using a high-quality video stream.
Enable (Yes / No)

The detection tool is enabled by default. To disable the detection tool, select the No value.
Name (default: Face Detection VL)

Enter the detection tool name or leave the default name.

Decoder mode (Auto / CPU / GPU / HuaweiNPU)

Select a processor for decoding video. When you select:

  • Auto: GPU takes priority (decoding with NVIDIA NVDEC chips). If there is no appropriate GPU, decoding uses the Intel Quick Sync Video technology. Otherwise, CPU resources are used for decoding.
  • CPU: the CPU is used for decoding.
  • GPU: the GPU is used for decoding (decoding with NVIDIA NVDEC chips).
  • HuaweiNPU: the Huawei NPU is used for decoding.
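The Auto fallback order described above can be sketched as follows (an illustrative Python sketch; the helper and its capability flags are hypothetical and not part of the product):

```python
def pick_decoder(has_nvdec: bool, has_quick_sync: bool) -> str:
    """Illustrative priority chain for the Auto decoder mode:
    NVIDIA NVDEC first, then Intel Quick Sync Video, then the CPU."""
    if has_nvdec:
        return "GPU"        # hardware decoding with NVIDIA NVDEC chips
    if has_quick_sync:
        return "QuickSync"  # Intel Quick Sync Video technology
    return "CPU"            # software decoding fallback
```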
Frame size change (default: 640)

The analyzed frames are scaled down to the specified resolution (640 pixels on the longer side by default). The value must be in the range [640, 10 000]. The following algorithm is used:

  • If the longer side of the source image exceeds the value specified in the Frame size change field, it is divided by two.
  • If the resulting resolution falls below the specified value, the algorithm stops, and this resolution is used further.
  • If the resulting resolution still exceeds the specified value, it is divided by two again until it becomes less than the specified value.

Note

For example, if the source image resolution is 2048×1536 and the specified value is 1000, the source resolution is halved twice (down to 512×384), because after the first division the number of pixels on the longer side still exceeds the specified value (1024 > 1000). If detection is performed on a higher resolution stream and detection errors occur, we recommend reducing the compression.
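The halving algorithm and the example in the note can be expressed as a short sketch (illustrative Python, not product code):

```python
def downscale(width: int, height: int, limit: int = 640) -> tuple[int, int]:
    """Halve the frame until the longer side no longer exceeds the
    Frame size change value (`limit`)."""
    while max(width, height) > limit:
        width //= 2
        height //= 2
    return width, height

# The example from the note: a 2048x1536 frame with a limit of 1000
# is halved twice, down to 512x384.
```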

Type (Face Detection VL)

Name of the detection tool type (non-editable field).

Use camera transform (Yes / No)

The Use camera transform parameter is disabled by default. If you use a XingYun bispherical lens (see Configuring fisheye cameras), the detection tool by default receives the image of two 180° spheres to analyze. In this case, the recognition quality can deteriorate. To send the dewarped image to the detection tool, select the Yes value. This parameter is also valid for other transformations.

Advanced settings

Attention!

Advanced configuration of the detection tool must be performed only with the assistance of AxxonSoft technical experts.

Analyze face rotation angle (Yes / No)

The Analyze face rotation angle parameter is disabled by default. If you want to detect the face rotation angle, select the Yes value. This parameter allows you to filter out results whose rotation and tilt angles exceed the specified values when searching for a specific face (see Search for similar face).

Note

The Analyze face rotation angle parameter affects the filtering of face detection events in the Event Board (see Configuring an Event Board, Working with Event Boards).

When you use the following filters:

  • Triggered detection: all detected faces are displayed regardless of the face angle settings, even if the Analyze face rotation angle parameter is enabled.
  • Triggered specified detection: only faces within the allowable rotation angles specified in the settings of the detection tool are displayed.
Face recognition algorithm (Algorithm 1 / Algorithm 2 / Algorithm 3)

Select the face recognition algorithm:

  • Algorithm 1: the recognition speed depends on the background and the number of faces in the frame. Works slower than Algorithm 3.
  • Algorithm 2: high speed, low accuracy. The recognition speed depends on the number of faces in the frame.
  • Algorithm 3: medium speed, high accuracy. The recognition speed depends on the resolution of the image. Optimal for most scenes.

Face rotation pitch (°) (default: 45)

Specify the allowable face pitch angle in degrees. You must select the required value empirically. The value must be in the range [0, 90].
Face rotation roll (°) (default: 45)

Specify the allowable face roll angle in degrees. You must select the required value empirically. The value must be in the range [0, 90].

Face rotation yaw from (°) (default: -45)

Specify the minimum allowable angle of face rotation to the right or left. You must select the required value empirically. The value must be in the range [-90, 90].

Face rotation yaw to (°) (default: 45)

Specify the maximum allowable angle of face rotation to the right or left. You must select the required value empirically. The value must be in the range [-90, 90].
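Taken together, the four angle parameters define a box of acceptable head poses. A minimal sketch of such a filter (illustrative only; it assumes, hypothetically, that pitch and roll are compared by absolute value, which the product does not document):

```python
def passes_angle_filter(pitch: float, roll: float, yaw: float,
                        max_pitch: float = 45, max_roll: float = 45,
                        yaw_from: float = -45, yaw_to: float = 45) -> bool:
    """Return True if a detected face falls within the configured
    rotation limits (defaults mirror the values shown above)."""
    return (abs(pitch) <= max_pitch
            and abs(roll) <= max_roll
            and yaw_from <= yaw <= yaw_to)
```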

Minimum image quality (default: 30)

Specify the minimum image quality for face detection. You must select the required value empirically. The value must be in the range [1, 100].

Minimum number of detections (default: 1)

Specify the minimum number of detections after which a track will be considered a detected face. You must select the required value empirically. The value must be in the range [1, 10 000].

Number of frames between detections (default: 3)

Specify the number of frames between detections. The lower the value, the higher the probability that TrackEngine will detect a new face as soon as it appears in the selected area. The value must be in the range [1, 10 000].

Note

TrackEngine doesn't perform face recognition. It tracks the position of one person's face in a sequence of frames, choosing the best frame and preparing the necessary data for external systems. TrackEngine is based on the face detection and analysis methods provided by the FaceEngine library.

Number of frames without detections (default: 18)

Specify the number of frames without detections. If face detection isn't performed in the selected area, TrackEngine will continue processing the specified number of frames before it considers the track lost. You must select the required value empirically. The value must be in the range [1, 10 000].
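A toy model of how Minimum number of detections and Number of frames without detections might govern a track's lifecycle (illustrative only; TrackEngine's actual logic is internal to the FaceEngine SDK):

```python
class FaceTrack:
    """A track is confirmed after `min_detections` hits and considered
    lost after `frames_without` consecutive frames with no detection."""

    def __init__(self, min_detections: int = 1, frames_without: int = 18):
        self.min_detections = min_detections
        self.frames_without = frames_without
        self.hits = 0      # total frames where the face was detected
        self.misses = 0    # consecutive frames without a detection
        self.lost = False

    def update(self, detected: bool) -> None:
        if detected:
            self.hits += 1
            self.misses = 0
        else:
            self.misses += 1
            if self.misses >= self.frames_without:
                self.lost = True

    @property
    def confirmed(self) -> bool:
        return self.hits >= self.min_detections
```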

Send face images (Yes / No)

The Send face images parameter is disabled by default. If you want to send face images to Axxon PSIM, select the Yes value.

Track timeout (default: 2)

Specify the maximum time period in seconds after which the event will be sent. You must select the required value empirically. The value must be in the range [1, 10 000].

Basic settings

Biometric data (Yes / No)

The Biometric data parameter is enabled by default. When searching for faces (see Face search), the search results contain found faces that are similar to the attached photo or track, based on the specified minimum similarity level (percentage). If you want to keep the biometric data confidential, select the No value; in that case, searching for faces by an attached photo or a track returns no results.
Face attributes recognition (Yes / No)

The Face attributes recognition parameter is disabled by default. If you want to save the gender and age information for each captured face to the database, select the Yes value.

Note

The average error in age recognition is 5 years.
Medical mask detection (Yes / No)

The Medical mask detection parameter is disabled by default. If you use mask detection, select the Yes value (see Masks Detection).
Minimum face size (default: 10)

Specify the minimum size of the captured faces as a percentage of the frame size. You must select the required value empirically. The value must be in the range [1, 100].

Minimum threshold of face authenticity (default: 60)

Specify the minimum level of face recognition accuracy for the creation of a track. You must select the required value empirically. We recommend a value of at least 90. The higher the value, the fewer faces are detected, while the recognition accuracy increases. The value must be in the range [1, 100].
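Conceptually, Minimum image quality, Minimum face size, and Minimum threshold of face authenticity act as pre-filters on each candidate detection. A hypothetical sketch (the helper, its argument names, and measuring face size against the frame's longer side are all assumptions for illustration):

```python
def keep_detection(quality: int, authenticity: int,
                   face_px: int, frame_px: int,
                   min_quality: int = 30, min_authenticity: int = 60,
                   min_size_pct: int = 10) -> bool:
    """Keep a detection only if it clears the three thresholds;
    face size is taken as a percentage of the frame size."""
    size_pct = 100 * face_px / frame_px
    return (quality >= min_quality
            and authenticity >= min_authenticity
            and size_pct >= min_size_pct)
```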

Mode (CPU / Nvidia GPU 0 / Nvidia GPU 1 / Nvidia GPU 2 / Nvidia GPU 3 / Huawei NPU)

Select a processor for the detection tool operation: CPU or Nvidia GPU (see Selecting Nvidia GPU when configuring detection tools).

Attention!

It may take several minutes to launch the algorithm on NVIDIA GPU after you apply the settings.

Attention!

You can enable advanced logging for the SDK using the VL_SDK_VERBOSE_LOGGING=1 system variable (see Appendix 9. Creating system variable).

If necessary, in the preview window, set the rectangular area of the frame in which you want to detect faces. You can specify the area by moving the anchor points.

Note

For convenience of configuration, you can "freeze" the frame: click the button. To cancel the action, click this button again.

The detection area is displayed by default. To hide it, click the button. To cancel the action, click this button again.

To save the parameters of the detection tool, click the Apply button. To cancel the changes, click the Cancel button.

If necessary, you can create and configure detection sub-tools based on the Face Detection VL, using its metadata (see General information on metadata):

  1. Line Crossing: the detection tool generates an event when a person moves across a line in the specified area of the frame and their face is detected.
  2. Entrance In Area: the detection tool generates an event when a person appears in the specified area of the frame and their face is detected.
  3. Loitering In Area: the detection tool generates an event when a person stays in the specified area of the frame for a long time and their face is detected.
  4. Masks Detection: the detection tool generates an event when it captures a face with or without a mask.
