Before configuring the Face detector VA, see the following pages:
- Hardware requirements for the Face detector VA and its sub-detectors
- Hardware requirements for neural analytics operation
- Video stream and scene requirements for the Face detector VA and its sub-detectors
- Image requirements for the Face detector VA and its sub-detectors
- Optimizing the operation of the Face detector VA on GPU in Windows OS
Attention!
To start and correctly operate the Face detector VA on GPU, you must create the cache beforehand (see Optimizing the operation of the Face detector VA on GPU in Windows OS).
To configure the Face detector VA, do the following:
- Go to the Detectors tab.
- Below the required camera, click Create… → Category: Face → Face detector VA.
By default, the detector is enabled and set to detect faces.
If necessary, you can change the detector parameters. The list of parameters is given in the table:
Parameter | Value | Description |
---|---|---|
Object features | |
Check in lists | Yes | The Check in lists parameter is disabled by default. If you want to use this detector to check in lists of faces, select the Yes value (see Checking in lists of faces) |
No | ||
Record objects tracking | Yes | The metadata of the video stream is recorded to the database by default. To disable the parameter, select the No value |
No | ||
Video stream | Main stream | If the camera supports multistreaming, select the stream for which detection is needed. For the correct operation of the Face detector VA, we recommend using a high-quality video stream |
Second stream | ||
Other | ||
Enable | Yes | The detector is enabled by default. To disable the detector, select the No value |
No | ||
Name | Face detector VA | Enter the detector name or leave the default name |
Decoder mode | Auto | Select the processor for decoding video (Auto, CPU, GPU, or HuaweiNPU) |
CPU | ||
GPU | ||
HuaweiNPU | ||
Liveness detection | Yes | The Liveness detection parameter is disabled by default. Liveness detection identifies cases when a photo of a face is presented instead of a live person. If you want to save to the database the information about whether each captured face is a photo (Yes/No), select the Yes value |
No | ||
Liveness threshold | 30 | Specify the threshold value in percent at which a face is classified as liveless. The higher the value, the fewer captured faces are detected as a photo, but the more reliable the photo (Yes/No) verdict becomes. You must select the required value empirically. The value must be in the range [1, 100] |
Type | Face detector VA | Name of the detector type (non-editable field) |
Use camera transform | Yes | The Use camera transform parameter is disabled by default. If you use a XingYun bispherical lens (see Configuring fisheye cameras), by default the detector receives the image of two spheres of 180° each to analyze. In this case, the recognition quality can deteriorate. To send the dewarped image to the detector, select the Yes value. This parameter is also valid for other transformations |
No | ||
Advanced settings. Attention! Advanced configuration of the detector must be performed only with the assistance of AxxonSoft technical experts. | |
Analyze face rotation angle | Yes | The Analyze face rotation angle parameter is disabled by default. If you want to detect the face rotation angle, select the Yes value. This parameter allows you to filter out results whose rotation and tilt angles exceed the specified values when you search for a specific face (see Search for similar face); a sketch of such an angle check is given after the table. Note: the Analyze face rotation angle parameter also affects the filtering of face detection events by these angles in the Event Board (see Configuring an Event Board, Working with Event Boards) |
No | ||
Face detection algorithm | ALG1 (high speed, low accuracy) | Select the face detection algorithm. The algorithms differ in detection speed and accuracy, as indicated in their names |
ALG2 (medium speed, medium accuracy) | ||
ALG3 (low speed, high accuracy) | ||
Face detection period (msec) | 250 | Specify the time in milliseconds between face search operations in a frame. The larger the value, the less load on the Server, but some faces may not be recognized. The value must be in the range [1, 10 000] |
Face rotation pitch ( ° ) | 45 | Specify the allowable angle of face pitch in degrees. You must select the required value empirically. The value must be in the range [0, 90] |
Face rotation roll ( ° ) | 45 | Specify the allowable angle of face roll in degrees. You must select the required value empirically. The value must be in the range [0, 90] |
Face rotation yaw from ( ° ) | -45 | Specify the minimum allowable angle of face rotation to the right or left. You must select the required value empirically. The value must be in the range [-90, 90] |
Face rotation yaw to ( ° ) | 45 | Specify the maximum allowable angle of face rotation to the right or left. You must select the required value empirically. The value must be in the range [-90, 90] |
False mask detections filtering | Yes | The False mask detections filtering parameter is enabled by default. An additional algorithm estimates the probability that the track is the face of a person wearing a mask. If the algorithm considers the track to be a face with a mask with a probability lower than the threshold, the track is disregarded (see the Minimum filtering threshold for face mask detection parameter). If you don't need to use this parameter, select the No value |
No | ||
Filter false alarms | Yes | In some cases, the detector can mistake other objects for a face. The Filter false alarms parameter is enabled by default. If the parameter is enabled, false face recognition results still appear in the detection feed (see Face recognition and search), but they are ignored when searching for faces in the archive. The parameter filters out objects that are not faces at the stage of constructing the face vector and recording it to the metadata database. If you don't need to use this parameter, select the No value |
No | ||
Frame size change | 1920 | The analyzed frames are scaled down to the specified resolution (by default, 1920 pixels on the longer side). The value must be in the range [640, 10 000]. The following algorithm is used: if the longer side of the source image exceeds the specified value, the resolution is divided by two; the division is repeated until the longer side no longer exceeds the specified value (a short sketch of this downscaling is given after the table). Note: for example, if the source image resolution is 2048×1536 and the specified value is 1000, the source resolution is halved twice to 512×384, because after the first division the longer side still exceeds the specified value (1024 > 1000). If detection is performed on a higher resolution stream and detection errors occur, we recommend reducing the compression |
Ignore repeated recognitions | Yes | The Ignore repeated recognitions parameter is disabled by default. If you want to ignore repeated recognitions of the same face, select the Yes value |
No | ||
Minimum face masking threshold | 70 | Specify the minimum threshold for recognizing a mask on the face. You must select the required value empirically. We recommend specifying the value of not less than 70. The value must be in the range [1, 100] |
Minimum face quality for face mask detection | 30 | Specify the minimum quality of the face image when recognizing masks (see Mask detector VA). You must select the required value empirically. We recommend specifying the value of not less than 30. The value must be in the range [1, 100] |
Minimum face recognition quality | 50 | Specify the minimum quality of the face image when recognizing faces without masks. You must select the required value empirically. We recommend specifying the value of not less than 50. The value must be in the range [1, 100] |
Minimum filtering threshold | 50 | If you enable the Filter false alarms parameter, specify the minimum percentage at which the additional algorithm considers the track to be a person's face. If the algorithm considers the track to be a face with a lower probability, this track is disregarded. You must select the required value empirically. We recommend specifying the value of not less than 50. The value must be in the range [1, 100] |
Minimum filtering threshold for face mask detection | 30 | If you enable the False mask detections filtering parameter, specify the minimum percentage at which the additional algorithm considers the track to be a person's face with a mask. If the algorithm considers the track to be a face with a mask with a lower probability, this track is disregarded. You must select the required value empirically. We recommend specifying the value of not less than 30. The value must be in the range [1, 100] |
Period of ignoring repeated recognitions | 2 | To adjust this parameter, you must enable the Ignore repeated recognitions parameter. Specify the period in minutes during which new recognized faces are compared with the previous ones to detect similarities. The value must be in the range [2, 30] |
Process color frames | Yes | A black and white frame is processed by default. If you want the detector to use a color frame for processing, select the Yes value |
No | ||
Repeated recognitions similarity threshold | 85 | To adjust this parameter, you must enable the Ignore repeated recognitions parameter. Specify the similarity threshold of the face to the previously recognized faces in percent. If the similarity is lower than the specified value, the face is recognized as a new one (a sketch of this logic is given after the table). The value must be in the range [1, 100] |
Send face images | Yes | The Send face images parameter is disabled by default. If you want to send face images to Axxon PSIM, select the Yes value |
No | ||
Track loss time | 500 | Specify the time in milliseconds after which the track of the captured face is considered to be lost. This parameter applies when a captured face moves in the frame and gets hidden behind an obstacle for some time. If this time is less than the specified value, the face is recognized as the same. The value must be in the range [1, 10 000] |
Basic settings | ||
Age and gender | Yes | The Age and gender parameter is disabled by default. If you need to save age and gender information for each captured face in the database, select the Yes value. Note: the average error in age recognition is 5 years |
No | ||
Algorithm of liveness detection | Not selected | The Not selected value is selected by default. A liveless face is a photo of a face presented instead of a live person. If necessary, select the algorithm for detecting a liveless face (Algorithm 1–4, see the Value column). You must configure the parameter empirically. The algorithm assigns each face a Liveness score, a value in the range [0, 100]; the decision on whether the face is live is made by comparing this score with the value specified in the Liveness threshold parameter |
Algorithm 1 (Photo I) | ||
Algorithm 2 (Photo I, Textures I) | ||
Algorithm 3 (Photo II, Textures II) | ||
Algorithm 4 (Comprehensive) | ||
Biometric data | Yes | The Biometric data parameter is disabled by default. While it is disabled, searching for faces (see Face search) by an attached photo or track, as well as checking in lists of faces, returns no results. If you want to keep biometric data, select the Yes value. In this case, when you search for faces (see Face search), the search results contain faces that are similar to the attached photo or track with at least the specified minimum similarity level (percentage). For the correct check in lists of faces, this parameter must also be enabled |
No | ||
Face attributes recognition βeta | Not selected | The Not selected value is selected by default. If necessary, select the face attributes recognition algorithm: Algorithm 1 (Fast recognition) or Algorithm 2 (Accurate recognition) |
Algorithm 1 (Fast recognition) | ||
Algorithm 2 (Accurate recognition) | ||
Face mask detection | Yes | The Face mask detection parameter is disabled by default. If you use the mask detector, select the Yes value (see Mask detector VA) |
No | ||
Maximum face height | 100 | Specify the maximum height of the captured faces as a percentage of the frame size. You must select the required value empirically. The value must be in the range [1, 100] |
Maximum face width | 100 | Specify the maximum width of the captured faces as a percentage of the frame size. You must select the required value empirically. The value must be in the range [1, 100] |
Minimum face height | 5 | Specify the minimum height of the captured faces as a percentage of the frame size. You must select the required value empirically. The value must be in the range [1, 100] |
Minimum face width | 5 | Specify the minimum width of the captured faces as a percentage of the frame size. You must select the required value empirically. The value must be in the range [1, 100] |
Minimum threshold of face authenticity | 90 | Specify the minimum level of face recognition accuracy for the creation of a track. You must select the required value empirically. We recommend specifying the value of not less than 90. The higher the value, the fewer faces are detected, while the recognition accuracy increases. The value must be in the range [1, 100] |
Detection mode | CPU | Select the processor for the detector operation: CPU, Nvidia GPU (see Selecting Nvidia GPU when configuring detectors), or Huawei NPU. Attention! It can take several minutes to launch the algorithm on Nvidia GPU after you apply the settings |
Nvidia GPU 0 | ||
Nvidia GPU 1 | ||
Nvidia GPU 2 | ||
Nvidia GPU 3 | ||
Huawei NPU |
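
The downscaling described for the Frame size change parameter can be illustrated with a minimal sketch. This is only an illustration of the halving logic inferred from the example in the table, not product code; the function name and signature are assumptions.

```python
def downscale_resolution(width: int, height: int, frame_size_change: int) -> tuple[int, int]:
    # Halve the resolution until the longer side no longer exceeds
    # the value of the Frame size change parameter.
    while max(width, height) > frame_size_change:
        width //= 2
        height //= 2
    return width, height

# Example from the table: 2048x1536 with the value 1000 is halved twice.
print(downscale_resolution(2048, 1536, 1000))  # (512, 384)
```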
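
The angle filtering enabled by Analyze face rotation angle can be sketched as a range check using the Face rotation pitch, roll, and yaw parameters above. This is a hedged illustration under the assumption that a result is kept only when all angles fall within the configured limits; the function and argument names are not part of the product.

```python
def face_angles_within_limits(pitch: float, roll: float, yaw: float,
                              max_pitch: float = 45.0, max_roll: float = 45.0,
                              yaw_from: float = -45.0, yaw_to: float = 45.0) -> bool:
    # Keep a result only if pitch and roll do not exceed their allowable
    # angles and yaw falls within the [Face rotation yaw from, to] range.
    return (abs(pitch) <= max_pitch
            and abs(roll) <= max_roll
            and yaw_from <= yaw <= yaw_to)

# A face turned 60 degrees sideways is filtered out with the default limits.
print(face_angles_within_limits(pitch=10.0, roll=5.0, yaw=60.0))  # False
```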
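
The interaction of Ignore repeated recognitions, Period of ignoring repeated recognitions, and Repeated recognitions similarity threshold can be sketched as follows. It illustrates the logic described in the table with the default values; it is not the product implementation, and the function name and arguments are assumptions.

```python
def is_new_face(similarity_percent: float, minutes_since_previous: float,
                period_minutes: float = 2.0, similarity_threshold: float = 85.0) -> bool:
    # A face is compared with previous recognitions only within the
    # configured period; outside it, the face is reported as new.
    if minutes_since_previous > period_minutes:
        return True
    # Within the period: below the similarity threshold means a new face,
    # at or above it the recognition is treated as repeated and ignored.
    return similarity_percent < similarity_threshold

print(is_new_face(similarity_percent=90.0, minutes_since_previous=1.0))  # False: repeated
print(is_new_face(similarity_percent=70.0, minutes_since_previous=1.0))  # True: new face
```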
If necessary, in the preview window, set the rectangular area of the frame in which you want to detect faces. You can specify the area by moving the anchor points.
Note
For convenience of configuration, you can "freeze" the frame. Click the button. To cancel the action, click this button again.
The detection area is displayed by default. To hide it, click the button. To cancel the action, click this button again.
To save the parameters of the detector, click the Apply button. To cancel the changes, click the Cancel button.
Configuration of the Face detector VA is complete. If necessary, you can create and configure sub-detectors that operate on the basis of the Face detector VA metadata (see Metadata database):
- Line crossing—detector generates an event when a person moves across a line in the specified area of the frame and their face is detected.
- Entrance in area—detector generates an event when a person appears in the specified area of the frame and their face is detected.
- Loitering in area—detector generates an event when a person stays in the specified area of the frame for a long time and their face is detected.
- Mask detector VA—detector generates an event when it captures a face with or without a mask.