Note |
---|
|
To start and correctly operate the Face detector VA on GPU, you must create a cache beforehand (see Optimizing the operation of the Face detector VA on GPU in Windows OS). |
To configure the Face detector VA, do the following:
- Go to the Detectors tab.
- Below the required camera, click Create… → Category: Face → Face detector VA.
By default, the detector is enabled and set to detect faces.
If necessary, you can change the detector parameters. The list of parameters is given in the table:
Parameter | Value | Description |
---|
Object features |
Check in lists | Yes | The Check in lists parameter is disabled by default. If you want to use this detector to check in lists of faces, select the Yes value (see Checking in lists of faces) |
No |
Record objects tracking | Yes | The metadata of the video stream is recorded to the database by default. To disable the parameter, select the No value |
No |
Video stream | Main stream | If the camera supports multistreaming, select the stream for which detection is needed. For the correct operation of the Face detector VA, we recommend using a high-quality video stream |
Second stream |
Other |
Enable | Yes | The detector is enabled by default. To disable the detector, select the No value |
No |
Name | Face detector VA | Enter the detector name or leave the default name |
Decoder mode | Auto | Select a processor for decoding video (see the sketch after this table). When you select:
- Auto: GPU takes priority (decoding with Nvidia NVDEC chips). If there is no appropriate GPU, decoding uses the Intel Quick Sync Video technology. Otherwise, CPU resources are used for decoding;
- CPU: CPU is used for decoding;
- GPU: GPU is used for decoding (decoding with Nvidia NVDEC chips);
- HuaweiNPU: HuaweiNPU is used for decoding |
CPU |
GPU |
HuaweiNPU |
|
Liveness detection | Yes | The parameter is disabled by default. Liveness detection is used when there is a photo of a face instead of a live person. If you want to save to the database the information about whether each captured face is a photo (Yes/No), select the Yes value |
No |
Type | Face detector VA | Name of the detector type (non-editable field) |
Use camera transform | Yes | The Use camera transform parameter is disabled by default. If you use a XingYun bispherical lens (see Configuring fisheye cameras), by default the detector receives the image of two spheres of 180° each to analyze. In this case, the recognition quality can deteriorate. To send the dewarped image to the detector, select the Yes value. This parameter is also valid for other transformations |
No |
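The following minimal sketch (Python, for illustration only; the function and flag names are hypothetical and not part of the product) shows the fallback order that the Decoder mode parameter follows when set to Auto, as described above.

```python
# Illustrative sketch of the Auto decoder selection order described above.
# The function and flag names are hypothetical, not part of the product.
def pick_decoder(nvdec_available: bool, quick_sync_available: bool) -> str:
    """Return the decoder that the Auto mode would use."""
    if nvdec_available:
        return "GPU (Nvidia NVDEC)"        # GPU takes priority
    if quick_sync_available:
        return "Intel Quick Sync Video"    # next, Intel Quick Sync Video
    return "CPU"                           # otherwise, CPU resources are used

print(pick_decoder(nvdec_available=False, quick_sync_available=True))
# Intel Quick Sync Video
```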
Advanced settings
Note |
---|
| Advanced configuration of the detector must be performed only with the assistance of AxxonSoft technical experts. |
|
Analyze face rotation angle | Yes | The Analyze face rotation angle parameter is disabled by default. If you want to detect the face rotation angle, select the Yes value. This parameter allows you to filter out results that have a rotation and tilt angle greater than the specified values in a search for a specific face (see Search for similar face)
Info
---|
|
The Analyze face rotation angle parameter affects the filtering of:
- Triggered detection—all detected faces are displayed regardless of the face angle settings, even if the Analyze face rotation angle parameter is enabled.
- Specified detection—only faces that are within the allowable rotation angles specified in the settings of the detector are displayed.
|
No |
Face detection algorithm | ALG1 (high speed, low accuracy) | Select the face detection algorithm:
- ALG1 (high speed, low accuracy) is the fastest, with reduced accuracy. Recognition speed depends on the number of faces in the frame. Optimal for many scenes.
- ALG2 (medium speed, medium accuracy)—recognition speed depends on the background and the number of faces in the frame.
- ALG3 (low speed, high accuracy) is the most accurate, with reduced speed. Recognition speed depends on the image resolution
|
ALG2 (medium speed, medium accuracy) |
ALG3 (low speed, high accuracy) |
Face detection period (msec) | 250 | Specify the time in milliseconds between face search operations in a video frame. The larger the value, the lower the load on the server, but some faces may not be recognized. The value must be in the range [1, 10 000] |
Face occlusion threshold (starting with Detector Pack 3.14) | 0 | Specify the face occlusion threshold at which a face that is partially or fully overlapped by objects (for example, glasses, masks, hair, and so on) is ignored. By default, the parameter is disabled (the value is 0). At any value greater than 0, the parameter is considered enabled. Select the appropriate value empirically. The value must be in the range [0, 100] |
Face rotation pitch ( ° ) | 45 | Specify the allowable angle of face pitch in degrees. You must select the required value empirically. The value must be in the range [0, 90] |
Face rotation roll ( ° ) | 45 | Specify the allowable angle of face roll in degrees. You must select the required value empirically. The value must be in the range [0, 90] |
Face rotation yaw from ( ° ) | -45 | Specify the minimum allowable angle of face rotation to the right or left. You must select the required value empirically. The value must be in the range [-90, 90] |
Face rotation yaw to ( ° ) | 45 | Specify the maximum allowable angle of face rotation to the right or left. You must select the required value empirically. The value must be in the range [-90, 90] (see the sketch after the table) |
False mask detections filtering | Yes | The False mask detections filtering parameter is enabled by default. An additional algorithm determines whether the track is a face of a person with a mask. If the algorithm considers that the track is a face with a mask with a lower probability, this track is disregarded (see the Minimum filtering threshold for face mask detection parameter). If you don’t need to use the parameter, select the No value |
No |
Filter false alarms | Yes | In some cases, the detector can mistake other objects for a face. The Filter false alarms parameter is enabled by default. If the parameter is enabled, false results of the face recognition appear in the detection feed (see Face recognition and search), but they are ignored when searching for faces in the archive. This parameter filters out objects that aren't faces at the stage of face vector construction and its recording to the metadata database. If you don’t need to use this parameter, select the No value |
No |
Frame size change | 1920 | By default, during the analysis, the frame is compressed to the specified resolution (1920 pixels on the longer side). The value must be in the range [640, 10 000]. The following algorithm is used (see also the sketch after the table):
- If the longer side of the source image exceeds the value specified in the Frame size change field, it is divided by two.
- If the resulting resolution falls below the specified value, the algorithm stops, and this resolution is used further.
- If the resulting resolution still exceeds the specified value, it is divided by two until it is less than the specified resolution.
Info
---|
| For example, the source image resolution is 2048*1536, and the specified value is set to 1000. In this case, the source resolution is halved two times (512*384), since after the first division, the number of pixels on the longer side exceeds the specified value (1024 > 1000). If detection is performed on a higher resolution stream and detection errors occur, we recommend reducing the compression. |
|
Ignore repeated recognitions | Yes | The parameter is disabled by default. If you want to ignore repeated recognitions of the same face, select the Yes value (see the sketch after the table) |
No |
Liveness threshold | 30 | Specify the threshold value in percent at which a face is defined as liveless. The higher the value, the more captured faces are detected as a photo. At the same time, the quality of recognizing (photo/real face) is higher. You must select the required value empirically. The value must be in the range [1, 100] |
Minimum face masking | 70 | Specify the minimum threshold for recognizing a mask on the face. You must select the required value empirically. We recommend specifying the value of not less than 70. The value must be in the range [1, 100] |
Minimum face quality for face mask detection | 30 | Specify the minimum quality of the face image when recognizing masks (see Mask detector VA). You must select the required value empirically. We recommend specifying the value of not less than 30. The value must be in the range [1, 100] |
Minimum face recognition quality | 50 | Specify the minimum quality of the face image when recognizing faces without masks. You must select the required value empirically. We recommend specifying the value of not less than 50. The value must be in the range [1, 100] |
Minimum filtering threshold | 50 | If you use the Filter false alarms parameter, specify the minimum percentage at which the additional algorithm considers the track to be a person's face. If the algorithm considers the track to be a face with a lower probability, this track is disregarded. You must select the required value empirically. We recommend specifying the value of not less than 50. The value must be in the range [1, 100] |
Minimum filtering threshold for face mask detection | 30 | If you use the False mask detections filtering parameter, specify the minimum percentage at which the additional algorithm considers the track to be a person's face with a mask. If the algorithm considers the track to be a face with a mask with a lower probability, this track is disregarded. You must select the required value empirically. We recommend specifying the value of not less than 30. The value must be in the range [1, 100] |
Open eyes threshold (starting with Detector Pack 3.14) | 0 | Specify the open eyes threshold at which a face is recognized. By default, the parameter is disabled (the value is 0). At any value greater than 0, the parameter is considered enabled. Select the appropriate value empirically. The value must be in the range [0, 100] |
Period of ignoring repeated recognitions | 2 | To adjust this parameter, you must enable the Ignore repeated recognitions parameter. Specify the period in minutes during which new recognized faces are compared with the previous ones to detect similarities. The value must be in the range [2, 30] |
Process color frames | Yes | A black and white frame is processed by default. If you want the detector to use a color frame for processing, select the Yes value |
No |
Repeated recognitions similarity threshold | 85 | To adjust this parameter, you must enable the Ignore repeated recognitions parameter. Specify the similarity threshold of the face to the previously recognized faces in percent. If the similarity threshold is lower than the specified value, the face is recognized as a new one. The value must be in the range [1, 100] |
Send face images | Yes | The parameter is disabled by default. If you want to send face images to Axxon PSIM, select the Yes value |
No |
Track lifespan (starting with Detector Pack 3.14) | Yes | The parameter is disabled by default. If you need to display the track lifespan for an object in seconds, select the Yes value |
No |
Track loss time (msec) | 500 | Specify the time in milliseconds after which the track of the captured face is considered to be lost. This parameter applies when a captured face moves in the frame and hides behind an obstacle for some time. If this time is less than the specified value, the face is recognized as the same face. The value must be in the range [1, 10 000] |
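The sketch below (illustrative only; the parameter names, defaults, and the exact filtering rule are assumptions based on the Analyze face rotation angle and Face rotation pitch/roll/yaw descriptions above) shows how the configured angle limits can bound which detected faces are kept when searching for a specific face.

```python
# Illustrative sketch: filtering a detected face by the configured angle limits.
# All names, defaults, and the exact rule are assumptions, not the product's code.
def within_allowed_angles(pitch: float, roll: float, yaw: float,
                          max_pitch: float = 45, max_roll: float = 45,
                          yaw_from: float = -45, yaw_to: float = 45) -> bool:
    """Return True if the face orientation fits the configured limits."""
    return (abs(pitch) <= max_pitch and
            abs(roll) <= max_roll and
            yaw_from <= yaw <= yaw_to)

print(within_allowed_angles(pitch=10, roll=5, yaw=-30))  # True: kept in the search results
print(within_allowed_angles(pitch=60, roll=5, yaw=-30))  # False: filtered out
```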
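The Frame size change behavior described above can be summarized with the following sketch (illustrative only); it reproduces the example from the Info box.

```python
# Illustrative sketch of the repeated-halving rule used by Frame size change.
def downscale(width: int, height: int, limit: int = 1920) -> tuple[int, int]:
    """Halve the frame until its longer side no longer exceeds the limit."""
    while max(width, height) > limit:
        width, height = width // 2, height // 2
    return width, height

# Example from the Info box: 2048*1536 with the value set to 1000 is halved
# twice, because after the first division 1024 still exceeds 1000.
print(downscale(2048, 1536, limit=1000))  # (512, 384)
```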
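The next sketch (illustrative only; the storage and the similarity function are assumptions) shows how the Ignore repeated recognitions, Period of ignoring repeated recognitions, and Repeated recognitions similarity threshold parameters interact as described above.

```python
# Illustrative sketch of ignoring repeated recognitions. The storage and the
# similarity() function are hypothetical stand-ins, not the product's code.
from datetime import datetime, timedelta

PERIOD_MINUTES = 2          # Period of ignoring repeated recognitions
SIMILARITY_THRESHOLD = 85   # Repeated recognitions similarity threshold, percent

recent_faces = []           # (timestamp, descriptor) of recently recognized faces

def is_repeated(descriptor, now: datetime, similarity) -> bool:
    """Return True if the face repeats one recognized within the period."""
    cutoff = now - timedelta(minutes=PERIOD_MINUTES)
    for timestamp, known in recent_faces:
        if timestamp >= cutoff and similarity(descriptor, known) >= SIMILARITY_THRESHOLD:
            return True     # ignored as a repeated recognition
    recent_faces.append((now, descriptor))
    return False            # reported as a new face
```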
Basic settings |
Age and gender | Yes | The parameter is disabled by default. If you need to save age and gender information for each captured face in the database, select the Yes value
Info |
---|
| The average error in age recognition is 5 years. |
|
No |
Algorithm of liveness detection | Not selected | Note
---|
| The algorithm is sensitive to the position of the face in the frame and to optical distortion. The algorithm operates correctly only when a face is located in the center of the frame where distortion is minimal. The lower the distortion in the frame, the higher the accuracy of the algorithm operation. |
The Not selected value is selected by default. A liveless face is a photo of a face instead of a live person. You can use a photo, mask, or video as an image. If necessary, select the algorithm for detecting a liveless face:
- Algorithm 1 (Photo I) is used for processors with small processing power in access control systems (ACS). A simplified, fast box model is used to speed up processing.
- Algorithm 2 (Photo I, Textures I) is used for most other ACSs. It combines a fast box model and a texture model.
- Algorithm 3 (Photo II, Textures II) is used for remote authorization scenarios. It uses a more detailed and slower box model with a slower texture model, which improves verification accuracy.
- Algorithm 4 (Comprehensive) is a mode for fine-tuning for specific conditions. We recommend using this mode if you can adjust the parameters to a specific domain and select threshold values. This approach increases accuracy, but the threshold can vary significantly depending on the conditions.
You must configure the parameter empirically. Algorithm for determining a liveless face: the algorithm assigns each face a Liveness score—a value in the range [0, 100]. The decision on the liveness of a face is made based on the value specified in the Liveness threshold parameter according to the following logic:
- liveless if the Liveness score is less than or equal to the threshold value of the parameter;
- live if the Liveness score exceeds the threshold value of the parameter.
Example (see also the sketch after the table):
- The Liveness threshold parameter = 30.
- In the logs, the algorithm assigned a Liveness score = 78.
- Since 78 > 30, the face is determined as live
|
Algorithm 1 (Photo I) |
Algorithm 2 (Photo I, Textures I) |
Algorithm 3 (Photo II, Textures II) |
Algorithm 4 (Comprehensive) |
Biometric data | Yes | The parameter is enabled by default. When you search for faces (see Face search), the search results display found faces that are similar to an attached photo or track, taking into account the specified minimum level of similarity (in percent). For the correct check in lists of faces, you must also enable this parameter. If you want to keep the biometric data private, select the No value. In this case, when searching for faces (see Face search) using an attached photo or track, as well as when checking in lists of faces, the search doesn't return any result |
No |
Detection mode | CPU | Select a processor for the detector operation—CPU or Nvidia GPU (see Selecting Nvidia GPU when configuring detectors)
Note |
---|
| It can take several minutes to launch the algorithm on an Nvidia GPU after you apply the settings. |
|
Nvidia GPU 0 |
Nvidia GPU 1 |
Nvidia GPU 2 |
Nvidia GPU 3 |
Huawei NPU |
Face attributes recognition βeta | Not selected | The Not selected value is selected by default. If necessary, select the face attributes recognition algorithm:
- Algorithm 1 (Fast recognition) is used for fast recognition and assessment.
- Algorithm 2 (Accurate recognition) is used for a more accurate recognition and assessment
|
Algorithm 1 (Fast recognition) |
Algorithm 2 (Accurate recognition) |
Face mask detection | Yes | The Face mask detection parameter is disabled by default. If you use the mask detector, select the Yes value (see Mask detector VA) |
No |
Maximum face height | 100 | Specify the maximum height of the captured faces as a percentage of the frame size. You must select the required value empirically. The value must be in the range [1, 100] |
Maximum face width | 100 | Specify the maximum width of the captured faces as a percentage of the frame size. You must select the required value empirically. The value must be in the range [1, 100] |
Minimum face height | 5 | Specify the minimum height of the captured faces as a percentage of the frame size. You must select the required value empirically. The value must be in the range [1, 100] |
Minimum face width | 5 | Specify the minimum width of the captured faces as a percentage of the frame size. You must select the required value empirically. The value must be in the range [1, 100] |
Minimum threshold of face authenticity | 90 | Specify the minimum level of face recognition accuracy for the creation of a track. You must select the required value empirically. We recommend specifying the value of not less than 90. The higher the value, the fewer faces are detected, while the recognition accuracy increases. The value must be in the range [1, 100] (see the sketch after the table) |
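The decision rule described for the Algorithm of liveness detection and Liveness threshold parameters can be written as the following sketch (illustrative only); it reproduces the example from the table.

```python
# Illustrative sketch of the liveness decision rule described above.
def is_live(liveness_score: float, liveness_threshold: float = 30) -> bool:
    """A face is live only if its Liveness score exceeds the Liveness threshold."""
    return liveness_score > liveness_threshold

# Example from the table: threshold 30, score 78 -> 78 > 30, the face is live.
print(is_live(78, 30))  # True (live)
print(is_live(25, 30))  # False (liveless, for example, a photo)
```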
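The sketch below (illustrative only; the function name and the exact rule are assumptions) shows how the Minimum/Maximum face width and height parameters and the Minimum threshold of face authenticity can gate the creation of a track.

```python
# Illustrative sketch: gating track creation by face size and authenticity.
# The names and the exact rule are assumptions, not the product's code.
def accept_face(width_pct: float, height_pct: float, authenticity_pct: float,
                min_w: float = 5, max_w: float = 100,
                min_h: float = 5, max_h: float = 100,
                min_authenticity: float = 90) -> bool:
    """Return True if a captured face passes the size and accuracy checks."""
    return (min_w <= width_pct <= max_w and
            min_h <= height_pct <= max_h and
            authenticity_pct >= min_authenticity)

print(accept_face(width_pct=12, height_pct=18, authenticity_pct=95))  # True
print(accept_face(width_pct=3, height_pct=18, authenticity_pct=95))   # False: too small
```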
If necessary, in the preview window, set the rectangular area of the frame in which you want to detect faces. You can specify the area by moving the anchor points.
Info |
---|
|
- For convenience of configuration, you can "freeze" the frame. To do this, click the corresponding button. To cancel the action, click this button again.
- The detection area is displayed by default. To hide it, click the corresponding button. To cancel the action, click this button again.
|
To save the parameters of the detector, click the Apply button. To cancel the changes, click the Cancel button.
Configuration of the Face detector VA is complete. If necessary, you can create and configure sub-detectors on the basis of the Face detector VA metadata (see Metadata database):
- Line crossing—detector generates an event when a person moves across a line in the specified area of the frame and their face is detected.
- Entrance in area—detector generates an event when a person appears in the specified area of the frame and their face is detected.
- Loitering in area—detector generates an event when a person stays in the specified area of the frame for a long time and their face is detected.
- Mask detector VA—detector generates an event when it captures a face with or without a mask.