Documentation for Axxon Next 4.5.0 - 4.5.10.

Setting up advanced facial detection tools

To configure face detection tools, do as follows:

  1. Set the general Facial Recognition parameters.
  2. Configure a particular detection tool.

To configure shared Facial Recognition parameters, do as follows:

  1. Select the Facial Recognition object. 
  2. If you want to use this detection tool for real-time facial recognition, set the corresponding parameter to Yes (1).

  3. If you want to use this facial recognition tool in real time in parallel with FaceCube Recognition Server (see Configuring FaceCube integration), set Yes for Real-time recognition on external Service (4).

  4. If you need to enable recording of metadata, select Yes from the Record Objects tracking list (3).
  5. If a camera supports multistreaming, select the stream for which detection is needed (4). Selecting a low-quality video stream reduces the load on the Server.
  6. If you need to save age and gender information for each recognized face, select Yes in the corresponding field (1, see Facial recognition and search). 

  7. Select a processing resource for decoding video streams (2). When you select a GPU, a stand-alone graphics card takes priority (decoding with NVIDIA NVDEC chips). If no suitable GPU is available, decoding falls back to the Intel Quick Sync Video technology; otherwise, CPU resources are used for decoding.
  8. If you plan to apply the masks detection tool, set Yes for the Face mask detection parameter (3, see Configuring masks detection).

  9. In some cases, the detection tool may mistake other objects for faces. To filter out non-facial objects, select Yes in the False Detection Filtering field (4); filtering is performed while the vector model of a face is calculated and recorded into the metadata DB. If filtering is enabled, false results still appear in the detection feed but are ignored during searches in the Archive.

  10. Set the time (in milliseconds) between face search operations in a video frame in the Period of face search field (5). Acceptable values range: [1, 10000]. Increasing this value reduces the Server load but may cause some faces to be missed.
  11. Analyzed frames are scaled down to the resolution specified in the Frame size change field (6; 1280 pixels on the longer side by default). This is how it works:

    1. If the longer side of the source image exceeds the value specified in the Frame size change field, it is divided by two.

    2. If the resulting resolution is below the specified value, it is used for further analysis.

    3. If the resulting resolution still exceeds the specified limit, it is divided by two, etc.

      Note

      For example, if the source image resolution is 2048 × 1536 and the limit is set to 1000, the source resolution is divided twice (down to 512 × 384): after the first division, the longer side still exceeds the limit (1024 > 1000).
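The downscaling rule in step 11 amounts to repeatedly halving the longer frame side until it no longer exceeds the limit. A minimal sketch (the function name is illustrative, not part of the Axxon Next API):

```python
def downscale_side(longer_side: int, limit: int = 1280) -> int:
    """Halve the longer frame side until it no longer exceeds the limit.

    Mirrors the described behavior of the Frame size change setting;
    the default of 1280 matches the value mentioned in the text.
    """
    side = longer_side
    while side > limit:
        side //= 2  # each pass divides the resolution by two
    return side
```

With the example from the note, `downscale_side(2048, 1000)` halves twice (2048 → 1024 → 512) because 1024 still exceeds the limit.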

  12. Specify the minimum and maximum sizes of detectable faces in % of the frame size (7). 

  13. In the Minimum threshold of face authenticity field, set the minimum face recognition accuracy required to create a track (8). Determine the value empirically; 90 or higher is recommended. The higher the value, the fewer faces are detected, but recognition accuracy increases.
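Steps 12 and 13 act as detection filters: a face must fall within the configured size range and meet the authenticity threshold before a track is created. A minimal sketch, assuming hypothetical parameter names and default values (not product defaults):

```python
def passes_filters(width_pct: float, height_pct: float, confidence: float,
                   min_size: float = 5.0, max_size: float = 50.0,
                   min_authenticity: float = 90.0) -> bool:
    """Keep a detected face only if its size (as % of the frame) lies
    within [min_size, max_size] and its recognition accuracy meets the
    Minimum threshold of face authenticity.

    All default values here are illustrative assumptions.
    """
    size_ok = (min_size <= width_pct <= max_size
               and min_size <= height_pct <= max_size)
    return size_ok and confidence >= min_authenticity
```

Raising `min_authenticity` drops more borderline detections, which is the trade-off described in step 13.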

  14. Select the processor for face detection: CPU or NVIDIA GPU (9). 

    Attention!

    It may take several minutes to launch the algorithm on an NVIDIA GPU after you apply the settings. You can use caching to speed up future launches (see Configuring the acceleration of GPU-based neuroanalytics).

  15. If you use FaceCube integration (see Configuring FaceCube integration), activate the Send face image parameter (10).

  16. In the Track loss time field, enter the time in milliseconds after which a face track is considered lost (11). Acceptable values range: [1, 10000]. This parameter applies when a face moving in the frame is obscured by an obstacle for some time. If the obscured time is less than the set value, the face is recognized as the same one.
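The track-loss rule in step 16 can be sketched as a simple comparison of the obscured time against the configured limit (names and the default value below are illustrative, not Axxon Next API):

```python
def is_same_track(gap_ms: int, track_loss_time_ms: int = 1000) -> bool:
    """Return True if a face that reappears after being obscured for
    gap_ms milliseconds is treated as the same track, i.e. the gap is
    shorter than the Track loss time setting."""
    return gap_ms < track_loss_time_ms
```

A face hidden for 500 ms with a 1000 ms track loss time keeps its track; one hidden for 1500 ms starts a new track.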

  17. When using wide-angle dual-lens XingYun devices, the detector analyzes two 180° spherical images by default (see Configuring fisheye cameras). This may reduce recognition quality. To de-warp the image before detection, select Yes for the Use Camera Transform parameter (12). This parameter also applies to other types of image transformation. 

  18. Select a rectangular area to be searched for faces in the preview window. To select the area, move the intersection points.

  19. Click the Apply button.

Configuration of general parameters for Facial Recognition is now complete.
