To configure common parameters for pose detection tools, do as follows:

  1. Select the Pose Detection object.
  2. By default, the video stream's metadata is recorded to the database. You can disable this by selecting No in the Record object tracking list (1). Metadata is obtained by decompressing and analyzing the video, which puts a high load on the Server and limits the number of cameras it can handle.
  3. If a camera supports multistreaming, select the stream for which detection is needed (2). Selecting a low-quality video stream allows reducing the load on the Server.
  4. Select a processing resource for decoding the video streams (3). When you select GPU, a discrete graphics card takes priority (decoding with NVIDIA NVDEC chips). If no suitable GPU is available, decoding falls back to the Intel Quick Sync Video technology; otherwise, CPU resources are used.
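The decoder fallback order described above can be sketched as follows. This is an illustrative sketch only; the function and capability flags are hypothetical and not part of the product API.

```python
# Illustrative sketch of the documented decoding priority:
# discrete NVIDIA GPU (NVDEC) first, then Intel Quick Sync Video, then CPU.
def choose_decoder(has_nvidia_gpu: bool, has_quick_sync: bool) -> str:
    """Return the decoding resource according to the documented priority."""
    if has_nvidia_gpu:         # stand-alone NVIDIA card with NVDEC chips
        return "GPU (NVDEC)"
    if has_quick_sync:         # integrated Intel Quick Sync Video
        return "Intel Quick Sync Video"
    return "CPU"               # final fallback
```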
  5. Set the frame rate for the detection tool to process (4). The value must be in the range [0.016, 100].

    Attention!

    If the people in the scene are mostly static, set the FPS to at least 2. If they are moving, 4 FPS or higher is recommended.

    The higher the FPS value, the higher the pose detection accuracy, at the cost of increased CPU load. At 1 FPS, accuracy is at least 70%.

    We recommend setting the appropriate value empirically.
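The FPS constraints above can be expressed as a small validation helper. This is a hedged sketch: the function name, the boolean flag, and the recommended thresholds (2 FPS for static scenes, 4 FPS for moving people) are taken from the text, but the helper itself is not part of the product.

```python
# Valid range for the detection frame rate, as stated in the documentation.
MIN_FPS, MAX_FPS = 0.016, 100.0

def check_fps(fps: float, moving: bool) -> bool:
    """Return True if fps is valid and meets the recommended minimum
    (2 FPS for static people, 4 FPS for moving people)."""
    if not (MIN_FPS <= fps <= MAX_FPS):
        raise ValueError(f"FPS must be within [{MIN_FPS}, {MAX_FPS}]")
    return fps >= (4.0 if moving else 2.0)
```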

  6. Select the processor for the neural network: CPU, one of the GPUs, or an Intel NCS (5, see Hardware requirements for neural analytics).

    Attention!

    If you select a processing resource other than the CPU, that device will carry most of the computing load. However, the detection tool will consume some CPU resources as well.

    Attention!

    It may take several minutes to launch the algorithm on an NVIDIA GPU after you apply the settings. You can use caching to speed up future launches (see Configuring the acceleration of GPU-based neuroanalytics).

    Attention!

    Man-down or sitting-pose detection accuracy may depend on the selected processor. If a different processor gives less accurate results, set the detection parameters empirically and configure the scene perspective.

  7. By default, the entire field of view (FoV) is the detection area. If necessary, you can specify detection areas and skip areas in the preview window. To set an area, right-click the image and select the required area type.

    Note

    The areas are set the same way as for the Scene Analytics (see Configuring the Detection Zone).

    This is how it works:

    1. If you specify only areas for detection, no detection is performed in the rest of the FoV.

    2. If you specify only skip areas, detection is performed in the rest of the FoV.
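The two rules above can be sketched as a simple decision function. This is an illustrative model only: it uses axis-aligned rectangles `(x1, y1, x2, y2)` for areas, whereas the product lets you draw arbitrary areas in the preview window; the function names are hypothetical.

```python
def in_rect(pt, rect):
    """Check whether a point (x, y) lies inside a rectangle (x1, y1, x2, y2)."""
    x, y = pt
    x1, y1, x2, y2 = rect
    return x1 <= x <= x2 and y1 <= y <= y2

def is_analyzed(pt, detection_areas, skip_areas):
    """Decide whether a point of the frame is analyzed.

    - Skip areas are always excluded.
    - If detection areas are set, only points inside them are analyzed.
    - If neither is set, the entire FoV is analyzed.
    """
    if any(in_rect(pt, r) for r in skip_areas):
        return False
    if detection_areas:
        return any(in_rect(pt, r) for r in detection_areas)
    return True
```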

  8. Select a neural network file (6). 
  9. Select the desired detection tool.
  10. Set the minimum number of frames in which a human must be in the pose of interest for the tool to trigger.

    Note

    The default values (2 frames and 1000 milliseconds) mean that the tool analyzes one frame every second. When a pose is detected in 2 consecutive frames, the tool triggers.
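The triggering rule in this note can be sketched as follows. This is a hedged model, not the product's implementation: `detections` is a hypothetical list of per-analyzed-frame results (one entry per 1000 ms interval), and the default of 2 consecutive detections comes from the text above.

```python
def trigger_frame(detections, required=2):
    """Return the index of the analyzed frame at which the tool triggers,
    i.e. when a pose has been seen in `required` consecutive frames,
    or None if that streak is never reached."""
    streak = 0
    for i, pose_detected in enumerate(detections):
        streak = streak + 1 if pose_detected else 0  # reset on a miss
        if streak >= required:
            return i
    return None
```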

    Note

    This parameter is not used for masking settings.

  11. Click Apply.