Tip

Video stream and scene requirements for Pose detection tools

Object image requirements for the Pose detection tool

Hardware requirements for neural analytics operation

To configure the common parameters for Pose detection tools, do as follows:

  1. Select the Pose detection object.
  2. By default, the video stream's metadata are recorded to the database. You can disable this by selecting No in the Record objects tracking list (1). Metadata are obtained via video decompression and analysis, which causes a high Server load and limits the number of video cameras that can be used on it.
  3. If the camera supports multistreaming, select the stream for which detection is needed (2). Selecting a low-quality video stream reduces the load on the Server.
  4. Select a processing resource for decoding video streams (3). When you select a GPU, a stand-alone graphics card takes priority (decoding with NVIDIA NVDEC chips). If there is no appropriate GPU, decoding will use Intel Quick Sync Video technology. Otherwise, CPU resources will be used for decoding.
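    The decoding fallback order in this step can be sketched as follows. This is a minimal illustrative helper, not product code; the resource names are assumptions, not actual product identifiers.

    ```python
    def pick_decoder(available):
        """Pick a video decoding resource following the priority
        described above: stand-alone NVIDIA GPU (NVDEC) first,
        then Intel Quick Sync Video, then the CPU as a fallback.

        `available` is a set of capability strings; the names are
        illustrative, not real product identifiers.
        """
        for resource in ("nvidia_nvdec", "intel_quick_sync", "cpu"):
            if resource in available:
                return resource
        return "cpu"  # CPU decoding is always possible

    # Example: a Server with an Intel iGPU but no NVIDIA card
    print(pick_decoder({"intel_quick_sync", "cpu"}))  # intel_quick_sync
    ```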
  5. Set the frame rate value for the detection tool to process per second (4). This value should be in the range [0.016; 100].

    Note
    titleAttention!

    With static individuals in the scene, set the FPS to no less than 2. With moving individuals in the scene, set the FPS to 4 or above.

    The higher this value, the higher the accuracy of pose detection, but the CPU load is higher as well. At FPS=1, the accuracy will be no less than 70%.

    This parameter varies depending on the objects' speed of movement. For typical tasks, an FPS value from 3 to 20 is sufficient. Examples:

    • pose detection for moderately moving objects (without sudden movements): FPS=3;
    • pose detection for moving objects: FPS=12.


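    The FPS guidance above can be summarized in a small helper. This is an illustrative sketch only; the function name and its arguments are assumptions, and the returned values simply mirror the examples in the note.

    ```python
    def recommended_fps(moving, sudden_movements=False):
        """Suggest a detection FPS per the note above: 2 for static
        people, 3 for moderately moving objects, 12 for actively
        moving objects. Illustrative helper, not product code.
        """
        if not moving:
            return 2   # static individuals in the scene
        if sudden_movements:
            return 12  # actively moving objects
        return 3       # moderately moving objects (no sudden movements)

    FPS_MIN, FPS_MAX = 0.016, 100  # valid range for the FPS parameter

    fps = recommended_fps(moving=True)
    assert FPS_MIN <= fps <= FPS_MAX
    ```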
  6. Select the processor for the neural network: CPU, one of NVIDIA GPUs, or one of Intel GPUs (5, see Hardware requirements for neural analytics operation, General Information on Configuring Detection).

    Note
    titleAttention!

    If you specify a processing resource other than the CPU, the selected device will carry most of the computing load. However, the detection tool will consume CPU resources as well.

    Note
    titleAttention!

    It may take several minutes to launch the algorithm on an NVIDIA GPU after you apply the settings. You can use caching to speed up future launches (see Configuring the acceleration of GPU-based neuroanalytics).


    Note
    titleAttention!

    Man-down or sitting-pose detection accuracy may depend on the particular processor. If the selected processor gives less accurate results, set the detection parameters empirically and configure the scene perspective (see Specific settings for the Man down detection tool, Specific settings for the Sitting person detection tool).


  7. Select a neural network file (6).

    Info
    titleNote

    To ensure the correct operation of the neural network on Linux OS, the corresponding file should be located in the /opt/AxxonSoft/DetectorPack/NeuroSDK directory.


  8. By default, the entire FOV is the area for detection. If necessary, you can specify areas for detection and skip areas in the preview window. To set an area for detection, right-click anywhere on the image, and select the required area.

    Info
    titleNote

    The areas are set the same way as for the Scene Analytics detection tools (see Configuring the Detection Zone).

    This is how it works:

    1. If you specify areas for detection only, no detection will be performed in the rest of the FOV.

    2. If you specify skip areas only, detection will be performed in the rest of the FOV.
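    The area rules above can be sketched as a small decision helper. This is an illustrative sketch under stated assumptions: the function and parameter names are hypothetical, and the point-in-polygon test is supplied by the caller (here a toy rectangle check).

    ```python
    def is_processed(point, detection_areas, skip_areas, contains):
        """Decide whether a point in the frame is analyzed, per the
        rules above: skip areas are always excluded; if detection
        areas are set, only they are analyzed; with no detection
        areas, the rest of the FOV is analyzed. Illustrative only;
        `contains(area, point)` is a caller-supplied membership test.
        """
        if any(contains(a, point) for a in skip_areas):
            return False
        if detection_areas:
            return any(contains(a, point) for a in detection_areas)
        return True  # no detection areas set: whole remaining FOV is analyzed

    # Toy usage with axis-aligned rectangles (x1, y1, x2, y2)
    rect_contains = lambda r, p: r[0] <= p[0] <= r[2] and r[1] <= p[1] <= r[3]
    print(is_processed((5, 5), [(0, 0, 10, 10)], [], rect_contains))  # True
    ```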


  9. Select the required detection tool (7).
  10. Set the minimum number of frames with a human in the relevant pose or behavior for the tool to trigger (8).

    Info
    titleNote

    The default values (2 frames and 1000 milliseconds) indicate that the tool will analyze one frame every second. When a pose is detected in 2 subsequent frames, the tool will trigger.

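    The triggering behavior described in the note can be sketched as a consecutive-frame counter. This is a hypothetical sketch of the described logic, not the product's implementation; the class and method names are assumptions.

    ```python
    class PoseTrigger:
        """Fire after a pose is seen in N subsequent analyzed frames,
        mirroring the note above (default: 2 frames, with one frame
        analyzed per 1000 ms interval). Illustrative sketch only.
        """
        def __init__(self, min_frames=2):
            self.min_frames = min_frames
            self.count = 0  # consecutive frames with the pose so far

        def update(self, pose_detected):
            # Reset the streak on any frame without the pose
            self.count = self.count + 1 if pose_detected else 0
            return self.count >= self.min_frames

    t = PoseTrigger(min_frames=2)
    results = [t.update(d) for d in (True, False, True, True)]
    print(results)  # [False, False, False, True]
    ```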

    Info
    titleNote

    This parameter is not used when configuring people masking.


  11. Click the Apply button.

Setting up the common parameters for the Pose detection tools is complete.