Some parameters can be configured for all Scene Analytics detection tools simultaneously. To configure them, do the following:

  1. Select the Object Tracker object.

  2. By default, the video stream's metadata is recorded to the database. You can disable the recording by selecting No in the Record objects tracking list (1).

    Attention!

    Video decompression and analysis are used to obtain metadata, which causes high Server load and limits the number of video cameras that can be used on it.


  3. If the video camera supports multistreaming, select the stream on which detection should be performed (2). Selecting a low-quality video stream reduces the load on the Server.

    Attention!

    To display object tracks properly, make sure that all video streams from a multistreaming camera have the same aspect ratio settings.


  4. If you need the sensitivity of the Scene Analytics detection tools to be adjusted automatically, select Yes in the Auto sensitivity list (3).

    Note

    It is recommended to enable this option if the lighting fluctuates significantly during the camera's operation (for example, outdoors).


  5. To reduce the false positive rate from a fish-eye camera, position it properly (4). For other devices, this parameter is not valid.

  6. Select a processing resource for decoding video streams (5). When you select GPU, a stand-alone graphics card takes priority (decoding with NVIDIA NVDEC chips). If there is no appropriate GPU, decoding will use Intel Quick Sync Video technology; otherwise, CPU resources will be used. The priority order is illustrated by the sketch below.
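
    The priority order described in this step amounts to a simple fallback chain. Below is a minimal Python sketch of that logic only; the function name and the availability flags are hypothetical illustrations, not part of the product.

      def choose_decoder(gpu_selected, has_nvidia_nvdec, has_quick_sync):
          # Priority from step 6: a stand-alone GPU (NVIDIA NVDEC) first,
          # then Intel Quick Sync Video, then the CPU as a fallback.
          # Hypothetical sketch, not actual product code.
          if gpu_selected and has_nvidia_nvdec:
              return "NVIDIA NVDEC"
          if gpu_selected and has_quick_sync:
              return "Intel Quick Sync Video"
          return "CPU"
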
  7. In the Motion detection sensitivity field (6), set the sensitivity of the Scene Analytics detection tools to motion, in the range [1; 100].
  8. To compensate for smooth camera shake, set the Antishaker parameter to Yes (7). It is recommended to use this parameter only when the camera shake is evident.
  9. Analyzed frames are scaled down to a specified resolution (8, 1280 pixels on the longer side). This is how it works (see also the sketch after the notes below):

    1. If the longer side of the source image exceeds the value specified in the Frame size change field, it is divided by two.

    2. If the resulting resolution falls below the specified value, it is used further.

    3. If the resulting resolution still exceeds the specified limit, it is divided by two, etc.

      Note

      For example, if the source image resolution is 2048*1536 and the specified value is 1000, the source resolution will be halved twice (down to 512*384), as after the first division the number of pixels on the longer side still exceeds the limit (1024 > 1000).


      Note

      If detection is performed on a higher-resolution stream and detection errors occur, it is recommended to reduce the compression.


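    The halving algorithm above, together with the worked example from the note, can be reproduced with a short Python sketch. This is an illustrative reconstruction only; the function name and parameter are hypothetical, not actual product code.

      def downscale(width, height, limit=1280):
          # Halve the frame until the longer side no longer exceeds
          # the value set in the Frame size change field.
          # Illustrative sketch, not actual product code.
          while max(width, height) > limit:
              width //= 2
              height //= 2
          return width, height

      # Worked example from the note: 2048*1536 with the value set to 1000
      # is halved twice, since 1024 > 1000 after the first division.
      print(downscale(2048, 1536, limit=1000))  # -> (512, 384)
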
  10. In the Time of object in DB field (9), enter the time interval in seconds during which the object's properties will be stored. If the object leaves and re-enters the FOV within the specified time, it will be identified as one and the same object (same ID), as illustrated by the sketch below.
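
    The re-identification rule from this step can be pictured with a small sketch. Everything here (the class, its methods, and matching objects by a signature) is a hypothetical simplification of the behavior described above, not AxxonSoft code.

      import time

      class TrackIdRegistry:
          # Hypothetical sketch: an object that reappears within
          # `time_in_db` seconds keeps its previous ID; otherwise
          # it is treated as a new object and gets a new ID.
          def __init__(self, time_in_db):
              self.time_in_db = time_in_db  # Time of object in DB, seconds
              self.entries = {}             # signature -> (object_id, last_seen)
              self.next_id = 1

          def resolve(self, signature, now=None):
              now = time.time() if now is None else now
              entry = self.entries.get(signature)
              if entry is not None and now - entry[1] <= self.time_in_db:
                  object_id = entry[0]      # same object: the ID is reused
              else:
                  object_id = self.next_id  # new object: a new ID is assigned
                  self.next_id += 1
              self.entries[signature] = (object_id, now)
              return object_id

      registry = TrackIdRegistry(time_in_db=300)
      registry.resolve("person-A", now=0)    # -> 1
      registry.resolve("person-A", now=200)  # -> 1, re-entered within 300 s
      registry.resolve("person-A", now=600)  # -> 2, the interval was exceeded
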
  11. If necessary, configure the neural network filter (see Hardware requirements for neural analytics operation). The neural network filter processes the results of the tracker and filters out false positives on complex video images (foliage, glare, etc.); a conceptual sketch follows the sub-steps below.

    Attention!

    A neural network filter can be used either for analyzing moving objects only, or for analyzing abandoned objects only. You cannot operate two neural networks simultaneously.

    1. Enable the filter by selecting Yes (1).

    2. Select the processor for the neural network: CPU, one of the GPUs, or one of the Intel GPUs (2; see Hardware requirements for neural analytics operation, General Information on Configuring Detection).

      Tip: see Camera requirements for neural filter operation.


      Attention!

      It may take several minutes to launch the algorithm on an NVIDIA GPU after you apply the settings.


    3. Select a neural network (3). To access a neural network, contact AxxonSoft technical support. If no neural network file is specified, or the settings are incorrect, the filter will not operate.


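    The role of the neural network filter can be summarized in a few lines of Python. This is a conceptual sketch only; the function, the confidence callback, and the threshold are hypothetical, not part of the product.

      def filter_tracks(tracks, confidence, threshold=0.5):
          # Conceptual sketch of step 11: score each tracker result with
          # the neural network and keep only likely real objects, dropping
          # false positives caused by foliage, glare, etc.
          # All names here are hypothetical.
          return [track for track in tracks if confidence(track) >= threshold]
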
  12. Click the Apply button.

The general parameters of the Scene Analytics detection tools are now set.