...

  1. To record the sensitivity scale of the detection tool to the archive (see Extra information overlay (Masks)), select Yes for the Record mask to archive parameter (1).

  2. If the camera supports multistreaming, select the stream for which detection is needed (2). Selecting a low-quality video stream reduces the load on the Server.
  3. Select a processing resource for decoding video streams (3). When you select a GPU, a stand-alone graphics card takes priority (decoding with NVIDIA NVDEC chips). If there is no appropriate GPU, decoding will use the Intel Quick Sync Video technology. Otherwise, CPU resources will be used for decoding.
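
    This fallback order can be summed up in a short sketch. The helper below and its arguments are illustrative only and are not part of the product:

      # Illustrative priority order for the decoding resource described in step 3.
      def pick_decoder(has_nvidia_gpu, has_quick_sync):
          if has_nvidia_gpu:
              return "GPU (NVIDIA NVDEC)"        # stand-alone graphics card takes priority
          if has_quick_sync:
              return "Intel Quick Sync Video"    # used when there is no appropriate GPU
          return "CPU"                           # software decoding as the last resort

      print(pick_decoder(has_nvidia_gpu=False, has_quick_sync=True))  # Intel Quick Sync Video
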
  4. Set the number of frames per second for the detection tool to process (4). The value should be in the [0.016; 100] range.

    Info
    titleNote

    The default values (5 frames for output and 0.1 FPS) indicate that the tool will analyze frames over a 50-second span. The detection tool analyzes 1 frame every 10 seconds. If it detects smoke/fire in 5 consecutive frames, it will trigger an alert.
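
    As a rough illustration of this arithmetic (a sketch with assumed variable names, not actual product parameters):

      frames_for_output = 5      # minimum frames with smoke/fire to trigger (see item 7)
      detection_fps = 0.1        # frame rate processed by the detection tool (see item 4)

      seconds_between_frames = 1 / detection_fps            # 10 seconds per analyzed frame
      analysis_window = frames_for_output / detection_fps   # 5 / 0.1 = 50 seconds

      print(seconds_between_frames, analysis_window)        # 10.0 50.0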


  5. Select the processor for the neural network: CPU, one of the NVIDIA GPUs, or one of the Intel GPUs (5, see Hardware requirements for neural analytics operation, General Information on Configuring Detection).

    Note
    titleAttention!

    It may take several minutes to launch the algorithm on an NVIDIA GPU after you apply the settings. You can use caching to speed up future launches (see Configuring the acceleration of GPU-based neuroanalytics).


    Note
    titleAttention!

    If you specify a processing resource other than the CPU, that device will carry most of the computing load. However, the CPU will also be used to run the detection tool.


  6. Select a neural network file (6). The following standard neural networks for different processor types are located in the C:\Program Files\Common Files\AxxonSoft\DetectorPack\NeuroSDK directory:

    smoke_movidius.ann    Smoke detector / Intel NCS
    smoke_openvino.ann    Smoke detector / CPU
    smoke_original.ann    Smoke detector / GPU
    fire_movidius.ann     Fire detector / Intel NCS
    fire_openvino.ann     Fire detector / CPU
    fire_original.ann     Fire detector / GPU

    Enter the full path to a custom neural network file in this field. This is not required if you use the standard neural networks, which are selected automatically.

    Info
    titleNote

    For correct neural network operation on Linux, place the corresponding file in the /opt/AxxonSoft/AxxonNextDetectorPack/NeuroSDK directory.
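
    The standard file names follow a simple pattern: the detector type plus a suffix for the processor. A minimal sketch of that mapping (the helper below is hypothetical; the product selects the standard networks automatically):

      import os

      # Hypothetical mapping of processor type to file-name suffix.
      SUFFIX = {"Intel NCS": "movidius", "CPU": "openvino", "GPU": "original"}

      def standard_network_path(detector, processor, base_dir):
          # detector is "smoke" or "fire"; processor is a key from SUFFIX
          return os.path.join(base_dir, f"{detector}_{SUFFIX[processor]}.ann")

      # Windows location of the standard networks:
      print(standard_network_path("fire", "GPU",
            r"C:\Program Files\Common Files\AxxonSoft\DetectorPack\NeuroSDK"))
      # Linux location (see the note above):
      print(standard_network_path("smoke", "CPU",
            "/opt/AxxonSoft/AxxonNextDetectorPack/NeuroSDK"))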


  7. Set the minimum number of frames with smoke (fire) required for the tool to trigger (7). The value should be in the [5; 20] range.
  8. To detect objects without changing the frame size, select Yes in the Scanning mode field (8).
  9. Set the sensitivity of the tool (9); you can find a suitable value by trial and error. The value should be in the [1; 99] range. The preview window displays the sensitivity scale of the detection tool, which relates to the Sensitivity parameter. If the scale is green, smoke (fire) is not detected. If the scale is yellow, smoke (fire) is detected, but not enough to trigger the tool. If the scale is red, smoke (fire) is detected, and the detection tool will trigger if the scale stays red throughout the sampling period (50 seconds by default, see item 4).
    Example. A Sensitivity value of 40 implies that the alert is triggered when the scale has been at least 4 divisions full over the entire detection time span (50 seconds by default, see item 4). The triggering will stop when the scale has been less than 2 divisions full over the detection time span. The alert will trigger again if the scale has been at least 4 divisions full over the entire detection time span.
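
    A minimal sketch of this trigger/stop behavior (the names, the 10-division scale, and the exact threshold arithmetic are assumptions made for illustration, not the product's internal logic):

      sensitivity = 40
      trigger_at = sensitivity // 10   # at least 4 divisions full -> alert triggers
      release_at = trigger_at // 2     # fewer than 2 divisions full -> alert stops

      def update(alert, divisions):
          # divisions: how full the scale is (0..10) over the detection time span
          if not alert and divisions >= trigger_at:
              return True              # smoke/fire detected, the tool triggers
          if alert and divisions < release_at:
              return False             # the scale dropped low enough, the alert stops
          return alert                 # otherwise keep the current state

      alert = False
      for divisions in [1, 3, 4, 5, 1, 0, 4]:
          alert = update(alert, divisions)
          print(divisions, alert)
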
  10. Select Yes for the Ignore black and white image parameter (10) if the detection tool should not trigger when the image is black and white.
  11. By default, detection is performed over the full image area. In the preview window, you can set one or several detection areas using the anchor points as follows:

    1. Right-click anywhere in the preview window.
    2. Select Detection area (rectangle) to set a rectangular area. If you specify a rectangular area, the detection tool will work only within its limits. The rest of the FOV will be ignored.

    3. Select Analytics Area (polygon) to set one or several polygonal areas. If you specify one or several polygonal areas, the detection tool will work only within these areas, while the remaining part of the FOV will be blacked out.

      Info
      titleNote

      You can configure detection areas similarly to privacy masks in Scene analytics detection tools (see Setting General Zones for Scene analytics detection tools).


Note
titleAttention!

You can use the trial-and-error method to decide which type of detection area (rectangular or polygonal) is more effective in your case. Some neural networks give better detection with rectangles, while others are better with polygons.

...