
Tip

  • Video stream and scene requirements for neural counter operation
  • Hardware requirements for neural analytics operation

To configure Neurocounter, do the following:

  1. To record the mask (highlighting of recognized objects) to the archive, select Yes for the corresponding parameter (1).
  2. If the camera supports multistreaming, select the stream to apply the detection tool to (2).
  3. Select a processing resource for decoding video streams (3). When you select a GPU, a stand-alone graphics card takes priority (decoding with NVIDIA NVDEC chips). If there is no appropriate GPU, decoding will use Intel Quick Sync Video technology. Otherwise, CPU resources will be used for decoding.
  4. If you need to outline objects in the preview window, select Yes for the Detected objects parameter (4).
  5. Set the recognition threshold for objects as a percentage (5). If the recognition probability falls below the specified value, the data is ignored. The higher the value, the higher the accuracy at the cost of sensitivity: some valid triggers may be missed.
  6. Set the number of frames per second for the detection tool to process (6). The value should be in the range [0.016, 100].


    Note

    The default values (3 output frames and 1 FPS) mean that Neurocounter analyzes one frame every second; if Neurocounter detects more objects than the specified threshold value on 3 frames, it triggers.


  7. Select the processor for the neural network: CPU, one of the GPUs, or Intel NCS (7, see Hardware requirements for neural analytics operation, General Information on Configuring Detection).

    Attention!

    If you specify a processing resource other than the CPU, that device will carry most of the computing load. However, the CPU will also be used to run Neurocounter.


    Attention!

    It may take several minutes to launch the algorithm on an NVIDIA GPU after you apply the settings. You can use caching to speed up future launches (see Configuring the acceleration of GPU-based neuroanalytics).


  8. In the Object type field (11), select the object type for counting, or in the Neural network file field (8), select the neural network file.

    Attention!

    To train your neural network, contact AxxonSoft (see Data collection requirements for neural network training).

    A trained neural network for a particular scene allows you to detect objects of a certain type only (e.g. person, cyclist, motorcyclist, etc.).

    If the neural network file is not specified, the default file is used; it is selected based on the selected object type (11) and the processor selected for the neural network operation (7).


    Note

    For correct neural network operation on Linux, place the corresponding file in the /opt/AxxonSoft/AxxonNext/ directory.


  9. Set the triggering condition for Neurocounter:

    1. In the Number of alarm objects field, set the threshold value for the number of objects in the FOV (9).

    2. In the Trigger upon count field, select whether to generate the trigger when the number of objects in the detection zone is greater than or less than the threshold value (12).

  10. Set the minimum number of frames with an excessive number of objects for Neurocounter to trigger (10). The value should be within the range [2; 20].

  11. In the preview window, you can set the detection zones with the help of anchor points, much like privacy masks in Scene Analytics (see Setting General Zones for Scene Analytics). By default, the entire FOV is a detection zone.

  12. Click the Apply button.
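
The decoder selection in step 3 follows a simple priority chain. Below is a minimal sketch of that fallback order; the capability names (`nvidia_nvdec`, `intel_quick_sync`) are illustrative assumptions, not an actual Axxon Next API.

```python
def pick_decoder(available, use_gpu):
    """Pick a video decoding resource by the priority described in step 3.

    Illustrative sketch only; the capability names are assumptions,
    not an Axxon Next API.
    """
    if use_gpu:
        if "nvidia_nvdec" in available:      # stand-alone NVIDIA GPU first
            return "nvidia_nvdec"
        if "intel_quick_sync" in available:  # then Intel Quick Sync Video
            return "intel_quick_sync"
    return "cpu"                             # otherwise decode on the CPU


assert pick_decoder({"nvidia_nvdec", "intel_quick_sync"}, use_gpu=True) == "nvidia_nvdec"
assert pick_decoder({"intel_quick_sync"}, use_gpu=True) == "intel_quick_sync"
assert pick_decoder(set(), use_gpu=True) == "cpu"
```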
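
The triggering condition built up in steps 5, 6, 9, and 10 can be sketched as follows. This is a minimal illustration, not the actual Neurocounter implementation, and it assumes the minimum-frames setting applies to consecutive processed frames.

```python
def neurocounter_triggers(frame_counts, alarm_threshold, min_frames,
                          trigger_on_greater=True):
    """Return True if this sequence of frames would produce a trigger.

    Illustrative sketch only, not the actual Neurocounter internals.
    frame_counts       - objects detected in each processed frame, where
                         frames are sampled at the configured frame rate (6)
    alarm_threshold    - the "Number of alarm objects" value (9)
    min_frames         - minimum number of frames violating the threshold (10)
    trigger_on_greater - the "Trigger upon count" polarity (12): True to
                         trigger above the threshold, False to trigger below
    """
    consecutive = 0
    for count in frame_counts:
        if trigger_on_greater:
            violates = count > alarm_threshold
        else:
            violates = count < alarm_threshold
        consecutive = consecutive + 1 if violates else 0
        if consecutive >= min_frames:
            return True
    return False


# With the defaults (3 output frames, 1 FPS), three consecutive seconds with
# more objects than the threshold produce a trigger.
assert neurocounter_triggers([4, 5, 6], alarm_threshold=3, min_frames=3)
assert not neurocounter_triggers([4, 2, 6, 5], alarm_threshold=3, min_frames=3)
```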

After the Neurocounter is created, you can display the sensor icon and the number of objects in the controlled zone in the video surveillance window on the layout. To configure this option, do the following:

  1. Switch to the Layout Editing mode (see Switching to layout editing mode).
  2. Place the sensor anywhere in the FOV.
  3. Customize the font. To do this, press the corresponding button.
  4. Save the layout (see Exiting layout editing mode). As a result, the sensor and the number of objects will be displayed in the selected spot.