Tip

Video stream and scene requirements for neural counter operation

Hardware requirements for neural analytics operation

To configure Neurocounter, do the following:

  1. To record the mask (highlighting of recognized objects) to the archive, select Yes for the corresponding parameter (1).
  2. If a camera supports multistreaming, select the stream to apply the detection tool to (2). 
  3. Select a processing resource for decoding video streams (3). When you select a GPU, a stand-alone graphics card takes priority (decoding with Nvidia NVDEC chips). If there is no appropriate GPU, decoding falls back to the Intel Quick Sync Video technology. Otherwise, CPU resources are used for decoding (see the decoder selection sketch after this list).
  4. If you need to outline objects in the preview window, select Yes for the Detected objects parameter (4).
  5. Set the recognition threshold for objects in percent (5). If the recognition probability falls below the specified value, the data will be ignored. The higher the value, the higher the accuracy, at the cost of sensitivity.
  6. Set the frame rate value for the detection tool to process (6). This value should be in the range [0.016, 100]. 

  7. Set the minimum number of frames with an excessive number of objects required for Neurocounter to trigger (10). The value must be in the range of 2–20.

    Note

    With the default values (3 frames and 1 fps), Neurocounter analyzes one frame every second and triggers if it detects more objects than the specified threshold value on 3 such frames (see the triggering logic sketch after this list).


  8. Select the processor for the neural network: CPU, one of the Nvidia GPUs, or one of the Intel GPUs (7, see Hardware requirements for neural analytics operation).

    Attention!

    It may take several minutes to launch the algorithm on an Nvidia GPU after you apply the settings. You can use caching to speed up future launches (see Configuring the acceleration of GPU-based neuroanalytics).


    Attention!

    If you specify a processing resource other than the CPU, that device will carry most of the computing load; however, the detection tool will consume some CPU resources as well.


  9. In the Object type field (11), select the object type for counting, or in the Neural network file field (8), select the neural network file.

    Note

    To train your neural network, contact AxxonSoft (see Data collection requirements for neural network training).

    A neural network trained for a particular scene performs well if you want to detect only objects of a certain type (e.g., person, cyclist, motorcyclist).

    If the neural network file is not specified, a default file will be used; it is selected based on the selected object type (11) and the processor selected for the neural network operation (7).


    Note

    For correct neural network operation under Linux, place the corresponding file in the /opt/AxxonSoft/AxxonNext/ directory.


  10. Set the triggering condition for Neurocounter:

    1. In the Number of alarm objects field, set the threshold value for the number of objects in the FoV (9).

    2. In the Trigger upon count field, select whether triggering should occur when the count exceeds the threshold or drops below it (11).

  11. In the preview window, you can set detection zones using anchor points, much like privacy masks in Scene Analytics (see Setting General Zones for Scene Analytics). By default, the entire FoV is a detection zone.
  12. Click Apply.
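
The decoder selection in step 3 follows a simple priority order. The sketch below is a minimal illustration in Python, not the product's actual API; the function name and the capability flags (has_nvidia_nvdec, has_intel_quick_sync) are hypothetical and only model the fallback order described above.

    def pick_decoder(gpu_selected, has_nvidia_nvdec, has_intel_quick_sync):
        """Model the decoding priority from step 3 (GPU > Quick Sync > CPU)."""
        if gpu_selected and has_nvidia_nvdec:
            return "Nvidia NVDEC"            # a stand-alone graphics card takes priority
        if gpu_selected and has_intel_quick_sync:
            return "Intel Quick Sync Video"  # fallback when no appropriate Nvidia GPU is found
        return "CPU"                         # otherwise the CPU decodes the stream

    # Example: GPU selected, but only an integrated Intel GPU is available.
    assert pick_decoder(True, False, True) == "Intel Quick Sync Video"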

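How the recognition threshold (step 5), frame rate (step 6), number of alarm frames (step 7), and triggering condition (step 10) work together can be modeled with the following sketch. This is a simplified illustration, not AxxonSoft code; it assumes detection confidences are available per analyzed frame and that the alarm frames are counted consecutively, which the documentation does not state explicitly. The default values shown are hypothetical except where the text above gives them.

    from dataclasses import dataclass

    @dataclass
    class NeurocounterSettings:
        recognition_threshold: float = 0.30  # step 5: detections below this confidence are ignored
        alarm_object_count: int = 5          # step 10.1: Number of alarm objects (hypothetical value)
        alarm_frames: int = 3                # step 7: frames meeting the condition needed to trigger
        trigger_on_exceed: bool = True       # step 10.2: trigger on exceeding (True) or dropping below (False)

    def counted_objects(confidences, settings):
        """Count detections whose confidence passes the recognition threshold."""
        return sum(1 for c in confidences if c >= settings.recognition_threshold)

    def triggers(frames, settings):
        """Return True if the counter would trigger on this sequence of analyzed frames.

        frames is a list of per-frame detection confidence lists, already sampled
        at the configured frame rate (one frame per second with the defaults).
        """
        qualifying = 0
        for confidences in frames:
            count = counted_objects(confidences, settings)
            if settings.trigger_on_exceed:
                condition_met = count > settings.alarm_object_count
            else:
                condition_met = count < settings.alarm_object_count
            qualifying = qualifying + 1 if condition_met else 0  # consecutive-frame assumption
            if qualifying >= settings.alarm_frames:
                return True
        return False

    # With the defaults (3 frames, 1 fps), three analyzed frames in a row with more
    # than 5 confident detections each would trigger the counter, i.e. over roughly 3 seconds.
    assert triggers([[0.9] * 6, [0.8] * 7, [0.95] * 6], NeurocounterSettings())
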
...