Tip: see also the following pages:

  • Video stream and scene requirements for the Neural counter operation
  • Image requirements for the Neural counter operation
  • Hardware requirements for neural analytics operation
  • Optimizing the operation of neural analytics on GPU in Windows OS
  • Optimizing the operation of neural analytics on GPU in Linux OS

Configuring the detection tool

To configure the Neural counter, do the following:

  1. Go to the Detection Tools tab.
  2. Below the required camera, click Create… → Category: Retail → Neural counter.

By default, the detection tool is enabled and set to count the number of objects in a specified area using a neural network.

If necessary, you can change the detection tool parameters. The list of parameters is given in the following table:

Parameter | Value | Description

Object features

Record mask to archive | Yes / No | By default, the recording of the mask to the archive is disabled. To record the sensitivity scale of the detection tool to the archive (see Displaying information from a detector (mask)), select the Yes value
Video stream | Main stream | If the camera supports multistreaming, select the stream for which detection is needed

Other

Enable | Yes / No | By default, the detection tool is enabled. To disable it, select the No value
Name | Neural counter | Enter the detection tool name or leave the default name
Decoder mode | Auto / CPU / GPU / Huawei NPU | Select a processing resource for decoding video streams. When you select GPU, a stand-alone graphics card takes priority (when decoding with Nvidia NVDEC chips). If there is no appropriate GPU, the decoding will use the Intel Quick Sync Video technology. Otherwise, CPU resources will be used for decoding
Number of frames processed per second | 1 | Specify the number of frames for the detection tool to process per second. The value must be in the range [0.016, 100].
  Note: the default values (3 output frames and 1 FPS) mean that the Neural counter analyzes one frame once per second. If the Neural counter detects the specified number of objects (or more) on 3 frames, an event from the detection tool is generated
Type | Neural counter | Name of the detection tool type (non-editable field)
Advanced settings

Detected objects | Yes / No | By default, detected objects aren't highlighted in the preview window. If you want to highlight detected objects, select the Yes value
Neural network file | (empty) | If you use a custom neural network, select the corresponding file.
  Attention!
  • To train your neural network, contact AxxonSoft. A neural network trained for a particular scene allows you to detect only objects of a certain type (for example, a person, a cyclist, a motorcyclist, and so on).
  • If the neural network file is not specified, the default file is used; it is selected automatically depending on the value of the Detection neural network parameter and the processor selected for the neural network operation in the Mode parameter. If you use a custom neural network, enter a path to the file. The selected detection neural network is ignored when you use a custom neural network.
  • To ensure the correct operation of the neural network on Linux OS, the corresponding file must be located in the /opt/AxxonSoft/DetectorPack/NeuroSDK directory.
  • If you use a standard neural network (training wasn't performed in operating conditions), the guaranteed overall accuracy is 80-95% and the percentage of false positives is 5-20%. The standard neural networks are located in the C:\Program Files\Common Files\AxxonSoft\DetectorPack\NeuroSDK directory.
Number of measurements in a row to trigger detection | 3 | Specify the minimum number of frames on which the detection tool must detect a violation for the detection tool to generate an event. The value must be in the range [1, 20]

Selected object classes | (empty) | If necessary, specify the class of the detected object. If you want to display tracks of several classes, specify them separated by a comma with a space. For example: 1, 10. The numerical values of classes for the embedded neural networks are: 1 for Human/Human (top-down view), 10 for Vehicle.
  1. If you leave the field blank, the tracks of all available classes from the neural network (Detection neural network, Neural network file) will be displayed.
  2. If you specify a class/classes from the neural network, the tracks of the specified class/classes will be displayed.
  3. If you specify both a class/classes from the neural network and a class/classes missing from the neural network, the tracks of the class/classes from the neural network will be displayed.
  4. If you specify a class/classes missing from the neural network, the tracks of all available classes from the neural network will be displayed.
  Note: starting with Detector Pack 3.10.2, if you specify a class/classes missing from the neural network, the tracks won't be displayed.

Scanning window | Yes / No | If the detection of small objects or objects in areas far away from the camera is ineffective, you can use the scanning mode. The scanning mode doesn't provide absolute detection accuracy, but it can improve detection performance. To enable the scanning mode, select the Yes value (see Configuring the scanning mode)
Scanning window height | 0 | The height and width of the scanning window are determined according to the actual size of the frame and the required number of windows. For example, if the real frame size is 1920×1080 pixels and you want to divide the frame into four equal windows, set the width of the scanning window to 960 pixels and the height to 540 pixels (see the sketch below)
Scanning window step height | 0 | The scanning step determines the relative offset of the windows. If the step is equal to the height and width of the scanning window, respectively, the windows line up one after another. Reducing the height or width of the scanning step increases the number of windows because they overlap each other with an offset. This increases the detection accuracy, but also increases the CPU load.
  Attention! The height and width of the scanning step must not be greater than the height and width of the scanning window; the detection tool will not operate with such settings.
Scanning window step width | 0 | See Scanning window step height
Scanning window width | 0 | See Scanning window height
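To make the window and step arithmetic above concrete, here is a minimal illustrative sketch in Python. It is not product code: the function name and the regular-grid placement are our assumptions based on the descriptions above.

    # Illustrative sketch only (not product code): how the scanning window
    # parameters partition a frame, assuming windows are placed on a
    # regular grid with the given step.

    def scanning_windows(frame_w, frame_h, win_w, win_h, step_w, step_h):
        """Return the top-left corners of the scanning windows."""
        if step_w > win_w or step_h > win_h:
            raise ValueError("step must not exceed the window size "
                             "(the detection tool will not operate)")
        corners = []
        for y in range(0, frame_h - win_h + 1, step_h):
            for x in range(0, frame_w - win_w + 1, step_w):
                corners.append((x, y))
        return corners

    # Example from the table: a 1920x1080 frame divided into four equal
    # 960x540 windows (step equal to the window size, so no overlap).
    print(len(scanning_windows(1920, 1080, 960, 540, 960, 540)))  # 4 windows
    # Halving the step makes the windows overlap, increasing their number
    # (higher accuracy, higher CPU load).
    print(len(scanning_windows(1920, 1080, 960, 540, 480, 270)))  # 9 windows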
Basic settings

Detection threshold | 30 | Specify the detection threshold for objects in percent. If the recognition probability falls below the specified value, the data will be ignored. The higher the value, the higher the accuracy, but some events from the detection tool may be missed. The value must be in the range [0.05, 100]
Mode | CPU / Nvidia GPU 0 / Nvidia GPU 1 / Nvidia GPU 2 / Nvidia GPU 3 / Intel NCS (not supported) / Intel HDDL (not supported) / Intel GPU / Huawei NPU | Select a processor for the neural network operation (see Hardware requirements for neural analytics operation, Selecting Nvidia GPU when configuring detectors).
  Attention!
  • If you specify a processing resource other than the CPU, that device will carry most of the computing load; however, the CPU will also be used to run the detection tool.
  • It may take several minutes to launch the algorithm on an NVIDIA GPU after you apply the settings. You can use caching to speed up future launches (see Configuring the acceleration of GPU-based neuroanalytics).
Number of alarm objects | 5 | Specify the number of objects at which an event occurs. The value must be in the range [0, 100]
Detection neural network | Person / Person (top-down view) / Person (top-down view Nano) / Person (top-down view Medium) / Person (top-down view Large) / Vehicle / Person and vehicle (Nano) / Person and vehicle (Medium) / Person and vehicle (Large) | Select the detection neural network from the list. Neural networks are named taking into account the objects they detect. The names can include the size of the neural network (Nano, Medium, Large), which indicates the amount of consumed resources: the larger the neural network, the higher the accuracy of object recognition
Trigger to | Greater than or equal to threshold value / Less than or equal to threshold value / Change in readings | Select when you want to generate an event: the Neural counter generates events starting from the threshold value set in the Number of alarm objects field (see the sketch below this table)
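The way Detection threshold, Number of alarm objects, Trigger to, and Number of measurements in a row to trigger detection combine can be summarized in a short sketch. This is illustrative Python, not product code; all names are invented for the example, and the Change in readings option is not modeled.

    # Illustrative sketch (not product code): how the counting parameters
    # combine into an event decision.

    def should_generate_event(frame_detections, detection_threshold,
                              alarm_objects, trigger_to,
                              measurements_in_a_row):
        """frame_detections: per-analyzed-frame lists of recognition
        probabilities in percent."""
        streak = 0
        for probabilities in frame_detections:
            # Detection threshold: detections below it are ignored.
            count = sum(p >= detection_threshold for p in probabilities)
            # Trigger to: compare the count with Number of alarm objects.
            if trigger_to == "greater_or_equal":
                hit = count >= alarm_objects
            else:  # "less_or_equal"
                hit = count <= alarm_objects
            # Number of measurements in a row: the condition must hold on
            # several consecutive analyzed frames to generate an event.
            streak = streak + 1 if hit else 0
            if streak >= measurements_in_a_row:
                return True
        return False

    # Defaults: threshold 30%, 5 alarm objects, 3 measurements in a row.
    # Six confidently recognized people on 3 consecutive analyzed frames
    # -> an event is generated.
    frames = [[90.0] * 6, [85.0] * 6, [95.0] * 6]
    print(should_generate_event(frames, 30, 5, "greater_or_equal", 3))  # True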

By default, the entire frame is the detection area. In the preview window, you can specify the detection areas using the anchor points (see Configuring a detection area).

Note: for convenience of configuration, you can "freeze" the frame by clicking the corresponding button in the preview window; to cancel the action, click this button again. The detection area is displayed by default; to hide it, click the corresponding button, and to cancel the action, click this button again.

To save the parameters of the detection tool, click the Apply button. To cancel the changes, click the Cancel button. Configuring the Neural counter is complete.

You can display the sensor and the number of objects in the monitored area in the Surveillance window on the layout (see Displaying the number of detected objects).

Example of configuring the Neural counter for solving typical tasks

By default, the Neural counter is set to detect objects moving at a speed of less than 0.3 m/s:

Parameter | Value

Other
Number of frames processed per second | 1
Advanced settings
Number of measurements in a row to trigger detection | 3
Neural network file | Path to the *.ann neural network file. You can also select the value in the Detection neural network parameter; in this case, this field must be left blank
Basic settings
Detection threshold | 30

To solve tasks in which the object speed differs from 0.3 m/s, increase the Number of frames processed per second and/or decrease the Number of measurements in a row to trigger detection. Select the values empirically depending on the task conditions; a rough way to reason about these two parameters is shown below.
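One rough way to reason about these values follows from the note about the default values earlier on this page: with F analyzed frames per second and N measurements in a row, the counting condition must hold for roughly N / F seconds. This is our assumption for illustration, not a documented formula.

    # Rough illustrative arithmetic (an assumption, not a documented formula):
    # with F analyzed frames per second and N measurements in a row, the
    # triggering condition must hold for about N / F seconds, so faster
    # objects need a higher F and/or a lower N.

    def min_dwell_seconds(frames_per_second, measurements_in_a_row):
        return measurements_in_a_row / frames_per_second

    print(min_dwell_seconds(1, 3))  # defaults: ~3 s in the detection area
    print(min_dwell_seconds(6, 2))  # faster objects: ~0.33 s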
