Tip

  • Video stream and scene requirements for the Neural tracker and its sub-detectors
  • Image requirements for the Neural tracker and its sub-detectors
  • Hardware requirements for neural analytics operation
  • Optimizing the operation of neural analytics on GPU in Windows OS
  • Optimizing the operation of neural analytics on GPU in Linux OS

Configuring the detection tool

To configure the Neural tracker, do the following:

  1. Go to the Detection Tools tab.
  2. Below the required camera, click Create… → Category: Trackers → Neural tracker.

By default, the detection tool is enabled and set to detect moving people.

If necessary, you can change the detection tool parameters. The list of parameters is given below.

Object features

Record objects tracking
Values: Yes (default), No. By default, metadata are recorded into the database. To disable metadata recording, select the No value.

Video stream
Values: Main stream (default), Second stream. If the camera supports multistreaming, select the stream for which detection is needed.

Other

Enable
Values: Yes (default), No. By default, the detection tool is enabled. To disable it, select the No value.

Name
Default value: Neural tracker. Enter the detection tool name or leave the default name.

Decode key frames
Values: Yes, No (default). By default, the Decode key frames parameter is disabled. Using this option reduces the load on the Server, but the quality of detection is also reduced. To decode only the key frames, select the Yes value. We recommend enabling this parameter for "blind" Servers (without video image display) on which you want to perform detection. For the MJPEG codec, decoding isn't relevant, as each frame is considered a key frame.

Attention!

The Number of frames processed per second and Decode key frames parameters are interconnected. If there is no local Client connected to the Server, the following rules apply to remote Clients:

  • If the key frame rate is less than the value specified in the Number of frames processed per second field, the detection tool works by key frames.
  • If the key frame rate is greater than the value specified in the Number of frames processed per second field, the detection is performed according to the set period.

If a local Client connects to the Server, the detection tool always works according to the set period. After the local Client disconnects, the above rules apply again.
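
The interplay between these two parameters can be summarized as a simple decision rule. The following is a minimal sketch of that rule; the function and variable names are illustrative, not part of the product API:

```python
# Illustrative sketch of the documented key-frame rule; names are hypothetical.

def effective_detection_rate(key_frame_rate: float,
                             frames_per_second: float,
                             local_client_connected: bool) -> float:
    """Return the rate at which frames are handed to the detection tool."""
    if local_client_connected:
        # With a local Client connected, the tool always works
        # according to the set period (the configured FPS value).
        return frames_per_second
    # Remote Clients only: work by key frames when they arrive
    # less often than the configured processing rate.
    if key_frame_rate < frames_per_second:
        return key_frame_rate          # detection by key frames
    return frames_per_second           # detection by the set period

print(effective_detection_rate(2.0, 6.0, False))   # 2.0 -> works by key frames
print(effective_detection_rate(12.0, 6.0, False))  # 6.0 -> works by the set period
```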

Decoder mode
Values: Auto (default), CPU, GPU, Huawei NPU. Select a processing resource for decoding video streams. When you select GPU, a stand-alone graphics card takes priority (when decoding with Nvidia NVDEC chips). If there is no appropriate GPU, the decoding uses the Intel Quick Sync Video technology. Otherwise, CPU resources are used for decoding.

Neural filter mode
Values: CPU (default), Nvidia GPU 0, Nvidia GPU 1, Nvidia GPU 2, Nvidia GPU 3, Intel NCS (not supported), Intel HDDL (not supported), Intel GPU, Huawei NPU. Select a processing resource for the neural network operation (see Hardware requirements for neural analytics operation, Selecting Nvidia GPU when configuring detectors).

Attention!

We recommend using the GPU. It may take several minutes to launch the algorithm on an Nvidia GPU.
Number of frames processed per second
Default value: 6. Specify the number of frames for the neural network to process per second. The higher the value, the more accurate the tracking, but the load on the CPU is also higher. The value must be in the range [0.016, 100].

Attention!

We recommend a value of at least 6 FPS. For fast moving objects (running individuals, vehicles), set the frame rate to 12 FPS or above (see Example of configuring the Neural tracker for solving typical tasks).

Type
Value: Neural tracker. Name of the detection tool type (non-editable field).
Advanced settings

Camera position
Values: Wall, Ceiling. To sort out false events from the detection tool when using a fisheye camera, select the correct device location. For other devices, this parameter is irrelevant.

Hide moving objects
Values: Yes, No (default). By default, the parameter is disabled. If you don't need to detect moving objects, select the Yes value. An object is considered static if it doesn't change its position more than 10% of its width or height during its track lifetime.

Attention!

If a static object starts moving, the detection tool creates a track, and the object is no longer considered static.

Hide static objects
Values: Yes, No (default). By default, the parameter is disabled. If you don't need to detect static objects, select the Yes value. This parameter lowers the number of false events from the detection tool when detecting moving objects. An object is considered static if it hasn't moved more than 10% of its width or height during the whole time of its track existence (see the sketch below).

Attention!

If a static object starts moving, the detection tool creates a track, and the object is no longer considered static.
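
Both Hide parameters rely on the same 10% displacement criterion. The following is a minimal sketch of that check, with hypothetical names; the product's internal logic is not published:

```python
# Hypothetical illustration of the 10% static-object criterion.

def is_static(positions: list[tuple[float, float]],
              width: float, height: float) -> bool:
    """A track is static if the object never moves more than 10% of
    its width or height over the whole lifetime of the track."""
    xs = [x for x, _ in positions]
    ys = [y for _, y in positions]
    dx = max(xs) - min(xs)
    dy = max(ys) - min(ys)
    return dx <= 0.1 * width and dy <= 0.1 * height

# A person-sized object (60x180 px) that drifted 5 px stays "static":
print(is_static([(100, 200), (105, 202)], width=60, height=180))  # True
```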
Minimum number of detection triggers
Default value: 6. Specify the minimum number of detection triggers for the Neural tracker to display the object's track. The higher the value, the longer the time interval between the detection of an object and the display of its track on the screen. Low values of this parameter can lead to false events from the detection tool. The value must be in the range [2, 100].
Model quantization
Values: Yes, No (default). By default, the parameter is disabled. The parameter is applicable only to standard neural networks for Nvidia GPU. It allows you to reduce the consumption of the GPU computation power. The neural network is selected automatically depending on the value selected in the Detection neural network parameter. To quantize the model, select the Yes value.

Attention!

  • AxxonSoft conducted a study in which a neural network model was trained with quantization to identify the characteristics of the detected object. The study showed that model quantization can lead to both an increase and a decrease in the recognition percentage. This is due to the generalization of the mathematical model. The difference in detection is within ±1.5%, and the difference in object identification is within ±2%.
  • Model quantization is only applicable to Nvidia GPUs.
  • The first launch of a detection tool with the Model quantization parameter enabled can take longer than a standard launch. If caching is used, the next time the detection tool with quantization will run without delay.
Neural network file
If you use a custom neural network, select the corresponding file.

Attention!

  • To train your neural network, contact AxxonSoft (see Data collection requirements for neural network training).
  • A neural network trained for a particular scene allows you to detect only objects of a certain type (for example, a person, a cyclist, a motorcyclist, and so on).
  • If the neural network file is not specified, the default file is used; it is selected automatically depending on the value in the Detection neural network parameter and the processor selected for the neural network operation in the Decoder mode parameter. If you use a custom neural network, enter a path to the file. The selected detection neural network is ignored when you use a custom neural network.
  • To ensure the correct operation of the neural network on Linux OS, the corresponding file must be located in the /opt/AxxonSoft/DetectorPack/NeuroSDK directory.
  • If you use a standard neural network (training wasn't performed in operating conditions), we guarantee an overall accuracy of 80-95% and a false positive rate of 5-20%. The standard neural networks are located in the C:\Program Files\Common Files\AxxonSoft\DetectorPack\NeuroSDK directory.
Scanning window
Values: Yes, No (default). By default, the parameter is disabled. To enable the scanning mode, select the Yes value (see Configuring the scanning mode).

Scanning window height, Scanning window width
Default value: 0. The height and width of the scanning window are determined according to the actual size of the frame and the required number of windows. For example, if the real frame size is 1920×1080 pixels, then to divide the frame into four equal windows, set the scanning window width to 960 pixels and the height to 540 pixels.

Scanning window step height, Scanning window step width
Default value: 0. The scanning step determines the relative offset of the windows. If the step is equal to the height and width of the scanning window respectively, the segments line up one after another. Reducing the height or width of the scanning step increases the number of windows, because they overlap each other with an offset. This increases the detection accuracy, but also increases the CPU load (see the sketch below).

Attention!

The height and width of the scanning step must not be greater than the height and width of the scanning window; the detection tool will not operate with such settings.
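
The worked example above (a 1920×1080 frame split into four 960×540 windows) generalizes to simple arithmetic. The following sketch only does that arithmetic and is not part of the product:

```python
# Arithmetic behind the scanning-window example; illustrative only.

def window_count(frame_w: int, frame_h: int,
                 win_w: int, win_h: int,
                 step_w: int, step_h: int) -> int:
    """Count scanning windows covering a frame. The step must not
    exceed the window size, otherwise the tool will not operate."""
    assert step_w <= win_w and step_h <= win_h, "step larger than window"
    cols = (frame_w - win_w) // step_w + 1
    rows = (frame_h - win_h) // step_h + 1
    return cols * rows

# 1920x1080 frame, 960x540 windows, step equal to the window size:
print(window_count(1920, 1080, 960, 540, 960, 540))  # 4 windows
# Halving the step overlaps the windows, raising accuracy and CPU load:
print(window_count(1920, 1080, 960, 540, 480, 270))  # 9 windows
```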
Selected object classes
If necessary, specify the classes of detected objects. If you want to display tracks of several classes, specify them separated by a comma and a space. For example: 1, 10. The numerical values of classes for the embedded neural networks are: 1 for Human/Human (top-down view), 10 for Vehicle.

  1. If you leave the field blank, the tracks of all classes available in the neural network are displayed (Detection neural network, Neural network file).
  2. If you specify a class or classes from the neural network, the tracks of the specified class or classes are displayed (Detection neural network, Neural network file).
  3. If you specify both a class or classes from the neural network and a class or classes missing from the neural network, the tracks of the class or classes from the neural network are displayed (Detection neural network, Neural network file).
  4. If you specify a class or classes missing from the neural network, the tracks of all classes available in the neural network are displayed (Detection neural network, Neural network file).

Note

Starting with Detector Pack 3.10.2, if you specify a class or classes missing from the neural network, the tracks won't be displayed (Detection neural network, Neural network file).
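
The four rules above, together with the Detector Pack 3.10.2 change, amount to a small piece of set logic. The following is a minimal sketch with hypothetical names, not the product's implementation:

```python
# Hypothetical sketch of the class-selection rules above.

def displayed_classes(selected: set[int], network: set[int],
                      detector_pack_3_10_2: bool = True) -> set[int]:
    if not selected:
        return network                  # blank field: all available classes
    known = selected & network          # classes the network can detect
    if known:
        return known                    # unknown extras are simply ignored
    # Only classes missing from the network were specified:
    return set() if detector_pack_3_10_2 else network

network = {1, 10}                       # 1: Human, 10: Vehicle
print(displayed_classes(set(), network))     # {1, 10}
print(displayed_classes({10}, network))      # {10}
print(displayed_classes({10, 99}, network))  # {10}
print(displayed_classes({99}, network))      # set() since Detector Pack 3.10.2
```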

Similitude search
Values: Yes, No (default). By default, the parameter is disabled. To enable the search for similar persons, select the Yes value. Enabling the parameter increases the processor load.

Attention!

The similitude search works only on tracks of people.
Time of processing similitude track (sec)
Default value: 0. Specify the time in seconds required for the algorithm to process the track to search for similar persons. The value must be in the range [0, 3600].

Time period of excluding static objects
Default value: 0. Specify the time in seconds after which the track of a static object is hidden. If the value is 0, the track of the static object isn't hidden. The value must be in the range [0, 86400].
Track retention time
Default value: 0.7. Specify the time in seconds after which an object track is considered lost. This helps if objects in the scene temporarily overlap each other, for example, when a larger vehicle completely blocks a smaller one from view. The value must be in the range [0.3, 1000].
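
Track retention can be pictured as a per-track timeout: a track survives a short occlusion as long as the object reappears within the retention time. A minimal sketch, with hypothetical names, not the product's actual tracker:

```python
# Illustrative track-retention timeout; names are hypothetical.
import time

class TrackBook:
    def __init__(self, retention_s: float = 0.7):
        self.retention_s = retention_s
        self.last_seen: dict[int, float] = {}   # track id -> last update time

    def update(self, track_id: int) -> None:
        """Record that the object of this track was detected just now."""
        self.last_seen[track_id] = time.monotonic()

    def prune_lost(self) -> list[int]:
        """Drop tracks not updated within the retention time. An object
        briefly hidden behind a larger vehicle keeps its track if it
        reappears before the timeout expires."""
        now = time.monotonic()
        lost = [t for t, ts in self.last_seen.items()
                if now - ts > self.retention_s]
        for t in lost:
            del self.last_seen[t]
        return lost
```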
Basic settings

Detection threshold
Default value: 30. Specify the detection threshold for objects in percent. If the recognition probability falls below the specified value, the data is ignored. The higher the value, the higher the detection quality, but some events from the detection tool may not be considered. The value must be in the range [0.05, 100].
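
The Detection threshold works together with the Minimum number of detection triggers parameter (Advanced settings): a track becomes visible only after enough sufficiently confident detections. A rough sketch with hypothetical names:

```python
# Rough illustration of threshold + minimum-triggers gating.

def should_display(confidences_percent: list[float],
                   threshold: float = 30.0,
                   min_triggers: int = 6) -> bool:
    """Ignore detections below the threshold; display the track once
    the number of remaining detections reaches min_triggers."""
    hits = [c for c in confidences_percent if c >= threshold]
    return len(hits) >= min_triggers

print(should_display([45, 50, 28, 61, 33, 47, 52]))  # True: 6 hits at or above 30
print(should_display([45, 50, 28, 61]))              # False: not enough triggers yet
```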
Neural tracker mode
Values: CPU (default), Nvidia GPU 0, Nvidia GPU 1, Nvidia GPU 2, Nvidia GPU 3, Intel NCS (not supported), Intel HDDL (not supported), Intel GPU, Huawei NPU. Select the processor for the neural network operation (see Hardware requirements for neural analytics operation, Selecting Nvidia GPU when configuring detectors).

Attention!

  • We recommend using the GPU. It may take several minutes to launch the algorithm on an Nvidia GPU. If the neural tracker is running on a GPU, object tracks can lag behind the objects in the Surveillance window. If this happens, set the camera buffer size to 1000 milliseconds (see The Camera object).
  • Starting with Detector Pack 3.11, Intel HDDL and Intel NCS aren't supported.
Detection neural network
Select the detection neural network from the list. Neural networks are named taking into account the objects they detect. The names can include the size of the neural network (Nano, Medium, Large), which indicates the amount of consumed resources. The larger the neural network, the higher the accuracy of object recognition. Available values:

  • Person (default)
  • Person (top-down view Nano)
  • Person (top-down view Medium)
  • Person (top-down view Large)
  • Vehicle
  • Person and vehicle (Nano): low accuracy, low processor load
  • Person and vehicle (Medium): medium accuracy, medium processor load
  • Person and vehicle (Large): high accuracy, high processor load
Neural network filter

Neural filter
Values: Yes, No (default). By default, the parameter is disabled. To sort out parts of tracks, select the Yes value. For example: the Neural tracker detects all freight trucks, and the Neural filter sorts out only the tracks that contain trucks with an open cargo door.

Neural filter file
Select a neural network file.

Attention!

Starting with Detector Pack 3.12, the neural network file of the Neural filter must match the processor type specified in the Neural tracker mode parameter.


By default, the entire frame is a detection area. If necessary, set detection areas in the preview window with the help of anchor points (see Configuring a detection area).

Note

For convenience of configuration, you can "freeze" the frame by clicking the corresponding button. To cancel the action, click this button again.

The detection area is displayed by default. To hide it, click the corresponding button. To cancel the action, click this button again.

To save the parameters of the detection tool, click the Apply button. To cancel the changes, click the Cancel button.

If necessary, you can create and configure sub-detectors on the basis of the Neural tracker (see Standard sub-detectors).

Attention!

To get an event from the Motion in area sub-detector on the basis of the Neural tracker, an object must be displaced by at least 25% of its width or height in the frame.

Example of configuring the Neural tracker for solving typical tasks

The two tasks, detection of moving people and detection of moving vehicles, use the same settings except for the frame rate:

  • Number of frames processed per second (Other): 6 for moving people, 12 for moving vehicles
  • Neural filter (Neural network filter): No
  • Detection threshold (Basic settings): 30
  • Minimum number of detection triggers (Advanced settings): 6
  • Camera position (Advanced settings): Wall
  • Hide static objects (Advanced settings): Yes
  • Neural network file (Advanced settings): path to the *.ann neural network file. You can also select the value in the Detection neural network parameter; in this case, this field must be left blank
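
For a compact side-by-side comparison, the two typical configurations can be restated as key-value pairs. This is only a hypothetical representation for reference; the product is configured through the UI, not through such a file:

```python
# Hypothetical summary of the two typical tasks from the list above.
common = {
    "Neural filter": "No",
    "Detection threshold": 30,
    "Minimum number of detection triggers": 6,
    "Camera position": "Wall",
    "Hide static objects": "Yes",
    "Neural network file": "",  # blank when Detection neural network is used
}
moving_people = {**common, "Number of frames processed per second": 6}
moving_vehicles = {**common, "Number of frames processed per second": 12}
```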