Documentation for Axxon One 2.0.


To configure the Stopped object detector, do the following:

  1. Go to the Detection Tools tab.
  2. Below the required camera, click Create… → Category: Trackers → Stopped object detector.

By default, the detection tool is enabled and set to detect stopped objects.

If necessary, you can change the detection tool parameters. The parameters are listed below:

Object features

  • Record objects tracking. Values: Yes (default), No. By default, metadata are recorded into the database. To disable metadata recording, select the No value.
  • Video stream. Values: Main stream (default), Second stream. If the camera supports multistreaming, select the stream for which detection is needed.

Other

  • Enable. Values: Yes (default), No. By default, the detection tool is enabled. To disable it, select the No value.
  • Name. Default: Stopped object detector. Enter the detection tool name or leave the default name.
  • Decoder mode. Values: Auto (default), CPU, GPU, HuaweiNPU. Select a processing resource for decoding video streams. When you select GPU, a stand-alone graphics card takes priority (decoding with Nvidia NVDEC chips). If there is no appropriate GPU, decoding uses the Intel Quick Sync Video technology. Otherwise, CPU resources are used for decoding.
  • Type. Value: Stopped object detector. Name of the detection tool type (non-editable field).
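The decoding fallback order described for Decoder mode can be sketched as follows. The function and capability flags are hypothetical, for illustration only; they are not part of the Axxon One API. Only the GPU branch is described in the documentation, so other modes fall through to the CPU here.

```python
# Hypothetical sketch of the Decoder mode fallback order: with GPU
# selected, a stand-alone Nvidia card (NVDEC) takes priority, then
# Intel Quick Sync Video, then CPU decoding. Flags are illustrative.

def pick_decoder(mode, has_nvdec=False, has_quick_sync=False):
    if mode == "GPU":
        if has_nvdec:
            return "Nvidia NVDEC"            # stand-alone graphics card takes priority
        if has_quick_sync:
            return "Intel Quick Sync Video"  # no appropriate GPU: fall back to Quick Sync
    return "CPU"                             # otherwise decode on the CPU
```

For example, a host with only an integrated Intel GPU would decode with Quick Sync even when GPU mode is selected.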
Basic settings

  • Detection threshold. Default: 30. Specify the detection threshold for objects in percent. If the recognition probability falls below the specified value, the data is ignored. The higher the value, the higher the accuracy, but some events from the detection tool may be missed. The value must be in the range [0.05, 100].
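The thresholding behavior can be sketched as follows; the function, event structure, and field names are illustrative assumptions, not the actual Axxon One API.

```python
# Hypothetical sketch: filtering detection events by a threshold given
# in percent. Detections below the threshold are ignored, mirroring the
# Detection threshold parameter described above.

def filter_detections(detections, threshold=30.0):
    """Keep detections whose recognition probability (percent) is at or
    above the threshold; discard the rest."""
    if not 0.05 <= threshold <= 100:
        raise ValueError("Detection threshold must be in the range [0.05, 100]")
    return [d for d in detections if d["probability"] >= threshold]

events = [
    {"object": "person", "probability": 85.0},
    {"object": "person", "probability": 12.5},   # below 30: ignored
    {"object": "vehicle", "probability": 40.0},
]
kept = filter_detections(events, threshold=30.0)
```

Raising the threshold keeps only high-confidence detections, which is exactly the accuracy-versus-missed-events trade-off the parameter description mentions.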
  • Neurotracker mode. Values: CPU (default), Nvidia GPU 0, Nvidia GPU 1, Nvidia GPU 2, Nvidia GPU 3, Intel GPU, Intel HDDL (not supported), Huawei NPU. Select the processor for the neural network operation: CPU, one of the Nvidia GPUs, or one of the Intel GPUs (see Hardware requirements for neural analytics operation, Selecting Nvidia GPU when configuring detection tools).

    Attention!

      • It may take several minutes to launch the algorithm on an Nvidia GPU after you apply the settings. You can use caching to speed up future launches (see Optimizing the operation of neural analytics on GPU).
      • Starting with Detector Pack 3.11, Intel HDDL is not supported.
  • Object type. Values: Person (default), Person (top view), Person (top view Nano), Person (top view Medium), Person (top view Large), Vehicle, Person and vehicle (Nano), Person and vehicle (Medium), Person and vehicle (Large). Select the recognition object. The Nano, Medium, and Large variants differ as follows:
      • Nano: low accuracy, low processor load.
      • Medium: medium accuracy, medium processor load.
      • Large: high accuracy, high processor load.
Advanced settings

  • Wait time. Default: 3. Specify the waiting time in seconds for the reappearance of a stopped object that has disappeared. The value must be in the range [1, 60].
  • Stop time. Default: 5. Specify the time in seconds after which an object is considered stopped. The value must be in the range [1, 60].
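The interaction of Stop time and Wait time can be sketched as a small state check; the function and state names are hypothetical, not the detector's actual implementation.

```python
# Hypothetical sketch of the Stop time / Wait time logic: an object is
# treated as stopped once it has been stationary for at least
# `stop_time` seconds, and a disappeared object is kept for up to
# `wait_time` seconds in case it reappears.

def classify(stationary_for, missing_for, stop_time=5, wait_time=3):
    """Return a state given how long the object has been stationary and
    how long it has been missing from the frame (both in seconds)."""
    if missing_for > wait_time:
        return "dropped"   # waited long enough for reappearance; forget it
    if stationary_for >= stop_time:
        return "stopped"   # stationary long enough to trigger an event
    return "moving"
```

With the defaults, an object stationary for 6 seconds triggers a stopped-object state, while one that has been missing for 4 seconds is no longer waited for.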

  • Selected object class. Default: empty. If necessary, specify the class of the detected object. To display tracks of several classes, specify them separated by a comma and a space, for example: 1, 10. The numerical values of classes for the embedded neural networks are: 1 for Human/Human (top view), 10 for Vehicle.

      1. If you leave the field blank, the tracks of all available classes from the neural network are displayed (Object type, Neural network file).
      2. If you specify a class/classes from the neural network, the tracks of the specified class/classes are displayed (Object type, Neural network file).
      3. If you specify both classes from the neural network and classes missing from the neural network, only the tracks of the classes from the neural network are displayed (Object type, Neural network file).
      4. If you specify only a class/classes missing from the neural network, the tracks of all available classes from the neural network are displayed (Object type, Neural network file).

      Note

      Starting with Detector Pack 3.10.2, if you specify a class/classes missing from the neural network, the tracks won't be displayed (Object type, Neural network file).
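The class-selection rules above can be sketched as a small parser. The function is hypothetical and models the Detector Pack 3.10.2+ behavior, where classes unknown to the neural network are simply dropped; the class table matches the embedded networks described above.

```python
# Hypothetical sketch of the Selected object class rules (Detector Pack
# 3.10.2 and later): a blank field means all classes; otherwise only
# classes known to the neural network are kept, so a field containing
# only unknown classes yields no tracks.

NETWORK_CLASSES = {1: "Human", 10: "Vehicle"}  # embedded neural networks

def classes_to_display(field):
    if not field.strip():
        return sorted(NETWORK_CLASSES)  # blank field: all available classes
    requested = [int(c.strip()) for c in field.split(",")]
    return [c for c in requested if c in NETWORK_CLASSES]
```

For instance, an empty field yields both classes, "1, 10" yields both, and a field containing only an unknown class such as "7" yields none.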

  • Camera position. Values: Wall (default), Ceiling. If you use a fish-eye camera, select the actual mounting position of the device to filter out false events. This parameter is not relevant for other devices.
  • Neural network file. Default: empty. If you use a custom neural network, select the corresponding file.

    Attention!

      • To train your neural network, contact AxxonSoft (see Data collection requirements for neural network training).
      • A neural network trained for a particular scene can detect only objects of a certain type (for example, a person, a cyclist, a motorcyclist, and so on).
      • If the neural network file is not specified, the default file is used; it is selected automatically depending on the selected object type (Object type) and the selected processor for the neural network operation (Neurotracker mode). If you use a custom neural network, enter the path to the file. The selected object type is ignored when you use a custom neural network.
      • For the neural network to operate correctly on Linux, the file must be located in the /opt/AxxonSoft/DetectorPack/NeuroSDK directory.
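The Linux path requirement can be checked with a small helper; the function and the example file name are hypothetical, and only the directory itself comes from the documentation.

```python
# Hypothetical sketch: verifying that a custom neural network file sits
# in the directory Axxon One expects on Linux,
# /opt/AxxonSoft/DetectorPack/NeuroSDK. Not part of the product.
from pathlib import PurePosixPath

NEURO_SDK_DIR = PurePosixPath("/opt/AxxonSoft/DetectorPack/NeuroSDK")

def is_valid_network_path(path_str):
    """True if the file is directly inside the NeuroSDK directory."""
    return PurePosixPath(path_str).parent == NEURO_SDK_DIR
```

For example, a file placed in a user's home directory would fail this check even if the path is otherwise valid.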


By default, the entire frame is the detection area. If necessary, set detection areas in the preview window with the help of anchor points (see Configuring the Detection Zone).
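A detection area defined by anchor points is a polygon, and an object counts only if its position falls inside it. A minimal sketch of such a containment test, using the standard ray-casting method; the function and coordinates are illustrative, not Axxon One internals.

```python
# Hypothetical sketch: ray-casting point-in-polygon test for a detection
# area given as an ordered list of (x, y) anchor points.

def point_in_polygon(x, y, polygon):
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does a horizontal ray from (x, y) cross this edge?
        if (y1 > y) != (y2 > y):
            cross_x = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < cross_x:
                inside = not inside
    return inside

area = [(0, 0), (10, 0), (10, 10), (0, 10)]  # square detection area
```

With this square area, a point at (5, 5) is inside while (15, 5) is outside.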

Note

For convenience of configuration, you can "freeze" the frame: click the corresponding button. To cancel the action, click this button again.

The detection area is displayed by default. To hide it, click the corresponding button. To cancel the action, click this button again.

To save the parameters of the detection tool, click the Apply button. To cancel the changes, click the Cancel button.

The Stopped object detector is configured.
