|
The Neurotracker program module registers object tracks in the camera FOV during recording using a neural network and saves them to the VMDA metadata storage (see Creating and configuring VMDA metadata storage).
The configuration of the Neurotracker program module includes the main and additional settings of the detector, selection of the area of interest, and configuration of the neurofilter.
You can configure the Neurotracker program module on the settings panel of the Neurotracker object that is created on the basis of the Camera object on the Hardware tab of the System settings dialog window.

Main settings of the detector
You can configure the main settings of the detector on the Main settings tab on the settings panel of the Neurotracker object.

- Set the Generate event on appearance/disappearance of the track checkbox to generate an event when an object (track) appears in the frame or disappears from it.
| Info |
|---|
| The track appearance/disappearance events are generated only in the debug window (see Start the debug window). They aren't displayed in the Event viewer. |
- Set the Show objects on image checkbox to highlight the detected object with a frame when viewing live video.
- Set the Save tracks to show in archive checkbox to highlight the detected object with a frame when viewing the archive.
| Info |
|---|
| This parameter doesn't affect the VMDA search and is used only for visualization. For this parameter, the titles database is used. |
- Set the Model quantization checkbox to enable model quantization. By default, the checkbox is cleared. This parameter allows you to reduce the consumption of the GPU processing power.
| Info |
|---|
| - AxxonSoft conducted a study in which a neural network model was trained to identify the characteristics of the detected object. The study showed that model quantization can lead to either an increase or a decrease in the recognition rate. This is due to the generalization of the mathematical model. The difference in detection is within ±1.5%, and the difference in object identification is within ±2%. |
| - Model quantization is only applicable to NVIDIA GPUs. |
| - The first launch of the detector with quantization enabled may take longer than a standard launch. |
| - If GPU caching is used, the next launch of the detector with quantization enabled runs without delay. |
- From the Object type drop-down list, select the object type for analysis:
- Human—the camera is directed at the person at an angle of 100-160°;
- Human (top-down view)—camera is pointed at the person from above at a slight angle;
- People view from above (Nano)—camera is pointed at the person from above at a slight angle, small network size;
- People view from above (Medium)—camera is pointed at the person from above at a slight angle, average network size;
- People view from above (Large)—camera is pointed at the person from above at a slight angle, large network size;
- Vehicle—the camera is directed at the vehicle at an angle of 100-160°;
- Person and vehicle (Nano)—detects person and vehicle, small network size;
- Person and vehicle (Medium)—detects person and vehicle, average network size;
- Person and vehicle (Large)—detects person and vehicle, large network size.
| Info |
|---|
| Neural networks are named taking into account the objects they detect. The names can include the size of the neural network (Nano, Medium, Large), which indicates the amount of consumed resources. The larger the neural network, the higher the accuracy of object recognition, but the greater the load on the CPU. |
- By default, the standard neural network is initialized according to the object selected in the Object type drop-down list and the device selected in the Device drop-down list. The standard neural networks for different processor types are selected automatically. If you use a custom neural network, click the button to the right of the Tracking model field and specify its file in the standard Windows Explorer window that opens.
| Info |
|---|
| To train a neural network, contact AxxonSoft. The use of a neural network trained for a particular scene allows you to detect only objects of a certain type (for example, a person, a cyclist, a motorcyclist, and so on). |
- From the Device drop-down list, select the device on which the neural network will operate: the CPU, one of the NVIDIA GPUs, or one of the Intel GPUs. Auto (default value)—the device is selected automatically: the NVIDIA GPU gets the highest priority, followed by the Intel GPU, and then the CPU.
| Note |
|---|
| - We recommend using the GPU. |
| - It can take several minutes to launch the algorithm on the NVIDIA GPU after you apply the settings. You can use caching to speed up future launches (see Optimizing the operation of neural analytics on GPU). |
| - In the Detector Pack 2.0 subsystem, the Intel HDDL support is removed. Thus, when you update from version 1.0, the Not supported option is automatically selected instead of this device, and detectors won't operate. To resume detector operation, select the required device from the list. |
- From the Process drop-down list, select which objects must be processed by the neural network:
- All objects—moving and stationary objects;
- Only moving objects—an object is considered moving if, during the entire lifetime of its track, it has shifted by more than 10% of its width or height. This parameter can reduce the number of false positives;
- Only stationary objects—an object is considered stationary if, during the entire lifetime of its track, it has shifted by no more than 10% of its width or height. If a stationary object starts moving, the detector generates an event, and the object is no longer considered stationary.
| Info |
|---|
| The selection of only moving objects and only stationary objects isn't mutually exclusive, as some tracks cannot be determined as either moving or stationary. First, the neural network detects all objects, and then the detector filters out unnecessary tracks in accordance with the selected value of the Process setting. |
- From the Camera position drop-down list, select:
- Wall—objects are detected only if their lower part gets into the area of interest specified in the detector settings.
- Ceiling—objects are detected even if their lower part doesn't get into the area of interest specified in the detector settings.
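The "shifted by more than 10% of its width or height" rule above can be sketched as follows. This is a minimal illustration of the criterion, not the product's actual algorithm; the function name and the track format (a list of bounding boxes) are hypothetical.

```python
def is_moving(track, threshold=0.10):
    """Classify a track as moving per the 10%-shift rule.

    track: list of (x, y, w, h) bounding boxes, one per frame,
    ordered over the lifetime of the track.
    """
    if len(track) < 2:
        return False
    x0, y0, w0, h0 = track[0]
    for x, y, _, _ in track[1:]:
        # The object counts as moving once its position shifts by more
        # than 10% of its width (horizontally) or height (vertically).
        if abs(x - x0) > threshold * w0 or abs(y - y0) > threshold * h0:
            return True
    return False

# A parked vehicle that jitters slightly vs. one driving through the frame:
parked = [(100, 50, 40, 20), (101, 50, 40, 20), (102, 50, 40, 20)]
driving = [(100, 50, 40, 20), (110, 50, 40, 20), (140, 50, 40, 20)]
print(is_moving(parked))   # stays within 10% of its 40 px width
print(is_moving(driving))  # shifts well beyond the 4 px threshold
```

With Only moving objects selected, the parked vehicle above would be filtered out, which is how this setting reduces false positives.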
Selecting the area of interest
- Click the Settings button. As a result, the Detection settings window opens.
- In the Detection settings window, click the Stop video button (1) to pause the playback and capture the frame of the video image.
- Click the Area of interest button (2) to specify the area of interest. The button is highlighted in blue.
- On the captured frame of the video image, use the mouse to sequentially set the anchor points of the area (1) in which the objects are detected. The rest of the frame is faded. There can be only one area of interest. To delete an area, click the corresponding button. If you don't specify the area of interest, the entire frame is analyzed.
- Click the OK button (2) to close the Detection settings window and return to the settings panel of the detector.
Additional settings
- Go to the Additional settings tab on the settings panel of the Neurotracker object.
- In the Recognition threshold [0,100] field, specify the neurotracker sensitivity—an integer in the range from 0 to 100.
- In the Frames processed per second [0.016, 100] field, specify the number of frames per second processed by the neural network, in the range from 0.016 to 100. For all other frames, interpolation is performed—finding intermediate values from the available discrete set of known values. The greater the value of this parameter, the more accurate the tracking, but the higher the load on the processor.
- In the Minimum number of triggering [2, 100] field, specify the minimum number of neurotracker triggerings required to display the object track. The higher the value of this parameter, the longer it takes from the moment the object is detected to the display of its track. A low value of this parameter can lead to false positives. The default value is 6. The value range is from 2 to 100. An entered value that is greater than the maximum or less than the minimum of the specified range is automatically adjusted to the maximum or minimum value, respectively.
- In the Track hold time (s) field, specify the time in seconds after which the object track is considered lost, in the range from 0.3 to 1000. This parameter is useful in situations when one object in the frame temporarily overlaps another. For example, when a large vehicle completely overlaps a small one.
| Info |
|---|
| If an object (track) is close to the frame boundary, approximately half of the time specified in the Track hold time (s) field must elapse from the moment the object disappears from the frame until its track is deleted. |
- Set the Scanning mode checkbox to detect small objects. If you enable this mode, the load on the system increases, so we recommend specifying a small number of frames processed per second in the Frames processed per second [0.016, 100] field. By default, the checkbox is cleared. For more information on the scanning mode, see Configuring the Scanning mode.
- If necessary, specify the class of the detected object in the Target classes field. If you want to display tracks of several classes, specify them separated by a comma and a space. For example: 1, 10. The numerical values of classes for the embedded neural networks are: 1—Human/Human (top view), 10—Vehicle.
| Info |
|---|
| - If you leave the field blank, the tracks of all available classes from the neural network are displayed (see Object type, Neural network file). |
| - If you specify a class/classes from the neural network, the tracks of the specified class/classes are displayed (see Object type, Neural network file). |
| - If you specify a class/classes from the neural network and a class/classes missing from the neural network, the tracks of the class/classes from the neural network are displayed (see Object type, Neural network file). |
| - If you specify a class/classes missing from the neural network, the tracks of all available classes from the neural network are displayed (see Object type, Neural network file). |
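The behavior of the Target classes field described above can be sketched as follows. This is an illustrative model only; the function names and track representation are hypothetical, and only the class IDs (1 for Human, 10 for Vehicle) come from this section.

```python
# Class IDs of the embedded neural networks, as listed in this section.
EMBEDDED_CLASSES = {1: "Human", 10: "Vehicle"}

def parse_target_classes(field):
    """Parse a Target classes string such as '1, 10' into a set of IDs."""
    field = field.strip()
    if not field:
        return None  # blank field: no filtering
    return {int(part) for part in field.split(",")}

def filter_tracks(tracks, field):
    """tracks: list of (track_id, class_id) pairs."""
    wanted = parse_target_classes(field)
    if wanted is not None:
        # Classes missing from the neural network are ignored.
        wanted &= set(EMBEDDED_CLASSES)
    if not wanted:
        # Blank field, or only unknown classes specified:
        # tracks of all available classes are displayed.
        return tracks
    return [t for t in tracks if t[1] in wanted]

tracks = [(0, 1), (1, 10), (2, 1)]
print(filter_tracks(tracks, "1"))      # only Human tracks
print(filter_tracks(tracks, "1, 10"))  # Human and Vehicle tracks
print(filter_tracks(tracks, ""))       # all tracks
```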
Neurofilter
You can use the neurofilter to sort out some of the tracks. For example, the neurotracker detects all freight trucks, and the neurofilter leaves only those tracks that correspond to trucks with cargo doors open. To configure the neurofilter, do the following:
- Go to the Neurofilter tab on the settings panel of the Neurotracker object.
- Set the Enable filtering checkbox to enable the neurofilter. By default, the checkbox is cleared.
- By default, the standard neural network is initialized according to the device selected in the Device drop-down list. The standard neural networks for different processor types are selected automatically. If you use a custom neural network, click the button to the right of the Tracking model field and specify its file in the standard Windows Explorer window that opens.
| Info |
|---|
| To train a neural network, contact AxxonSoft. The use of a neural network trained for a particular scene allows you to detect only objects of a certain type (for example, a person, a cyclist, a motorcyclist, and so on). |
- From the Device drop-down list, select the device on which the neural network will operate: the CPU, one of the NVIDIA GPUs, or one of the Intel GPUs. Auto (default value)—the device is selected automatically: the NVIDIA GPU gets the highest priority, followed by the Intel GPU, and then the CPU.
| Attention! |
|---|
| - The device for the neurofilter must match the device specified for the neurotracker in the Device drop-down list of the main settings. |
| - It can take several minutes to launch the algorithm on the NVIDIA GPU after you apply the settings. |
- Click the Apply button to save the changes.
| Info |
|---|
| If necessary, create and configure the NeuroTracker VMDA detectors on the basis of the Neurotracker object. The procedure of creating and configuring the NeuroTracker VMDA detectors is similar to that for a regular tracker. The only difference is that when you select the Staying in the area for more than 10 sec detector type, the time the object stays in the zone, after which the NeuroTracker VMDA detectors generate an event, is configured using the LongInZoneTimeout2 registry key, not the LongInZoneTimeout key. The alarm generation mode for any type of VMDA detector is set similarly to a regular tracker using the VMDA.oneAlarmPerTrack registry key. |
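A registry change for the keys mentioned above might look like the following .reg fragment. Only the value names (LongInZoneTimeout2, VMDA.oneAlarmPerTrack) come from this section; the hive path, value types, and values shown here are assumptions for illustration, so check your installation's registry documentation before applying anything.

```
Windows Registry Editor Version 5.00

; Hypothetical path -- depends on the actual installation.
[HKEY_LOCAL_MACHINE\SOFTWARE\WOW6432Node\ITV\INTELLECT]
; Assumed example: time (sec) in the zone before NeuroTracker VMDA
; detectors of the "Staying in the area" type generate an event.
"LongInZoneTimeout2"="10"
; Assumed example: generate one alarm per track.
"VMDA.oneAlarmPerTrack"="1"
```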
The configuration of the Neurotracker program module is complete.
| Tip |
|---|
| If events are periodically received from several objects, for convenience, we recommend creating and configuring the neurotracker track counters (see Configuring the neurotracker track counter). |