To configure the Scene Analytics detection tools based on Neurotracker, do the following:
The Decode key frames parameter (3) is enabled by default, meaning only key frames are decoded. To disable decoding, select No in the corresponding field. This option reduces the load on the Server, but detection quality is naturally reduced as well. We recommend enabling this parameter on "blind" Servers (those that do not display video) on which you want to perform detection. Decoding is not relevant for the MJPEG codec, as each MJPEG frame is considered a key frame.
Attention!
The Number of frames processed per second and Decode key frames parameters are interconnected.
If there is no local Client connected to the Server, the following rules work for remote Clients:
If a local Client connects to the Server, the detection tool always operates at the set frame rate. After the local Client disconnects, the above rules apply again.
You can use the neurofilter to filter out certain tracks. For example, the neurotracker detects all freight trucks, and the neurofilter keeps only video recordings that contain trucks with an open cargo door. To set up the neurofilter, do the following:
To use the neurofilter, select Yes in the corresponding field (7).
In the Neurofilter mode field (5), select the processor to be used for the neural network (see General Information on Configuring Detection).
In the Number of frames processed field (6), specify the number of frames per second for the neural network to process. The higher the value, the more accurate the tracking, but the higher the CPU load.
Attention!
6 FPS or more is recommended. For fast-moving objects (running people, vehicles), you must set the frame rate to 12 FPS or above (see Examples of Configuring Neurotracker for Solving Typical Tasks).
In the Neurotracker mode field (10), select the processor for the neural network—CPU, one of the NVIDIA GPUs, or one of the Intel GPUs (see Hardware Requirements for Neural Analytics Operation, General Information on Configuring Detection).
In the Object type field (11), select the recognition object:
Human and Vehicle (Large)—high accuracy, high processor load.
To eliminate false positives when using a fisheye camera, in the Camera position field (12), select the correct device location. For other devices, this parameter is irrelevant.
If you don't need to detect static objects, select Yes in the Hide static objects field (14). This parameter lowers the number of false positives when detecting moving objects. An object is considered static if it has not moved by more than 10% of its width or height over the lifetime of its track.
Attention!
If a static object starts moving, the detection tool will trigger, and the object will no longer be considered static.
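The static-object criterion above can be expressed as a simple displacement check. The following is an illustrative sketch of that rule, not the product's actual implementation; the track representation (a list of bounding boxes) and the helper name are assumptions for the example.

```python
def is_static(track, threshold=0.10):
    """Illustrative check of the 10% rule: an object is static if, over the
    whole track, its position never shifts by more than `threshold` of its
    initial width (horizontally) or height (vertically).

    track: list of (x, y, w, h) bounding boxes over the track's lifetime.
    """
    x0, y0, w0, h0 = track[0]
    for x, y, w, h in track[1:]:
        # Any displacement beyond 10% of width or height makes it non-static
        if abs(x - x0) > threshold * w0 or abs(y - y0) > threshold * h0:
            return False
    return True
```

For instance, a 100x50 box that drifts by 5 pixels stays static, while a 20-pixel shift exceeds 10% of its width and ends the static state, matching the note above about a static object starting to move.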
If necessary, enable the Model quantization parameter (16). It reduces the consumption of GPU processing power.
Attention!
AxxonSoft conducted a study in which a neural network model was trained to identify the characteristics of the detected object with quantization enabled. The study showed that model quantization can either increase or decrease the recognition rate, due to the generalization of the mathematical model. The difference in detection is within ±1.5%, and the difference in object identification is within ±2%.
Model quantization is only applicable to NVIDIA GPUs.
The first launch of a detection tool with quantization enabled may take longer than a standard launch.
If GPU caching is used, the next launch of a detection tool with quantization enabled will run without delay.
If you use a unique neural network, select the corresponding file (17).
Attention!
If you specify a class/classes missing from the neural network, the tracks of all available classes from the neural network will be displayed (11, 17).
Note
Starting with Detector Pack 3.10.2, if you specify a class/classes missing from the neural network, the tracks won’t be displayed (11, 17).
To enable the search for similar persons, select Yes in the Similitude search field (19). Enabling this parameter increases the processor load.
Attention!
The Similitude search works only on tracks of people.
By default, the entire FOV is a detection area. If you need to narrow down the area to be analyzed, you can set one or several detection areas in the preview window.
Note
The procedure of setting areas is identical to the basic tracker's (see Configuring General Zones for Scene Analytics Detection Tools). The only difference is that the neurotracker areas are processed, while the basic tracker areas are ignored.
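Restricting analysis to a detection area amounts to testing whether an object falls inside a user-drawn polygon. The sketch below shows a standard ray-casting point-in-polygon test as an illustration of how such an area could constrain detection; it is not the product's implementation, and the function name and coordinate conventions are assumptions.

```python
def point_in_polygon(pt, poly):
    """Ray-casting test: cast a horizontal ray from `pt` to the right and
    count how many polygon edges it crosses; an odd count means inside.

    pt: (x, y) point; poly: list of (x, y) vertices in order.
    """
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        # Edge straddles the ray's horizontal line
        if (y1 > y) != (y2 > y):
            # X coordinate where the edge crosses that line
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside
```

For a square area with corners (0, 0), (10, 0), (10, 10), (0, 10), the point (5, 5) is inside and (15, 5) is outside; an object anchored outside every detection area would simply be skipped.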
The next step is to create and configure the necessary detection tools on the basis of the neurotracker. The configuration procedure is the same as for the basic tracker (see Configuring Scene Analytics Detection Tools Based on Tracker).