Tip: Video stream and scene requirements for neural tracker operation.
To configure the neural tracker-based Scene Analytics detection tools, do the following:
- Select the Neurotracker object.
- By default, metadata are recorded into the database. To disable metadata recording, select No (1) from the Record object tracking list.
- If the camera supports multistreaming, select the stream to apply the detection tool to (2).
- To reduce the number of false positives from a fish-eye camera, in the Camera position field, select the correct position of the device (3). For other devices, this parameter is not valid.
- Select the processor for decoding video streams (4). When you select a GPU, a stand-alone graphics card takes priority (when decoding with NVIDIA NVDEC chips). If there is no appropriate GPU, the decoding will use the Intel Quick Sync Video technology. Otherwise, CPU resources will be used for decoding (see General Information on Configuring Detection).
- Set the Detection threshold for objects in percent (5). If the detection probability falls below the specified value, the data will be ignored. The higher the value, the higher the accuracy, but some triggers may not be considered.
- In the Frames processed per second field, set the frame rate value for the neural network to process (6). The other frames will be interpolated. The higher the value, the more accurate the tracking, but the higher the load on the CPU. An illustrative sketch of this setting is provided after the procedure.
Attention! At least 6 FPS is recommended. For fast moving objects (running individuals, vehicles), you should set the frame rate to 12 FPS or above.
- Specify the Minimum number of detection triggers for the neural tracker to display the object track (7). The higher the value of this parameter, the longer it takes from detecting an object to displaying its track. A low value of this parameter may lead to false triggering (see Examples of configuring neural tracker for solving typical tasks). An illustrative sketch of this logic is provided after the procedure.
You can use the neural filter to sort out video recordings featuring selected objects and their tracks. For example, the neural tracker detects all freight trucks, and the neural filter sorts out only the video recordings that contain trucks with the cargo door open. To set up a neural filter, do the following:
- To use the neural filter, set Yes in the corresponding field (9).
- In the Neurofilter file field, select a neural network file (10).
- In the Neurofilter mode field, select a processor to be used for the neural network operation (11).
- In the Neurotracker mode field, select the processor for the neural network operation: the CPU, one of the GPUs, or one of the Intel processors (12, see Hardware requirements for neural analytics operation, General Information on Configuring Detection).
Attention! We recommend using the GPU.
It may take several minutes to launch the algorithm on an NVIDIA GPU after you apply the settings. You can use caching to speed up future launches (see Configuring the acceleration of GPU-based neuroanalytics).
If the neural tracker is running on a GPU, object tracks may lag behind the objects. If this happens, set the camera buffer size to 1000 milliseconds (see The Video Camera Object).
- In the Object type field (13), select the recognition object type, or, in the Neural network file field (8), select the neural network file.
Attention! To train your neural network, contact AxxonSoft (see Data collection requirements for neural network training).
A trained neural network does a great job for a particular scene if you want to detect only objects of a certain type (e.g., person, cyclist, motorcyclist, etc.).
If the neural network file is not specified, the default file will be used; it is selected depending on the selected object type (13) and the processor selected for the neural network operation.
Note: For the correct neural network operation on Linux OS, place the corresponding file in the /opt/AxxonSoft/AxxonNextDetectorPack/NeuroSDK directory.
- If you don't need to detect moving objects, select Yes in the Hide moving objects field (14). An object is treated as static if it does not change its position by more than 10% of its width or height during its track lifetime (see the illustrative sketch at the end of this page).
- If you don't need to detect static objects, select Yes in the Hide stationary objects field (15). This parameter lowers the number of false positives when detecting moving objects.
- In the Track retention time field, set a time interval in seconds after which the tracking of a vehicle is considered lost (16). This helps if objects in the scene temporarily overlap each other. For example, a larger vehicle may completely block a smaller one from view.
- By default, the entire FOV is a detection area. If you need to narrow down the area to be analyzed, set one or more areas in which you want to perform the analysis in the preview window.
Note: The procedure of setting the areas is identical to the base tracker's (see Setting General Zones for Scene Analytics detection tools). The only difference is that the neural tracker's areas are processed while the base tracker's areas are ignored.
- Click the Apply button.
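The sketch below only illustrates two settings from the procedure above: the Detection threshold in percent (5) and the Frames processed per second value (6). It is not AxxonSoft code; the function names, the frame-subsampling scheme, and the detection data layout are assumptions made for the example.

```python
# Illustrative only: mimics the behavior described for the "Detection threshold"
# and "Frames processed per second" settings. Not the product's implementation.

def frames_to_process(stream_fps, processed_fps):
    """Indices of the frames within one second that the neural network would analyze."""
    step = max(1, round(stream_fps / max(1, processed_fps)))
    return list(range(0, stream_fps, step))

def filter_detections(detections, threshold_percent):
    """Keep only detections whose confidence (0-100) reaches the threshold."""
    return [d for d in detections if d["confidence"] >= threshold_percent]

# With a 25 FPS stream and 6 frames processed per second, roughly every 4th frame
# is analyzed; the remaining frames are interpolated.
print(frames_to_process(25, 6))           # [0, 4, 8, 12, 16, 20, 24]

# With a 45% threshold, a 31% detection is ignored and a 72% detection is kept.
detections = [{"label": "person", "confidence": 72},
              {"label": "person", "confidence": 31}]
print(filter_detections(detections, 45))
```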
The next step is to create and configure the necessary detection tools on the basis of the neural tracker. The configuration procedure is the same as for the base tracker (see Setting up Tracker-based Scene Analytics detection tools).
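The following sketch illustrates how the Minimum number of detection triggers (7) and the Track retention time (16) settings are described above: a track is displayed only after enough detections, and it is considered lost when no detection arrives within the retention time. The class and parameter names are hypothetical; this is not AxxonSoft code.

```python
# Illustrative only: a toy track model for the "Minimum number of detection triggers"
# and "Track retention time" settings described in the procedure above.
import time

class Track:
    def __init__(self, track_id, min_triggers, retention_s):
        self.track_id = track_id
        self.min_triggers = min_triggers  # detections required before the track is displayed
        self.retention_s = retention_s    # seconds to keep the track after the last detection
        self.detections = 0
        self.last_seen = time.monotonic()

    def on_detection(self):
        self.detections += 1
        self.last_seen = time.monotonic()

    @property
    def displayed(self):
        # The track appears on screen only after enough detection triggers.
        return self.detections >= self.min_triggers

    def lost(self, now=None):
        # The track is considered lost once no detection arrived for retention_s seconds,
        # for example while a smaller vehicle is completely hidden behind a larger one.
        now = time.monotonic() if now is None else now
        return (now - self.last_seen) > self.retention_s

# With min_triggers=6, the first five detections do not display the track yet.
track = Track("vehicle-1", min_triggers=6, retention_s=2.0)
for _ in range(5):
    track.on_detection()
print(track.displayed)                        # False: below the minimum number of triggers
track.on_detection()
print(track.displayed)                        # True: the track is now shown
print(track.lost(now=track.last_seen + 3.0))  # True: no updates for longer than the retention time
```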
Attention! To trigger a Motion in Area detection tool under the neural tracker, an object should be displaced by at least 25% of its width or height in the FOV.
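A minimal sketch of the displacement condition from the note above: with the neural tracker, a Motion in Area detection tool triggers only if the object moves by at least 25% of its width or height. The function below is hypothetical and only restates that rule.

```python
# Illustrative only: checks the 25% displacement rule mentioned in the note above.

def displacement_sufficient(start, end, width, height, ratio=0.25):
    """start/end are (x, y) centers of the object; width/height are its size in pixels."""
    dx = abs(end[0] - start[0])
    dy = abs(end[1] - start[1])
    return dx >= ratio * width or dy >= ratio * height

# A 120x80 object that moves 20 px (about 17% of its width) would not trigger the tool,
# while a 40 px move (about 33% of its width) would.
print(displacement_sufficient((100, 100), (120, 100), width=120, height=80))  # False
print(displacement_sufficient((100, 100), (140, 100), width=120, height=80))  # True
```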
Attention! The abandoned objects detection tool works only with the base object tracker.
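Finally, a sketch of the static-object rule used by the Hide moving objects (14) and Hide stationary objects (15) settings: an object counts as static if its position changes by no more than 10% of its width or height over the track lifetime. The function name and data layout are assumptions for the example.

```python
# Illustrative only: the 10% rule for treating an object as static, as described above.

def is_static(positions, width, height, ratio=0.10):
    """positions: list of (x, y) centers of one object across its track lifetime."""
    xs = [p[0] for p in positions]
    ys = [p[1] for p in positions]
    dx = max(xs) - min(xs)
    dy = max(ys) - min(ys)
    return dx <= ratio * width and dy <= ratio * height

# A 200x100 object that drifts by 15 px moved only 7.5% of its width, so it is static
# and "Hide stationary objects: Yes" would exclude it from the results.
print(is_static([(400, 300), (405, 300), (415, 302)], width=200, height=100))  # True
# The same object shifted by 60 px (30% of its width) counts as moving.
print(is_static([(400, 300), (430, 300), (460, 305)], width=200, height=100))  # False
```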