General information
It can take several minutes to launch neural analytics algorithms on an NVIDIA GPU after a Server restart, because during this time the neural models are optimized for the current GPU type.
You can use the caching function to ensure that this optimization is performed only once. Caching saves the optimization results on the hard drive and reuses them for subsequent analytics runs.
Starting with DetectorPack 3.119, the Neuro Pack add-ons include a utility (see Installing DetectorPack add-ons) that allows you to create GPU neural network caches without using Axxon One. The presence of the cache speeds up initialization and optimizes video memory consumption.
Optimizing the operation of neural analytics on GPU
To optimize the operation of the neural analytics on GPU, do the following:
Stop the Server (see Starting and stopping the Axxon One Server in Linux OS).
Attention! If any other software is running on the GPU, stop it as well.
Log in as the root superuser:
Run the following command in the terminal:

```bash
sudo su
```

Enter the password for the root superuser.
Create a folder with a custom name to store the cache. For example:

```bash
mkdir /opt/AxxonSoft/AxxonOne/gpucache
```
Change folder permissions:

```bash
chmod -R 777 /opt/AxxonSoft/AxxonOne/gpucache
```
Create the GPU_CACHE_DIR system variable:
Go to the /opt/AxxonSoft/AxxonOne/ folder:

```bash
cd /opt/AxxonSoft/AxxonOne
```
Open the instance.conf file for editing:

```bash
nano instance.conf
```
Add the following line to the file:

```bash
export GPU_CACHE_DIR="/opt/AxxonSoft/AxxonOne/gpucache"
```
Attention! If you change the Server configuration (see Changing the configuration of the Axxon One Server in Linux OS) or update to a new version of Axxon One, the system variables previously added to the instance.conf file will be deleted (see Creating system variables in Linux OS).
Save the file using the Ctrl+O keyboard shortcut.
Exit file editing mode using the Ctrl+X keyboard shortcut.
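The manual steps above (creating the cache folder, opening its permissions, and adding the variable to instance.conf) can also be scripted. The following is a non-authoritative sketch: it uses a temporary stand-in directory so it can run without root access, whereas a real deployment would use the /opt/AxxonSoft/AxxonOne paths shown above.

```bash
# Sketch of the cache-directory setup described above.
# ROOT is a stand-in for /opt/AxxonSoft/AxxonOne (which requires root).
ROOT="$(mktemp -d)"
CACHE_DIR="$ROOT/gpucache"

# Create the cache folder and open its permissions, as in the manual steps.
mkdir -p "$CACHE_DIR"
chmod -R 777 "$CACHE_DIR"

# Append the GPU_CACHE_DIR variable to instance.conf.
# Attention: this line is removed on reconfiguration or upgrade (see above).
echo "export GPU_CACHE_DIR=\"$CACHE_DIR\"" >> "$ROOT/instance.conf"

# Show the resulting file for verification.
cat "$ROOT/instance.conf"
```

Appending with `echo >>` avoids the interactive nano session, which is convenient when provisioning Servers non-interactively.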
Run the following command in the terminal:

```bash
export GPU_CACHE_DIR="/opt/AxxonSoft/AxxonOne/gpucache"
```
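Before running the generator, you may want to confirm that the variable is actually visible in the current shell session. A minimal check (a sketch; the path is the example one used in this section):

```bash
# Set the variable for the current shell session (same command as above)
# and verify that it is non-empty before running the generator.
export GPU_CACHE_DIR="/opt/AxxonSoft/AxxonOne/gpucache"
if [ -n "$GPU_CACHE_DIR" ]; then
    echo "GPU_CACHE_DIR is set to $GPU_CACHE_DIR"
else
    echo "GPU_CACHE_DIR is not set" >&2
fi
```

Note that `export` only affects the current shell and its children; the instance.conf entry is what makes the variable available to the Server itself.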
Go to the /opt/AxxonSoft/DetectorPack/ folder:

```bash
cd /opt/AxxonSoft/DetectorPack
```
Run the following command:

```bash
./NeuroPackGpuCacheGenerator
```
Attention! If more than one NVIDIA GPU is available, you can select the one to use. To do this, specify the number from 0 to 3 that corresponds to the required device in the list.
Optimizing the operation of the neural analytics on GPU is complete. The utility will create the caches of the four neural networks included in the Neuro Pack add-ons:
- GeneralNMHuman_v1.0GPU_onnx.ann—person;
- smokeScanned_v1_onnx.ann (or bestSmoke_v1.ann starting with DetectorPack 3.14)—smoke detection;
- fireScanned_v1_onnx.ann (or bestFire_v1.ann starting with DetectorPack 3.14)—fire detection;
- reid_15_0_256__osnetfpn_segmentation_noise_20_common_29_onnx.ann—search for similar in the Neural tracker (see Similitude search).
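After the utility finishes, you can sanity-check that the cache directory is populated. A minimal sketch, assuming GPU_CACHE_DIR points at the folder created earlier; here a temporary stand-in directory with a placeholder file is used so the check can run anywhere (the real cache file names are created by the utility and are not assumed here):

```bash
# Stand-in cache dir with a dummy file; on a real Server GPU_CACHE_DIR
# is /opt/AxxonSoft/AxxonOne/gpucache, populated by the utility.
GPU_CACHE_DIR="$(mktemp -d)"
touch "$GPU_CACHE_DIR/example.cache"   # placeholder, not a real cache name

# The cache is present if the directory is non-empty.
if [ -n "$(ls -A "$GPU_CACHE_DIR")" ]; then
    echo "cache directory is populated"
else
    echo "cache directory is empty"
fi
```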
Creating GPU neural network caches using parameters
...
-p is a parameter to create a cache for a particular neural network.
Command example:

```bash
./NeuroPackGpuCacheGenerator -p /opt/AxxonSoft/DetectorPack/NeuroSDK/GeneralNMHumanAndVehicle_Nano_v1.0_GPU_onnx.ann
```
-v is a parameter to output the procedure log to the console during cache generation.
Command example to automatically create the caches of the four neural networks included in the Neuro Pack add-ons with log output:

```bash
./NeuroPackGpuCacheGenerator -v
```
--int8=1 is a parameter to create a quantized version of the cache for those neural networks for which quantization is available. By default, quantization is disabled (--int8=0).
Command example:

```bash
./NeuroPackGpuCacheGenerator -p /opt/AxxonSoft/DetectorPack/NeuroSDK/GeneralNMHumanAndVehicle_Nano_v1.0_GPU_onnx.ann --int8=1
```
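The per-network `-p` calls above can be looped when several caches are needed. A hedged sketch: it only echoes the generator invocations (the utility and the .ann files exist only on a configured Server), so drop the `echo` to actually run them; the two network names are taken from this section.

```bash
# Print the generator invocation for each listed network.
# "echo" keeps the sketch runnable without the utility installed.
NEURO_SDK="/opt/AxxonSoft/DetectorPack/NeuroSDK"
COUNT=0
for NET in GeneralNMHumanAndVehicle_Nano_v1.0_GPU_onnx.ann \
           GeneralNMHumanAndVehicle_Medium_v1.0_GPU_onnx.ann; do
    echo ./NeuroPackGpuCacheGenerator -p "$NEURO_SDK/$NET" --int8=1
    COUNT=$((COUNT + 1))
done
```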
Attention! The neural networks for which the quantization mode is available are included in the Neuro Pack add-ons together with the *.info file.
The neural networks for which the quantization mode is available (see Neural tracker, Stopped object detector, Neural counter):
- GeneralNMCar_v1.0GPU_onnx.ann—Vehicle.
- GeneralNMHuman_v1.0GPU_onnx.ann—Person.
- GeneralNMHumanTopView_v0.8GPU_onnx.ann—Person (top-down view).
Starting with DetectorPack 3.11, the following neural networks were added:
- GeneralNMHumanAndVehicle_Nano_v1.0_GPU_onnx.ann—Person and vehicle (Nano).
- GeneralNMHumanAndVehicle_Medium_v1.0_GPU_onnx.ann—Person and vehicle (Medium).
- GeneralNMHumanAndVehicle_Large_v1.0_GPU_onnx.ann—Person and vehicle (Large).
Starting with DetectorPack 3.12, the following neural networks were added:
- GeneralNMHumanTopView_Nano_v1.0_GPU_onnx.ann—Person (top-down view Nano).
- GeneralNMHumanTopView_Medium_v1.0_GPU_onnx.ann—Person (top-down view Medium).
- GeneralNMHumanTopView_Large_v1.0_GPU_onnx.ann—Person (top-down view Large).
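Since quantization-capable networks ship together with a *.info file (see the note above), you can list them on a Server by looking for those files. A non-authoritative sketch; the SDK path is the documented one, and a temporary stand-in directory with a placeholder .info file is used as a fallback so the sketch runs outside a configured Server:

```bash
# List networks that support quantization by their companion *.info files.
NEURO_SDK="/opt/AxxonSoft/DetectorPack/NeuroSDK"
if [ ! -d "$NEURO_SDK" ]; then
    # Stand-in for illustration only; the file name below is a placeholder.
    NEURO_SDK="$(mktemp -d)"
    touch "$NEURO_SDK/GeneralNMCar_v1.0GPU_onnx.info"
fi

for INFO in "$NEURO_SDK"/*.info; do
    [ -e "$INFO" ] || continue
    # The .ann next to each .info is a quantization-capable network.
    echo "quantizable: $(basename "$INFO" .info).ann"
done
```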