
General information

It can take several minutes to launch neural analytics algorithms on an NVIDIA GPU after a server restart. During this time, the neural network models are optimized for the current GPU type.

You can use the caching function to ensure that this operation is performed only once. Caching saves the optimization results to the hard drive and uses them for subsequent analytics runs.

Starting with DetectorPack 3.9, the Neuro Pack add-ons include a utility (see Installing DetectorPack add-ons) that allows you to create GPU neural network caches without using Axxon One. The presence of a cache speeds up initialization and optimizes video memory consumption.
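To check that the utility is present before you start, you can look for its executable in the DetectorPack installation folder. The command below assumes the default installation path used throughout this article:

  dir "C:\Program Files\Common Files\AxxonSoft\DetectorPack\NeuroPackGpuCacheGenerator.exe"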

Optimizing the operation of neural analytics on GPU

To optimize the operation of neural analytics on the GPU, do the following:

  1. Stop the server (see Stopping the server).

    Attention!

    If any other software in the system is running on the GPU, stop it.

  2. Create the GPU_CACHE_DIR system variable (see Appendix 9. Creating system variable). In the Variable value field, specify the path to the cache location with an arbitrary folder name, for example, D:\GPU_cache. The specified directory will store the cache for all the detectors and neural networks in use (a command-line sketch of these steps is provided after the list of networks below).
    The cache size depends on the number of neural networks used and their type. The minimum size is 70 MB.

  3. Run the command prompt as administrator.
  4. To call the utility, enter the following in the command prompt: "C:\Program Files\Common Files\AxxonSoft\DetectorPack\NeuroPackGpuCacheGenerator.exe" (the path is quoted because it contains spaces).

  5. Press Enter.

  6. Specify the ID of the required Nvidia GPU (see Selecting Nvidia GPU when configuring detectors).
  7. Press Enter.

Optimizing the operation of neural analytics on the GPU is now complete. The utility will create the caches of the four neural networks included in the Neuro Pack add-ons:

  • GeneralNMHuman_v1.0GPU_onnx.ann—human detection;
  • smokeScanned_v1_onnx.ann—smoke detection;
  • fireScanned_v1_onnx.ann—fire detection;
  • reid_15_0_256__osnetfpn_segmentation_noise_20_common_29_onnx.ann—similarity search in the Neural tracker (see Similitude search).
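For reference, the preparation described in the procedure above can also be performed from a command prompt running as administrator. The following is a minimal sketch, assuming D:\GPU_cache as the cache folder and the default DetectorPack installation path:

  rem Create the GPU_CACHE_DIR system variable (requires an elevated command prompt)
  setx GPU_CACHE_DIR "D:\GPU_cache" /M

  rem setx does not affect the current session, so set the variable for this session as well
  set "GPU_CACHE_DIR=D:\GPU_cache"

  rem Run the cache generation utility and specify the GPU ID when prompted
  "C:\Program Files\Common Files\AxxonSoft\DetectorPack\NeuroPackGpuCacheGenerator.exe"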

Attention!

The cache must be recreated in the following cases:

Creating GPU neural network caches using parameters

  1. -p is a parameter to create a cache for a particular neural network.
    Command example:

    "C:\Program Files\Common Files\AxxonSoft\DetectorPack\NeuroPackGpuCacheGenerator.exe" -p "<System disk>\<Neural network location directory>\Neural_network_name.ann"

    To create a cache for multiple neural networks, list the paths to the selected neural networks, separated by a space.
    Command example:

    "C:\Program Files\Common Files\AxxonSoft\DetectorPack\NeuroPackGpuCacheGenerator.exe" -p "<System disk>\<Neural network location directory>\Neural_network_name.ann" "C:\Program Files\Common Files\AxxonSoft\DetectorPack\NeuroSDK\WaterLevelRuleNet_origin_onnx.ann"
  2. -v is a parameter to output the procedure log to the console during cache generation.
    Command example to automatically create caches of four neural networks included in the Neuro Pack add-ons with log output:

    "C:\Program Files\Common Files\AxxonSoft\DetectorPack\NeuroPackGpuCacheGenerator.exe" -v

    Command example to create a cache for a particular neural network with log output:

    "C:\Program Files\Common Files\AxxonSoft\DetectorPack\NeuroPackGpuCacheGenerator.exe" -p "<System disk>\<Neural network location directory>\Neural_network_name.ann" -v
  3. --int8=1 is a parameter to create a cache for those neural networks for which quantization is available. Neural networks that support the quantization mode are included in the Neuro Pack add-ons together with a *.info file. By default, this parameter is disabled (--int8=0).
    Command example:

    "C:\Program Files\Common Files\AxxonSoft\DetectorPack\NeuroPackGpuCacheGenerator.exe" --int8=1

The neural networks for which the quantization mode is available (see Neural tracker, Stopped object detector, Neural counter):

  • GeneralNMCar_v1.0GPU_onnx.ann—Vehicle.
  • GeneralNMHuman_v1.0GPU_onnx.ann—Person.
  • GeneralNMHumanTopView_v0.8GPU_onnx.ann—Person (top-down view).

Starting with DetectorPack 3.11, the following neural networks were added:

  • GeneralNMHumanAndVehicle_Nano_v1.0_GPU_onnx.ann—Person and vehicle (Nano).
  • GeneralNMHumanAndVehicle_Medium_v1.0_GPU_onnx.ann—Person and vehicle (Medium).
  • GeneralNMHumanAndVehicle_Large_v1.0_GPU_onnx.ann—Person and vehicle (Large).

Starting with DetectorPack 3.12, the following neural networks were added:

  • GeneralNMHumanTopView_Nano_v1.0_GPU_onnx.ann—Person (top-down view Nano).
  • GeneralNMHumanTopView_Medium_v1.0_GPU_onnx.ann—Person (top-down view Medium).
  • GeneralNMHumanTopView_Large_v1.0_GPU_onnx.ann—Person (top-down view Large).
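The parameters described above can presumably be combined in a single call; the example below is an illustrative assumption rather than a documented invocation. It creates a quantized cache for one of the networks listed above with log output, assuming the network file is located in the NeuroSDK folder shown in the earlier example:

  "C:\Program Files\Common Files\AxxonSoft\DetectorPack\NeuroPackGpuCacheGenerator.exe" -p "C:\Program Files\Common Files\AxxonSoft\DetectorPack\NeuroSDK\GeneralNMHuman_v1.0GPU_onnx.ann" --int8=1 -v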

