General information

After a Server restart, it can take several minutes to launch neural analytics algorithms on an NVIDIA GPU. During this time, the neural models are optimized for the current GPU type.

You can use the caching function to ensure that this operation is performed only once. Caching saves the optimization results to the hard drive and uses them for subsequent analytics runs.

Starting with DetectorPack 3.9, a utility was added to the Neuro Pack add-ons (see Installing DetectorPack add-ons), which allows you to create GPU neural network caches without using Axxon One. The presence of the cache speeds up the initialization and optimizes video memory consumption.

Optimizing the operation of neural analytics on GPU

To optimize the operation of the neural analytics on GPU, do the following:

  1. Stop the server (see Stopping the server).

    Attention! If any software that uses the GPU is running on the system, stop it before proceeding.

  2. Create the GPU_CACHE_DIR system variable (see Appendix 9. Creating system variable) by specifying in the Variable value field the path to the cache location with an arbitrary folder name, for example, D:\GPU_cache. The specified directory will store the cache for all used detectors and neural networks.
    The cache size depends on the number of neural networks used and their type. The minimum size is 70 MB.

  3. Run the command prompt as administrator.
  4. To call the utility, enter the following in the command prompt and press Enter:
    C:\Program Files\Common Files\AxxonSoft\DetectorPack\NeuroPackGpuCacheGenerator.exe
  5. Specify the ID of the required Nvidia GPU (see Selecting Nvidia GPU when configuring detectors) and press Enter.

Optimizing the operation of the neural analytics on GPU is complete. The utility will create the caches of four neural networks included in the Neuro Pack add-ons:

  • GeneralNMHuman_v1.0GPU_onnx.ann—human detection;
  • smokeScanned_v1_onnx.ann (or bestSmoke_v1.ann starting with Detector Pack 3.14)—smoke detection;
  • fireScanned_v1_onnx.ann (or bestFire_v1.ann starting with Detector Pack 3.14)—fire detection;
  • reid_15_0_256__osnetfpn_segmentation_noise_20_common_29_onnx.ann—search for the similar in the Neural tracker (see Similitude search).
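The interactive procedure above can be condensed into a dry-run script. This is only a sketch: the install path is the default one from this page, while the cache directory and GPU ID are illustrative values to adapt to your system (nothing is actually executed here).

```shell
# Dry-run sketch of the cache-generation steps; commands are printed, not run.
# CACHE_DIR and GPU_ID are illustrative; the generator path is the default install.
CACHE_DIR="D:/GPU_cache"
GPU_ID=0
GENERATOR="C:/Program Files/Common Files/AxxonSoft/DetectorPack/NeuroPackGpuCacheGenerator.exe"

# Step 2: the GPU_CACHE_DIR system variable points at the cache folder.
export GPU_CACHE_DIR="$CACHE_DIR"

# Steps 3-5: run the generator as administrator, then enter the GPU ID.
echo "GPU_CACHE_DIR=$GPU_CACHE_DIR"
echo "Would run: \"$GENERATOR\" (then enter GPU ID $GPU_ID)"
```

On a real Server, the variable would be created persistently through the system settings (as in step 2) rather than exported in a shell session.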
Attention!

The cache must be recreated in the following cases:

  • if you update the Neuro Pack add-ons (see Installing DetectorPack add-ons);
  • if you change the Nvidia GPU model;
  • if you update the Nvidia GPU drivers.
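One way to notice the last case in practice is to record the driver version the cache was built with and compare it on startup. The sketch below assumes this bookkeeping approach (it is not a documented feature of the utility); obtaining the current version via nvidia-smi is an assumption about the Server environment, and the version strings are illustrative.

```shell
# Sketch: flag a driver-version mismatch so you know to recreate the cache.
cache_stale() {
  # $1 = version recorded when the cache was created, $2 = current version
  [ "$1" != "$2" ]
}

RECORDED="535.98"   # e.g. read from a marker file stored next to the cache
CURRENT="551.61"    # e.g. from: nvidia-smi --query-gpu=driver_version --format=csv,noheader

if cache_stale "$RECORDED" "$CURRENT"; then
  echo "Driver changed ($RECORDED -> $CURRENT): recreate the GPU cache"
fi
```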

Creating GPU neural network caches using parameters

...

  1. -p is a parameter to create a cache for a particular neural network.
    Command example:

    Code Block
    C:\Program Files\Common Files\AxxonSoft\DetectorPack\NeuroPackGpuCacheGenerator.exe -p "<System disk>\<Neural network location directory>\Neural_network_name.ann"

    To create a cache for multiple neural networks, list the paths to the selected neural networks, separated by a space.
    Command example:

    Code Block
    C:\Program Files\Common Files\AxxonSoft\DetectorPack\NeuroPackGpuCacheGenerator.exe -p "<System disk>\<Neural network location directory>\Neural_network_name.ann" "C:\Program Files\Common Files\AxxonSoft\DetectorPack\NeuroSDK\WaterLevelRuleNet_origin_onnx.ann"
  2. -v is a parameter to output the procedure log to the console during cache generation.
    Command example to automatically create caches of four neural networks included in the Neuro Pack add-ons with log output:

    Code Block
    C:\Program Files\Common Files\AxxonSoft\DetectorPack\NeuroPackGpuCacheGenerator.exe -v

    Command example:

    Code Block
    C:\Program Files\Common Files\AxxonSoft\DetectorPack\NeuroPackGpuCacheGenerator.exe -p "<System disk>\<Neural network location directory>\Neural_network_name.ann" -v
  3. --int8=1 is a parameter to create a quantized version of the cache for those neural networks for which quantization is available. Neural networks that support the quantization mode are included in the Neuro Pack add-ons together with a *.info file. By default, the parameter is disabled (--int8=0).
    Command example:

    Code Block
    C:\Program Files\Common Files\AxxonSoft\DetectorPack\NeuroPackGpuCacheGenerator.exe --int8=1
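The parameters above can be combined in a single invocation. The sketch below only composes and prints such a command line; the network path is illustrative and should be replaced with a real *.ann file.

```shell
# Compose a combined invocation: cache one specific network (-p), log to
# the console (-v), and build the quantized cache variant (--int8=1).
GENERATOR="C:/Program Files/Common Files/AxxonSoft/DetectorPack/NeuroPackGpuCacheGenerator.exe"
NETWORK="C:/Program Files/Common Files/AxxonSoft/DetectorPack/NeuroSDK/GeneralNMHuman_v1.0GPU_onnx.ann"
CMD="\"$GENERATOR\" -p \"$NETWORK\" -v --int8=1"
echo "$CMD"
```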
    Attention!

The neural networks for which the quantization mode is available

...

(see Neural tracker, Stopped object detector, Neural counter):

  • GeneralNMCar_v1.0GPU_onnx.ann—Vehicle.
  • GeneralNMHuman_v1.0GPU_onnx.ann—Person.
  • GeneralNMHumanTopView_v0.8GPU_onnx.ann—Person (top-down view).

Starting with DetectorPack 3.11, the following neural networks were added:

  • GeneralNMHumanAndVehicle_Nano_v1.0_GPU_onnx.ann—Person and vehicle (Nano).
  • GeneralNMHumanAndVehicle_Medium_v1.0_GPU_onnx.ann—Person and vehicle (Medium).
  • GeneralNMHumanAndVehicle_Large_v1.0_GPU_onnx.ann—Person and vehicle (Large).

Starting with DetectorPack 3.12, the following neural networks were added:

...

The networks for which the quantization mode is available:

...

  • GeneralNMHumanTopView_Nano_v1.0_GPU_onnx.ann—Person (top-down view Nano).
  • GeneralNMHumanTopView_Medium_v1.0_GPU_onnx.ann—Person (top-down view Medium).
  • GeneralNMHumanTopView_Large_v1.0_GPU_onnx.ann—Person (top-down view Large).