General information

It may take several minutes to launch neural analytics algorithms on an NVIDIA GPU after the server restart. At this time, the neural models are optimized for the current GPU type.

You can use the caching function to ensure that this operation is performed only once. Caching saves the optimization results to the hard drive and uses them for subsequent analytics runs.

Starting with DetectorPack 3.9, a utility was added to the Neuro Pack add-ons (see Installing DetectorPack add-ons) that allows you to create GPU neural network caches without using Axxon One. The presence of the cache speeds up initialization and optimizes video memory consumption.

Optimizing the operation of neural analytics on GPU

To optimize the operation of the neural analytics on GPU, do the following:

  1. Stop the server (see Stopping the server).

     Attention! If the system has software running on the GPU, you must stop its operation.

  2. Create the GPU_CACHE_DIR system variable (see Appendix 9. Creating system variable) by specifying in the Variable value field the path to the cache location with an arbitrary folder name. For example, D:\GPU_cache (see the command-line sketch after these steps). The specified directory will store the cache for all detectors and neural networks in use.
     The cache size depends on the number of neural networks used and their type. The minimum size is 70 MB.

     Attention! This function works in beta mode for all detection tools that use neural analytics (see General information on Neural Analytics), except Face detection. To optimize the operation of Face detection using the GPU_CACHE_DIR system variable, you need to perform additional actions (see Optimizing the operation of Face detection on GPU).

  3. Run the command prompt as administrator.
  4. To call the utility, enter C:\Program Files\Common Files\AxxonSoft\DetectorPack\NeuroPackGpuCacheGenerator.exe in the command prompt and press Enter.
  5. Specify the ID of the required Nvidia GPU (see Selecting Nvidia GPU when configuring detectors) and press Enter.
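A minimal command-line sketch of creating the system variable from step 2, assuming the example folder D:\GPU_cache (run the command from a command prompt opened as administrator; the variable can also be created via the Windows Environment Variables dialog):

Code Block
setx GPU_CACHE_DIR "D:\GPU_cache" /M

The value set with setx becomes available to processes started after the command completes, so create the variable before starting the server again.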

Optimizing the operation of the neural analytics on GPU is complete. The utility will create the caches of four neural networks included in the Neuro Pack add-ons:

  • GeneralNMHuman_v1.0GPU_onnx.ann—human;
  • smokeScanned_v1_onnx.ann (or bestSmoke_v1.ann starting with Detector Pack 3.14)—smoke detection;
  • fireScanned_v1_onnx.ann (or bestFire_v1.ann starting with Detector Pack 3.14)—fire detection;
  • reid_15_0_256__osnetfpn_segmentation_noise_20_common_29_onnx.ann—search for similar in the Neural tracker (see Similitude search).
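To check that the caches were created, you can list the contents of the folder specified in the GPU_CACHE_DIR variable, for example (assuming the D:\GPU_cache example path from above):

Code Block
dir D:\GPU_cache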
Attention!

The cache must be recreated in the following cases:

  • if you update the Neuro Pack add-ons (see Installing DetectorPack add-ons),
  • if you change the Nvidia GPU model,
  • if you update the Nvidia GPU drivers.
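In any of these cases, you can regenerate the caches by running the utility again from a command prompt opened as administrator, for example with log output. Whether stale cache files in the GPU_CACHE_DIR folder must be deleted first is not specified here, so clearing the folder beforehand is a cautious assumption.

Code Block
C:\Program Files\Common Files\AxxonSoft\DetectorPack\NeuroPackGpuCacheGenerator.exe -v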

Creating GPU neural network caches using parameters

...

  1. -p is a parameter to create a cache for a particular neural network.
    Command example:

    Code Block
    C:\Program Files\Common Files\AxxonSoft\DetectorPack\NeuroPackGpuCacheGenerator.exe -p "<System disk>\<Neural network location directory>\Neural_network_name.ann"

    To create a cache for multiple neural networks, list the paths to the selected neural networks, separated by a space.
    Command example:

    Code Block
    C:\Program Files\Common Files\AxxonSoft\DetectorPack\NeuroPackGpuCacheGenerator.exe -p "<System disk>\<Neural network location directory>\Neural_network_name.ann" "C:\Program Files\Common Files\AxxonSoft\DetectorPack\NeuroSDK\WaterLevelRuleNet_origin_onnx.ann"
  2. -v is a parameter to output the procedure log to the console during cache generation.
    Command example to automatically create caches of four neural networks included in the Neuro Pack add-ons with log output:

    Code Block
    C:\Program Files\Common Files\AxxonSoft\DetectorPack\NeuroPackGpuCacheGenerator.exe -v

    Command example:

    Code Block
    C:\Program Files\Common Files\AxxonSoft\DetectorPack\NeuroPackGpuCacheGenerator.exe -p "<System disk>\<Neural network location directory>\Neural_network_name.ann" -v
  3. --int8=1 is a parameter to create a quantized version of the cache for the neural networks for which quantization is available. By default, this parameter is disabled (--int8=0).
    Command example:

    Code Block
    C:\Program Files\Common Files\AxxonSoft\DetectorPack\NeuroPackGpuCacheGenerator.exe --int8=1
    Attention! The neural networks for which the quantization mode is available are included in the Neuro Pack add-ons together with the *.info file.
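As a concrete illustration of the parameters above, the sketch below combines -p and -v to create the cache for a single network with log output; it assumes the GeneralNMHuman_v1.0GPU_onnx.ann file is located in the default NeuroSDK folder used in the earlier examples:

Code Block
C:\Program Files\Common Files\AxxonSoft\DetectorPack\NeuroPackGpuCacheGenerator.exe -p "C:\Program Files\Common Files\AxxonSoft\DetectorPack\NeuroSDK\GeneralNMHuman_v1.0GPU_onnx.ann" -v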

The neural networks for which the quantization mode is available (see Neural tracker, Stopped object detector, Neural counter):

  • GeneralNMCar_v1.0GPU_onnx.ann—Vehicle.
  • GeneralNMHuman_v1.0GPU_onnx.ann—Person.
  • GeneralNMHumanTopView_v0.8GPU_onnx.ann—Person (top-down view).

Starting with DetectorPack 3.11, the following neural networks were added:

  • GeneralNMHumanAndVehicle_Nano_v1.0_GPU_onnx.ann—Person and vehicle (Nano).
  • GeneralNMHumanAndVehicle_Medium_v1.0_GPU_onnx.ann—Person and vehicle (Medium).
  • GeneralNMHumanAndVehicle_Large_v1.0_GPU_onnx.ann—Person and vehicle (Large).

Starting with DetectorPack 3.12, the following neural networks were added:

  • GeneralNMHumanTopView_Nano_v1.0_GPU_onnx.ann—Person (top-down view Nano).
  • GeneralNMHumanTopView_Medium_v1.0_GPU_onnx.ann—Person (top-down view Medium).
  • GeneralNMHumanTopView_Large_v1.0_GPU_onnx.ann—Person (top-down view Large).