Installing the utility

The GPU Cache Generator utility pre-generates a cache of the neural networks that detectors use in their operation.

To install the utility:

  1. Download the utility file from the AxxonSoft official website.
  2. Run the downloaded file.
  3. In the setup window that opens, click the Next button.

  4. Click the Install button.

  5. Confirm installation as administrator.
  6. Wait for the installation process to complete.

    After installation, a window opens confirming that the utility installation is complete.

  7. Click the Finish button to confirm completion of the installation.

    By default, the utility window opens after installation is complete. If you don’t want the utility to launch automatically, clear the Launch the GPU cache generator checkbox.


The utility installation is complete.

Utility interface

When you launch the utility, a window opens asking you to stop the VMS server and other services that use GPU resources. This is required to correctly create an optimal cache. If you don’t stop the VMS server and related services, the utility can still run, but the caching result may be less effective.

To view the main interface of the utility, click the Yes, I have closed all applications that use GPU button.

As a result, the main window of the utility is displayed. It is divided into two areas: the list of detectors on the left and the list of neural networks on the right.

When you select a detector, all neural networks associated with it are automatically displayed on the right side. When you select a neural network, the corresponding detector is automatically selected on the left side.

The table below lists each detector and its matching neural models:

Barcode detection

GeneralNM barcodes

Equipment detector (PPE)

  • Ppe helmet (head) general
  • Ppe safety vest (body) general
  • Ppe segmentation by pose origin

Fire detector

Fire scanned v1

Meta detector

  • Blip img only
  • Blip text only

Neurocounter or Stopped object detector

  • GeneralNM car v1.0
  • GeneralNM human v1.0
  • GeneralNM human and vehicle large v1.0
  • GeneralNM human and vehicle medium v1.0
  • GeneralNM human and vehicle nano v1.0
  • GeneralNM human top view large v1.0
  • GeneralNM human top view medium v1.0
  • GeneralNM human top view nano v1.0
  • GeneralNM human top view v0.8

Neurotracker

  • GeneralNM car v1.0
  • GeneralNM human v1.0
  • GeneralNM human and vehicle large v1.0
  • GeneralNM human and vehicle medium v1.0
  • GeneralNM human and vehicle nano v1.0
  • GeneralNM human top view large v1.0
  • GeneralNM human top view medium v1.0
  • GeneralNM human top view nano v1.0
  • GeneralNM human top view v0.8
  • Dpe 1638 light pa 100 k (Person attributes recognition)
  • Reid 15 0 256 osnetfpn segmentation noise 20 common 29 (Similitude)

Pose detector

  • General human pose estimation
  • General human pose estimation yolov8 large
  • General human pose estimation yolov8 medium
  • General human pose estimation yolov8 nano

Person-based privacy masking 

General human pose estimation

Privacy masking detector

Privacy masking origin

Smoke detector

Smoke scanned v1

Water level detector

Water level rule net origin

Custom neural networks

Allows you to generate a cache for a custom neural network, provided that the network can run on a GPU.

Generating a cache

To generate a cache:

  1. In the right part of the main window of the utility, set the checkboxes next to the neural networks for which you want to create a cache.

  2. Click the button in the lower right part of the window to go to the cache generation settings.

  3. In the window that opens, set the checkbox next to the video card for which the cache is created.

    Cache generation is supported only for NVIDIA graphics cards, as the TensorRT technology it relies on doesn’t support other vendors’ GPUs.

  4. Configure the cache generation parameters listed below:

    Enable int8 calibration

    This parameter is available only for neural networks with a corresponding *.info file. By default, the checkbox is clear. To enable the Int8 quantization mode for a neural network, set the checkbox. Neural networks for which the quantization mode is available are included in the neural analytics package along with a *.info file of the same name.

    Enable verbose logging mode

    By default, the checkbox is clear. To enable logging of the initialization and cache generation process, set the checkbox. Enabling this parameter provides detailed information about the cache generation process but increases the volume of logs and can slow down generation.

    • Logs are saved in the directory C:\Users\<username>\.gpuCacheGenerator\logs.
    • Previous logs are automatically deleted each time you run the utility.

    The cache will be saved (default value: AXXONGPU)

    Specifies the cache storage directory:

    • If you don’t specify the GPU_CACHE_DIR system variable, the cache is saved in the default directory C:\Users\<user_name>\.gpuCacheGenerator\ (see Appendix 9. Creating system variable).
    • If you specify the GPU_CACHE_DIR system variable, the cache is saved at the path specified in it.
    • When you select a cache directory via the utility, the value of the GPU_CACHE_DIR system variable is updated to the selected path.

  5. Click the button in the lower right part of the window to proceed to generating a cache for all selected neural networks.

    • If you select several neural networks, they are processed one after another.
    • The current progress status is displayed for each neural network.
  6. Wait for the generation process to complete. If the cache is successfully generated, the created files are available for use by detectors.
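The flow in steps 5 and 6, where selected networks are processed one after another with a status tracked per network, can be sketched as follows. This is an illustrative sketch, not the utility’s actual code: the function and status names are assumptions, and `generate_one` stands in for the real cache-building step.

```python
def generate_all(networks, generate_one):
    """Process the selected neural networks sequentially, recording a
    per-network status, similar to how the utility reports progress."""
    statuses = {}
    for name in networks:
        try:
            generate_one(name)       # build the cache for one network
            statuses[name] = "done"
        except Exception as exc:     # one failure doesn't stop the queue
            statuses[name] = f"failed: {exc}"
    return statuses
```

Because processing is sequential, a failure on one network is recorded and the utility moves on to the next selected network.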

Cache generation is complete.
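The cache location rules from step 4 can be sketched as a small resolver. The function name is illustrative (an assumption, not part of the utility); the GPU_CACHE_DIR variable and the per-user default directory come from the parameter table above.

```python
import os
from pathlib import Path

def resolve_cache_dir(env=None):
    """Resolve the cache directory the way the documentation describes:
    GPU_CACHE_DIR wins if set; otherwise the per-user default
    directory .gpuCacheGenerator under the user profile is used."""
    env = os.environ if env is None else env
    custom = env.get("GPU_CACHE_DIR")
    if custom:
        return Path(custom)              # explicit system variable wins
    return Path.home() / ".gpuCacheGenerator"  # documented default
```

Selecting a directory in the utility updates GPU_CACHE_DIR, so subsequent runs resolve to the same path through the first branch.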

When regenerating the cache for a specific neural network, the system tries to use the existing cache. If the cache is missing or corrupted, a new file is created.
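The reuse-or-rebuild behavior described above can be sketched as follows. The checksum validation is an assumption used for illustration; the utility’s actual corruption check is not documented, and `rebuild` stands in for the real generation step.

```python
import hashlib
from pathlib import Path

def load_or_rebuild(cache_file: Path, expected_sha256: str, rebuild):
    """Reuse an existing cache file when it is present and intact;
    regenerate it when it is missing or corrupted (assumed check:
    a SHA-256 digest comparison)."""
    if cache_file.exists():
        digest = hashlib.sha256(cache_file.read_bytes()).hexdigest()
        if digest == expected_sha256:    # cache intact: reuse it
            return cache_file.read_bytes()
    data = rebuild()                     # missing or corrupted: rebuild
    cache_file.write_bytes(data)
    return data
```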