Install the utility
The GPU cache generator utility pre-creates a cache of the neural networks that detectors use during operation.
To install the utility:
- Download the utility file from the AxxonSoft official website.
- Run the downloaded file as system administrator.
- In the setup window that opens, click the Next button.
- Click the Install button.
- Wait for the installation process to complete. After installation, a new window opens with a message about the completion of the utility installation.
- By default, the utility window opens after installation is complete. If you don’t want to launch the utility after installation is complete, clear the Launch the GPU cache generator checkbox.
- If you want to view the documentation for the GPU cache generator utility, set the Open the user's guide checkbox. By default, the checkbox is clear.
- Click the Finish button to confirm completion of the installation.
The installation of the GPU cache generator utility is complete.
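For unattended deployments, the installation above can be scripted with the standard Windows Installer switches (/i to install, /qn for no UI, /norestart to suppress a reboot). This is a minimal sketch, not an official procedure; the installer file name below is a placeholder for the file you downloaded:

```python
def build_msiexec_install_cmd(msi_path: str) -> list:
    """Build a silent-install command from standard Windows Installer
    switches: /i (install), /qn (no UI), /norestart (suppress reboot)."""
    return ["msiexec", "/i", msi_path, "/qn", "/norestart"]

# Placeholder path: substitute the actual downloaded installer file.
cmd = build_msiexec_install_cmd(r"C:\Distr\GpuCacheGenerator.msi")
# On the target Windows server, run as administrator, for example:
# subprocess.run(cmd, check=True)
print(" ".join(cmd))
```

The same pattern works for removal with the standard /x switch instead of /i.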
Cache generation
To generate a cache:
- Stop the server (see Stop the server).
- Run the GPU cache generator utility from the Start menu → Programs → GPU cache generator. When you launch the utility, a window opens with the following warning: Attention. Please stop VMS server and other services that use GPU resources. This is critical to ensuring maximum efficiency of the cache generation process. If you don't stop the server and services, the utility can still work, but the caching result can be less effective, and the process is slower because of competition for GPU resources.
- To confirm stopping all applications using the GPU, click the Yes, I have closed all applications that use GPU button.
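Before confirming, you can check which processes still hold GPU resources: nvidia-smi can list them with `nvidia-smi --query-compute-apps=pid,process_name --format=csv`. A minimal sketch of parsing that CSV output; the sample output and process name are illustrative, not taken from a real system:

```python
import csv
from io import StringIO

def parse_compute_apps(csv_text: str) -> list:
    """Parse the CSV output of:
      nvidia-smi --query-compute-apps=pid,process_name --format=csv
    An empty result means no process is currently using the GPU."""
    reader = csv.DictReader(StringIO(csv_text))
    return [{key.strip(): value.strip() for key, value in row.items()}
            for row in reader]

# Illustrative output captured while a server process was still running:
sample = "pid, process_name\n4242, vms_server.exe\n"
print(parse_compute_apps(sample))
```

If the parsed list is empty, it is safe to click the confirmation button.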
- Set the checkboxes next to the detectors/neural models for which you want to create a cache. The utility window is divided into two areas:
- Detectors—list of available detectors.
- Neural models—list of neural networks for which you can generate a cache.
Elements in these areas are interconnected: when you select a detector on the left, all neural networks associated with it are displayed on the right, and vice versa.
The detectors and the neural networks associated with them, grouped by detection task:
- Barcodes:
  - GeneralNM barcodes
- Personal protective equipment (PPE):
  - Ppe helmet (head) general
  - Ppe safety vest (body) general
  - Ppe segmentation by pose origin
- Fire:
  - Best fire v1 (Normal mode)
  - Fire scanned v1 (Scanning mode)
- Image description:
  - Blip img only
  - Blip text only
- Human and vehicle detection (this model set is shared by several detectors):
  - GeneralNM car v1.0
  - GeneralNM human v1.0
  - GeneralNM human and vehicle large v1.0
  - GeneralNM human and vehicle medium v1.0
  - GeneralNM human and vehicle nano v1.0
  - GeneralNM human top view large v1.0
  - GeneralNM human top view medium v1.0
  - GeneralNM human top view nano v1.0
  - GeneralNM human top view v0.8
- Person attributes and similarity search:
  - Dpe 1638 light pa 100 k (Attributes recognition)
  - Reid 15 0 256 osnetfpn segmentation noise 20 common 29 (Similitude search)
- Human pose estimation (the General human pose estimation model is also used by Privacy masking):
  - General human pose estimation
  - General human pose estimation yolov8 large
  - General human pose estimation yolov8 medium
  - General human pose estimation yolov8 nano
- Smoke:
  - Best smoke v1 (Normal mode)
  - Smoke scanned v1 (Scanning mode)
- Water level:
  - Water level rule net origin
- Custom neural networks: when you use a custom neural network, specify the path to the file in *.ann or *.annext format, provided that the neural network can run on the GPU.
Attention!
- A trained neural network for a particular scene allows you to detect only objects of a certain type (for example, a person, a cyclist, a motorcyclist, and so on). To train your neural network, contact AxxonSoft (see Data collection requirements for neural network training).
- If you use a standard neural network (training wasn't performed in operating conditions), we guarantee an overall accuracy of 80-95% and a percentage of false positives of 5-20% (see Data collection requirements for neural network training).
- In Windows OS, you cannot specify a neural network file located in a network folder. You must place the neural network file locally, that is, on the same server where you install Axxon One.
- For correct neural network operation on Linux OS, place the corresponding file locally in the /opt/AxxonSoft/DetectorPack/NeuroSDK directory or in the network folder with the corresponding access permissions.
- When you run the GPU cache generator utility again, the file of the custom neural network isn't displayed in the Neural models list.
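The file format requirement above can be pre-checked before pointing the utility at a custom network. A minimal sketch, based only on the extension rule stated in this document; the utility itself performs the real check of whether the network can run on the GPU:

```python
from pathlib import Path

# Supported extensions per this document; anything else is rejected.
ALLOWED_SUFFIXES = {".ann", ".annext"}

def is_valid_custom_network(path_str: str) -> bool:
    """Return True if the file extension is one the utility accepts.
    GPU compatibility is verified by the utility, not here."""
    return Path(path_str).suffix.lower() in ALLOWED_SUFFIXES
```

For example, `is_valid_custom_network("model.annext")` is True, while an *.onnx file is rejected.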
- Click the button in the lower right part of the window.
- Configure the following cache generation parameters:
- Graphics processors for performing operations: NVIDIA <model> (see List of Nvidia GPUs). Set the checkbox next to the video card for which the cache is created.
Attention!
Cache generation is only supported for NVIDIA graphics cards, as TensorRT technology doesn't support other graphics cards.
- Enable int8 calibration (under Additional parameters).
Attention!
- This parameter is available only for neural networks for which the quantization mode is available and which are included in the neural analytics package along with the *.info file of the same name:
- GeneralNM car v1.0,
- GeneralNM human v1.0,
- GeneralNM human and vehicle large v1.0,
- GeneralNM human and vehicle medium v1.0,
- GeneralNM human and vehicle nano v1.0,
- GeneralNM human top view large v1.0,
- GeneralNM human top view medium v1.0,
- GeneralNM human top view nano v1.0,
- GeneralNM human top view v0.8.
- If you select neural networks for which the quantization mode isn't available in the previous window, then a cache will not be generated for them.
By default, the checkbox is clear. To enable the Int8 quantization mode for a neural network, set the checkbox
- Enable verbose logging mode: by default, the checkbox is clear. To enable logging of the initialization and cache generation process, set the checkbox
Note
- Enabling the parameter provides detailed information about the cache generation process but increases the volume of logs and can slow down the generation process.
- Logs for each neural network are saved in a separate file in the directory C:\Users\<username>\.gpuCacheGenerator\logs.
- Previous logs are automatically deleted each time you run the utility.
- The cache will be saved: select a directory to store the cache for all used detectors and neural networks. The approximate cache size depends on the number and type of the neural networks used; the minimum size is 70 MB.
- If you don’t specify the GPU_CACHE_DIR system variable, by default, the cache is saved in the directory: C:\Users\<user_name>\.gpuCacheGenerator\ (see Creating system variable).
- If you specify the GPU_CACHE_DIR system variable, the cache is saved at the path specified in it.
- When you select a cache directory via the utility, the value of the GPU_CACHE_DIR system variable is updated to the selected path
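The directory resolution rules above can be sketched as follows. This is a simplified model of the documented behavior, not the utility's actual code:

```python
from pathlib import Path

def resolve_cache_dir(env: dict) -> Path:
    """Resolve the cache directory as described above: an explicit
    GPU_CACHE_DIR system variable wins; otherwise fall back to the
    per-user default, i.e. C:\\Users\\<user_name>\\.gpuCacheGenerator
    on Windows."""
    custom = env.get("GPU_CACHE_DIR")
    if custom:
        return Path(custom)
    return Path.home() / ".gpuCacheGenerator"

# With the variable set, the custom path is used:
print(resolve_cache_dir({"GPU_CACHE_DIR": r"D:\gpu-cache"}))
# Without it, the per-user default applies:
print(resolve_cache_dir({}))
```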
- Click the button in the lower right part of the window to proceed to generating a cache for all selected neural networks. If you select several neural networks, they are processed one after another.
The current progress status is displayed for each neural network. Possible statuses:
- Ready (the line is outlined in green).
- In progress.
- In queue.
- Error (the line is outlined in red).
- When the generation process is complete, click the button in the lower right part of the window.
Cache generation is complete. The created files are available for use by detectors.
Attention!
- When you generate a cache for a specific neural network again, the system attempts to use the existing cache.
- If the cache is missing or corrupted, a new file is created.
You must recreate the cache in the following cases:
- If you update the Neuro Pack add-ons (see Installing DetectorPack add-ons).
- If you change the Nvidia GPU model.
- If you update the Nvidia GPU drivers.
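A deployment script can track these three invalidation triggers by storing a fingerprint next to the cache and regenerating when it changes. This is a hedged sketch of the idea, not part of the utility; the model and version strings in the example are illustrative:

```python
import hashlib

def cache_fingerprint(gpu_model: str, driver_version: str,
                      neuro_pack_version: str) -> str:
    """Hash the three factors that require cache recreation; if any of
    them changes, the fingerprint changes."""
    payload = "|".join([gpu_model, driver_version, neuro_pack_version])
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

def needs_regeneration(stored_fingerprint: str,
                       current_fingerprint: str) -> bool:
    """Compare the fingerprint saved at generation time with the current one."""
    return stored_fingerprint != current_fingerprint

# Illustrative values: a driver update changes the fingerprint.
old = cache_fingerprint("NVIDIA RTX 3060", "536.23", "3.12")
new = cache_fingerprint("NVIDIA RTX 3060", "552.44", "3.12")
print(needs_regeneration(old, new))
```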
Repair the utility
You must repair the utility when its operation is disrupted due to changes in its working files or environment. To repair the utility:
- Open the .msi file of the utility.
- In the window that opens, click the Next button.
- Click the Repair button.
- In the window that opens, confirm the repair by clicking the Repair button.
- Wait for the utility to complete the repair process. When the repair is complete, a new window opens informing you that the utility is repaired.
- Click the Finish button.
The repair process is complete.
Remove the utility
To remove the utility:
- Open the .msi file of the utility.
- In the window that opens, click the Next button.
- Click the Remove button.
- In the window that opens, confirm the removal by clicking the Remove button.
- Wait for the utility to complete the removal process. When the process is complete, a new window opens informing you that the utility is removed.
- Click the Finish button.
The removal process is complete.
Note
You can also remove the utility from the Start menu or using third-party software.