...

  1. Stop the server (see Stop the server).
  2. Run the GPU cache generator utility from the Start menu → Programs → GPU cache generator. When you launch the utility, a window opens with the following warning: Attention! Please stop the VMS server and other services that use GPU resources. This is critical to ensuring maximum efficiency of the cache generation process. If you don’t stop the server and services, the utility can continue to work, but the caching result can be less effective, and the process is slower due to competition for GPU resources.
  3. To confirm stopping all applications using the GPU, click the Yes, I have closed all applications that use GPU button.
  4. Set the checkboxes next to the detectors/neural models for which you want to create a cache. The utility window is divided into two areas:
    • Detectors—list of available detectors.
    • Neural models—list of neural networks for which you can generate a cache.

      Elements in these areas are interconnected: when you select a detector on the left, all neural networks associated with it are displayed on the right, and vice versa.
      The table lists the detectors and neural networks associated with them:
      Detector — Neural models

      Barcode detector

      GeneralNM barcodes

      Equipment detector

      • Ppe helmet (head) general
      • Ppe safety vest (body) general
      • Ppe segmentation by pose origin

      Fire detector

      • Best fire v1 (Normal mode)
      • Fire scanned v1 (Scanning mode)

      Meta-detector

      • Blip img only
      • Blip text only

      Neural counter or Stopped object detector

      • GeneralNM car v1.0
      • GeneralNM human v1.0
      • GeneralNM human and vehicle large v1.0
      • GeneralNM human and vehicle medium v1.0
      • GeneralNM human and vehicle nano v1.0
      • GeneralNM human top view large v1.0
      • GeneralNM human top view medium v1.0
      • GeneralNM human top view nano v1.0
      • GeneralNM human top view v0.8

      Neural tracker

      • GeneralNM car v1.0
      • GeneralNM human v1.0
      • GeneralNM human and vehicle large v1.0
      • GeneralNM human and vehicle medium v1.0
      • GeneralNM human and vehicle nano v1.0
      • GeneralNM human top view large v1.0
      • GeneralNM human top view medium v1.0
      • GeneralNM human top view nano v1.0
      • GeneralNM human top view v0.8
      • Dpe 1638 light pa 100 k (Attributes recognition)
      • Reid 15 0 256 osnetfpn segmentation noise 20 common 29 (Similitude search)

      Human pose detector

      • General human pose estimation
      • General human pose estimation yolov8 large
      • General human pose estimation yolov8 medium
      • General human pose estimation yolov8 nano

      Person-based privacy masking

      General human pose estimation

      Privacy masking

      Privacy masking origin

      Smoke detector

      • Best smoke v1 (Normal mode)
      • Smoke scanned v1 (Scanning mode)

      Water level detector

      Water level rule net origin

      Custom neural networks

      When you use a custom neural network, you must specify the path to the file in *.ann or *.annext format, provided that this neural network can run on the GPU.

      Attention!
      • A neural network trained for a particular scene allows you to detect only objects of a certain type (for example, a person, a cyclist, a motorcyclist, and so on). To train your neural network, contact AxxonSoft (see Data collection requirements for neural network training).
      • If you use a standard neural network (training wasn't performed in operating conditions), we guarantee an overall accuracy of 80–95% and a percentage of false positives of 5–20% (see Data collection requirements for neural network training).
      • On Windows OS, you cannot specify a network path to the file. Place the neural network file locally, that is, on the same server where you install Axxon One.
      • For correct neural network operation on Linux OS, place the corresponding file locally in the /opt/AxxonSoft/DetectorPack/NeuroSDK directory or in a network folder with the corresponding access permissions.
      • When you run the GPU cache generator utility again, the custom neural network file isn't displayed in the Neural models list.
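      The file constraints above can be checked before launching the utility. A minimal sketch, assuming nothing about the utility itself; the helper name and logic are illustrative only:

```python
from pathlib import Path

# Illustrative helper (not part of the utility): check that a custom neural
# network file satisfies the constraints described above before selecting it.
ALLOWED_SUFFIXES = {".ann", ".annext"}

def validate_custom_model(path: str) -> bool:
    """Return True if the file looks usable as a custom neural network:
    it must exist locally and use the *.ann or *.annext extension."""
    p = Path(path)
    return p.is_file() and p.suffix in ALLOWED_SUFFIXES
```

      Whether a given model actually runs on the GPU still has to be verified in the utility; this check only rules out obviously wrong paths and extensions.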


  5. Click the button in the lower right part of the window.
  6. Configure the cache generation parameters specified in the table.
    Parameter: Graphics processors for performing operations
    Value: NVIDIA <model> (see List of Nvidia GPUs)

    Attention!
    Cache generation is only supported for NVIDIA graphics cards, as TensorRT technology doesn't support other graphics cards.

    Description: Set the checkbox next to the video card for which the cache is created.

    Parameter: Additional parameters — Enable int8 calibration

    Attention!
    • This parameter is available only for neural networks that support the quantization mode and that are included in the neural analytics package together with the *.info file of the same name:
      • GeneralNM car v1.0,
      • GeneralNM human v1.0,
      • GeneralNM human and vehicle large v1.0,
      • GeneralNM human and vehicle medium v1.0,
      • GeneralNM human and vehicle nano v1.0,
      • GeneralNM human top view large v1.0,
      • GeneralNM human top view medium v1.0,
      • GeneralNM human top view nano v1.0,
      • GeneralNM human top view v0.8.
    • If, in the previous window, you select neural networks for which the quantization mode isn't available, no cache is generated for them.

    Description: By default, the checkbox is clear. To enable the Int8 quantization mode for a neural network, set the checkbox.

    Parameter: Enable verbose logging mode
    Description: By default, the checkbox is clear. To enable logging of the initialization and cache generation process, set the checkbox.

    Note
    • Enabling the parameter provides detailed information about the cache generation process but increases the volume of logs and can slow down the generation process.
    • Logs for each neural network are saved in a separate file in the directory C:\Users\<username>\.gpuCacheGenerator\logs.
    • Previous logs are automatically deleted each time you run the utility.

    Parameter: The cache will be saved
    Description: Select a directory to store the cache for all used detectors and neural networks. The approximate cache size depends on the number and type of neural networks used. The minimum size is 70 MB.

    • If you don’t specify the GPU_CACHE_DIR system variable, by default, the cache is saved in the directory C:\Users\<user_name>\.gpuCacheGenerator\ (see Appendix 9. Creating system variable).
    • If you specify the GPU_CACHE_DIR system variable, the cache is saved at the path specified in it.
    • When you select a cache directory via the utility, the value of the GPU_CACHE_DIR system variable is updated to the selected path.

  7. Click the button in the lower right part of the window to proceed to generating a cache for all selected neural networks. If you select several neural networks, they are processed one after another.
    The current progress status is displayed for each neural network. Possible statuses:
    • Ready (the line is outlined in green).
    • In progress.
    • In queue.
    • Error (the line is outlined in red).
  8. When the generation process is complete, click the button in the lower right part of the window.
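If verbose logging was enabled in step 6, the per-network log files can be inspected after generation. A minimal sketch; the directory layout follows the note in step 6, and the helper name is an assumption:

```python
from pathlib import Path

def list_generation_logs(log_dir: Path) -> list[str]:
    """Return the names of the per-network log files in the utility's log
    directory (e.g. C:\\Users\\<username>\\.gpuCacheGenerator\\logs)."""
    if not log_dir.is_dir():
        return []
    return sorted(p.name for p in log_dir.iterdir() if p.is_file())
```

Remember that the utility deletes previous logs on each run, so copy any files you want to keep before relaunching it.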

...

The repair process is complete.

Remove the utility

To remove the utility:

  1. Open the .msi file of the utility.
  2. In the window that opens, click the Next button.
  3. Click the Remove button.
  4. In the window that opens, confirm the removal by clicking the Remove button.
  5. Wait for the utility to complete the removal process. When the process is complete, a new window opens informing you that the utility is removed.
  6. Click the Finish button.

...