Documentation for Intellect 4.11.0-4.11.3. Documentation for other versions of Intellect is available too.


The smart detectors in Intellect support three types of object classification.

  1. Standard. The following objects can be classified: a human, a vehicle, another object (see Cross line detector configuration and Motion in the Area detector configuration).
  2. Advanced. The following objects can be classified: a human, a group of humans, a vehicle, noise, an object carried into the area, an object carried out of the area, another object (see the CAM_VMDA_DETECTOR section of the Programming Guide).
  3. Neural filter — a neural network-based classification. Any objects can be classified with high precision. The neural network is trained individually for each use case.

Before configuring the neural filter, it is recommended to contact AxxonSoft technical support and request the trained neural network model files. Technical support specialists will ask you for the data needed to prepare the models and then provide you with the files. These files should be distributed among all the servers where the neural filter will be used.
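Copying the received model files to every server can be scripted. The sketch below is a plain Windows batch example; the server names, source folder, and destination share are all hypothetical placeholders, not Intellect defaults:

```shell
rem Sketch only: server names, source folder, and destination share are hypothetical.
rem Copies the model files received from technical support to each server.
for %%S in (server1 server2 server3) do (
    xcopy /y "D:\NeuralModels\*" "\\%%S\c$\NeuralModels\"
)
```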

In most cases, one neural network model is enough for standard object classification (e.g. human/vehicle). However, for non-standard tasks with multiple object classes, more than one model may be required:

  • if there are different classification tasks: for example, some cameras require human/vehicle classification, while other cameras require vehicle type (passenger car/freight vehicle) classification;
  • if different conditions apply to different cameras, and the tracker works better when each neural network is trained for its own conditions: for example, the task is to protect both the factory perimeter (only human and vehicle detection is required) and the factory workshop (only detection of people in special work clothes is required).

The neural filter is configured in the following way:

  1. Install the Detector Pack subsystem (if not already installed). The Detector Pack installation guide is available in the corresponding documentation section; the most current version of the documentation is in the AxxonSoft documentation repository.
  2. Set the VMDAEXT key to 1 (for details on the key, see the Registry keys reference guide).
  3. Set the VMDAEXT.RAM key to a value greater than 2000 but less than 5000. The recommended value is 4000, depending on the computer's hardware resources.
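Steps 2 and 3 can also be performed from an elevated command prompt with the standard Windows `reg add` command. The registry path below is an assumption for illustration only; verify the exact location of the VMDAEXT keys in the Registry keys reference guide before applying:

```shell
rem Sketch only: the registry path is an assumption; check the Registry keys
rem reference guide for the exact branch used by your Intellect installation.
reg add "HKLM\SOFTWARE\Wow6432Node\ITV\INTELLECT" /v VMDAEXT /t REG_DWORD /d 1 /f
reg add "HKLM\SOFTWARE\Wow6432Node\ITV\INTELLECT" /v VMDAEXT.RAM /t REG_DWORD /d 4000 /f
```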
  4. Go to the Basic settings tab on the Tracker object settings panel and make sure that the Sensitivity slider is in the leftmost position (i.e. auto mode is on).
  5. On the Tracker object settings panel, open the Neurofilter tab (1).
  6. Select the Use neurofiltering checkbox (2).
  7. In the Tracking model field (3), enter the full path to the tracker model file received from the AxxonSoft technical support, or click the ... button and select the file in the standard Windows dialog box.
  8. In the Tracking device name field (4), enter the name of the device that should be used by the tracker for the objects classification:
    1. CPU — use the CPU.
    2. GPU0, GPU1, GPU2 ... — use the NVIDIA GPU. Usually GPUs are recognized in the system in the order of their physical installation: the first (usually the upper one) GPU is number 0, the middle one is number 1, and the last (usually the lower one) is number 2.

      Note

      If there are NVIDIA GPUs in the system, it is recommended to use them. If there are no NVIDIA GPUs in the system, the CPU resources should be used. GPUs from other manufacturers are not supported.

      Note

      In the 64-bit version of Intellect (Intellect64.exe), the tracking device name is selected from the drop-down list of the processors and GPUs available on the computer.
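To check how the NVIDIA GPUs are numbered on a particular server, the driver's own `nvidia-smi` utility can be consulted (this is a general NVIDIA tool, not part of Intellect):

```shell
# Lists the installed NVIDIA GPUs with their indices (GPU 0, GPU 1, ...);
# these indices correspond to the GPU0/GPU1/... device names described above.
nvidia-smi -L
```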

  9. In the Unattended objects model field (5), enter the full path to the abandoned objects detection model file received from the AxxonSoft technical support, or click the ... button and select the file in the standard Windows dialog box.
  10. In the Unattended objects device name field (6), enter the name of the device that should be used by the tracker for the abandoned objects classification.

    Note

    For the unattended objects neural filter operation, it is necessary that the unattended objects detector of the Tracker object is enabled, and the VMDA detectors are configured appropriately (see Creating and Configuring the Tracker Object and Creating and Configuring VMDA Detection).

    Note

    In the 64-bit version of Intellect (Intellect64.exe), the unattended objects device name is selected from the drop-down list of the processors and GPUs available on the computer.

  11. Click the Apply button (7).

    Attention!

    Each tracker with a configured neural filter uses about 900 MB of video memory. If several neurotrackers together consume more video memory than is available in the system, an error will occur. If there is not enough video memory, it is recommended to use several video cards in one system.
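The memory budget in the warning above can be checked with simple arithmetic. The 900 MB per-tracker figure comes from the warning; the total and reserved video memory values below are just example numbers:

```python
# Rough video memory budget check for neural filters (sketch).
VRAM_PER_TRACKER_MB = 900  # approximate usage per tracker with a neural filter

def max_trackers(total_vram_mb: int, reserved_mb: int = 0) -> int:
    """Return how many neurotrackers fit in the given video memory."""
    return (total_vram_mb - reserved_mb) // VRAM_PER_TRACKER_MB

# Example: a GPU with 8 GB of video memory, 1 GB reserved for other use.
print(max_trackers(8192, 1024))  # -> 7
```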

The neural filter is configured.
