...

Tip

Video requirements for scene analytics detection tools

Video stream and scene requirements for Neurotracker operation

Objects image requirements for Neurotracker

Hardware requirements for neural analytics operation

To configure the Scene Analytics detection tools based on Neurotracker, do the following:

  1. Select the Neurotracker object. 
  2. By default, metadata are recorded into the database. To disable metadata recording, select No from the Record objects tracking (1) list.
  3. If the camera supports multistreaming, select the stream for which detection is needed (2). 

  5. The Decode key frames parameter (3) is enabled by default. In this case, only key frames are decoded. To disable decoding, select No in the corresponding field. Using this option reduces the load on the Server, but the quality of detection naturally decreases as well. We recommend enabling this parameter for "blind" (without video image display) Servers on which you want to perform detection. For the MJPEG codec, this option isn't relevant, as each frame is considered a key frame.

    Note
    Attention!

    The Number of frames processed per second and Decode key frames parameters are interconnected.

    If there is no local Client connected to the Server, the following rules work for remote Clients:

    • If the key frame rate is less than the value specified in the Number of frames processed per second field, the detection tool will work by key frames.
    • If the key frame rate is greater than the value specified in the Number of frames processed per second field, the detection will be performed according to the set period.

    If a local Client connects to the Server, the detection tool will always work according to the set period. After a local Client disconnects, the above rules will be relevant again.

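The interplay between the two parameters described in the note above can be sketched in Python. This is a simplified illustrative model, not product code; the function name is hypothetical:

```python
def effective_detection_rate(key_frame_rate: float,
                             frames_per_second_setting: float,
                             local_client_connected: bool) -> float:
    """Simplified model (assumption) of how 'Decode key frames' interacts
    with 'Number of frames processed per second'.

    With a local Client connected to the Server, detection always follows
    the configured period. Without one, if key frames arrive slower than
    the configured rate, detection falls back to working by key frames.
    """
    if local_client_connected:
        return frames_per_second_setting
    if key_frame_rate < frames_per_second_setting:
        return key_frame_rate          # detection works by key frames
    return frames_per_second_setting   # detection follows the set period
```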

  6. In the Decoder mode field (4), select a processing resource for decoding video streams. When you select a GPU, a stand-alone graphics card takes priority (when decoding with NVIDIA NVDEC chips). If there is no appropriate GPU, the decoding will use the Intel Quick Sync Video technology. Otherwise, CPU resources will be used for decoding.
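The fallback order described in this step can be illustrated with a short Python sketch. The function name and boolean inputs are illustrative assumptions, not part of the product:

```python
def pick_decoder(has_nvidia_nvdec: bool, has_intel_qsv: bool) -> str:
    """Sketch of the documented decoder priority when a GPU is selected:
    discrete NVIDIA card (NVDEC) first, then Intel Quick Sync Video,
    then CPU as the last resort."""
    if has_nvidia_nvdec:
        return "NVIDIA NVDEC"
    if has_intel_qsv:
        return "Intel Quick Sync Video"
    return "CPU"
```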

  9. In the Number of frames processed per second field (6), specify the number of frames per second for the neural network to process. The higher the value, the more accurate the tracking, but the load on the CPU is also higher.

    Note
    Attention!

    6 FPS or more is recommended. For fast moving objects (running individuals, vehicles), you must set the frame rate at 12 FPS or above (see Examples of configuring Neurotracker for solving typical tasks).


  12. Set the Detection threshold for objects in percent (9). If the recognition probability falls below the specified value, the data will be ignored. The higher the value, the higher the accuracy, but some triggers may not be considered.
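The threshold behavior from the previous step can be sketched as a simple filter in Python. The function name and detection structure are illustrative assumptions:

```python
def filter_detections(detections: list, threshold_percent: float) -> list:
    """Keep only detections whose recognition probability (0-100 %) meets
    the configured Detection threshold; data below the threshold is
    ignored, as described in the step above (simplified sketch)."""
    return [d for d in detections if d["confidence"] >= threshold_percent]
```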
  13. In the Neurotracker mode field (10), select a processor to be used for the neurotracker operation (see General information on configuring detection).

  14. You can use the neurofilter to sort out certain tracks. For example, the neurotracker detects all freight trucks, and the neurofilter sorts out only video recordings that contain trucks with an open cargo door. To set up the neurofilter, do the following:

    1. To use the neurofilter, select Yes in the corresponding field (9).

    2. In the Neurofilter file field, select a neural network file (10).
    3. In the Neurofilter mode field, select a processor to be used for the neural network operation (11, see General information on configuring detection).

  15. Select the processor for the neural network: CPU, one of the NVIDIA GPUs, or one of the Intel GPUs (12, see Hardware requirements for neural analytics operation, General information on configuring detection).

    Note
    Attention!
    • We recommend using the GPU. It may take several minutes to launch the algorithm on NVIDIA GPU after you apply the settings. You can use caching to speed up future launches (see Optimizing the operation of neural analytics on GPU).
    • If the neurotracker is running on GPU, object tracks may lag behind the objects in the surveillance window. If this happens, set the camera buffer size to 1000 milliseconds (see The Camera object).
    • Starting with Detector Pack 3.11, Intel HDDL and Intel NCS aren’t supported.


  16. In the Object type field (11), select the recognition object:

    1. Human.
    2. Human (top view).
    3. Vehicle.
    4. Human and Vehicle (Nano): low accuracy, low processor load.
    5. Human and Vehicle (Medium): medium accuracy, medium processor load.
    6. Human and Vehicle (Large): high accuracy, high processor load.

  17. To eliminate false positives when using a fisheye camera, in the Camera position field (12), select the correct device location. For other devices, this parameter is irrelevant.
  18. If you don't need to detect moving objects, select Yes in the Hide moving objects field (13). An object is treated as static if it doesn't change its position by more than 10% of its width or height during its track lifetime.
  19. If you don't need to detect static objects, select Yes in the Hide static objects field (14). This parameter lowers the number of false positives when detecting moving objects. An object is considered static if it has not moved more than 10% of its width or height during the whole time of its track existence.

    Note
    Attention!

    If a static object starts moving, the detection tool will trigger, and the object will no longer be considered static.


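The 10% displacement criterion used by both the Hide moving objects and Hide static objects parameters can be sketched in Python. This is a simplified illustrative model with hypothetical names:

```python
def is_static(track_positions: list, width: float, height: float) -> bool:
    """An object counts as static if, over its whole track lifetime, it
    never moved by more than 10 % of its width or height (simplified
    model of the documented rule; positions are (x, y) tuples)."""
    xs = [p[0] for p in track_positions]
    ys = [p[1] for p in track_positions]
    return (max(xs) - min(xs)) <= 0.1 * width and \
           (max(ys) - min(ys)) <= 0.1 * height
```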
  20. Specify the Minimum number of detection triggers for the neurotracker to display the object's track (15). The higher the value, the longer the time interval between the object's detection and the display of its track on the screen. Low values of this parameter may lead to false positives.
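The trade-off in this step can be summed up in a one-line Python sketch (hypothetical helper, assuming a per-track trigger counter):

```python
def should_display_track(trigger_count: int, min_triggers: int) -> bool:
    """A track is shown only once the object has been detected at least
    'Minimum number of detection triggers' times; higher values delay
    the track display but suppress false positives (illustrative)."""
    return trigger_count >= min_triggers
```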
  21. If necessary, enable the Model quantization parameter (16). It allows you to reduce the consumption of the GPU processing power.

    Note
    Attention!

    AxxonSoft conducted a study in which a neural network model was trained with quantization to identify the characteristics of the detected object. The study showed that model quantization can lead to either an increase or a decrease in the recognition rate; this is due to the generalization of the mathematical model. The difference in detection stays within ±1.5%, and the difference in object identification stays within ±2%.

    Model quantization is only applicable to NVIDIA GPUs.

    The first launch of a detection tool with quantization enabled may take longer than a standard launch.

    If GPU caching is used, subsequent launches of a detection tool with quantization enabled will run without delay.


  22. If you use a unique neural network, select the corresponding file (17).

    Note
    Attention!
    • To train your neural network, contact AxxonSoft (see Data collection requirements for neural network training).
    • A trained neural network for a particular scene allows you to detect only objects of a certain type (e.g. person, cyclist, motorcyclist, etc.).
    • If the neural network file is not specified, the default file will be used, which is selected automatically depending on the selected object type (11) and the selected processor for the neural network operation (4). If you use a custom neural network, enter a path to the file. The selected object type is ignored when you use a custom neural network.
    • To ensure the correct operation of the neural network on Linux OS, the corresponding file must be located in the /opt/AxxonSoft/DetectorPack/NeuroSDK directory. 


  23. If necessary, specify the class of the detected object (18). If you want to display tracks of several classes, specify them separated by a comma and a space. For example: 1, 10.
    The numerical values of classes for the embedded neural networks: 1—Human/Human (top view), 10—Vehicle.
    1. If you leave the field blank, the tracks of all available classes from the neural network will be displayed (11, 17).
    2. If you specify a class/classes from the neural network, the tracks of the specified class/classes will be displayed (11, 17).
    3. If you specify a class/classes from the neural network and a class/classes missing from the neural network, the tracks of the class/classes from the neural network will be displayed (11, 17).
    4. If you specify a class/classes missing from the neural network, the tracks of all available classes from the neural network will be displayed (11, 17).

      Info
      Note

      Starting with Detector Pack 3.10.2, if you specify a class/classes missing from the neural network, the tracks won't be displayed (11, 17).

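The class-selection rules above can be sketched in Python. This is an illustrative model with a hypothetical function name; it implements the pre-3.10.2 fallback behavior noted above:

```python
def classes_to_display(requested: list, available: set) -> list:
    """Sketch of the documented class filtering rules:
    - empty request          -> all available classes
    - known classes only     -> those classes
    - mix of known/unknown   -> only the known ones
    - unknown classes only   -> all available classes
      (before Detector Pack 3.10.2; later versions display no tracks)."""
    if not requested:
        return sorted(available)
    known = [c for c in requested if c in available]
    return known if known else sorted(available)
```

For the embedded neural networks, the available classes would be 1 (Human/Human (top view)) and 10 (Vehicle).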

  24. To enable the search for similar persons, in the Similitude search field (19), select Yes. Enabling this parameter increases the processor load.

    Note
    Attention!

    The Similitude search works only on tracks of people.


  25. In the Time of processing similitude track (sec) field (20), set the time in the range [0; 3600] required for the algorithm to process the track to search for similar persons.
  26. In the Time period of excluding static objects field (21), set the time in seconds after which the track of the static object is hidden. If the value of the parameter is 0, the track of the static object isn't hidden.
  27. In the Track retention time field (22), set the time interval in seconds after which the object track is considered lost. This helps if objects in the scene temporarily overlap each other. For example, a larger vehicle may completely block the smaller one from view.
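The retention logic from the previous step can be sketched in Python (hypothetical helper; times are in seconds):

```python
def track_lost(last_seen_time: float, now: float, retention_sec: float) -> bool:
    """A track is considered lost once the object has not been detected
    for longer than the configured Track retention time; until then an
    occluded object (e.g. hidden behind a larger vehicle) keeps its
    track (illustrative sketch of the documented behavior)."""
    return (now - last_seen_time) > retention_sec
```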
  28. By default, the entire FOV is a detection area. If you need to narrow down the area to be analyzed, you can set one or several detection areas in the preview window.

    Info
    Note

    The procedure of setting areas is identical to the basic tracker's (see Setting General Zones for Scene analytics detection tools). The only difference is that the neurotracker areas are processed while the basic tracker areas are ignored.


  29. Click the Apply button.
  30. The next step is to create and configure the necessary detection tools on the basis of the neurotracker. The configuration procedure is the same as for the basic tracker (see Setting up Tracker-based Scene Analytics detection tools).

    Note
    Attention!
    • To trigger a Motion in Area detection tool on the basis of the neurotracker, an object must be displaced by at least 25% of its width or height in FOV.
    • The abandoned objects detection tool works only with the basic object tracker.