The Neurotracker program module registers object tracks in the camera FOV during recording using the neural network and saves them to the VMDA metadata storage (see Creating and configuring VMDA metadata storage).
Configuration
The configuration of the Neurotracker program module includes: main and additional settings of the detector, selection of the area of interest, and configuration of the neurofilter.

You can configure the Neurotracker program module on the settings panel of the Neurotracker object that is created on the basis of the Camera object on the Hardware tab of the System settings dialog window.

Main settings of the detector

You can configure the main settings of the detector on the Main settings tab on the settings panel of the Neurotracker object.

  1. Set the Generate event on appearance/disappearance of the track checkbox to generate an event when an object (track) appears in the frame and disappears from the frame.

    Note: The track appearance/disappearance events are generated only in the debug window (see Start the debug window). They aren't displayed in the event viewer.

  2. Set the Show objects on image checkbox to highlight the detected object with a frame when viewing live video.
  3. Set the Save tracks to show in archive checkbox to highlight the detected object with a frame when viewing the archive.

    Note: This parameter doesn't affect the VMDA search and is used just for the visualization. For this parameter, the titles database is used.

  4. Set the Model quantization checkbox to enable model quantization. By default, the checkbox is cleared. This parameter allows you to reduce the consumption of the GPU processing power.

    Note:
    1. AxxonSoft conducted a study in which a neural network model was trained to identify the characteristics of the detected object. The study showed that model quantization can lead to both an increase and a decrease in the recognition percentage. This is due to the generalization of the mathematical model. The difference in detection ranges within ±1.5%, and the difference in object identification ranges within ±2%.
    2. Model quantization is only applicable for NVIDIA GPUs.
    3. The first launch of the detector with quantization enabled can take longer than the standard launch. If GPU caching is used, the next time the detector with quantization enabled will run without delay.
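Model quantization stores network weights in a lower-precision integer format. The product's exact scheme isn't documented here; the sketch below is a generic symmetric int8 quantization in Python, only to illustrate why memory and compute consumption drop while accuracy shifts within a small range.

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric per-tensor quantization: float32 -> int8 plus one scale."""
    scale = float(np.abs(weights).max()) / 127.0 or 1.0  # guard against all-zero weights
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Restore approximate float32 weights from the int8 representation."""
    return q.astype(np.float32) * scale

weights = np.array([0.8, -0.31, 0.05, 1.27], dtype=np.float32)
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
# int8 takes 4x less memory than float32; the rounding error is at most
# half a quantization step, which is why accuracy shifts only slightly.
```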
  5. From the Object type drop-down list, select the object type for analysis:
    • Human—camera is pointed at the person at the angle of 100-160°;
    • Human (top-down view)—camera is pointed at the person from above at a slight angle;
    • People view from above (Nano)—camera is pointed at the person from above at a slight angle, small network size;
    • People view from above (Medium)—camera is pointed at the person from above at a slight angle, average network size;
    • People view from above (Large)—camera is pointed at the person from above at a slight angle, large network size;
    • Vehicle—camera is pointed at the vehicle at the angle of 100-160°;
    • Person and vehicle (Nano)—detects person and vehicle, small network size;
    • Person and vehicle (Medium)—detects person and vehicle, average network size;
    • Person and vehicle (Large)—detects person and vehicle, large network size.

      Note: Neural networks are named taking into account the objects they detect. The names can include the size of the neural network (Nano, Medium, Large), which indicates the amount of consumed resources. The larger the neural network, the higher the accuracy of the object recognition, but the greater the load on the CPU.

  6. By default, the standard neural network is initialized according to the object type selected on step 5 and the device selected on step 7. The standard neural networks for different processor types are selected automatically, so you must not select them manually. If you have a unique neural network, click the button to the right of the Tracking model field and specify the path to the network file in the standard Windows Explorer window that opens.

    Attention! To train the neural network, contact AxxonSoft technical support (see Data collection requirements for neural network training). The use of the neural network trained for a particular scene allows you to detect only objects of a certain type (for example, a person, a cyclist, a motorcyclist, and so on).

  7. From the Device drop-down list, select the device on which the neural network will operate: the CPU, one of the NVIDIA GPUs, or one of the Intel GPUs. Auto (default value)—the device is selected automatically: the NVIDIA GPU takes the highest priority, then goes the Intel GPU, and then the CPU.

    Attention!
    1. We recommend using the GPU.
    2. It can take several minutes to launch the algorithm on the NVIDIA GPU after you apply the settings. You can use caching to speed up future launches (see Optimizing the operation of neural analytics on GPU).
    3. In the Detector Pack 2.0 subsystem, the Intel HDDL support is removed. Thus, when you update from the 1.0 version, the Not supported option is automatically selected instead of this device, and detectors won't operate. To resume detector operation, select the required device from the list.
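The Auto priority order can be pictured as a simple lookup. This Python sketch is illustrative only; the device name strings are assumptions, not the actual identifiers the product uses:

```python
def pick_device(available):
    """Mimic the Auto option: prefer an NVIDIA GPU, then an Intel GPU,
    and fall back to the CPU."""
    for preferred in ("NVIDIA GPU", "Intel GPU", "CPU"):
        for device in available:
            if device.startswith(preferred):
                return device
    return "CPU"

pick_device(["CPU", "Intel GPU 0", "NVIDIA GPU 0"])  # → 'NVIDIA GPU 0'
```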
  8. From the Process drop-down list, select which objects must be processed by the neural network:
    • All objects—moving and stationary objects;
    • Only moving objects—an object is considered to be moving if, during the entire lifetime of its track, it shifted by more than 10% of its width or height. If you use this parameter, you can reduce the number of false positives;
    • Only stationary objects—an object is considered stationary if, during the entire lifetime of its track, it shifted by no more than 10% of its width or height. If the stationary object starts moving, the detector generates an event, and the object is no longer considered stationary.

      Note: The selection of only moving objects and only stationary objects isn't mutually exclusive, as some tracks cannot be determined as either moving or stationary. First, the neural network detects all objects, and after that, the detector filters out unnecessary tracks in accordance with the selected value of the Process setting.
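The 10% rule above can be sketched as follows; this is an illustrative Python fragment (tracks modeled as lists of (x, y, width, height) boxes), not the product's actual code:

```python
def is_moving(track):
    """An object counts as moving if, over the lifetime of its track,
    its position shifted by more than 10% of its width or height.
    `track` is a list of (x, y, width, height) bounding boxes."""
    xs = [box[0] for box in track]
    ys = [box[1] for box in track]
    _, _, w, h = track[0]
    return (max(xs) - min(xs)) > 0.1 * w or (max(ys) - min(ys)) > 0.1 * h

# A 100 px wide box drifting 30 px to the right is moving:
is_moving([(0, 0, 100, 200), (15, 0, 100, 200), (30, 0, 100, 200)])  # → True
# A 5 px jitter stays within 10% of both dimensions:
is_moving([(0, 0, 100, 200), (5, 0, 100, 200)])  # → False
```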

  9. From the Camera position drop-down list, select:
    • Wall—objects are detected only if their lower part gets into the area of interest specified in the detector settings;
    • Ceiling—objects are detected even if their lower part doesn't get into the area of interest specified in the detector settings.
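One way to picture the Wall/Ceiling difference is which point of the bounding box is tested against the area of interest. The sketch below is illustrative Python using a standard ray-casting point-in-polygon test, not the product's implementation; it assumes the bottom-center point stands for the object's "lower part" and the box center for the Ceiling case:

```python
def point_in_polygon(pt, polygon):
    """Ray-casting test: does the point fall inside the polygon?"""
    x, y = pt
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # the edge crosses the horizontal ray
            if x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
                inside = not inside
    return inside

def in_area(box, polygon, camera_position="Wall"):
    """Wall: test the object's bottom-center; Ceiling: test its center."""
    x, y, w, h = box
    anchor = (x + w / 2, y + h) if camera_position == "Wall" else (x + w / 2, y + h / 2)
    return point_in_polygon(anchor, polygon)

area = [(0, 0), (200, 0), (200, 100), (0, 100)]
in_area((50, 60, 20, 30), area, "Wall")     # lower point (60, 90) inside → True
in_area((50, 60, 20, 60), area, "Wall")     # lower point (60, 120) outside → False
in_area((50, 60, 20, 60), area, "Ceiling")  # center (60, 90) inside → True
```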

Selecting the area of interest

  1. Click the Settings button. The Detection settings window opens.
  2. In the Detection settings window, click the Stop video button (1) to pause the playback and capture the frame of the video image.
  3. Click the Area of interest button (2) to specify the area of interest. The button is highlighted in blue.
  4. On the captured frame of the video image, use the mouse to sequentially set the anchor points of the area (1) in which the objects are detected. The rest of the frame is faded. There can be only one area of interest. To delete an area, click the corresponding button. If you don't specify the area of interest, the entire frame is analyzed.
  5. Click the OK button (2) to close the Detection settings window and return to the settings panel of the detector.

Additional settings

  1. Go to the Additional settings tab on the settings panel of the Neurotracker object.
  2. In the Recognition threshold [0, 100] field, specify the neurotracker sensitivity—an integer number in the range from 0 to 100.

    Note: The neurotracker sensitivity is determined experimentally. The lower the sensitivity, the higher the probability of false alarms. The higher the sensitivity, the lower the probability of false alarms; however, some useful tracks can be skipped (see Examples of configuring neural tracker for solving typical tasks).
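Conceptually, the threshold discards detections whose network confidence falls below it. A minimal sketch in Python; the 0..1 confidence scale and the field names are assumptions for illustration:

```python
def keep_detections(detections, threshold):
    """Keep only detections whose network confidence (0..1) clears the
    Recognition threshold, which the UI expresses on a 0..100 scale."""
    return [d for d in detections if d["confidence"] * 100 >= threshold]

dets = [{"label": "person", "confidence": 0.92},
        {"label": "person", "confidence": 0.41}]
keep_detections(dets, 60)  # keeps only the 0.92 detection
```

Raising the threshold drops the weaker detection and with it the risk of a false alarm, at the cost of possibly losing a useful track.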

  3. In the Frames processed per second [0.016, 100] field, specify the number of frames processed per second by the neural network in the range from 0.016 to 100. For all other frames, interpolation is performed—finding intermediate values from the available discrete set of known values. The greater the value of the parameter, the more accurate the tracking, but the higher the load on the processor.

    Note: The recommended value is at least 6 FPS. For fast moving objects (running person, vehicle)—at least 12 FPS (see Examples of configuring neural tracker for solving typical tasks).
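Interpolation here means estimating object positions on the frames the network skipped. A minimal linear-interpolation sketch in Python; the product may use a different scheme:

```python
def interpolate_box(box_a, box_b, t):
    """Linearly interpolate two (x, y, w, h) bounding boxes for a frame
    at fraction t between two processed frames (0.0 = first, 1.0 = second)."""
    return tuple(a + (b - a) * t for a, b in zip(box_a, box_b))

# The network processed frames 0 and 10; frame 5 gets the midpoint box:
interpolate_box((100, 50, 40, 80), (120, 50, 40, 80), 0.5)  # → (110.0, 50.0, 40.0, 80.0)
```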

  4. In the Minimum number of triggering [2, 100] field, specify the minimum number of the neurotracker triggerings required to display the object track. The higher the value of this parameter, the longer it takes from the object detection moment to the display of its track. A low value of this parameter can lead to false positives. The default value is 6; the value range is from 2 to 100. An entered number that is greater than the maximum or less than the minimum of the specified range is automatically adjusted to the maximum or minimum value, respectively.
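The out-of-range adjustment and the display rule can be sketched as follows (illustrative Python, not the product's code):

```python
def clamp(value, lo=2, hi=100):
    """Out-of-range input is adjusted to the nearest bound, as the UI does."""
    return max(lo, min(hi, value))

def should_display(triggerings, minimum=6):
    """The track is shown only after the detector has triggered on the
    object at least `minimum` times (default 6)."""
    return triggerings >= clamp(minimum)

clamp(150)         # → 100
clamp(1)           # → 2
should_display(7)  # → True
```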
  5. In the Track hold time (s) field, specify the time in seconds after which the object track is considered lost, in the range from 0.3 to 1000. This parameter is useful in situations when one object in the frame temporarily overlaps another. For example, when a large vehicle completely overlaps a small one.

    Note: If an object (track) is close to the frame boundary, then approximately half of the time specified in the Track hold time (s) field must elapse from the moment the object disappears from the frame until its track is deleted.
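The effect of the parameter can be sketched as a simple timeout check (illustrative Python):

```python
def track_lost(last_seen_ts, now_ts, hold_time=1.0):
    """The track is dropped once the object has been unseen for longer
    than the Track hold time; a shorter gap (e.g. a truck briefly
    occluding a car) keeps the same track alive."""
    return now_ts - last_seen_ts > hold_time

track_lost(10.0, 10.6, hold_time=1.0)  # occluded for 0.6 s → track kept (False)
track_lost(10.0, 11.5, hold_time=1.0)  # unseen for 1.5 s → track lost (True)
```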

  6. Set the Scanning mode checkbox to detect small objects. If you enable this mode, the load on the system increases. That is why, on step 3, we recommend specifying a small number of frames processed per second in the Frames processed per second [0.016, 100] field. By default, the checkbox is cleared. For more information on the scanning mode, see Configuring the Scanning mode.
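Scanning mode is essentially a sliding-window pass over the frame. The sketch below is illustrative Python; the tile size and overlap are assumptions, not the product's parameters. It shows why the per-frame load grows: every window is a separate pass through the network.

```python
def tiles(frame_w, frame_h, tile=640, overlap=0.2):
    """Split the frame into overlapping windows that are each analyzed at
    full resolution, so small objects are not lost to downscaling.
    Assumes the frame is at least one tile in each dimension."""
    step = int(tile * (1 - overlap))
    out = []
    for y in range(0, max(frame_h - tile, 0) + step, step):
        for x in range(0, max(frame_w - tile, 0) + step, step):
            # clamp the last row/column so windows stay inside the frame
            out.append((min(x, frame_w - tile), min(y, frame_h - tile), tile, tile))
    return out

len(tiles(1920, 1080))  # 8 windows per frame instead of 1 → higher load
```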
  7. If necessary, specify the class of the detected object in the Target classes field. If you want to display tracks of several classes, specify them separated by a comma with a space. For example: 1, 10.
    The numerical values of classes for the embedded neural networks: 1—Human/Human (top view), 10—Vehicle.

    Note:
    1. If you leave the field blank, the tracks of all available classes from the neural network (Object type or Neural network file) are displayed.
    2. If you specify a class/classes from the neural network, the tracks of the specified class/classes are displayed.
    3. If you specify a class/classes from the neural network and a class/classes missing from the neural network, the tracks of the class/classes from the neural network are displayed.
    4. If you specify a class/classes missing from the neural network, the tracks of all available classes from the neural network are displayed.
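The four cases in the note reduce to one rule: keep the requested classes the network actually knows; otherwise fall back to all of them. A sketch in Python (illustrative only, not the product's parser):

```python
EMBEDDED_CLASSES = {1: "Human", 10: "Vehicle"}  # classes known to the network

def classes_to_display(field, network_classes=frozenset(EMBEDDED_CLASSES)):
    """Parse the Target classes field (e.g. '1, 10') and keep only classes
    the network knows; an empty field or no valid class means
    'display every class the network provides'."""
    requested = {int(tok) for tok in field.split(",") if tok.strip().isdigit()}
    valid = requested & set(network_classes)
    return valid if valid else set(network_classes)

classes_to_display("1, 10")  # → {1, 10}
classes_to_display("")       # empty field → all classes
classes_to_display("7")      # unknown class → falls back to all classes
```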

Neurofilter

You can use the neurofilter to sort out some of the tracks. For example, the neurotracker detects all freight trucks, and the neurofilter leaves only those tracks that correspond to trucks with the cargo doors open. To configure the neurofilter, do the following:
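The tracker-plus-neurofilter pipeline is a two-stage filter: detect every candidate, then keep only the tracks a second classifier accepts. A sketch in Python; `doors_open` is a made-up attribute standing in for the neurofilter's verdict:

```python
def filter_tracks(tracks, classify):
    """Second-stage filtering: the tracker finds every candidate, then a
    separate classifier keeps only the tracks of interest. `classify`
    stands in for the neurofilter network and returns True for tracks
    that should be kept."""
    return [t for t in tracks if classify(t)]

trucks = [{"id": 1, "doors_open": True}, {"id": 2, "doors_open": False}]
kept = filter_tracks(trucks, lambda t: t["doors_open"])  # keeps only track 1
```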

  1. Go to the Neurofilter tab on the settings panel of the Neurotracker object.
  2. Set the Enable filtering checkbox to enable the neurofilter. By default, the checkbox is cleared.
  3. By default, the standard neural network is initialized according to the device selected on step 4. The standard neural networks for different processor types are selected automatically, so you must not select them manually. If you have a unique neural network, click the button to the right of the Tracking model field and specify the path to the network file in the standard Windows Explorer window that opens.

    Attention! To train the neural network, contact AxxonSoft technical support (see Data collection requirements for neural network training). The use of the neural network trained for a particular scene allows you to detect only objects of a certain type (for example, a person, a cyclist, a motorcyclist, and so on).

  4. From the Device drop-down list, select the device on which the neural network will operate: the CPU, one of the NVIDIA GPUs, or one of the Intel GPUs. Auto (default value)—the device is selected automatically: the NVIDIA GPU takes the highest priority, then goes the Intel GPU, and then the CPU.

    Note:
    1. The device for the neurofilter must match the device selected for the neurotracker on step 7 of the main settings.
    2. It can take several minutes to launch the algorithm on the NVIDIA GPU after you apply the settings.

  5. Click the Apply button to save the changes.

    Note: If necessary, create and configure the Neurotracker VMDA detectors on the basis of the Neurotracker object. The procedure of creating and configuring the Neurotracker VMDA detectors is similar to creating and configuring the VMDA detectors for the regular tracker. The only difference is that you must create the Neurotracker VMDA detectors on the basis of the Neurotracker object and not on the basis of the Tracker object (see Creating and configuring the VMDA detection). Also, when you select the Staying in the area for more than 10 sec detector type, the time the object stays in the zone, after which the Neurotracker VMDA detectors generate an event, is configured using the LongInZoneTimeout2 registry key, not the LongInZoneTimeout key. The alarm generation mode is set for any type of VMDA detector similar to the VMDA detector for the regular tracker using the VMDA.oneAlarmPerTrack registry key (see Registry keys reference guide).
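The two registry keys mentioned above could be set with a .reg fragment along these lines. The hive path below is an assumption for illustration only; look up the actual location and value types of these keys in the Registry keys reference guide:

```
Windows Registry Editor Version 5.00

; Hypothetical path -- verify it against the Registry keys reference guide.
[HKEY_LOCAL_MACHINE\SOFTWARE\ITV\INTELLECT]
; Dwell time (seconds) for the "Staying in the area for more than 10 sec"
; type of the Neurotracker VMDA detectors:
"LongInZoneTimeout2"="10"
; One alarm per track for VMDA detectors:
"VMDA.oneAlarmPerTrack"="1"
```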

Configuring the Neurotracker program module is complete.

Tip

If events are periodically received from several objects, then for convenience, we recommend creating and configuring the neurotracker track counters (see Configuring the neurotracker track counter).