
  1. Select the Neurotracker object. 
  2. By default, metadata are recorded into the database. To disable metadata recording, select No (1) from the Record object tracking list.
  3. If a camera supports multistreaming, select the stream to apply the detection tool to (2). 
  4. To reduce the false alarm rate from a fish-eye camera, position it properly (3). This parameter is not applicable to other devices.

  5. Set the recognition threshold for objects, in percent (4). If the recognition probability falls below the specified value, the data is ignored. The higher the value, the higher the accuracy, at the cost of sensitivity.
  6. Set the frame rate value for the neural network (5). The other frames will be interpolated. The higher the value, the more accurate the tracking, but the higher the CPU load.
  7. If you don't need to detect static objects, select Yes in the Hide stationary objects field (7). This parameter lowers the false alarm rate when detecting moving objects.

  8. Specify the Minimum number of detection triggers for the neural tracker to display the object's trajectory (8). The higher the value, the longer the interval between the object's detection and the display of its trajectory on screen. Low values may lead to false triggering.
  9. Select the neural network file (9).

    Note
    titleAttention!

    A trained neural network does a great job for a particular scene if you want to detect only objects of a certain type (e.g. person, cyclist, motorcyclist, etc.). 

    To train your neural network, contact AxxonSoft (see Requirements to data collection for neural network training).


    Info
    titleNote

    For correct neural network operation under Linux, place the corresponding file in the /opt/AxxonSoft/AxxonNext/ directory. 


  10. Select the processor for the neural network: the CPU or one of the GPUs (13).

    Note
    titleAttention!

    We recommend the GPU.

  11. You can use the neural filter to sort out video recordings featuring selected objects and their trajectories. For example, the neural tracker detects all freight trucks, and the neural filter sorts out only the recordings that contain trucks with an open cargo door. To set up a neural filter, do the following:

    1. To use the neural filter, set Yes in the corresponding field (10).

    2. In the Neurofilter mode field, select a processor to be used for neural network computations (11).

    3. In the Path to neurofilter file field, select a neural network file (12).

  12. By default, the entire FoV is a detection zone. If you need to narrow down the area to be analyzed, you can set one or several detection zones.

    Info
    titleNote

    The procedure of setting zones is identical to the primary tracker's (see Setting General Zones for Scene Analytics). The only difference is that the neural tracker's zones are processed while the primary tracker's are ignored.


  13. Click Apply.
  14. The next step is to create and configure the necessary detection tools. The configuration procedure is the same as for the primary tracker.

    Note
    titleAttention!

    To trigger a Motion in Area detection tool under a neural network tracker, an object must be displaced by at least 25% of its width or height in FoV.


    Note
    titleAttention!

    The abandoned objects detection tool works only with the primary tracker.
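
The interplay between the recognition threshold (step 5) and the minimum number of detection triggers (step 8) can be sketched as follows. This is an illustrative model only, not the AxxonSoft API; all class names, field names, and default values are assumptions made for the example.

```python
# Illustrative sketch of the threshold/trigger logic described above.
# Not the AxxonSoft API: NeuroTrackerModel, TrackerSettings, and their
# fields are hypothetical names chosen for this example.
from dataclasses import dataclass


@dataclass
class TrackerSettings:
    recognition_threshold: int = 30  # percent; detections below it are ignored (step 5)
    min_triggers: int = 6            # detections needed before the trajectory is shown (step 8)


@dataclass
class Track:
    object_id: int
    hits: int = 0  # confirmed detections accumulated so far


class NeuroTrackerModel:
    def __init__(self, settings: TrackerSettings):
        self.settings = settings
        self.tracks: dict[int, Track] = {}

    def on_detection(self, object_id: int, confidence: int) -> bool:
        """Feed one detection; return True once the trajectory would be displayed."""
        if confidence < self.settings.recognition_threshold:
            return False  # below the recognition threshold: the data is ignored
        track = self.tracks.setdefault(object_id, Track(object_id))
        track.hits += 1
        # The trajectory appears only after enough triggers, which trades
        # display latency for a lower false-triggering rate.
        return track.hits >= self.settings.min_triggers
```

With min_triggers=3, a low-confidence detection is discarded outright, and the trajectory appears only on the third high-confidence detection of the same object, which mirrors the latency-versus-false-triggering trade-off described in step 8.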