Documentation for DetectorPack PSIM 2.0.


To configure the Stopped object detector program module, use the settings panel of the Stopped object detector object that is created on the basis of the Camera object on the Hardware tab of the System settings window.

To configure the stopped object detector, do the following:

  1. Go to the settings panel of the Stopped object detector object.

  2. By default, a standard neural network is initialized according to the selected object type and device. Do not select standard networks for different processor types manually; this is done automatically. If you have a unique neural network to use, click the  button to the right of the Tracking model field and specify its file in the standard Windows Explorer window that opens.

    Attention!

    To train the neural network, contact AxxonSoft technical support (see Data collection requirements for neural network training). Using a neural network trained for a particular scene allows you to detect only objects of a certain type (for example, a person, a cyclist, a motorcyclist, and so on).

  3. From the Device drop-down list, select the device on which the neural network will operate: the CPU, one of the NVIDIA GPUs, or one of the Intel GPUs. Auto (default value) selects the device automatically: the NVIDIA GPU has the highest priority, then the Intel GPU, then the CPU. Default neural networks are selected depending on the chosen device.

    Attention!

    • It can take several minutes to launch the algorithm on the Nvidia GPU after you apply the settings. You can use caching to speed up future launches (see Optimizing the operation of neural analytics on GPU).
    • If you select a processor other than the CPU, the selected device carries most of the computing load. However, the CPU is also used to run the detector.
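The Auto selection order described above can be sketched as follows. This is a minimal illustration only; the function and device names are assumptions for the sketch, not part of the product API:

```python
def pick_device(available):
    """Return the preferred device from a list of available device names,
    following the documented priority: NVIDIA GPU, then Intel GPU, then CPU."""
    for preferred in ("NVIDIA GPU", "Intel GPU", "CPU"):
        if preferred in available:
            return preferred
    raise RuntimeError("no supported device found")
```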
  4. From the Object type drop-down list, select the object type:
    • Human—the camera is pointed at a person at an angle of 100-160°;
    • Human (top-down view)—the camera is pointed at a person from above at a slight angle;
    • People view from above (Nano)—the camera is pointed at a person from above at a slight angle; small network size;
    • People view from above (Medium)—the camera is pointed at a person from above at a slight angle; average network size;
    • People view from above (Large)—the camera is pointed at a person from above at a slight angle; large network size;
    • Vehicle—the camera is pointed at a vehicle at an angle of 100-160°;
    • Person and vehicle (Nano)—detects people and vehicles; small network size;
    • Person and vehicle (Medium)—detects people and vehicles; average network size;
    • Person and vehicle (Large)—detects people and vehicles; large network size.

      Note

      Neural networks are named taking into account the objects they detect. The names can include the size of the neural network (Nano, Medium, Large), which indicates the amount of consumed resources. The larger the neural network, the higher the accuracy of the object recognition but the greater the load on the CPU.

  5. From the Camera position drop-down list, select:
    1. Wall—objects are detected only if their lower part is in the area of interest that is set in the detector settings.
    2. Ceiling—objects are detected even if their lower part isn't in the area of interest that is set in the detector settings.
  6. In the Frames processed per second [0.016, 100] field, specify the number of frames per second that the detector processes, in the range from 0.016 to 100. The default value is 2. For all other frames, interpolation is performed—intermediate values are computed from the available discrete set of known values. The greater the value of the parameter, the more accurate the tracking, but the higher the load on the processor.

    Attention!

    If the detector works incorrectly, we recommend selecting the value of the Frames processed per second [0.016, 100] parameter empirically.

    The number of frames with static objects must be at least 2. For frames with moving objects—at least 4.

    The greater the value, the higher the accuracy, but the greater the load on the selected processor for this operation. With the number of frames equal to 1, the accuracy is 70%.

    This parameter varies depending on the object movement speed. For typical tasks, a frame rate from 3 to 20 is enough. Examples:

    • for moderately moving objects (without abrupt movements)—3;
    • for moving objects—12.
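The interpolation described in step 6 can be sketched as simple linear interpolation of an object's position between two processed frames. This is an assumption for illustration; the exact interpolation method used by the detector is not specified in this document:

```python
def interpolate_position(p0, p1, t0, t1, t):
    """Linearly interpolate an (x, y) position between two processed
    frames captured at times t0 and t1, for an intermediate time t."""
    if t1 == t0:
        return p0
    a = (t - t0) / (t1 - t0)  # fraction of the way from t0 to t1
    return (p0[0] + a * (p1[0] - p0[0]),
            p0[1] + a * (p1[1] - p0[1]))
```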
  7. In the Wait time (s) [1, 60] field, specify the waiting time for the reappearance of the disappeared stopped object in seconds in the range [1, 60].
  8. In the Stop time (s) [1, 60] field, specify the time in seconds after which the object is considered stopped in the range [1, 60].
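The roles of the Stop time and Wait time parameters from steps 7 and 8 can be sketched as follows. These helper functions are illustrative assumptions, not part of the product:

```python
def is_stopped(stationary_since, now, stop_time):
    """An object counts as stopped once it has been stationary
    for at least stop_time seconds (the Stop time setting)."""
    return (now - stationary_since) >= stop_time

def keep_tracking(last_seen, now, wait_time):
    """A stopped object that disappeared is kept for up to wait_time
    seconds (the Wait time setting) in case it reappears."""
    return (now - last_seen) <= wait_time
```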
  9. In the Recognition threshold [1, 100] field, specify the minimal detection threshold in percent in the range [1, 100]. If the object recognition probability is lower than the specified value, the data is ignored. The higher the value, the higher the detection quality, but some events from the detector may be missed.
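The threshold behavior in step 9 can be sketched as a simple filter over detections. The field names here are assumptions for the sketch:

```python
def filter_detections(detections, threshold):
    """Keep only detections whose recognition probability (in percent)
    meets or exceeds the Recognition threshold setting."""
    return [d for d in detections if d["confidence"] >= threshold]
```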
  10. In the Target classes field, if necessary, specify the class of the detected object. If you want to detect objects of several classes, specify them separated by a comma with a space. For example: 1, 10.
    The numerical values of classes for the embedded neural networks: 1—Human/Human (top-down view), 10—Vehicle.

    Note

    1. If you leave the field blank, all available classes from the neural network are detected (Object type, Neural network file).
    2. If you specify a class/classes from the neural network, the objects of the specified class/classes are detected (Object type, Neural network file).
    3. If you specify a class/classes from the neural network and a class/classes missing from the neural network, the objects of a class/classes from the neural network are detected (Object type, Neural network file).
    4. If you specify a class/classes missing from the neural network, no objects are detected (Object type, Neural network file).
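The class-matching rules in the note above can be sketched as follows. The helper function is hypothetical; the embedded class numbers (1 for Human, 10 for Vehicle) are taken from step 10:

```python
def effective_classes(field_value, network_classes):
    """A blank Target classes field selects all classes the network supports;
    otherwise only the listed classes that the network actually supports
    are detected, and unsupported classes are silently dropped."""
    if not field_value.strip():
        return set(network_classes)
    requested = {int(c) for c in field_value.split(",")}
    return requested & set(network_classes)
```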

  11. Click the Settings button to determine the area of interest and the ignore area.
  12. You can sequentially set the anchor points of the area of interest (highlighted in red) where the stopped objects are detected.
  13. You can sequentially set the anchor points of the ignore area (highlighted in yellow) where the stopped objects are ignored.

    Note

    Click the Stop video button to pause playback and capture a frame of the image. To run the video, click the Start video button.

    To specify the area of interest or the ignore area, click the corresponding button. The button is highlighted in blue.

    There can be only one area of interest and one ignore area.

    To remove an area, click the button.

    If you don't specify an area of interest, the entire frame is analyzed.

  14. Click the OK button to save the specified areas, close the Detection settings window, and return to the detector settings panel.
  15. Click the Apply button to save the changes.

Configuring the Stopped object detector program module is complete.

