
Intruder Detector

What it Does

This reference implementation detects objects in a designated area. It reports the number of objects in the current frame and the total count, and records alerts for the objects present in the frame. The application can process input from multiple cameras and video files.

Requirements

Hardware

Software

Ubuntu* 16.04 LTS
Note: Use kernel versions 4.14+ with this software.
Determine the kernel version with the uname command.

uname -a
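As a hedged sketch, the kernel requirement above can be checked in a script; the 4.14 threshold comes from the note above, and the version comparison uses sort -V:

```shell
#!/bin/sh
# Check that the running kernel is at least 4.14 (the threshold stated
# in the note above). sort -V orders version strings numerically.
required="4.14"
current="$(uname -r | cut -d- -f1)"
lowest="$(printf '%s\n%s\n' "$required" "$current" | sort -V | head -n1)"
if [ "$lowest" = "$required" ]; then
    echo "kernel $current satisfies the $required+ requirement"
else
    echo "kernel $current is older than $required; please upgrade" >&2
fi
```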

OpenCL™ Runtime Package

Intel® Distribution of OpenVINO™ toolkit 2019 R2 Release

How it Works

The application uses the Inference Engine included in the Intel® Distribution of OpenVINO™ toolkit. A trained neural network detects objects within a designated area by displaying a green bounding box over them, and registers them in a logging system.

Install OpenVINO

You will need the OpenCL™ Runtime Package if you plan to run inference on the GPU as shown by the instructions below. It is not mandatory for CPU inference.

Other dependencies

FFmpeg*
FFmpeg is a free and open-source project capable of recording, converting and streaming digital audio and video in various formats. It handles most common multimedia tasks quickly and easily, such as audio compression, audio/video format conversion, and extracting images from a video.
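For instance, extracting one image per second from a clip might look like the following (illustrative only; input.mp4 and the frames/ directory are placeholder names):

```shell
# Extract one frame per second into PNG files (placeholder file names).
mkdir -p frames
ffmpeg -i input.mp4 -vf fps=1 frames/frame-%04d.png
```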

Which Models to Use

This application uses the person-vehicle-bike-detection-crossroad-0078 Intel® model, which can be downloaded using the model downloader. The model downloader downloads the .xml and .bin files that are used by the application.

The application also works with any object-detection model, provided it has the same input and output format as the SSD model.
The model can be any object detection model:

Downloaded using the model downloader, provided by Intel® Distribution of OpenVINO™ toolkit.

Built by the user.

To download the models and install the application's dependencies, run the following command in the intruder-detector-cpp directory:

./setup.sh

The labels file

To work, this application requires a labels file associated with the model being used for detection.
Detection models work with integer labels rather than string labels (e.g. for the person-vehicle-bike-detection-crossroad-0078 model, the number 1 represents the class "person"). That is why each model must have a labels file, which associates an integer (the label the algorithm detects) with a string (the human-readable label).

The labels file is a text file containing all the classes/labels that the model can recognize, in the order in which the model was trained to recognize them (one class per line).
For the person-vehicle-bike-detection-crossroad-0078 model, we provide the class file labels.txt in the resources folder.
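As an illustrative sketch, a labels file for this model might look like the following (the exact class names and their order must match the model's training; these three lines are an assumption based on the model name, so consult the provided labels.txt for the authoritative list):

```
person
vehicle
bike
```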

The config file

The resources/config.json file contains the paths to the videos that will be used by the application and the labels to be detected in those videos. All defined labels are detected in all videos.
The config.json file is of the form video: ["<path/to/video>"] and label: ["<labels>"]. The labels used in the config.json file must match the labels in the labels file.
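Following the form described above, a config.json might look like this (a hedged sketch: the video paths are placeholders, and the label names are assumptions that must appear in the labels file):

```json
{
    "video": ["<path/to/video1>", "<path/to/video2>"],
    "label": ["person", "vehicle"]
}
```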

The application can use any number of videos for detection, but the more videos it processes in parallel, the lower the frame rate of each one. This can be mitigated by running the application on a machine with more compute power.

What input video to use

The application works with any input video.
Sample videos for object detection are provided here.

Setup the Environment

Configure the environment to use the Intel® Distribution of OpenVINO™ toolkit by exporting environment variables:

source /opt/intel/openvino/bin/setupvars.sh

Build the Application

To build, go to intruder-detector-cpp directory and run the following commands:

mkdir -p build && cd build
cmake ..
make

Run the Application

If you are not in the build folder, navigate there:

cd <path-to-intruder-detector-cpp>/build/

To see a list of the various options:

./intruder-detector -h

A user can specify the target device to run on using the -d command-line argument, followed by one of the values CPU, GPU, MYRIAD or HDDL. To run on multiple devices, use -d MULTI:device1,device2. For example: -d MULTI:CPU,GPU,MYRIAD

Running on the CPU

Although the application runs on the CPU by default, this can also be explicitly specified through the -d CPU command-line argument:
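A minimal sketch of an explicit CPU run (the model path is a placeholder, passed via the -m argument described in the note below):

```
./intruder-detector -d CPU -m <path_to_model>
```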

Note: The HDDL-R can only run on FP16 models. The model that is passed to the application, through the -m <path_to_model> command-line argument, must be of data type FP16.

Loop the input video

By default, the application reads the input videos only once and ends when the videos end.
To keep the application from ending when the sample videos finish, an option to continuously loop the videos is provided.
This is done by running the application with the -lp true command-line argument:
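For example (a hedged sketch; the model path is a placeholder):

```
./intruder-detector -lp true -m <path_to_model>
```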