Face recognition on the Orange Pi with OpenCV and Python

Install OpenCV on the Orange Pi

In this project I will show you how to capture images from a webcam, detect faces in those images, train a face recognition model, and then try the model out on a live video stream from the webcam. The code here can be the basis for many other projects that involve some element of personal authentication.

For hardware, you will need a webcam and of course an Orange Pi. I used the Orange Pi Plus 2E. I am fairly sure the instructions below also work on a Raspberry Pi, but I haven't tested that. As for software, we will use OpenCV, a real-time computer vision and machine learning library. It lets us capture images from the webcam, manipulate them and apply face recognition models.

So follow the steps from the OpenCV installation tutorial and install a compiler and the necessary libraries.
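As a rough guide, on a Debian-based image (Armbian on the Orange Pi qualifies) the compiler and libraries can be pulled in with apt. The exact package list below is my assumption and depends on which OpenCV features you enable:

```shell
sudo apt-get update
# Build tools plus the image/video I/O and Python development headers
# that an OpenCV build typically needs
sudo apt-get install -y build-essential cmake git pkg-config \
    libjpeg-dev libpng-dev libtiff-dev \
    libavcodec-dev libavformat-dev libswscale-dev libv4l-dev \
    python3-dev python3-numpy
```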

Start the compilation with make. This step took around an hour on my device. If you get errors from any of the extra modules, just remove them from the build; for this project you only need the face module. Finally, install OpenCV:

sudo make install

Now you have OpenCV with Python bindings on the Orange Pi, including the face recognition module. You can test it by launching Python and importing cv2.

Capture live stream from the webcam and apply face recognition

First we will write a Python script to capture and store the training images: the faces of the people we want to recognize. In the second step we will put the model to the test and see if it correctly recognizes the right person.

To read the webcam stream and store the faces corresponding to a person, use the code below. Make sure you have an XML face detection cascade file; these XML cascade files are usually located in opencv/data/haarcascades/. Either copy the required file into the same folder as the script, or pass the full path to the cv2.CascadeClassifier() function.

Save the code above in a Python script. I called it "input.py". Connect your webcam to the Orange Pi and start the script. Type in your name and look at the camera, pressing the space bar each time a rectangle appears around your face. You have to take 10 pictures, which will be saved under the faces/ folder.

To test the model, save the code below in another script; let's call it "output.py". The code builds a list of faces (image objects) and labels (integers), which is what the training model requires, by walking the folders and files in the faces/ directory created by the previous script. Then we train the model and predict directly on the video stream, drawing a label on top of each recognized face.

One note on the build: during the cmake configuration step you have to point the OPENCV_EXTRA_MODULES_PATH flag at the modules folder inside opencv_contrib. For example, if you downloaded the extra modules to /home/orangepi/, set OPENCV_EXTRA_MODULES_PATH=/home/orangepi/opencv_contrib/modules
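Putting it together, the configuration and compile step might look like the sketch below; the ~/opencv/build path is an assumption, substitute wherever you unpacked the OpenCV sources:

```shell
# Configure from an out-of-tree build directory
cd ~/opencv/build
cmake -D CMAKE_BUILD_TYPE=RELEASE \
      -D CMAKE_INSTALL_PREFIX=/usr/local \
      -D OPENCV_EXTRA_MODULES_PATH=/home/orangepi/opencv_contrib/modules \
      ..
# Compile on all four cores of the Orange Pi Plus 2E
make -j4
```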