iOS7 Day-by-Day :: Day 18 :: Detecting Facial Features with CoreImage

Written by Sam Davies

This post is part of a daily series of posts introducing the most exciting new parts of iOS7 for developers – #iOS7DayByDay. To see the posts you’ve missed check out the introduction page, but have a read through the rest of this post first!

Introduction

Face detection has been present in iOS since iOS 5, in both AVFoundation and CoreImage. In iOS7, the face detection in CoreImage has been enhanced to include feature detection – looking for smiles and blinking eyes. The API is nice and easy to use, so we’re going to create an app which uses the face detection in AVFoundation to determine when to take a photo, and then lets the user know whether or not it is a good photo by using CoreImage to search for smiles and closed eyes.

Face detection with AVFoundation

Day 16’s post was about using AVFoundation to detect and decode QR codes, via the AVCaptureMetadataOutput class. The face detector is used in the same way – faces are just metadata objects, in the same way that QR codes were. We’ll create an AVCaptureMetadataOutput object in the same manner, but with a different metadata type:

AVCaptureMetadataOutput *output = [[AVCaptureMetadataOutput alloc] init];
// Have to add the output before setting metadata types
[_session addOutput:output];
// We're only interested in faces
[output setMetadataObjectTypes:@[AVMetadataObjectTypeFace]];
// This VC is the delegate. Please call us on the main queue
[output setMetadataObjectsDelegate:self queue:dispatch_get_main_queue()];

This is fairly similar to what we did with QR codes, only now we have added a new output type to the session – AVCaptureStillImageOutput. This allows us to take a photo of the input at a given moment – which is exactly what captureStillImageAsynchronouslyFromConnection:completionHandler: does. So, when we are notified that AVFoundation has detected a face, we take a still image of the current input, and stop the session.
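As a minimal sketch of how these pieces might fit together (the _session and _stillImageOutput property names are assumptions, not necessarily the post’s exact code), the metadata delegate callback could trigger the still image capture like this:

```objc
// Assumed properties, configured during session setup:
//   _stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
//   _stillImageOutput.outputSettings = @{AVVideoCodecKey : AVVideoCodecJPEG};
//   [_session addOutput:_stillImageOutput];

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputMetadataObjects:(NSArray *)metadataObjects
       fromConnection:(AVCaptureConnection *)connection
{
    // We only asked for faces, so any metadata object here is a face
    if ([metadataObjects count] > 0) {
        AVCaptureConnection *stillConnection =
            [_stillImageOutput connectionWithMediaType:AVMediaTypeVideo];
        [_stillImageOutput captureStillImageAsynchronouslyFromConnection:stillConnection
            completionHandler:^(CMSampleBufferRef imageSampleBuffer, NSError *error) {
                // Convert the sample buffer into an image here
            }];
        // Stop the session so we don't keep firing for the same face
        [_session stopRunning];
    }
}
```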

We create a JPEG representation of the captured image with the following:
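The snippet isn’t reproduced here, but the standard approach on iOS 7 is the class method AVCaptureStillImageOutput provides for JPEG-encoded sample buffers – a minimal sketch, assuming the completion handler above:

```objc
// Inside the capture completion handler: convert the CMSampleBufferRef
// into NSData, and then into a UIImage
NSData *jpegData =
    [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageSampleBuffer];
UIImage *stillImage = [UIImage imageWithData:jpegData];
```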

Now we pop this into a UIImageView, and create a CIImage version as well, in preparation for the CoreImage facial feature detection. We’ll take a look at this imageContainsSmiles:callback: method next.

In order to get the detector to perform smile and blink detection we have to specify as such in the detector options (CIDetectorEyeBlink and CIDetectorSmile). The CoreImage face detector is orientation specific, and therefore we’re also setting the detector orientation here to match the orientation in which the app has been designed.
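A sketch of what this setup might look like follows. Note that CIDetectorSmile and CIDetectorEyeBlink are passed as options when extracting features, and the orientation value here (1, EXIF ‘up’) is an assumption for illustration – a portrait-only app would pass whichever EXIF orientation matches its camera configuration:

```objc
// Create a face detector; CIDetectorAccuracy trades speed for quality
CIDetector *detector =
    [CIDetector detectorOfType:CIDetectorTypeFace
                       context:nil
                       options:@{CIDetectorAccuracy : CIDetectorAccuracyHigh}];

// Request smile/blink detection, and tell the detector the image orientation
NSArray *features =
    [detector featuresInImage:ciImage
                      options:@{CIDetectorSmile : @YES,
                                CIDetectorEyeBlink : @YES,
                                CIDetectorImageOrientation : @1}];
```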

Now we can loop through the features array (which contains CIFaceFeature objects) and interrogate each one to find out whether it contains a smile or blinking eyes:
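Such a loop might look like the following minimal sketch (the logging is purely illustrative):

```objc
for (CIFaceFeature *face in features) {
    if (face.hasSmile) {
        NSLog(@"Found a smile");
    }
    if (face.leftEyeClosed || face.rightEyeClosed) {
        NSLog(@"Found a blinking eye");
    }
}
```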

If you run the app you can see how good the CoreImage facial feature detector is:

In addition to these properties, it’s also possible to find the positions of the different facial features, such as the eyes and the mouth.
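CIFaceFeature exposes these through paired has…/…Position properties – for example (note that CoreImage coordinates have their origin at the bottom-left of the image):

```objc
for (CIFaceFeature *face in features) {
    // Each position property has a matching boolean to check first
    if (face.hasMouthPosition) {
        NSLog(@"Mouth at %@", NSStringFromCGPoint(face.mouthPosition));
    }
    if (face.hasLeftEyePosition && face.hasRightEyePosition) {
        NSLog(@"Eyes at %@ and %@",
              NSStringFromCGPoint(face.leftEyePosition),
              NSStringFromCGPoint(face.rightEyePosition));
    }
}
```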

Conclusion

Although not a ground-breaking addition to the API, this advance in the CoreImage face detector adds a nice ability to interrogate images of faces. It could make a nice addition to a photography app – helping users take all the ‘selfies’ they need.