// We'll iterate through every detected face. CIFaceFeature provides us
// with the bounds for the entire face, and the coordinates of each eye
// and the mouth if detected. Also provided are BOOLs for the eyes and
// mouth so we can check whether they were found.
for(CIFaceFeature* faceFeature in features)
{

b) Create a red border around each face found in the image using the feature bounds. We'll also store the face width, which we'll use when drawing the other facial features.

// get the width of the face
CGFloat faceWidth = faceFeature.bounds.size.width;

// create a UIView using the bounds of the face
UIView* faceView = [[UIView alloc] initWithFrame:faceFeature.bounds];

// add a red border around the newly created UIView
faceView.layer.borderWidth = 1;
faceView.layer.borderColor = [[UIColor redColor] CGColor];

// add the new view to create a box around the face
[self.window addSubview:faceView];

Next we'll draw translucent blue circles over the two eyes.

if(faceFeature.hasLeftEyePosition)
{
    // create a UIView with a size based on the width of the face
    UIView* leftEyeView = [[UIView alloc] initWithFrame:CGRectMake(faceFeature.leftEyePosition.x - faceWidth*0.15, faceFeature.leftEyePosition.y - faceWidth*0.15, faceWidth*0.3, faceWidth*0.3)];

    // change the background color of the eye view
    [leftEyeView setBackgroundColor:[[UIColor blueColor] colorWithAlphaComponent:0.3]];

    // set the position of the leftEyeView based on the face
    [leftEyeView setCenter:faceFeature.leftEyePosition];

    // round the corners to make it a circle
    leftEyeView.layer.cornerRadius = faceWidth*0.15;

    // add the view to the window
    [self.window addSubview:leftEyeView];
}

if(faceFeature.hasRightEyePosition)
{
    // create a UIView with a size based on the width of the face
    UIView* rightEyeView = [[UIView alloc] initWithFrame:CGRectMake(faceFeature.rightEyePosition.x - faceWidth*0.15, faceFeature.rightEyePosition.y - faceWidth*0.15, faceWidth*0.3, faceWidth*0.3)];

    // change the background color of the eye view
    [rightEyeView setBackgroundColor:[[UIColor blueColor] colorWithAlphaComponent:0.3]];

    // set the position of the rightEyeView based on the face
    [rightEyeView setCenter:faceFeature.rightEyePosition];

    // round the corners to make it a circle
    rightEyeView.layer.cornerRadius = faceWidth*0.15;

    // add the new view to the window
    [self.window addSubview:rightEyeView];
}

c) Finally, we'll draw a green circle over the mouth.

if(faceFeature.hasMouthPosition)
{
    // create a UIView with a size based on the width of the face
    UIView* mouth = [[UIView alloc] initWithFrame:CGRectMake(faceFeature.mouthPosition.x - faceWidth*0.2, faceFeature.mouthPosition.y - faceWidth*0.2, faceWidth*0.4, faceWidth*0.4)];

    // change the background color for the mouth to green
    [mouth setBackgroundColor:[[UIColor greenColor] colorWithAlphaComponent:0.3]];

    // set the position of the mouthView based on the face
    [mouth setCenter:faceFeature.mouthPosition];

    // round the corners to make it a circle
    mouth.layer.cornerRadius = faceWidth*0.2;

    // add the new view to the window
    [self.window addSubview:mouth];
}
}
}

5) Adjust For The Coordinate System

If you were to run the app now, you might notice that the y-coordinates of the circles drawn over the eyes and mouth are off. This is because Core Image uses a different coordinate system (the default on Mac OS X), with the origin in the bottom-left corner rather than the top-left.

To make everything right side up, we'll flip the image and then flip the entire window containing our newly created circles. Doing it this way only requires a couple of lines of code, which we'll add to the faceDetector method.

// flip image on y-axis to match coordinate system used by Core Image
[image setTransform:CGAffineTransformMakeScale(1, -1)];

// flip the entire window to make everything right side up
[self.window setTransform:CGAffineTransformMakeScale(1, -1)];

Conclusion

Finally, run the face detector by adding the following line to the application:didFinishLaunchingWithOptions: method, just before the return statement.

[self faceDetector];
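For context, the surrounding app delegate method might look roughly like this (a sketch only; the window and image setup from the earlier steps is abbreviated, and the method body will vary with your own project):

```objc
- (BOOL)application:(UIApplication *)application
    didFinishLaunchingWithOptions:(NSDictionary *)launchOptions
{
    // ... window, image view, and other setup from the earlier steps ...

    // run the face detector before the app finishes launching
    [self faceDetector];

    return YES;
}
```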

That's all there is to it! Thanks to Tom of b2cloud, whose tutorial on face detection I found after starting this one, and whose code I used to simplify this example. Also thanks to Tobyotter on Flickr for the monster face image.

One more thing…

Face detection can take a while, especially on older devices, so you may want to run your face detection method in the background. You can simply change how the method is called:
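One way to move the call off the main thread (a sketch using `performSelectorInBackground:`, one common approach and not necessarily the exact change the author had in mind):

```objc
// before: run the detector synchronously during launch
// [self faceDetector];

// after: run the detector on a background thread instead
[self performSelectorInBackground:@selector(faceDetector) withObject:nil];
```

Note that UIKit view creation and manipulation should strictly happen on the main thread, so for anything beyond a demo you'd want to run only the CIDetector work in the background and dispatch back to the main queue before adding the subviews.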

With that change the face detection and drawing run on a separate thread and the app starts up faster (some advice I picked up in the extensive Core Image section of the iOS 5 by Tutorials book (aff)). Even on a newer device I can see the difference.

That’s all there is to it! Please post any issues in the comments below.