We first covered Core Image back in Bite #32. It's the full-featured image processing framework that ships with iOS and OS X. Today we'll take a look at one of Core Image's neatest features: detectors.

Detectors allow us to ask the system whether it can find any special features in an image. These features range from faces and rectangles to text.

A detected feature of an image may also describe other metadata. For example, a CIFaceFeature can report whether the face appears to be smiling, whether one of the eyes is closed, and much more. Let's dive in.

We'll start by asking the user for a photo using UIImagePickerController (like we covered in Bite #83). Then we'll convert the image to a CIImage and create our CIDetector. We'll configure it to look for faces with high accuracy. Then we'll ask it for the features (in this case, faces) it can find in our image.
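The steps above might look something like this. This is a sketch, not the original Bite's code: the delegate method signature is the modern one, and the print statement is just a placeholder for whatever you do with the results.

```swift
import UIKit
import CoreImage

extension ViewController: UIImagePickerControllerDelegate, UINavigationControllerDelegate {
    func imagePickerController(_ picker: UIImagePickerController,
                               didFinishPickingMediaWithInfo info: [UIImagePickerController.InfoKey: Any]) {
        picker.dismiss(animated: true)

        // Convert the picked UIImage to a CIImage; the conversion can fail,
        // so we guard on it.
        guard let image = info[.originalImage] as? UIImage,
              let ciImage = CIImage(image: image) else { return }

        // Configure the detector to look for faces with high accuracy.
        let detector = CIDetector(ofType: CIDetectorTypeFace,
                                  context: nil,
                                  options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])

        let faces = detector?.features(in: ciImage) ?? []
        print("Found \(faces.count) face(s)")
    }
}
```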

When asking for the features in our image, we make sure to pass the CIDetectorSmile option as true so Core Image will let us know who needs to turn their frown upside down. We'll access the properties of each detected face and use them to add some fun debug views:
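Here's one way that could look. The view outlines and coordinate flip are assumptions for illustration (Core Image uses a bottom-left origin, UIKit a top-left one), and `imageView` is a hypothetical outlet:

```swift
import UIKit
import CoreImage

// Sketch: outline each detected face and color the box by smile state.
// `imageView` is an assumed UIImageView showing `image` at full size.
func addDebugViews(for image: UIImage, in imageView: UIImageView) {
    guard let ciImage = CIImage(image: image),
          let detector = CIDetector(ofType: CIDetectorTypeFace,
                                    context: nil,
                                    options: [CIDetectorAccuracy: CIDetectorAccuracyHigh])
    else { return }

    // Passing CIDetectorSmile (and CIDetectorEyeBlink) opts in to the
    // extra per-face metadata.
    let faces = detector.features(in: ciImage,
                                  options: [CIDetectorSmile: true,
                                            CIDetectorEyeBlink: true])

    for case let face as CIFaceFeature in faces {
        // Flip from Core Image's bottom-left origin to UIKit's top-left.
        var frame = face.bounds
        frame.origin.y = image.size.height - frame.origin.y - frame.height

        let outline = UIView(frame: frame)
        outline.layer.borderWidth = 2
        outline.layer.borderColor = (face.hasSmile ? UIColor.green : UIColor.red).cgColor
        imageView.addSubview(outline)
    }
}
```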

Core Image has long been a staple on OS X, and was added to iOS a few years ago. It's an incredibly feature-packed image processing API that can apply just about any type of filter or image manipulation you can dream of. Let's take a look at applying a simple color tint to an image:

Core Image works on CIImages, not UIImages, so we convert our image to a CIImage, using guard since the CIImage property on UIImage is optional. Core Image also doesn't use UIColor, so we create a CIColor instead.

Then the fun part: we create our filter by name. Core Image ships with hundreds of different filters, and rather than instantiating subclasses, you create them by name.
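Putting those pieces together, a tint could be sketched like this. The choice of CIColorMonochrome is an assumption (the original Bite may have used a different filter), and the helper name is illustrative:

```swift
import UIKit
import CoreImage

// Sketch: tint a UIImage with a solid color via CIColorMonochrome.
func tinted(_ image: UIImage, with color: UIColor) -> UIImage? {
    // The CIImage property on UIImage is optional, so guard the conversion.
    guard let input = CIImage(image: image) else { return nil }

    // Filters are looked up by name, not by subclass.
    guard let filter = CIFilter(name: "CIColorMonochrome") else { return nil }
    filter.setValue(input, forKey: kCIInputImageKey)
    filter.setValue(CIColor(color: color), forKey: kCIInputColorKey)
    filter.setValue(1.0, forKey: kCIInputIntensityKey)

    // Render the filter's output back into a UIImage.
    guard let output = filter.outputImage else { return nil }
    let context = CIContext()
    guard let cgImage = context.createCGImage(output, from: output.extent)
    else { return nil }
    return UIImage(cgImage: cgImage)
}
```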