
Gestures and Touches in iOS6: More Recipes

By Erica Sadun, January 22, 2013

Use touch-based interfaces imaginatively to enhance interactivity of iOS apps

In this implementation, detection uses a multistep test. A time test checks that the stroke was not lingering; a circle gesture should be drawn quickly. An inflection test checks that the touch did not change direction too often. A proper circle includes four direction changes; this test allows for five. There's a convergence test: The circle must start and end close enough together that the points are somehow related. A fair amount of leeway is needed because when you don't provide direct visual feedback, users tend to undershoot or overshoot where they began. The pixel distance used here is generous, approximately a third of the view size.
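The inflection and convergence tests might be sketched along these lines, assuming the stroke has been collected into an NSArray of NSValue-wrapped CGPoints. The helper names here (pointsFormPlausibleCircle:inView:, pointDistance) are illustrative, not taken from the recipe:

```objc
static CGFloat pointDistance(CGPoint p1, CGPoint p2)
{
    CGFloat dx = p2.x - p1.x, dy = p2.y - p1.y;
    return sqrtf(dx * dx + dy * dy);
}

- (BOOL)pointsFormPlausibleCircle:(NSArray *)points inView:(UIView *)view
{
    if (points.count < 3) return NO;

    // Inflection test: count axis-direction reversals. A clean circle
    // produces about four (two horizontal, two vertical); allow five.
    NSInteger inflections = 0;
    CGFloat lastDx = 0.0f, lastDy = 0.0f;
    for (NSUInteger i = 1; i < points.count; i++)
    {
        CGPoint prev = [points[i - 1] CGPointValue];
        CGPoint cur  = [points[i] CGPointValue];
        CGFloat dx = cur.x - prev.x, dy = cur.y - prev.y;
        if (lastDx * dx < 0.0f) inflections++; // horizontal reversal
        if (lastDy * dy < 0.0f) inflections++; // vertical reversal
        if (dx != 0.0f) lastDx = dx;
        if (dy != 0.0f) lastDy = dy;
    }
    if (inflections > 5) return NO;

    // Convergence test: start and end must land near each other.
    // The tolerance is generous -- roughly a third of the view's width.
    CGPoint first = [points.firstObject CGPointValue];
    CGPoint last  = [points.lastObject CGPointValue];
    CGFloat tolerance = view.bounds.size.width / 3.0f;
    return (pointDistance(first, last) < tolerance);
}
```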

The final test looks at movement around a central point. It adds up the arcs traveled, which should equal 360 degrees in a perfect circle. This example allows the total to fall short by as much as 45 degrees (for not-quite-finished circles) or to overshoot by as much as 180 degrees (for circles that continue on a bit wider), allowing the finger to travel more naturally.
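The arc-sum test could be sketched as follows, again assuming an array of NSValue-wrapped CGPoints plus a precomputed center; the method name is illustrative:

```objc
- (BOOL)arcSumSuggestsCircle:(NSArray *)points center:(CGPoint)center
{
    CGFloat totalDegrees = 0.0f;
    for (NSUInteger i = 1; i < points.count; i++)
    {
        CGPoint p1 = [points[i - 1] CGPointValue];
        CGPoint p2 = [points[i] CGPointValue];
        // Angle swept around the center between consecutive samples
        CGFloat a1 = atan2f(p1.y - center.y, p1.x - center.x);
        CGFloat a2 = atan2f(p2.y - center.y, p2.x - center.x);
        CGFloat delta = a2 - a1;
        // Normalize across the -pi/pi discontinuity
        if (delta >  M_PI) delta -= 2.0f * M_PI;
        if (delta < -M_PI) delta += 2.0f * M_PI;
        totalDegrees += delta * 180.0f / M_PI;
    }
    CGFloat total = fabsf(totalDegrees);
    // Accept a slightly unfinished circle through a generous overshoot
    return (total >= 360.0f - 45.0f) && (total <= 360.0f + 180.0f);
}
```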

Once these tests pass, the algorithm produces a least-bounding rectangle and centers that rectangle on the geometric mean of the points from the original gesture. This result is assigned to the circle instance variable. It's not a perfect detection system (you can try to fool it when testing the sample code), but it's robust enough to provide reasonably good circle checks for many iOS applications.
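This final step might look something like the following sketch, which computes the stroke's bounding rectangle and recenters it on the mean of the gesture's points (the method name is illustrative; the circle ivar follows the text):

```objc
- (CGRect)boundingCircleForPoints:(NSArray *)points
{
    CGFloat minX = CGFLOAT_MAX, minY = CGFLOAT_MAX;
    CGFloat maxX = -CGFLOAT_MAX, maxY = -CGFLOAT_MAX;
    CGFloat sumX = 0.0f, sumY = 0.0f;

    for (NSValue *value in points)
    {
        CGPoint p = [value CGPointValue];
        minX = MIN(minX, p.x); maxX = MAX(maxX, p.x);
        minY = MIN(minY, p.y); maxY = MAX(maxY, p.y);
        sumX += p.x; sumY += p.y;
    }

    // Least-bounding rectangle of the stroke
    CGRect bounds = CGRectMake(minX, minY, maxX - minX, maxY - minY);

    // Recenter that rectangle on the mean of the gesture's points
    CGPoint mean = CGPointMake(sumX / points.count, sumY / points.count);
    bounds.origin.x = mean.x - bounds.size.width  / 2.0f;
    bounds.origin.y = mean.y - bounds.size.height / 2.0f;
    return bounds;
}
```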

Creating a Custom Gesture Recognizer

It takes little work to transform the code shown in Recipe 4 into a custom recognizer, as introduced in Recipe 5. Subclassing UIGestureRecognizer enables you to build your own circle recognizer that you can add to views in your applications.

Start by importing UIGestureRecognizerSubclass.h into your new class. The file declares everything you need your recognizer subclass to override or customize. For each method you override, make sure to call the superclass implementation before invoking your new code.
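A minimal skeleton for such a subclass might look like this; the CircleRecognizer name and circle property are assumptions for illustration, and each override calls super first as described:

```objc
#import <UIKit/UIGestureRecognizerSubclass.h>

@interface CircleRecognizer : UIGestureRecognizer
@property (nonatomic, readonly) CGRect circle; // set on successful detection
@end

@implementation CircleRecognizer
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    [super touchesBegan:touches withEvent:event];
    // begin accumulating touch points here
}

- (void)reset
{
    [super reset];
    // clear accumulated points, returning to the quiescent state
}
@end
```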

Gestures fall into two types: continuous and discrete. The circle recognizer is discrete. It either recognizes a circle or fails. Continuous gestures include pinches and pans, where recognizers send updates throughout their lifecycle. Your recognizer generates updates by setting its state property.

Recognizers are basically state machines for fingertips. All recognizers start in the possible state (UIGestureRecognizerStatePossible), and then for continuous gestures pass through a series of changed states (UIGestureRecognizerStateChanged). Discrete recognizers either succeed in recognizing a gesture (UIGestureRecognizerStateRecognized) or fail (UIGestureRecognizerStateFailed), as demonstrated in Recipe 5. The recognizer sends actions to its target each time you update state, except when the state is set to possible or failed.

The rather long comments you see in Recipe 5 belong to Apple, courtesy of the subclass header file. I've included them here because they help explain the roles of the key methods you override. The reset method returns the recognizer to its quiescent state, allowing it to prepare itself for its next recognition challenge.

The touchesBegan:withEvent: method and its siblings are called at the same points as their UIResponder analogs, enabling you to perform your tests at the same touch lifecycle points. (As an overriding philosophy, gesture recognizers should fail as soon as possible. When they succeed, you should store information about the gesture in local properties. The circle gesture should save any detected circle so users know where the gesture occurred.) The following example waits to check for success or failure until the touches ended callback, and uses the same testForCircle method defined in Recipe 4.
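The endpoint of the discrete recognizer could be sketched as below. This assumes testForCircle (from Recipe 4) returns the detected circle's rectangle, or CGRectZero when detection fails; that signature is an assumption for illustration:

```objc
- (void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event
{
    [super touchesEnded:touches withEvent:event];

    CGRect detected = [self testForCircle];
    if (CGRectEqualToRect(detected, CGRectZero))
    {
        // Setting the state to failed sends no actions to the target
        self.state = UIGestureRecognizerStateFailed;
    }
    else
    {
        _circle = detected; // store the result so clients can query it
        self.state = UIGestureRecognizerStateRecognized;
    }
}
```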

Dragging from a Scroll View

iOS's rich set of gesture recognizers doesn't always accomplish exactly what you're looking for. Here's an example: Imagine a horizontal scrolling view filled with image views, one next to another, so you can scroll left and right to see the entire collection. Now, imagine that you want to be able to drag items out of that view and add them to a space directly below the scrolling area. To do this, you need to recognize downward touches on those child views (that is, orthogonal to the scrolling direction).

This was the puzzle I encountered while trying to help developer Alex Hosgrove, who was trying to build an application roughly equivalent to a set of refrigerator magnet letters. Users could drag those letters down into a workspace and then play with and arrange the items they'd chosen. There were two challenges with this scenario. First, who owned each touch? Second, what happened after the downward touch was recognized?

Both the scroll view and its children own an interest in each touch. A downward gesture should generate new objects; a sideways gesture should pan the scroll view. Touches have to be shared to allow both the scroll view and its children to respond to user interactions. This problem can be solved using gesture delegates.

Gesture delegates allow you to add simultaneous recognition, so that two recognizers can operate at the same time. You add this behavior by declaring conformance to the UIGestureRecognizerDelegate protocol and implementing a simple delegate method:
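The delegate method in question is the standard UIGestureRecognizerDelegate callback for simultaneous recognition; returning YES lets the scroll view's pan and the child's recognizer track the same touches:

```objc
- (BOOL)gestureRecognizer:(UIGestureRecognizer *)gestureRecognizer
    shouldRecognizeSimultaneouslyWithGestureRecognizer:
        (UIGestureRecognizer *)otherGestureRecognizer
{
    return YES; // share touches with the scroll view's pan recognizer
}
```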

You cannot reassign gesture delegates for scroll views, so you must add this delegate override to the implementation for the scroll view's children.

The second question, converting a swipe into a drag, is addressed by thinking about the entire touch lifetime. Each touch that creates a new object starts as a directional drag but ends up as a pan once the new view is created. A pan recognizer works better here than a swipe recognizer, whose lifetime ends at the point of recognition.

To make this happen, Recipe 6 manually adds that directional-movement detection, outside of the built-in gesture detection. In the end, that working-outside-the-box approach provides a major coding win. That's because once the swipe has been detected, the underlying pan gesture recognizer continues to operate. This allows the user to keep moving the swiped object without having to raise his or her finger and retouch the object in question.

This implementation detects swipes that move down at least 16 vertical pixels without straying more than 8 pixels to either side. When this code detects a downward swipe, it adds a new DragView (the same class used earlier in this article) to the screen and allows it to follow the touch for the remainder of the pan gesture interaction.

At the point of recognition, the class marks itself as having handled the swipe (gestureWasHandled) and disables the scroll view for the duration of the panning event. This allows the child complete control over the ongoing pan gesture without the scroll view reacting to further touch movement.
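Putting the last two paragraphs together, the child view's pan handler might be sketched as follows. The 16- and 8-pixel thresholds and the gestureWasHandled flag come from the text; the handlePan: name and the scrollView property are assumptions for illustration:

```objc
- (void)handlePan:(UIPanGestureRecognizer *)pan
{
    CGPoint translation = [pan translationInView:self.scrollView];

    if (!self.gestureWasHandled &&
        translation.y >= 16.0f && fabsf(translation.x) <= 8.0f)
    {
        // Downward swipe detected: take over the touch and silence
        // the scroll view for the remainder of the pan
        self.gestureWasHandled = YES;
        self.scrollView.scrollEnabled = NO;
        // ...create the new DragView at the touch point here...
    }

    if (pan.state == UIGestureRecognizerStateEnded ||
        pan.state == UIGestureRecognizerStateCancelled)
    {
        // Restore normal scrolling once the pan completes
        self.gestureWasHandled = NO;
        self.scrollView.scrollEnabled = YES;
    }
}
```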

