The Tricky Thing about Free-form Gestures

The tricky thing about free-form gestures is this: adding a few obvious gestures to an otherwise touch-based interface is easy. However, once you begin adding more gestures, the usability design challenges increase rapidly.

For a long time, free-form gesture interaction has been just over the horizon for consumer products (gaming excepted). Now its emergence on laptops, tablets and phones has begun. At ICS, we've been working on a few prototypes for free-form gesture interfaces, experimenting with how to add viable gesture interactions to kiosks and interactive signboards.

Definitions: Gestural interfaces fall into two categories: touchscreen and free-form. Touch user interfaces (TUIs) require the user to touch the device directly, which limits the gestures that can control the device to 2-D actions within the confines of the screen. Free-form gestural interfaces don't require the user to touch or handle the screen or any device directly, although some technologies require that a handheld controller or glove be used.

Although there are numerous technical challenges in free-form gesture interaction, two primary usability design challenges stand out:

Discoverability – The good and bad thing about gesture-enabled interfaces is the lack of visual interface controls (scroll bars, links, handles, paging). Touch interfaces have taken the notion of direct manipulation to another level, allowing the user to employ fingers to manipulate on-screen objects rather than clicking on virtual controls. You can swipe through pages instead of clicking a "next" button, or stretch an image to make it larger rather than clicking a "zoom" button. The reduction of visual controls can be attributed to the need to save space and reduce clutter on small, mobile screens. In the end, the purer, more direct manipulation of touchscreens has proven aesthetic appeal. However, it does require some initial training on the part of the user, because the available interactions are not as "discoverable" as selectable visual controls. We can assume that free-form gesture interfaces will continue on the design trajectory set by TUIs; after all, the goal is to achieve increasingly natural (i.e., modeled on the physical) interaction.

Margin of Error Acceptability – A mouse click is a very simple, concrete event. If your mouse is recognized by your system, you can expect a click to be recognized every time; the position of the cursor is the only parameter the user must grapple with. Touching a touchscreen can be finicky: you have to use the right touch, which means not only a certain position but also a certain speed, duration, direction and length. The "right" touch differs from device to device, but at least on touch devices it is only a 2-D spatial problem. Take that into 3-D space for free-form gestures. Just testing a free-form gesture interface on users for a short period of time will convince you that people do not learn and imitate gestures accurately or with much ease. If a system allows a wide margin of "error" on a gesture, that gesture will be consistently recognized. This works fine if there are only a few, fairly different gestures. If you want a large number of gestures, the margin of error on any one gesture must be much smaller, or there will be no way to distinguish one gesture from another.
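The trade-off can be sketched with a toy template-matching recognizer. Everything here is a hypothetical illustration, not the recognizer used in any real prototype: each gesture is reduced to a feature vector, and a sample counts as recognized only if it lands within the tolerance radius of exactly one template. Widening the tolerance makes sloppy gestures recognizable, but it also makes the tolerance regions of nearby gestures overlap, producing ambiguous samples.

```python
# Toy sketch of the margin-of-error trade-off. Gesture names and
# template feature vectors are invented for illustration.
import math

# Each gesture reduced to a simple feature vector
# (e.g., normalized hand displacement in x and y).
TEMPLATES = {
    "swipe": (1.0, 0.0),
    "zoom_out": (0.0, 1.0),
}

def recognize(sample, tolerance):
    """Return the single template within `tolerance` of the sample,
    or None if nothing matches or the sample is ambiguous."""
    hits = [
        name
        for name, template in TEMPLATES.items()
        if math.dist(sample, template) <= tolerance
    ]
    # A sample inside two tolerance regions cannot be disambiguated.
    return hits[0] if len(hits) == 1 else None

# A sloppy swipe is still recognized with a generous tolerance...
print(recognize((0.8, 0.3), 0.5))   # prints "swipe"
# ...but widening the tolerance further makes a midway sample
# fall inside both regions, so it cannot be classified.
print(recognize((0.5, 0.5), 0.8))   # prints "None"
```

Adding more gestures packs the templates closer together in feature space, which forces the tolerance down, exactly the tension the paragraph above describes.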

In a prototype at ICS, we've addressed the discoverability issue by creating a short interactive tutorial of the available gestures, which begins when the system first recognizes the person. It's modeled very loosely on Kinect and Wii tutorials. To address the margin-of-error issue, we restricted our prototype to three simple, distinctly different gestures that resemble familiar touch gestures: 1.) swipe: moving either arm at waist level across the body; 2.) fast scroll: holding either arm out at waist level; or 3.) zoom: holding the hands up at shoulder level and moving them out or in (zoom in or out). We believe we have created a free-form gesture UX that is quite easy and pleasurable to use.
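Because the three gestures occupy distinct regions of body space (waist-level lateral motion, waist-level hold, shoulder-level two-handed motion), dispatching them can be simple. The sketch below assumes a hypothetical skeleton-tracking feed; the frame fields and thresholds are invented for illustration and are not the ICS implementation.

```python
# Hypothetical dispatch of the three prototype gestures from
# skeleton-tracking data. Field names and thresholds are assumptions.
from dataclasses import dataclass

@dataclass
class Frame:
    """One frame of (hypothetical) skeleton-tracking data."""
    hand_y: float           # hand height, normalized 0 (waist) to 1 (head)
    hand_dx: float          # lateral hand velocity across the body
    hands_spread_dv: float  # rate of change of distance between hands

SHOULDER = 0.6   # assumed normalized shoulder height
SWIPE_SPEED = 0.5  # assumed minimum lateral speed for a swipe

def classify(frame):
    """Map a tracking frame to one of the three prototype gestures."""
    if frame.hand_y >= SHOULDER:
        # Shoulder-level: hands moving apart or together is a zoom.
        if frame.hands_spread_dv > 0:
            return "zoom_in"
        if frame.hands_spread_dv < 0:
            return "zoom_out"
        return None
    # Waist-level: fast lateral motion is a swipe,
    # a held-out arm with little motion is a fast scroll.
    if abs(frame.hand_dx) > SWIPE_SPEED:
        return "swipe"
    return "fast_scroll"

print(classify(Frame(hand_y=0.1, hand_dx=0.9, hands_spread_dv=0.0)))
# prints "swipe"
```

Keeping the gestures in separate body regions is what lets each one carry a wide tolerance without colliding with the others.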

About the author

Dorothy Shamonsky, Ph.D., is the Director of UI/UX R&D for ICS. Her current focus is Natural User Interfaces (NUIs) and achieving simple, compelling user experiences for devices and services that are part of the Internet of Things.