Fast vs. Intuitive: a challenge in interface design and why the iPad is only the first step

Computer interface designers of the world, I lay a quandary at your feet: in the same interface, is it possible to provide newcomers with intuitive operation while not hindering experienced users’ desire for speed and rapid productivity? It seems that, with most existing computing interfaces, we find two continua at play: one running from intuitive to unintuitive, the other from slow to fast.

I’d argue that today these two continua seem linked. Intuitive devices — devices targeting the general populace — are likely optimized for those with fewer existing (computing) mental models. Those wanting extremely fast usage are likely the power users, having more robust mental models built from years of computing experience. So the big dilemma is: is it possible to have a computing interface that is fast AND intuitive for ALL users while also not requiring a great many (computing) mental models?

The above musing stems from a conversation I had with a colleague about whether the iPad, with its elegant touch screen and simplified operating system, was truly that much more helpful for newcomers than a well-explained PC. I concede that the iPad is somewhat more helpful — its integrated screen that also acts as (or replaces) a keyboard and mouse is a great first step — but I don’t think it’s as much of an ultimate game-changer as we’d like to think. Why? Once you get beyond the novelty of the screen, users still need to understand tap, touch, and swipe conventions. They must mentally internalize what these mean and why/when to employ them. Sure, the touch and gestural conventions are better than mouse conventions in many cases, but I’d argue they’re hardly intuitive for a complete newcomer. (Watch the video of a news reporter showing an iPad to residents of an assisted living facility and note that the reporter is actually the one operating the device. If the device were so intuitive, the seniors themselves would have picked it up and begun computing.)

For the sake of discussion, ponder a few examples of activities that fall at different ends of the spectrum depending on how an interface is optimized. How do you move something in a computer? With a mouse, it’s drag and drop. (In The Ultimate PC Primer, I explain the drag and drop computing metaphor by telling a real-world story and providing a real-world illustration.) In the case of moving scroll bars or relocating files, it’s literally “click and drag.” Now consider the touch interface. There is no “click,” so what’s the substitute? “Touch and swipe” seems to be the convention we’ve been given. But is touching and swiping any more intuitive than click and drag? Or is it merely faster? I’d argue that, with nothing more than real physical world experience, a newcomer isn’t going to understand touch and swipe any better than click and drag. It’s a convention which seems optimized for those with existing mental models of the virtual environment of a computer.

What might be more organic? How about pinching an on-screen item and sliding/dragging it? Of course, without an interface/screen that can simulate a sense of depth or texture (haptic technology), this might seem a little far-fetched. But isn’t one of the chief difficulties of explaining computing to a newcomer that there is no sense of direct contact with the virtual environment? With the ability to feel something moving underneath your fingers, less mental model is required to grasp that the gesture is actually accomplishing what the user desires — to move something from point A to B. In short, it would be more intuitive.

But now, to play counterpoint, would that be faster? Probably not. Consider another example: opening, or navigating deeper within, a file folder. Touch and swipe could make this very fast to achieve, but would a newcomer understand that “opening” the virtual folder he/she sees on the screen is accomplished by such a gesture? A more “real-world” motion might be to pinch the thumb and forefinger together to pry open the folder. (Again, with haptics, this might be feasible.) But how many existing users would want to have to pinch open all their folders? Honestly, I wouldn’t. So to get back to the diagram, there can often be a faster interface method — a shortcut — but usually at the expense of existing knowledge — knowledge that has nothing to do with the physical real-world counterpart. It’s an interface convention invented purely for “computing.”

And that’s the dilemma. I think the great challenge here is that we’re still physical beings that reside in a physical world. Unless the sense of physical elements can be brought to the interface, we’re left with needing metaphorical mental models to relate to the virtual space inside the computer. Will that ever change? I suppose that depends on what computer interface designers continue to develop for us “normal people” to use. Scott Berkun thinks The Future of UI will be boring and even notes that “the core metaphors computer interfaces are based on haven’t changed much in 300 years.” (I’ve previously noted something similar in my arguments here on Explain Technology.)

In light of that, and getting back to the iPad portion of this post, a deeper question I might pose to designers is: what is the objective of touch-device interface design? Is it, like the mouse-based GUIs, to recreate familiar real-world elements in electronic form through visual icons and metaphors but sans mouse? Or is it to make something beyond that… something new, something even faster for those who want a way to access information faster than the real-world equivalent? For the latter, there are ways of streamlining actions and access, but usually only if certain interface methods and conventions are pre-understood by the user — mental models. So in the latter case, the new interaction methods may be far from intuitive for new users — what Ted Nelson calls “retroactively obvious.”

Which is better? Where’s the balance? And most importantly, who (and with what technology) will take us the next step toward a truly intuitive computer interface? The iPad took the first step. What do you think is the next near-term step?