Uniform vs. Custom UI: Why Consistent Design Doesn’t Matter

The debate over UI standards is as old as the standards themselves: should developers build custom controls and a custom look & feel, or stick to human interface guidelines? The Web accelerated that debate, as developers brought Web interactions into their desktop apps and vice versa; more recently, Apple’s App Store and its own mixing of iOS and Mac standards has further invigorated it.

Let’s get one thing out of the way: creating a great standard experience is a hell of a lot easier than creating a great custom one. Even some of the best custom apps (e.g. Twitter for Mac) fail to handle some key interactions (e.g. distinguishing between an active and an inactive window). Your mockup may look splendid in Photoshop but in sidestepping your platform’s own UI toolkit you’ve assumed the responsibility for all sorts of details (e.g. accessibility). In other words, don’t go down the custom route unless you’re willing to put a lot of effort into making design a differentiator for your product (as Twitter has clearly done).

Anyone who’s worked with me knows I enjoy designing custom controls — widgets tailored to the task at hand. Generally these tasks could be accomplished via some combination of standard UI elements, and the argument against them is often about consistency. In “User Interface Conservatism versus Liberalism,” Adam Engst writes,

…the real problem with UI liberalism is that it reduces the usability of the platform as a whole…The more you use applications in concert—and many of us spend our entire days at our Macs—the more you benefit from the consistent user interfaces designed by UI conservatives. And when applications rely on consistent user interfaces, they become easier to learn as well, which translates directly to the bottom line when we’re talking about productivity applications.

The problem with thinking in terms of consistency is that the focus stays purely on the design, and the user gets lost. “Is what I’m designing consistent with other things we’ve designed (or others have designed)?” is the wrong question to ask.

Instead, the right question is, “Will the user’s current knowledge help them understand how to use what I’m designing?” Current knowledge is the knowledge the user has when they approach the design. It’s the sum of all their previous experiences with relevant products and designs.

As an example, consider Apple scrollbars:

Apple scrollbars on the Mac

iTunes and iPhoto’s custom scrollbars are visually inconsistent with the standard Mac one. Yet I doubt this creates any usability problems because they retain the same layout and a set of core visual cues. They all rely on the same current knowledge.

Easy example, you say: that’s just visual design. What about differing interactions? Encountering a Mac scrollbar for the first time, a Windows user might be confused because the up arrow is at the bottom. Here, the visual design serves as a guide: that up arrow is nearly identical to the one found on Windows. So the user’s current knowledge of Windows allows her to find it after a moment’s hesitation.

But that’s where things get nuanced. That hesitation is fine if she uses a Mac occasionally. But if she’s constantly switching back and forth she’s also repeatedly re-training herself, constantly incurring that cognitive load. And however inconvenient that might be, at least she can recognize two different contexts; it would be far worse for a Windows app to use Mac-like scrollbars. Here, consistent placement matters because it’s how we achieve consistent expectations.

It gets worse: suppose your app uses custom code to generate a snazzy scrollbar. You put both arrows at the end since that’s the Mac standard. That Windows user changes her Mac’s system preference so the scroll arrow is at the top. Every scrollbar except your app’s changes to obey the new setting. Yet another example of why custom UI is difficult.

Today, nearly every mouse has a scroll wheel, and many trackpads support a two-finger scroll gesture. I question whether most people use the scroll arrows at all. Twitter agrees:

Twitter for Mac's scrollbar

So that’s bad, right? Well, not necessarily. If most people don’t use scroll arrows, few will miss them (or even notice they’re gone); and given Twitter’s more tech-savvy audience, that number is even smaller. For those who do, a complete lack of arrows is probably better than non-standard ones since it’s a clearer difference. The visual design helps too: the scroll thumb retains its distinctive shape to prompt user expectations, but its appearance is notably different from any of Apple’s, providing a cue that this is a different sort of scrollbar as the user switches between apps. The lack of a scroll track enhances that further. It’s a risk, but one with justification.

Design is hard. The more of it you take on, the harder it becomes. There’s nothing wrong with custom UI when it’s done well — that is, when you design for current knowledge — but that takes time, effort, and probably testing. Apple, Microsoft, and a number of excellent Web UI frameworks have done that work for us, allowing us to create superb experiences without worrying about the details of how a dropdown works. For many developers that will be good enough; better to focus on solving new problems, perhaps with the occasional custom control when a novel task demands it. If you do create a fully custom UI, go in with your eyes open: get a phenomenal, detail-oriented designer; accept that the UI will require significant effort; and leave time for usability testing. Recognize the trade-off: you’ll get a distinctive brand and carefully-crafted emotional impact, but you’ll invest a lot of resources to get there.

Today's and tomorrow's user interfaces all have one thing in common: text definitions
and a user interface framework that can make use of those definitions.
GTK, Qt, and Android, at the least, all support XML user interface definitions.
These definitions are modifiable outside the scope of the traditional software upgrade cycle.
That is to say that they can be changed, and thus the user interface changed, without recompiling the application.
What this means for designers, and more pertinently for users, is that a user interface's design and a user's preferences can more readily be applied across multiple applications and platforms, at more frequent intervals, and in accordance with the user's decisions.
This latter point cannot be stressed enough, since it alleviates the cognitive load, not to mention the irritation, that some designers' "best thing since sliced bread" UI designs place upon others.
That any of us have had to put up with this at all is simply a reflection of the "gee, I'm so clever and smart" mentality, a.k.a. egotism, of a large part of the computer's originating environment.
Thankfully some people, myself included, set out to design user interface frameworks and systems that eliminate this mentality from our systems, along with its manifestations and ensuing encumbrances.
Alas the work, in my case at least, has yet to be completed, and so we have, in 2012, this article.
:)
Steve Paesani

Trackbacks

How important is design (visual & interaction) consistency for a web app?…

I agree completely with Jared. Without realizing it we often use “consistency” as a proxy for user expectations; but in doing so we lose important nuances. In addressing expectations rather than consistency we can often both meet user expectations an…