The iOS autocomplete – UX Planet

If you use iOS, you are probably familiar with this keyboard layout. Note the autocomplete words at the top.

What you probably already know, but may never have really thought about, is this: once you have typed enough letters that autocomplete reaches a certain level of confidence in what you are typing (rightly or wrongly), the word it is guessing is highlighted in white and the text turns blue. We’ll call this the “active” autocomplete, as opposed to the passive one. With active autocomplete, even hitting the space key will fill in the highlighted word.
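The behavior can be sketched roughly as follows. This is a toy model, not Apple’s actual implementation — the confidence threshold and the idea that a single number gates the active state are assumptions purely for illustration:

```python
# Toy sketch of active vs. passive autocomplete. The threshold value
# is hypothetical; iOS does not expose its real confidence model.
ACTIVE_THRESHOLD = 0.8

def handle_space(typed: str, suggestion: str, confidence: float) -> str:
    """Return the committed text when the user hits the space key."""
    if confidence >= ACTIVE_THRESHOLD:
        # Active autocomplete: space commits the highlighted word.
        return suggestion + " "
    # Passive autocomplete: space just ends the word as typed.
    return typed + " "

print(handle_space("snakesk", "snakeskin", 0.93))  # "snakeskin "
print(handle_space("snak", "snakeskin", 0.40))     # "snak "
```

The point of the sketch is simply that the same keypress does two different things depending on a state the user can barely see — which is exactly the design problem discussed below.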

Unfortunately, the above design is not good. There is little to suggest that “snakeskin” is actually highlighted rather than simply part of an alternating color scheme. Truth be told, I didn’t even notice the highlighting until I had used an iPhone for quite some time. More importantly, even if you are intellectually aware of the feature, the message it is trying to convey will still be drowned out by the task of actually typing on the phone.

Your eyes are already focused on the keyboard while you are typing, especially because the touchscreen keyboard provides no tactile indicator of where your fingers are. Looking at the autocomplete words requires diverting your attention from the keyboard to the row above the keyboard. Based on my own cursory research, you use the keyboard in two modes: typing and autocompleting.

When your brain deems it faster to just plug the words into the keyboard and let autocorrect sort out the rest, you are typing.

When your brain knows that you are entering a predictable set of words, or a long and cumbersome word, your attention (and fingers) will move up to the suggested words, in which case you are autocompleting.

In order to switch between the two modes, your brain needs a cue. The cue to switch from typing to autocompleting is usually a learned intuition that, at that particular moment, autocomplete will be more efficient. The cue to switch back to typing is once autocomplete starts throwing up stupid suggestions.

Your brain would far prefer not to constantly ask the question “should I be switching modes?” because it is cognitively expensive for the brain to break pattern and do something different. The aforementioned stupid suggestions further factor into the brain’s reluctance to shift your glance centimeters upward, knowing that it will have wasted cycles and milliseconds to discover that autocomplete has suggested “my boss told me to Saginaw”. In short, the brain knows the cost of lost cycles from switching to autocomplete without result is greater than the savings of time to be had if autocomplete guessed correctly.

Because of the potential for great time and cognitive savings from autocompleting a long string of words, your very efficient (lazy) brain is willing to gamble its attention on the mode switch to autocomplete. The same does not necessarily apply to a single word. If you are typing a word like, say, “autocorrect”, you might be tempted to skip typing all 11 letters. On the other hand, the time saved by autocompleting the last 6 characters might not be worth the cognitive risk that autocomplete guessed wrong, at least not to your brain.
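The gamble described above is really an expected-value calculation. Here is a back-of-the-envelope version — every number (tap time, glance cost, hit rate) is a made-up assumption for illustration, not measured data:

```python
# Crude expected-value model of the mode-switch gamble. All constants
# are invented for illustration; they are not measurements.
def should_switch(chars_remaining: int, tap_ms: float = 80,
                  glance_ms: float = 300, hit_rate: float = 0.5) -> bool:
    """Switch to autocompleting when expected cost beats just typing."""
    typing_cost = chars_remaining * tap_ms
    # If the glance misses, you pay for the glance AND still type the word.
    expected_autocomplete_cost = glance_ms + (1 - hit_rate) * typing_cost
    return expected_autocomplete_cost < typing_cost

print(should_switch(6))   # one word's tail: the gamble doesn't pay
print(should_switch(30))  # a long string of words: it does
```

With these (invented) numbers, autocompleting the last 6 characters of “autocorrect” loses on expectation, while a 30-character phrase wins — which matches the intuition that the brain gambles on phrases but not on single words.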

To save your brain the trouble of making this calculation, the interface needs to send out a signal to your peripheral vision, alerting it to the fact that, yes, the algorithm has your suggestion ready to go and you can make the switch of attention without risk. In order to do that, the signal needs to be strong enough to grab your attention. The iOS active autocomplete design shown above is not strong enough.

If you don’t believe me, just consider two things.

The first is the virtual keyboard on the BlackBerry Z10 (notable for not having a physical keyboard). Note how the autocomplete words are actually placed on the keyboard itself (selected via upward swipes), eliminating the need for two separate attention modes. I can confirm from having owned one that this system worked brilliantly. Part of me wants Apple to simply adopt this system.

The second is the fact that Apple themselves have come to a similar conclusion, and ditched the old design in favor of a less ambiguous one:

The new design creates a clearer emphasis around the central word, leaving no doubt that it is different from the other two. This may be due to the Gestalt principle of Figure and Ground. Whatever the reason, it is not the mess that the old active autocomplete design was.

The next question is, is it good enough?

My hypothesis is that it is not good enough. While the new design may be better at communicating the intellectual fact that a word has been selected for active autocomplete, it is still too subtle to grab someone whose attention is focused on the typing. In fact, while it may be less ambiguous to conscious attention, it is actually visually less prominent: a lower-contrast background, and text that stays the same color. It seems that Apple understands the problem on some level, but failed to actually solve it.

This is the solution:

This design, which is my own, grabs the user’s attention not only by using a bright yellow, but also by popping the selected word outwards — up on the Z-axis, towards the user. People instinctively respond to objects jumping out at them. At the same time, the word moves upwards on the Y-axis as well, away from the keyboard. This keeps it from being too distracting, and also signals that the word is being actively promoted by the system. An added bonus is that the upward Y shift suggests that the user can dismiss the suggestion by dragging the word downwards, back toward the keyboard.

It is here that I have hopefully demonstrated the woeful consequences of Flat design. While Apple’s 2013 interpretation of Flat was particularly awful for its childish appearance, the real problem was not the silliness, but the loss of visual affordances. Flat buttons don’t look like buttons. Layers don’t look like layers. Humans evolved to understand a three-dimensional world, and to make sense of things in three dimensions. By stripping your virtual interface of that third dimension, Flat deprives the human user of cognitive signals they could use to understand your software. This is one more example of Apple being driven more by style than by user-friendliness.

Nonetheless, the point of this article was not Apple bashing, as I do plenty of that in other articles (and will do plenty more in the future). Rather, I want to emphasize the importance of understanding basic human cognition in designing user interfaces. It is a hell of a lot more important than these goofy-ass personas that everyone seems so obsessed with. Human cognition is a universal, and designing around it will ensure a timeless interface that resonates at a deep level for reasons most users won’t even be able to describe.