Tag Archives: user interaction

Creating good user experiences for apps inside messaging platforms poses a relatively new design challenge. When moving from desktop web to mobile interfaces, developers have had to rethink interaction design to work around a constrained screen size, a new set of input gestures and unreliable network connections.

Like our tiny touchscreens, messaging platforms also shake up the types of input that apps can accept, change designers’ canvas size, and demand a different set of assumptions about how users communicate.

Remember the days when hovering and clicking with a mouse were the main triggers for interacting with a site or app? Those days are gone. When Apple introduced the iPhone, multi-touch technology became mainstream and users learned that they could not only point and tap on the interface, but also pinch, spread, and swipe. Gestures are the new clicks.

The rise of touch and gesture-driven devices has dramatically changed the way we think about interaction. Gestures are more than merely entertaining; they are useful and feel familiar. Today, the success of a mobile app depends significantly on how well gestures are implemented into the user experience. Even Adobe has introduced a new design and wireframing app called Experience Design CC (Adobe XD) that lets you prototype everything from simple wireframes to multi-screen experiences.

With the tools getting more user-friendly and affordable, virtual reality (VR) development is easier to get involved in than ever before. Our team at Clearbridge Mobile recently jumped on the opportunity to develop immersive VR content for the Samsung Gear VR, using Samsung’s 360 camera.

The result is ClearVR, a mobile application demo that enables users to explore the features, pricing, interiors and exteriors of listed vehicles. Developing this demo project gave us a better understanding of VR development for our future projects, including scaling, stereoscopic display and motion-tracking practices. This article is an introductory guide to developing for VR, with the lessons we learned along the way.

As UX professionals, we play a key role in raising the bar for customer experiences. Simple attention to detail is often what signals to the customer that we’re thinking about them. In the digital space, we focus on user interactions within applications, devices and processes.

With the ever-increasing computing power of desktops, browser sophistication and use of native apps, every day we learn of new ways to push the limits of what defines a well-crafted UI. When used correctly, motion can be a key utility in helping your users achieve their goals.

A user’s account on a website is like a house. The password is the key, and logging in is like walking through the front door. When a user can’t remember their password, it’s like losing their keys. When a user’s account is hacked, it’s like their house is getting broken into.

Nearly half of Americans (47%) have had their account hacked in the last year alone. Are web designers and developers taking enough measures to prevent these problems? Or do we need to rethink passwords?

According to Ian Carrington, Google’s mobile and social advertising sales director, speaking at Mobile Marketing Live back in 2012, more people in the world have access to a smartphone than a toothbrush.

With that in mind, it’s perhaps not very surprising that there’s no shortage of information about how people interact with websites on mobile. From specific usability testing and scrutiny of Google Analytics data to more generalized but larger-scale projects, we can quite easily gain access to statistics that illustrate how users interact with our websites.

As creators of the web, we bring innovative, well-designed interfaces to life. We find satisfaction in improving our craft with each design or line of code. But this push to elevate our skills can be self-serving: Does a new CSS framework or JavaScript abstraction pattern serve our users or us as developers?

If a framework encourages best practices in development while also improving our workflow, it might serve both our users’ needs and ours as developers. If it encourages best practices in accessibility alongside other areas, like performance, then it has potential to improve the state of the web.

Despite our pursuit to do a better job every day, sometimes we forget about accessibility, the practice of designing and developing in a way that’s inclusive of people with disabilities. We have the power to improve lives through technology — we should use our passion for the craft to build a more accessible web.

These days, we build a lot of client-rendered web applications, also known as single-page apps, JavaScript MVCs and MV-whatever. AngularJS, React, Ember, Backbone.js, Spine: You may have used or seen one of these JavaScript frameworks in a recent project. Common user experience-related characteristics include asynchronous postbacks, animated page transitions, and dynamic UI filtering. With frameworks like these, creating a poor user experience for people with disabilities is, sadly, pretty easy. Fortunately, we can employ best practices to make things better.

In this article, we will explore techniques for building accessible client-rendered web applications, making our jobs as web creators even more worthwhile.

Semantics

Front-end JavaScript frameworks make it easy for us to create and consume custom HTML tags like <pizza-button>, which you’ll see in an example later on. React, AngularJS and Ember enable us to attach behavior to made-up tags with no default semantics, using JavaScript and CSS. We can even use Web Components now, a set of new standards holding both the promise of extensibility and a challenge to us as developers. With this much flexibility, it’s critical for users of assistive technologies such as screen readers that we use semantics to communicate what’s happening without relying on a visual experience.

Consider a common form control: A checkbox opting you out of marketing email is pretty significant to the user experience. If it isn’t announced as “Subscribe checked check box” in a screen reader, you might have no idea you’d need to uncheck it to opt out of the subscription. In client-side web apps, it’s possible to construct a form model from user input and post JSON to a server regardless of how we mark it up — possibly even without a <form> tag. With this freedom, knowing how to create accessible forms is important.

To keep our friends with screen readers from opting in to unwanted email, we should:

use native inputs to easily announce their role (purpose) and state (checked or unchecked);

provide an accessible name using a <label> with id and for attribute pairing, aria-label on the input, or aria-labelledby pointing to another element’s id.

Native Checkbox With Label
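The native approach might look like this (a minimal sketch; the field name and copy are illustrative):

```html
<!-- Native checkbox: role and state are announced for free,
     and the label text gives it an accessible name. -->
<form>
  <input type="checkbox" id="subscribe" name="subscribe" checked>
  <label for="subscribe">Subscribe to marketing email</label>
</form>
```

A screen reader can announce this as something like “Subscribe to marketing email, checked, check box” with no extra work on our part.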

If native inputs can’t be used (with good reason), create custom checkboxes with role=checkbox, aria-checked, aria-disabled and aria-required, and wire up keyboard events. See the W3C’s “Using WAI-ARIA in HTML.”

Custom Checkbox With ARIA
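A custom version, sketched from the ARIA requirements listed above (class and id names are illustrative, and keyboard handling still has to be wired up in JavaScript):

```html
<!-- Custom checkbox: we now own the role, the state and the keyboard support. -->
<div class="checkbox"
     role="checkbox"
     aria-checked="true"
     aria-labelledby="subscribe-label"
     tabindex="0">
</div>
<span id="subscribe-label">Subscribe to marketing email</span>
```

Every attribute here replaces behavior the native input provided for free, and aria-checked must be updated in script each time the state changes.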

Form inputs are just one example of the use of semantic HTML and ARIA attributes to communicate the purpose of something — other important considerations include headings and page structure, buttons, anchors, lists and more. ARIA, or Accessible Rich Internet Applications, exists to fill in gaps where accessibility support for HTML falls short (in theory, it can also be used for XML or SVG). As you can see from the checkbox example, ARIA requirements quickly pile up when you start writing custom elements. Native inputs, buttons and other semantic elements provide keyboard and accessibility support for free. The moment you create a custom element and bolt ARIA attributes onto it, you become responsible for managing the role and state of that element.

Although ARIA is great and capable of many things, understanding and using it is a lot of work. It also doesn’t have the broadest support. Take Dragon NaturallySpeaking — this assistive technology, which people use all the time to make their life easier, is just starting to gain ARIA support. Were I a browser implementer, I’d focus on native element support first, too — so it makes sense that ARIA might be added later. For this reason, use native elements, and you won’t often need to use ARIA roles or states (aria-checked, aria-disabled, aria-required, etc.). If you must create custom controls, read up on ARIA to learn the expected keyboard behavior and how to use attributes correctly.

Web Components and Accessibility

An important topic in a discussion on accessibility and semantics is Web Components, a set of new standards landing in browsers that enable us to natively create reusable HTML widgets. Because Web Components are still so new, the syntax is majorly in flux. In December 2014, Mozilla said it wouldn’t support HTML imports, a seemingly obvious way to distribute new components; so, for now, that technology is natively available in Chrome and Opera only. Additionally, up for debate is the syntax for extending native elements (see the discussion about is="" syntax), along with how rigid the shadow DOM boundary should be. Despite these changes, here are some tips for writing semantic Web Components:

Small components are more reusable and easier to manage for any necessary semantics.

Use native elements within Web Components to gain behavior for free.

Element IDs within the shadow DOM do not have the same scope as the host document.

The same non-Web Component accessibility guidelines apply.

For more information on Web Components and accessibility, have a look at these articles:

Interactivity

Native elements such as buttons and inputs come prepackaged with events and properties that work easily with keyboards and assistive technologies. Leveraging these features means less work for us. However, given how easy JavaScript frameworks and CSS make it to create custom elements, such as <pizza-button>, we might have to do more work to deliver pizza from the keyboard if we choose to mark it up as a new element. For keyboard support, custom HTML tags need:

a tabindex attribute, usually tabindex="0", so that keyboard users can reach them;

a keyboard event such as keypress or keydown to trigger callback functions.
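As a rough sketch (plain JavaScript; the element and handler names are illustrative, not from any framework), wiring up a custom <pizza-button> might look like this:

```javascript
// Decide whether a key event should activate the control, mirroring
// native <button> behavior: both Enter and Space trigger it.
function shouldActivate(event) {
  return event.key === "Enter" || event.key === " ";
}

// Browser-only wiring: the element needs tabindex="0" to be focusable
// at all, plus a keydown listener to trigger its action.
function makeKeyboardAccessible(el, onActivate) {
  el.setAttribute("tabindex", "0");
  el.addEventListener("keydown", function (event) {
    if (shouldActivate(event)) {
      event.preventDefault(); // keep Space from scrolling the page
      onActivate(event);
    }
  });
}
```

Native buttons do all of this (and more, such as correct behavior for disabled states) without a single line of script.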

Focus Management

Closely related to interactivity but serving a slightly different purpose is focus management. The term “client-rendered” refers partly to a single-page browsing experience where routing is handled with JavaScript and there is no server-side page refresh. Portions of views could update the URL and replace part or all of the DOM, including where the user’s keyboard is currently focused. When this happens, focus is easily lost, creating a pretty unusable experience for people who rely on a keyboard or screen reader.

Imagine sorting a list with your keyboard’s arrow keys. If the sorting action rebuilds the DOM, then the element that you’re using will be rerendered, losing focus in the process. Unless focus is deliberately sent back to the element that was in use, you’d lose your place and have to tab all the way down to the list from the top of the page again. You might just leave the website at that point. Was it an app you needed to use for work or to find an apartment? That could be a problem.

In client-rendered frameworks, we are responsible for ensuring that focus is not lost when rerendering the DOM. The easy way to test this is to use your keyboard. If you’re focused on an item and it gets rerendered, do you bang your keyboard against the desk and start over at the top of the page or gracefully continue on your way? Here is one focus-management technique from Distiller using Spine, where focus is sent back into relevant content after rendering:
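The pattern boils down to two steps, reconstructed here as a sketch in plain JavaScript (the original Distiller helper is CoffeeScript and framework-specific; names here are illustrative):

```javascript
// Reconstructed sketch of the focus-management pattern described below.

let lastFocused = null;

// Step 1: track the most recently focused element. In the browser this
// would be a delegated "focusin" listener on document.body.
function rememberFocus(el) {
  lastFocused = el;
}

// Step 2: after the framework's "rendered" event, decide where focus
// should go. The DOM lookups are injected as functions so the logic
// can be shown (and tested) without a browser.
function restoreFocusTarget(isStillInDom, findByFocusId) {
  if (!lastFocused) return null;
  if (isStillInDom(lastFocused)) return lastFocused; // identical node survived
  const id = lastFocused.focusId;                    // stands in for el.dataset.focusId
  return id ? findByFocusId(id) : null;              // fall back to data-focus-id match
}
```

In a real app, the element returned by the second step would simply have .focus() called on it after rendering completes.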

In this helper class, JavaScript (implemented in CoffeeScript) binds a focusin listener to document.body that checks anytime an element is focused, using event delegation, and it stores a reference to that focused element. The helper class also subscribes to a Spine rendered event, tapping into client-side rendering so that it can gracefully handle focus. If an element was focused before the rendering happened, it can restore focus in one of two ways. If the old node is identical to a new one somewhere in the DOM, then focus is automatically sent to it. If the node isn’t identical but has a data-focus-id attribute on it, then it looks up that id’s value and sends focus to that element instead. This second method is useful when elements are no longer identical because their text has changed (for example, “item 1 of 5” becoming “item 2 of 5”).

Each JavaScript MV-whatever framework will require a slightly different approach to focus management. Unfortunately, most of them won’t handle focus for you, because it’s hard for a framework to know what should be focused upon rerendering. By testing rendering transitions with your keyboard and making sure focus is not dropped, you’ll be empowered to add support to your application. If this sounds daunting, inquire in your framework’s support community about how focus management is typically handled (see React’s GitHub repo for an example). There are people who can help!

Notifying The User

There is a debate about whether client-side frameworks are actually good for users, and plenty of people have an opinion on them. Clearly, most client-rendered app frameworks could improve the user experience by providing easy asynchronous UI filtering, form validation and live content updates. To make these dynamic updates more inclusive, developers should also update users of assistive technologies when something is happening away from their keyboard focus.

Imagine a scenario: You’re typing in an autocomplete widget and a list pops up, filtering options as you type. Pressing the down arrow key cycles through the available options, one by one. One technique to announce these selections would be to append messages to an ARIA live region, a mechanism that screen readers can use to subscribe to changes in the DOM. As long as the live region exists when the element is rendered, any text appended to it with JavaScript will be announced (meaning you can’t bind aria-live to an element and add the first message to it at the same time). This is essentially how Angular Material’s autocomplete handles dynamic screen-reader updates:
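A simplified sketch of that structure (markup shortened and class names illustrative, not the actual Angular Material source):

```html
<md-autocomplete>
  <input type="text" aria-label="Pizza topping">
</md-autocomplete>

<!-- Results list, rebuilt as the user types -->
<ul>
  <li>Pepperoni</li>
  <li>Peppers</li>
</ul>

<!-- role="alert" makes this a live region: text appended here is
     announced by screen readers without moving keyboard focus. -->
<p class="aria-status" role="alert" aria-live="assertive"></p>
```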

In the simplified code above (the full directive and related controller source are on GitHub), when a user types in the md-autocomplete text input, list items for results are added to a neighboring unordered list. Another neighboring element, aria-status, gets its aria-live functionality from the alert role. When results appear, a message is appended to aria-status announcing the number of items, “There is one match” or “There are four matches,” depending on the number of options. When a user arrows through the list, that item’s text is also appended to aria-status, announcing the currently highlighted item without the user having to move focus from the input. By curating the list of messages sent to an ARIA live region, we can implement an inclusive design that goes far beyond the visual. Similar regions can be used to validate forms.
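The announcement messages themselves are just curated strings. A minimal sketch of building them (function name illustrative; digits used instead of spelled-out numbers):

```javascript
// Build the status message appended to the live region whenever the
// filtered results change.
function matchCountMessage(count) {
  if (count === 0) return "There are no matches";
  if (count === 1) return "There is 1 match";
  return "There are " + count + " matches";
}

// In the browser, the message would then be appended to the live region,
// e.g.: document.querySelector(".aria-status").textContent = matchCountMessage(n);
```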

Conclusion

So far, we’ve talked about accessibility with screen readers and keyboards. Also consider readability: This includes color contrast, readable fonts and obvious interactions. In client-rendered applications, all of the typical web accessibility principles apply, in addition to the specific ones outlined above. The resources listed below will help you incorporate accessibility in your current or next project.

It is up to us as developers and designers to ensure that everyone can use our web applications. By knowing what makes an accessible user experience, we can serve a lot more people, and possibly even make their lives better. We need to remember that client-rendered frameworks aren’t always the right tool for the job. There are plenty of legitimate use cases for them, hence their popularity. There are definitely drawbacks to rendering everything on the client. However, even as solutions for seamless server- and client-side rendering improve over time, these same accessibility principles of focus management, semantics and alerting the user will remain true, and they will enable more people to use your apps. Isn’t it cool that we can use our craft to help people through technology?

Imagine two futures of mobile technology: in one, we are distracted away from our real-world experiences, increasingly focused on technology and missing out on what is going on around us; in the other, technology enhances our life experiences by providing a needed boost at just the right time.

The first reality is with us already. When was the last time you enjoyed a meal with friends without it being interrupted by people paying attention to their smartphones instead of you? How many times have you had to watch out for pedestrians who are walking with their faces buried in a device, oblivious to their surroundings?

The second reality could be our future – it just requires a different design approach. We have to shift our design focus from technology to the world around us. As smartwatches and wearables become more popular, we need to design experiences that are still engaging, but less distracting.

Lessons Learned From A Real-Life Project

We create a future of excessive distraction by treating our devices as small PCs. Cramming too much onto a small screen and demanding frequent attention from a device strapped to your body mean you can’t get away from the constant buzzing and beeping right up against your skin. Long, immersive workflows that are easily handled on a larger device become unbearable on a device with less screen area and physical navigation space.

I noticed this on my first smartwatch project. By designing an application based on our experience with mobile phones, we accidentally created something intrusive, irritating and distracting. The inputs and workflows demanded so much attention, and were so involved, that people had to stop moving in order to view notifications or interact with the device. Our biggest mistake was using the vibration motor for all notifications: if you had a lot of notifications, your smartwatch would buzz constantly. You couldn’t get away from it, and people would actually get angry at the app.

How The Real World Inspired Our Best Approach

In a meeting, I noticed the lead developer glancing down at the smartwatch on his wrist from time to time. As he glanced down, he was still engaged in the conversation. I wasn’t distracted by his behavior. He had configured his smartwatch to only notify him if he got communications from his family, boss or other important people. Once in a while, he interacted with the device for a split second, and continued on with our conversation. Although he was distracted by the device, it didn’t demand his complete attention.

I was blown away by how different his experience was from mine with my smartphone. If my phone buzzes in my pocket or my bag, it completely distracts me: I stop focusing on what is going on around me to attend to the device. I reach into my pocket, pull out the device, unlock the screen, navigate to the message, decide if it’s important, and then put the device back. Now, where were we? Even if I optimize my device settings to smooth out some of this interaction, it takes me much longer to perform the same task on my smartphone because of the different form factor.

This meeting transformed our approach to developing our app for the smartwatch. Instead of creating an immersive device experience that demanded the user’s attention, we decided to create something much more subtle. In fact, we moved away from focusing on application and web development experiences to focusing on application notifications.

Designing With A Different Focus In Mind

Instead of cramming everything we could think of onto these smaller devices, we aimed for a lightweight extension of our digital experience into the real world. You could get full control on a PC, but on the smartwatch we provided notifications, reminders and short summaries. If something was important, and could be done easily on a smartwatch, we also provided minimal control over that digital experience. If you needed to do more, you could access the system on a smartphone or a PC. We had a theory that we could replicate about 60% of PC functionality on a smartphone, and another 20% of that on a smartwatch.

Each kind of technology should provide a different window onto our virtual data and services, depending on its technical capabilities and what the user is doing. By providing just the right information at just the right time, we can get back to focusing on the real world more quickly. We stopped trying to display, direct and control what our end users could do with an app, and relied on their brains and imaginations more. In fact, when we gave them more control, with information in context to help solve the problem they had right then and there, users seemed to appreciate it.

Design To Enhance Real-Life Experiences

After the initial excitement of buying a device wears off, you usually discover that apps really don’t solve the problems you have as you are on the move. When you talk to others about the device, you find it difficult to explain why you even own and use it other than as a geeky novelty.

Now, imagine an app that reminds you of your meeting location because it can tell you are on the wrong floor. Or one that tells you the daily specials when you walk into a coffee shop and also helps you pay. Imagine an app that alerts you to a safety brief as you head towards a work site, or another app that alerts you when you are getting lost in an unfamiliar city. These ideas may seem a bit far off, but they are the sorts of things smartwatches and similar small-screen devices could really help with. As Josh Clark says, these kinds of experiences have the potential to amplify our humanity.

How is this different from a smartphone? A smartphone demands your complete attention, which interrupts your real-world activities. If your smartwatch alerts you to a new text or email, you can casually glance at your wrist, process the information, and continue on with what you were doing. This is more subtle and familiar behavior borrowed from traditional wristwatches, so it is socially acceptable. In a meeting, constantly checking your smartphone is much more visible, disruptive, irritating and perceived as disrespectful. If you glance at your wrist once in a while, that is fine.

It’s important to remember that all of these devices interrupt our lives in some way. I analyze any interruption in our app designs to see if it has a positive effect, a potentially negative effect, or a neutral effect on what the user is doing at the time. You can actually do amazing things with a positive interruption. But you have to be ruthless about what features you implement. The Pebble smartwatch design guide talks about “tiny moments of awesome” that you experience as you are out in the real world. What will your device provide?

Keep The Human In Mind

Our first smartwatch app prototype was a disaster. It was hard to use, didn’t make proper use of the user interface, and when it was tested in the real world, with real-life scenarios, it was downright annoying. Under certain conditions, it would vibrate and buzz, light up the screen and grab your attention needlessly and constantly. People hated it. The development team was ready to dump the whole app and not support smartwatches at all because of the negative testing experience. It is one thing to have a mobile device buzz in your pocket or hand. It is a completely different thing to have something buzzing away that is attached to you and right up against your skin. People didn’t just get annoyed, they got really angry, really quickly – because you can’t escape easily.

Design For The Senses

I knew we had messed up, but I wasn’t sure exactly why. I talked to Douglas Hagedorn, the founder and CEO of Tactalis, a company developing a tactile computer interface for people who are sight-impaired. Doug said that it is incredibly important to understand that different parts of the body have different levels of sensitivity. A vibration against your leg in your trouser pocket might be a mild annoyance, but it could be incredibly irritating if the device vibrates the same way against your wrist. It could be completely unbearable if it is touching your neck (necklace wearable) or on your finger (ring wearable).

Doug also advised me to take more than one sense into account. He mentioned driving a car as an example. If all you do is provide a visual simulation of driving a car, it doesn’t feel right to your body, because driving involves several senses at once. For touch, there is the sensation of sitting in a seat, with a hand on the steering wheel and a hand on the gear shifter, as well as pedals beneath your feet. There are also sensations of movement and sound. Together, these provide the experience of driving a car.

With a smartwatch or wearable, depending only on one sense won’t help make the experience immersive and real. Doug advised using different notification features on the devices to signify different things. Design so that physical vibrations are for one type of interaction and a screen glow is used for another. That way the user observes a blend of virtual experiences similarly to how they experience the real world.

Understand Context

Because these devices are attached to us, they constantly move, and are looked at and interacted with at awkward angles. Users must be able to read whatever you put on the screen and interact easily while moving. When moving, it is far more difficult to read the screen and provide input. When sitting down, the device and your body are more stable, and we can tolerate far more device interaction. Ask critically whether each screen and interaction still works for a user who is on the move.

Understand Emotions

Our emotions vary depending on experiences and contexts, which can be extremely intense and intimate, or bland and public. Our emotional state at a particular point in time has an enormous impact on what we expect from technology. If we are fearful or anxious and in a rush, we have far less patience for an awkward user experience or slow performance. If we are happy or energetic, we will have more patience with areas where the app experience might be weaker.

Since these devices are taken with us wherever we go, they are used in all sorts of conditions and situations. We have no control over people’s emotions so we need to be aware of the full range and make sure our app supports them. It’s also important to provide user control to turn off or mute notifications if they are inappropriate at that time. When people have no control over something that is bothering them, negative emotions can intensify quickly.

Spend time on user research and create personas to help you understand your target user.

Create impact stories for core features – a happy ending story, a sad ending story, and an unresolved story.

Also create storyboards (see Figure 2) to demonstrate the fusion of your virtual solution with the real world.

We usually spend more time on these efforts than on the visual design because we can incorporate context, emotions and error conditions early on. We can use these dimensions to analyze our features and remove those that don’t make sense once they meet the real world.

It is incredibly important to test away from the development lab, out of your building. It is vital to try things out in the real world because it has very different conditions to a development lab. For each scenario, also simulate different conditions that cause different reactions and make them realistic:

Simulate stress by setting impossible timelines on a task using the device.

Simulate fear by threatening a loss if the task isn’t completed properly.

Simulate happiness by rewarding warmly.

Weather conditions have an effect as well. I am far less patient with application performance when it is cold or very hot, and my fingers don’t work as well on a touchscreen in either of those situations. As devices will be used in all weathers, with all kinds of emotions and time pressure, simulating these conditions when testing your designs is eye-opening.

Minimize Interruptions

When we do need to distract people, we should make the notifications high-quality. As we design workflows, screen designs and user interactions, we need to treat them as secondary to the real world so we can enhance what is going on around people rather than detracting from their day-to-day lives.

Create apps for notifications and lightweight remote control, focusing on an experience that relies on quick information gathering and the odd adjustment on the fly. Users stop, read a message, interact easily and quickly, and then move on. They spend only seconds in the app at any given time, rather than minutes.

The frequency of notifications should be minimal so the device doesn’t constantly nag and irritate the wearer. Allow the wearer to configure timing and types of notifications and to easily disable them when needed. During a client consultation it might be completely inappropriate to get notifications, whereas it might be fine while commuting home. Also provide users with the final say in how and when they are notified. A vibration and a screen glow is fine in some contexts, but in others, just a screen glow will suffice since it won’t disturb others.

Design Elegant And Minimalistic Visual Experiences

One of my favorite stories of minimalism in a portable device design is from the PalmPilot project. It’s said that the founder of Palm, Jeff Hawkins, walked around with a carved piece of wood that represented the PalmPilot prototype. Any new feature had to be laid out physically on the block of wood, and if there wasn’t room, they had to decide what to do. Could the feature be made smaller? If not, what other feature had to be cut from the design? They knew that every pixel counted. We need to be just as careful and demanding in our wearable app decisions.

Since these devices have small screens or no screens, there is a limit to the information that is displayed. For example, prioritize to show only the most important information needed at that moment. Work on summaries and synthesizing information to provide just enough. Use a newspaper headline rather than a paragraph.

Small Screens

Screens on wearables are very small and the resolutions can feel tiny. These devices also come in all shapes and (small) sizes. Beyond various rectangular combinations, some smartwatch and wearable screens are round. It’s important to design for the resolution of the device as well, and these can vary widely from device to device. Some current examples are: 128×128px, 144×168px, 220×176px, 272×340px, 312×390px, 320×290px, and 320×320px.

Screen resolutions on all devices are increasing, so this is something to keep on top of as new devices are released. If you are designing for different screen sizes, it is probably useful to focus on aspect ratios, since this can reduce your design efforts if different sizes share the same aspect ratio.
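To make that concrete, here is a throwaway sketch (not from any design tool) that reduces the resolutions listed above to their aspect ratios:

```javascript
// Reduce width:height by the greatest common divisor so that screens
// sharing an aspect ratio become obvious at a glance.
function gcd(a, b) {
  return b === 0 ? a : gcd(b, a % b);
}

function aspectRatio(width, height) {
  const d = gcd(width, height);
  return width / d + ":" + height / d;
}
```

Of the sizes listed above, 272×340 and 312×390 both reduce to 4:5, so a single 4:5 layout could target both devices.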

When working on responsive websites, you may encounter resolutions as high as 2,880×1,800px on PC displays, down to 480×320px on a small smartphone. When we designed for wearables we believed we could simply shrink the features and visual design further. This was a huge mistake, so we started over from scratch.

We decided to sketch our ideas on paper prior to building a prototype app. This helped tremendously because we were able to analyze designs and simulate user interactions before putting a lot of effort into coding. It was difficult to reach our app ambitions with such a tiny screen. A lot of features were cut, and it was painful at first, but we got the hang of it eventually.

No Screens

Many wearables have only a minimal screen, reminiscent of an old LCD clock radio, with UIs limited to number shapes, a small amount of text and little else. Others have no screen at all, relying on vibration motors and blinking lights to get people’s attention.

App engagement with no-screen devices occurs mostly in our heads, aside from the odd alert or alarm through a vibration or blinking light. When devices are synced, a corresponding larger screen offers more detail. This multiscreen experience reinforces the narrative while the user is away from larger screens and wearing only the device. It is more of a service-based approach than a standalone app approach: user data is stored externally (in the cloud), while display, interaction and utility differ from device to device. The strong narrative reinforced on higher-fidelity devices helps it persist across device types. This broader view of user-generated data also encourages self-discipline, a sense of completion or accomplishment, competition, and a whole host of feelings and emotions that exist outside of the actual technology experience.

Design Aesthetics

Design aesthetics are incredibly important because wearables extend a user’s personal image. Anything that we put on the screen should also be visually pleasing because it will be seen not only by the wearer but those around them. Minimalist designs are therefore ideal for smartwatches and wearables. Make good use of formatting and the limited whitespace. Use large fonts and objects that can be seen and interacted with while on the move. If you can, use a bit of color to grab attention and create visual interest.

If you work in the tech industry, it’s easy to forget that older people exist. Most tech workers are really young, so it’s easy to see why most technology is designed for young people. But consider this: By 2030, around 19% of people in the US will be over 65. Doesn’t sound like a lot? Well, it happens to be about the same as the number of people in the US who own an iPhone today. Which of these two groups do you think Silicon Valley spends more time thinking about?

This seems unfortunate when you consider all of the things technology has to offer older people. A great example is Speaking Exchange, an initiative that connects retirees in the US with kids who are learning English in Brazil. Check out the video below, but beware — it’s a tear-jerker.

While the ageing process is different for everyone, we all go through some fundamental changes. Not all of them are what you’d expect. For example, despite declining health, older people tend to be significantly happier and better at appreciating what they have.

But ageing makes some things harder as well, and one of those things is using technology. If you’re designing technology for older people, below are seven key things you need to know.

(How old is old? It depends. While I’ve deliberately avoided trying to define such an amorphous group using chronological boundaries, it’s safe to assume that each of the following issues becomes increasingly significant after 65 years of age.)

Vision And Hearing

From the age of about 40, the lens of the eye begins to harden, causing a condition called “presbyopia.” This is a normal part of ageing that makes it increasingly difficult to read text that is small and close.

Here’s a 75-year-old with his Kindle. Take a look at the font size he picks when he’s in control. Now compare it to the average font size on an iPhone. (Image: Navy Design)
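The presbyopia point can be made concrete with a little trigonometry. Here is a minimal sketch that computes how physically large a character must be to subtend a given visual angle at a given reading distance. (The 22-arcminute guideline and 40 cm distance are illustrative assumptions, not values from this article.)

```python
import math

def char_height_mm(arcminutes: float, distance_mm: float) -> float:
    """Physical character height that subtends a given visual angle."""
    angle_rad = math.radians(arcminutes / 60.0)
    return 2 * distance_mm * math.tan(angle_rad / 2)

def mm_to_px(mm: float, ppi: float) -> float:
    """Convert millimetres to device pixels at a given pixel density."""
    return mm / 25.4 * ppi

# Assumed values: 22 arcminutes of visual angle at a 40 cm reading distance
h = char_height_mm(22, 400)
print(f"{h:.1f} mm")                 # roughly 2.6 mm
print(f"{mm_to_px(h, 326):.0f} px")  # about 33 px on a 326 ppi display
```

The takeaway: a character height that is comfortable at arm’s length is considerably larger than many default mobile font sizes, and larger viewing distances or reduced acuity push it up further.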

Color vision also declines with age, and we become worse at distinguishing between similar colors. In particular, shades of blue appear to be faded or desaturated.

Hearing also declines in predictable ways, and a large proportion of people over 65 have some form of hearing loss. While audio is seldom fundamental to interaction with a product, there are obvious implications for certain types of content.

Key lessons:

Provide subtitles when video or audio content is fundamental to the user experience.

Motor Control

Our motor skills decline with age, which makes it harder to use computers in various ways. For example, during some user testing at a retirement village, we saw an 80-year-old who always uses the mouse with two hands. Like many older people, she had a lot of trouble hitting interface targets and moving from one thing to the next.

Device Use

It’s safe to assume Dustin has never watched a 75-year-old use a mobile phone. Eventually, changes in vision and motor control make small screens impractical for everyone. Smartphones are a young person’s tool, and not even the coolest teenager can escape their biological destiny.

In our research, older people consistently described phones as “annoying” and “fiddly.” Those who own them seldom use them, often not touching them for days at a time. They often ignore SMS entirely.

But older people aren’t afraid to try new technology when they see a clear benefit. For example, older people are the largest users of tablets. This makes sense when you consider the defining difference between a tablet and a phone: screen size. The recent slump in tablet sales also makes sense if you accept that older people have longer upgrade cycles than younger people.

Key lessons:

Avoid small-screen devices (i.e. phones).

Don’t rely on SMS to convey important information.

Relationships

Older people have different relationships than young people, at least partly because they’ve had more time to cultivate them. For example, we conducted some research into how older people interact with health care professionals. In many cases, they’ve seen the same doctors for decades, leading to a very high degree of trust.

I regard it like going to see old pals.… I feel I could tell my GP almost anything.

– George, 73, on visiting his medical team

But due to health and mobility issues, the world available to the elderly is often smaller — both physically and socially. Digital technology has an obvious role to play here, by connecting people virtually when being in the same room is hard.

Key lessons:

Enable connection with a smaller, more important group of people (not a big, undifferentiated social network).

Don’t overemphasize security and privacy controls when trusted people are involved.

Be sensitive to issues of isolation.

Life Stage

During a user testing session, I sat with a 66-year-old as she signed up for an Apple ID. She was asked to complete a series of security questions. She read the first question out loud. “What was the model of your first car?” She laughed. “I have no idea! What car did I have in 1968? What a stupid question!”

It’s natural for a 30-year-old programmer to assume that this question has meaning for everyone, but it contains an implicit assumption about which life stage the user is at. Don’t make the same mistake in your design.

Key lessons:

Beware of content or functionality that implicitly assumes someone is young or at a certain stage in life.

Experience With Technology

I once sat with a man in his 80s as he used a library interface. “I know there are things down there that I want to read,” he said, gesturing to the bottom of the screen, “but I can’t figure out how to get to them.” After I taught him how to use a scrollbar, his experience changed completely. In another session, two of the older participants told me that they’d never used a search field before.

Generally when you’re designing interfaces, you’re working within a certain kind of scaffolding. And it’s easy to assume that everyone knows how that scaffolding works. But people who didn’t grow up with computers might have never used the interface elements we take for granted. Is a scrollbar a good design for moving content up and down? Is its function self-evident? These aren’t questions most designers often ask. But the success of your design might depend on a thousand parts of the interface that you can’t control and probably aren’t even aware of.

Key lessons:

Don’t make assumptions about prior knowledge.

Interrogate all parts of your design for usability, even the parts you didn’t create.

Cognition

The science of cognition is a huge topic, and ageing changes how we think in unpredictable ways. Some people are razor-sharp in their 80s, while others decline as early as in their 60s.

Despite this variability, three areas are particularly relevant to designing for the elderly: memory, attention and decision-making. (For a more comprehensive view of cognitive change with age, chapter 1 of Brain Aging: Models, Methods, and Mechanisms is a great place to start.)

Memory

There are different kinds of memory, and they’re affected differently by the ageing process. For example, procedural memory (that is, remembering how to do things) is generally unaffected. People of all ages are able to learn new skills and reproduce them over time.

But other types of memory suffer as we age. Short-term memory and episodic memory are particularly vulnerable. And, although the causes are unclear, older people often have difficulty manipulating the contents of their working memory. This means that they may have trouble understanding how to combine complex new concepts in a product or interface.

Prospective memory (remembering to do something in the future) also suffers. This is particularly relevant for habitual tasks, like remembering to take medication at the right time every day.

How do people manage this decline? In our research, we’ve found that paper is king. Older people almost exclusively use calendars and diaries to supplement their memory. But well-designed technology has great potential to provide cues for these important actions.

Key lessons:

Introduce product features gradually over time to prevent cognitive overload.

Avoid splitting tasks across multiple screens if they require memory of previous actions.

During longer tasks, give clear feedback on progress and reminders of goals.

Provide reminders and alerts as cues for habitual actions.

Attention

It’s easy to view ageing as a decline, but it’s not all bad news. In our research, we’ve observed one big advantage: Elderly people consistently excel in attention span, persistence and thoroughness. Jakob Nielsen has observed similar things, finding that 95% of seniors are “methodical” in their behaviors. This is significant in a world where the average person’s attention span has reportedly dropped below that of a goldfish.

It can be a great feeling to watch an older user really take the time to explore your design during a testing session. And it means that older people often find things that younger people skip right over. I often find myself admiring this way of interacting with the world. But the obvious downside of a slower pace is increased time to complete tasks.

Key lessons:

Avoid dividing users’ attention between multiple tasks or parts of the screen.

Decision-Making

Young people tend to weigh a lot of options before settling on one. Older people make decisions a bit differently. They tend to emphasize prior knowledge (perhaps because they’ve had more time to accumulate it). And they give more weight to the opinions of experts (for example, their doctor for medical decisions).

The exact reason for this is unclear, but it may be due to other cognitive limitations that make comparing new options more difficult.

Key lessons:

Prioritize shortcuts to previous choices ahead of new alternatives.

Information framed as expert opinion may be more persuasive (but don’t abuse this bias).

Conclusion

A lot of people in the tech industry talk about “changing the world” and “making people’s lives better.” But bad design is excluding whole sections of the population from the benefits of technology. If you’re a designer, you can help change that. By following some simple principles, you can create more inclusive products that work better for everyone, especially the people who need them the most.

What you say in a user experience matters. How you say it matters equally. The way you frame communication, or how you say something, could be extremely effective at persuading people to start using your product (or to use it more).

So, how do you frame messages effectively? This article explains how design teams can do so in a way that resonates with their users.

Help! I’ve Been Framed!

Framing is how you say something, using a “frame of communication.”

Frames are story lines that make an issue relevant to a particular audience. Framing is not lying. It is putting a particular spin (a frame) on factual details.

Framing effects occur when a message frame alters someone’s opinion on an issue.

For example, telling someone that smoking causes cancer and that they should consider quitting is not likely to produce any long-lasting change in their opinion of smoking. Most smokers have heard these words all their life. However, smokers who have viewed this video (caution: it may be unsuitable for some viewers) by the Centers for Disease Control and Prevention (CDC), which frames the consequences of smoking in a very graphic way, report a long-lasting impact on their attitude towards smoking. In this case, both the message and the medium make the video a more powerful frame of communication.

How Framing Applies To Good Design

We have talked about framing messages, but what’s that got to do with design? Everything. Everyone on a UX design team plays a role in effectively framing messaging and design. Frames consist of the words, images, metaphors, comparisons and presentation styles to communicate an issue.

“There is no such thing as unframed information, and most successful communicators are adept at framing.”

Nisbet makes it clear: Accounting for framing should be a part of your overall content strategy. Good content doesn’t just happen; it requires the same level of detail that you apply to the rest of your design.

Let’s check out some examples of framing, as well as how to use visual design to frame a message for greater impact.

Suppose you are designing for a bank that provides mortgages to clients. The bank’s target demographic is upwardly mobile young professionals: college graduates ages 28 to 35, with a household income at or near six figures annually. Your client would like these customers to apply for mortgages. Your job is to frame the message of the public-facing mortgage page on the website.


Framing Without A Visual Aid (Message Is Words Only)

You find through pre-design user interviews that users in the target demographic often check out the current annual percentage rate (APR) when surfing your client’s website. You can frame the APR for a mortgage as follows:

Today’s mortgage APR: 3.75% for a 30-year fixed mortgage. Save today!

Potential borrowers don’t have much to get excited about. The message is short, which is positive. However, the 3.75% APR and 30-year term aren’t concepts that most people find instantly relatable. Is 3.75% good? What was the rate yesterday? What will it be tomorrow? Why 30 years? What can this interest rate do for me over that length of time? Should I wait? It does say “save today,” but I’m pretty busy today. I should probably wait. The bank doesn’t seem to be too concerned that this rate is going anywhere.

You can present the same information like this:

Today’s mortgage APR is at an all-time low of 3.75%. Complete our pre-qualification form now to lock in this rate. This rate would save you enough money on a $250,000 loan over 20 years to send your child to college when compared to an increase of just 1%, which could happen at any time.

You have framed the message to motivate behavior: Act now! Rates could change at any time. You have presented the user with context to motivate them to apply for a mortgage in the near term: Rates are at an all-time low. This means they were higher yesterday or last week. This means they might be higher tomorrow or next week.

While the 3.75% is still a somewhat murky concept, the user does see that this rate would save them enough to send a child to college in 20 years, unlike a 4.75% rate, which doesn’t sound like much more but clearly adds up. A user who is thinking of having or adopting a child within the next few years will find saving enough over that period especially compelling. It is also clear what to do next: fill out the pre-qualification form and get in touch with a mortgage officer.
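To see whether a one-point rate difference really adds up, here is a quick check using the standard fixed-rate amortization formula. (The $250,000 loan, 30-year term and 3.75% vs. 4.75% rates come from the example above; the college-fund framing is the article’s, not the math’s.)

```python
def monthly_payment(principal: float, annual_rate: float, years: int) -> float:
    """Standard fixed-rate loan payment: P*r / (1 - (1+r)^-n)."""
    r = annual_rate / 12
    n = years * 12
    return principal * r / (1 - (1 + r) ** -n)

low = monthly_payment(250_000, 0.0375, 30)   # roughly $1,158/month
high = monthly_payment(250_000, 0.0475, 30)  # roughly $1,304/month

diff = high - low
print(f"${diff:,.0f}/month, ${diff * 240:,.0f} over 20 years")
```

At roughly $146 a month, the 1% difference compounds to about $35,000 over 20 years, which is exactly the kind of concrete comparison the framed message trades on.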

Both of the examples above require reading and a deep level of comprehension to motivate the user. This is where visual framing comes into play. Let’s use the same bank and target demographic. When users land on the APR page, they see the following:

Today’s mortgage APR is at an all-time low of 3.75%.

If you lock in today’s low rate, your family will be able to relax in its dream home for years to come. (Image: David Sawyer)

If you wait and rates rise, your new home might not have room for the grandparents to visit! (Image: simpleinsomnia)

Users will be much more motivated to engage in behavior that leads to their dream home (act now), rather than the very sad shack that might not have enough room for the grandparents when they visit (wait). You have made your point without putting the focus on understanding the 3.75% rate, and you have preempted the user’s internal dialogue from the first two examples.

Let’s consider another example of the impact of visuals on framing information.

Suppose you are going to be giving a presentation on fire safety to first-graders. You need to grab their attention immediately or else you will lose them for the entire session. How might you kick off the visuals in your presentation? Here are two ways to frame fire safety and prevention:

First example of opening slide about fire safety and prevention (Image: DocStoc)

Second example of opening slide about fire safety and prevention (Image: Wikipedia)

Which opening slide do you think is more likely to grab the attention of a first-grader, or anyone for that matter? You have presented your audience with the same information, but you will likely get two very different reactions. Effective framing in this case means the difference between snores and cheers. The second example will captivate much of your audience for the important stuff that follows.

Now that we have covered framing and design, let’s look at some tried-and-true techniques that you can use to effectively frame messages.

Effectively Framing Messages Sounds Great. How Do I Do It?

Private industry is, predictably, on the cutting edge of marketing techniques. However, nonprofits and the US government are well aware of the importance of effective framing. The CDC in particular has invested a lot of resources into researching how to effectively frame public health issues, including fire, injury and smoking.

The process described below for developing a well-framed message is adapted from the CDC’s research-based guide (PDF, 1.35 MB) on framing messages for injury prevention. I also used this modified method in my dissertation to create different messages to test on zoo visitors.

Identify Your Target Audience

First, decide exactly whom you are speaking to.

You can identify your target audience in a number of ways. Involve as many of your core team members as possible. Have you done any research on audience segmentation? If so, start by creating a message that will appeal to one of your largest audience segments. If you haven’t discussed your target audience, now is a good time to start.

I have one rule for identifying a target audience. Your key audience cannot be everybody!

If you think you can develop a message that will appeal to everyone at the same time, let me save you the effort by saying you can’t. Rather, you would say different things to different people to motivate them.

One-size-fits-all doesn’t work with t-shirts, and it doesn’t work with messages.

In my dissertation, I targeted English-speaking adult visitors to natural history museums, science centers and zoos in the US.

Identify A Frame For Your Messages

Many frames exist. Choose one, and use it consistently throughout your messaging.

Examples of Frames

Values-based
We know that people make decisions based on more than just the facts. Values-based frames appeal to users’ underlying values to motivate them to engage in a desired behavior. Common Cause has a guide on values and framing.

Financial benefits
This frame highlights the financial benefits of engaging in a particular behavior.

Gain
This focuses on what users will gain from engaging (or not engaging) in a particular behavior.

Loss
A loss frame focuses on what users will lose from engaging (or not engaging) in a behavior.

Use of metaphors
Metaphors make abstract topics more concrete or understandable. Political communication (PDF, 277 KB) often uses metaphors.

Use of visuals
Visuals play a key role in framing messages. The Frameworks Institute notes (PDF, 212 KB) that the importance of visuals doesn’t stop at the raw content. Message creators also need to consider the placement and sequence of visuals.

Make A Strong And Clear Statement About The Product

What do you want people to take away from your message? Don’t bury it under an avalanche of witty euphemisms or roundabout references to what your product does. Be clear.

Incorporate the following principles to create a strong and clear message.

Use Positive Language and Avoid Negativity

Focus on how great the product is or how important the cause is, rather than how terrible the alternatives are (doing that would just make your product seem less bad, not more good). If you cast stones at the competition, expect nothing but the same in return.

Highlight Personal Responsibility And Control: Empower Your Users

Your message should explicitly show how using your product will give users more control. For example, telling users that your financial management software will put them in charge of their financial future makes for a much stronger message than simply noting how many options the software provides for sorting transactions in different categories.

Avoid Jargon (Your Field Doesn’t Have Jargon, Right?)

By avoiding jargon, you avoid assuming that your audience has background knowledge of your product. If your target audience is heavily involved in your field, then you might want to incorporate some industry-specific language to make a stronger connection with those users. You don’t always have to target the lowest common denominator, but doing so makes your message understandable to the broadest range of potential users.

Include a Call to Action: Tell Users What You Want Them to Do!

Do you want users to purchase something, to get more information, to call their local politician? Be explicit and direct. If you have constructed an effective message, then be confident in stating what you want the audience to do with that information. Your message’s visual design is critical to this point. Are you clearly displaying what actions your users should take?

For Longer Messages and Persuasive Essays

If you are framing a long message or an essay, consider additional factors. A well-framed longer message includes the following:

A title or headline that tells the reader what the message is about and why they should care.

No more than one key message.

A lead paragraph that captures the reader’s attention.

A “nut” paragraph (i.e. the heart of your story — the details go here).

Relevant quotes to make the topic more relatable.

Your chances of successfully framing a message increase by following the guidance presented above. However, there is one more requirement to effectively framing a message.

Test Your Message

Test your message before unleashing it on users. Don’t assume what people know or how they will understand something. By testing your message, you ensure that your frame comes across clearly.

Testing can be simple and not resource-intensive. Everyone on the design team should work together here. Ideally, you would use the frame(s) you are considering to formulate multiple messages. I also recommend testing what your team thinks are the worst one or two messages it’s created. You’d be surprised by what resonates with users. This is the entire point of user research: You can’t assume what the user wants; find ways to get users to tell you what they want!

You can test messages the old-fashioned way by printing out the designs, laminating them and approaching people in scenarios that would be typical for your product. Seeing how someone responds to a message can be eye-opening. Pictures are worth a thousand words, as are facial expressions.

You can also conduct research online. You can easily insert screenshots into survey questions using online survey software, such as SurveyMonkey or SurveyGizmo. Many testing services will also recruit participants according to your specific demographics. Testing through a service such as UserTesting is also very quick and inexpensive.

Ask Seven Questions

Once you’ve developed your messages and designs, ask potential users the following seven questions:

Does this message make sense?

How does this message make you feel?

What do you think this message is asking you to do? (Ask this even if the message isn’t asking for anything.)

With whom do you think this message will resonate?

What would you change about this message to make it clearer?

What would you change about this message to make it speak directly to you?

What do you feel this message does well?

And if you are comparing multiple messages, then ask this question too:

Which message do you think resonates the most? Why?

The number of people you test your message on will depend on the outcome you wish to achieve. Test on as many people as you feel is useful; don’t feel you have to conduct a study worthy of publication in an academic journal. If you speak to 10 representative users and they all give you similar responses, then you might be comfortable moving forward. Their feedback will at least give you insight into potential confusion or misunderstanding of the terminology in your messages. If the responses are varied, then your message is probably not coming across clearly. Incorporate the feedback above to make the message clearer, and then retest the new message.

I tested my dissertation messages with visitors to a local art museum before deploying them in my studies. I tested each message on 20 visitors, asking them whether the message was clear. I asked participants to identify which frame they felt I was using (to ensure that I had framed the messages clearly). I also used my committee of four, each with a PhD, to check the quality of the messages. Then, I conducted research using a number of survey questions to determine characteristics of visitors and how they perceived the messages.

Other Methods of Testing

You can test messages using other methods as well. For example, you could pose the same questions listed above to a focus group. A/B testing will also reveal which of two (or more) messages users prefer.
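When an A/B test finishes, a simple way to check whether a preference difference is more than noise is a two-proportion z-test. This is a minimal sketch with made-up counts (62 of 100 users preferring message A, 48 of 100 preferring message B are hypothetical numbers, not results from this article):

```python
import math

def two_proportion_z(success_a: int, n_a: int,
                     success_b: int, n_b: int) -> float:
    """z-statistic for the difference between two proportions (pooled)."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical result: 62/100 prefer message A, 48/100 prefer message B
z = two_proportion_z(62, 100, 48, 100)
print(f"z = {z:.2f}")  # |z| > 1.96 means significant at the 5% level
```

For a quick design-team sanity check this is usually enough; for anything consequential, a proper stats package or an A/B testing service will handle the details for you.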

Putting It All Together

We’ve covered how to effectively frame a message, and how to test it before implementation. Design teams need to give deeper thought to how they are conveying their message, not just what they are saying. Outlined above is a process for creating and testing a message, which will help you communicate clearly and effectively with users. Your messages will resonate with them. Use this information to reassess your current messaging, and to move forward with future messaging.