A List Apart

Never Show A Design You Haven’t Tested On Users

It isn’t hard to find a UX designer to nag you about testing your designs with actual users. The problem is, we’re not very good at explaining why you should do user testing (or how to find the time). We say it like it’s some accepted, self-explanatory truth that deep down, any decent human knows is the right thing to do. Like “be a good person” or “be kind to animals.” Of course, if it were that self-evident, there would be a lot more user testing in this world.

Let me be very specific about why user testing is essential. As long as you’re in the web business, your work will be exposed to users.

If you’re already a user-testing advocate, that may seem obvious, but we often miss something that’s not as clear: how user testing impacts stakeholder communication and how we can ensure testing is built into projects, even when it seems impossible.

The most devilish usability issues are those that haven’t even occurred to you as potential problems; you won’t find all the usability issues just by looking at your design. User testing is a way to be there when it happens, to make sure the stuff you created actually works as you intended, because best practices and common sense will get you only so far. You need to test if you want to innovate; otherwise, it’s difficult to know whether people will get it. Or want it. It’s how you find out whether you’ve created something truly intuitive.

How testing up front saves the day

Last fall, I was going to meet with one of our longtime clients, the charity and NGO Plan International Norway. We had an idea for a very different sign-up form than the one they were using. What they already had worked quite well, so any reasonable client would be a little skeptical. Why fix it if it isn’t broken, right? Preparing for the meeting, we realized our idea could be voted down before we had the chance to try it out.

We decided to quickly put together a usability test before we showed the design.

At the meeting, we began by presenting the results of the user test rather than the design itself.

We discussed what worked well, and what needed further improvement. The conversation that followed was rational and constructive. Together, we and our partners at Plan discussed different ways of improving the first design, rather than nitpicking details that weren’t an issue in the test. It turned out to be one of the best client meetings I’ve ever had.

We went from paper sketch to Illustrator sketch to InVision in a day in order to get ready for the test.

User testing gives focus to stakeholder feedback

Naturally, stakeholders in any project feel responsible for the end result and want to discuss suggestions, solutions, and any concerns about your design. By testing the design beforehand, you can focus on the real issues at hand.

Don’t worry about walking into your client meeting with a few unsolved problems. You don’t need to have a solution for every user-identified issue. The goal is to show your design, make clear what you think needs fixing, and ideally, bring a new test of the improved design to the next meeting.

By testing first and walking stakeholders through the problems you’ve found, you can include them in suggesting solutions, rather than having them hypothesize about what might be problems. This also means that they can focus on what they know and are good at: How will this work with our CRM system? Will we be able to combine this approach with our annual campaign?

Since last fall, I’ve been applying this dogma in all the work that I do: never show a design you haven’t tested. We’ve reversed the agenda to present results first, then a detailed walkthrough of the design. So far, our conversations about design and UX have become a lot more productive.

Making room for user testing: sell it like you mean it

Okay, so it’s a good idea to test. But what if the client won’t buy it or the project owner won’t give you the resources? User testing can be a hard sell—I know this from experience. Here are four ways to move past objections.

Don’t make it optional

It’s not unusual to look at the total sum in a proposal, and go, Uhm, this might be a little too much. So what typically happens? Things that don’t seem essential get trimmed. That usability lab test becomes optional, and we convince ourselves that we’ll somehow persuade the client later that the usability test is actually important.

But how do you convince them that something you made optional a couple of months ago is now really important? The client will likely feel that you’re trying to sell them something they don’t really need.

Describe the objective, not the procedure

A usability lab test with five people often produces valuable—but costly—insight. It also requires resources that don’t go into the test itself: e.g., recruiting and rewarding test subjects, rigging your lab and observation room, making sure the observers from the client are well taken care of (you can’t do that if you’re the one moderating the test), and so on.

Today, rather than putting “usability lab test with five people” in the proposal, I’ll dedicate a few days to: “Quality assurance and testing: We’ll use the methods we deem most suitable at different stages of the process (e.g., usability lab test, guerilla testing, click tests, pluralistic walkthroughs, etc.) to make sure we get it right.”

I have never had a client ask me to scale down the “get it right” part. And even if they do ask you to scale it down, you can still pull it off if you follow the next steps.

Scale down documentation—not the testing

If you think testing takes too much time, it might be because you spend too much time documenting the test. In a lab test, it’s a good idea to have 20 to 30 minutes between each test subject. This gives you time to summarize (and maybe even fix) the things you found in each test before you move on to the next subject. By the end of the day, you have a to-do list. No need to document it any more than that.

I’ve also found InVision’s comment mode useful for documenting issues discovered in the tests. If we have an HTML and CSS prototype, screenshots of the relevant pages can be added to InVision, with comments placed on top of the specific issues. This also makes it easy for the client to contribute to the discussion.

After the test is done, we’ve already fixed some of the problems. The rest ends up in InVision as a to-do on the relevant page. The prototype is actually in HTML, CSS, and JavaScript, but the visual aspect of InVision’s comment feature makes it much easier to avoid misunderstandings.

Scale down the prototype—not the testing

You don’t need a full-featured website or a polished prototype to begin testing.

If you wonder if something looks clickable, a flat Photoshop sketch will do.

Even a paper sketch will work to see if you’re on the right track.

And if you test at this early stage, you’ll waste much less time later on.

Low-cost, low-effort techniques to get you started

You can do this. Now, I’m going to show you some very specific ways you can test, and some examples from projects I’ve worked on.

Pluralistic walkthrough

Time: 15 minutes and up

Costs: Free

A pluralistic walkthrough is UX jargon for asking experts to go through the design and point out potential usability issues. But putting five experts in a room for an hour is expensive (and takes time to schedule). Fortunately, getting them in the same room isn’t always necessary.

At the start of a project, I put sketches or screenshots into InVision and post it in our Slack channels and other internal social media. I then ask my colleagues to spend a couple of minutes critiquing it. As easy as that, you’ll be able to weed out (or create hypotheses about) the biggest issues in your design.

Before the usability test, we asked colleagues to comment (using InVision) on what they thought would work or not.

Hit the streets

Time: 1–3 hours

Costs: Snacks

This is a technique that works well if there’s something specific you want to test. If you’re shy, take a deep breath and get over it. This is by far the most effective way of usability testing if you’re short on resources. In the Labour Party project, we were able to test with seven people and summarize our findings within two hours. Here’s how:

Get a device that’s easy to bring along. In my experience, an iPad is most approachable.

Bring candy and snacks. It works great to have a basket of snacks, with the iPad resting on top of the basket.

Go to a public place with lots of people, preferably a place where people might be waiting (e.g., a station of some sort).

Approach people who look like they are bored and waiting; have your snacks (and iPad) in front of you, and say: “Excuse me, I’m from [company]. Could I borrow a couple of minutes from you? I promise it won’t take more than five minutes. And I have candy!” (This works in Norway, and I’m pretty sure food is a universal language). If you’re working in teams of two, one of you should stay in the background during the approach.

If you’re alone, take notes in between each test. If there are two of you, one person can focus on taking notes while the other is moderating, but it’s still a good idea to summarize between each test.

Morten and Ida are about to go to the Central Station in Oslo, Norway, to test the Norwegian Labour Party’s new site for crowdsourcing ideas. Don’t forget snacks!

Online testing tools

Time: 30 minutes and up

Costs: Most tools have limited free versions. Optimal Workshop charges $149 for one survey and has a yearly plan for $1,990.

There isn’t any digital testing tool that can provide the kind of insight you get from meeting real users face-to-face. Nevertheless, digital tools are a great way of going deeper into specific themes to see if you can corroborate and triangulate the data from your usability test.

There are many tools out there, but my two favorites are Treejack and Chalkmark from Optimal Workshop. With Treejack, it rarely takes more than an hour to figure out whether your menus and information architecture are completely off or not. With click tests like Chalkmark, you can quickly get a feel for whether people understand what’s clickable or not.

A Chalkmark test of an early Illustrator mockup of Plan’s new home page. The survey asks: “Where would you click to send a letter to your sponsored child?” The heatmap shows where users clicked.

Nothing kills arguments over menus like this baby. With Treejack, you recreate the information architecture within the survey and give users a task to solve. Here we’ve asked: “You wonder how Plan spends its funds. Where would you search for that?” The results are presented as a tree of the paths the users took.

Using your existing audience for experiments

Time: 30 minutes and up

Costs: Free (e.g., using Hotjar and Google Analytics).

One of the things we designed for Plan was longform article pages, binding together a compelling story of text, images, and video. It struck us that these wouldn’t really fit in a usability test. What would the task be? Read the article? And what were the relevant criteria? Time spent? How far he or she scrolled? But what if the person recruited to the test wasn’t interested in the subject? How would we know if it was the design or the story that was the problem, if the person didn’t act as we hoped?

Since we had used actual content and photos (no lorem ipsum!), we figured that users wouldn’t notice the difference between a prototype and the actual website. What if we could somehow see whether people actually read the article when they stumbled upon it in its natural context?

The solution was for Plan to share the link to the prototyped article as if it were a regular link to their website, not mentioning that it was a prototype.

The prototype was set up with Hotjar and Google Analytics. In addition, we had the stats from Facebook Insights. This allowed us to see whether people clicked the link, how much time they spent on the page, how far they scrolled, what they clicked, and even what they did on Plan’s main site if they came from the prototyped article. From this we could surmise that there was no indication of visual barriers (e.g., a big photo making the user think the page was finished), and that the real challenge was actually getting people to click the link in the first place.

On the left is the Facebook update from Plan. On the right is the heat map from Hotjar, showing how far people scrolled, with no clear drop-out point.
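If you want to instrument a prototype like this yourself, scroll depth comes down to simple arithmetic on values you can read off the DOM. Here’s a minimal sketch of that logic — the milestone buckets and function names are my own, not from Hotjar or Google Analytics; you’d pass each reported milestone to whatever analytics event call you use:

```typescript
// Deepest scroll percentage reached, from values available in the browser
// (window.scrollY, window.innerHeight, document.body.scrollHeight).
function scrollDepthPercent(
  scrollTop: number,
  viewportHeight: number,
  pageHeight: number
): number {
  if (pageHeight <= viewportHeight) return 100; // whole page fits on one screen
  return Math.min(100, Math.round(((scrollTop + viewportHeight) / pageHeight) * 100));
}

// Milestones to report; tracking which ones were already sent ensures
// each fires at most once per pageview.
const MILESTONES = [25, 50, 75, 100];

function milestonesReached(percent: number, alreadySent: Set<number>): number[] {
  const due = MILESTONES.filter((m) => percent >= m && !alreadySent.has(m));
  due.forEach((m) => alreadySent.add(m));
  return due; // e.g. forward each value as an analytics event
}
```

Wired to a (throttled) scroll listener, this gives you the same “how far did they get, and where do they drop off” picture the heat map shows.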

Did you get it done? Was this useful?

Time: A few days or a week to set up, but basically no time spent after that

Costs: No cost if you build your own; Task Analytics from $950 a month

Sometimes you need harder, bigger numbers to be convincing. This often leads people to A/B testing or Google Analytics, but unless what you’re looking for is increasing a very specific conversion, even these tools can come up short. Often you’d gain more insight from something of a middle ground between the pure quantitative data provided by tools like Google Analytics and the qualitative data of usability tests.

“Was it helpful?” modules are one of those middle-ground options I try to implement in almost all of my projects. Using tools like Google Tag Manager, you can even combine the data, letting you see the pages that have the most “yes” and “no” votes on different parts of your website (content governance dream come true, right?). But the qualitative feedback is also incredibly valuable for suggesting specific things your design is lacking.

“Was this article helpful?” or “Did you find what you were looking for?” are simple questions that can give valuable insight.
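Once the votes are flowing in (via Google Tag Manager or your own endpoint), the content-governance report is a small aggregation job. A sketch, assuming each vote arrives as a simple page/vote record — the field names and the `worstPages` helper are hypothetical, not part of any particular tool:

```typescript
// One "Was this helpful?" vote, shaped like an event you might forward
// from a tag manager (field names are my own, hypothetical).
interface FeedbackVote {
  page: string;
  helpful: boolean;
}

// Rank pages by "no" votes so content owners know where to start fixing.
function worstPages(
  votes: FeedbackVote[],
  minVotes = 3
): { page: string; yes: number; no: number }[] {
  const byPage = new Map<string, { yes: number; no: number }>();
  for (const v of votes) {
    const tally = byPage.get(v.page) ?? { yes: 0, no: 0 };
    if (v.helpful) tally.yes++;
    else tally.no++;
    byPage.set(v.page, tally);
  }
  return Array.from(byPage.entries())
    .map(([page, { yes, no }]) => ({ page, yes, no }))
    .filter((p) => p.yes + p.no >= minVotes) // skip pages with too few votes to judge
    .sort((a, b) => b.no - a.no);
}
```

The `minVotes` threshold keeps a single grumpy visitor from putting a page at the top of the list.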

This technique falls short if your users weren’t able to find a relevant article. Those folks aren’t going to leave feedback—they’re going to leave. Google Analytics isn’t of much help there, either. That high bounce rate? In most cases you can only guess why. Did they come and go because they found their answer straight away, or because the page was a total miss? Did they spend a lot of time on the page because it was interesting, or because it was impossible to understand?

My clever colleagues made a tool to answer those kinds of questions. When we do a redesign, we run a Task Analytics survey both before and after launch to figure out not only what the top tasks are, but whether or not people were able to complete their task.

When the user arrives, they’re asked if they want to help out. Then they’re asked to do whatever they came for and let us know when they’re done. When they’re done, we ask a) “What task did you come to do?” and b) “Did you complete the task?”

This gives us data that is actionable and easily understood by stakeholders. At our own website, the most common task people arrive for is to contact an employee, and we learned that one in five will fail. We can fix that. And afterward, we can measure whether or not our fix really worked.

Why do people come to Netlife Research’s website, and do they complete their task? Screenshot from Task Analytics dashboard.
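If you’d rather build your own version of this survey than buy one, the analysis side is straightforward: tally the free-text tasks (normalized into categories) against the yes/no completion answers. A minimal sketch under that assumption — the data shape and helper are mine, not Task Analytics’ actual API:

```typescript
// One survey answer pair, modeled on the flow described above:
// a) "What task did you come to do?" b) "Did you complete the task?"
interface TaskAnswer {
  task: string;
  completed: boolean;
}

// Completion rate per task, sorted by volume so the top tasks come first.
function completionRates(
  answers: TaskAnswer[]
): { task: string; total: number; rate: number }[] {
  const tally = new Map<string, { total: number; done: number }>();
  for (const a of answers) {
    const t = tally.get(a.task) ?? { total: 0, done: 0 };
    t.total++;
    if (a.completed) t.done++;
    tally.set(a.task, t);
  }
  return Array.from(tally.entries())
    .map(([task, { total, done }]) => ({ task, total, rate: done / total }))
    .sort((a, b) => b.total - a.total);
}
```

Run before and after a redesign, the same report tells you whether your fix to a failing top task actually worked.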

Set up a usability lab and have a weekly drop-in test day

Time: 6 hours per project tested + time spent observing the test

Costs: rewarding subjects + the minimal costs of setting up a lab

Setting up a usability lab is basically free in 2016:

A modern laptop has a microphone and camera built in. No need to buy that.

Other than that, you just need a room that’s big enough for you and a user. So even as a UX team of one, you can afford your own usability lab. Setting up a weekly drop-in test makes sense for bigger teams. If you’re at twenty people or more, I’d bet it would be a positive return on investment.

My ingenious colleague Are Halland is responsible for the test each week. He does the recruiting, the lab setup, and the moderating. Each test day consists of tests with four different people, and each person typically gets tasks from two to three different projects that Netlife is currently working on. (Read up on why it makes sense to test with so few people.)

By testing two to three projects at a time and having the same person organize it, we can cut down on the time spent preparing and executing the test without cutting out the actual testing.

As a consultant, all I have to do is let Are know a few days in advance that I need to test something. Usually, I will send clients a link to the live stream of the test to let them know we’re testing and that they’re welcome to pop in and take a look. A bonus is that clients find it surprisingly rewarding to see other clients’ tests and to get other clients’ views on their own design (we don’t put competitors in the same test).

This has made it a lot easier to test work on short notice, and it has also reduced the time we have to spend on planning and executing tests.

Testing is designing

As I hope I’ve demonstrated, user testing doesn’t have to be expensive or time-consuming. So what stops us? Personally, I’ve met two big hurdles: building testing into projects to begin with and making a habit out of doing the work.

The critical first step is to make sure that some sort of user testing is part of the approved project plan. A project manager will look at the proposal and make sure we tick that off the list. Eventually, maybe your clients will come asking for it: “But wasn’t there supposed to be some testing in this project?”

Second, you don’t have to ask for anyone’s permission to test. User testing improves not only the quality of our work, but also the communication within teams and with stakeholders. If you’re tasked with designing something, even if you have just a few days to do it, treat testing as a part of that design task. I’ve suggested a couple of ways to do that, even with limited time and funds, and I hope you’ll share even more tips, tricks, and tools in the comments.

About the Author

Ida Aalen is a senior UX designer at Netlife Research in Oslo, Norway. Thinking out loud is the only kind of thinking she knows, which you can tell from her tweets and talks. She’s happiest when she’s on a team with great visual designers and front-end developers.