How to Perform Your Own Lean Mobile Usability Testing

Usability testing doesn't have to be done in labs, doesn't require experts, and doesn't have to be expensive.

Article No. 1456 | June 24, 2015 | by Greg Nudelman

You have a great idea for an app or a new feature for your responsive website. Yet your company doesn’t have the bandwidth or research budget to test it through “official channels,” or worse, your client doesn’t believe in testing altogether.

But don’t give up! You can get all of the tremendous benefits of customer feedback by doing your own lean mobile usability testing with 10-15 potential customers per day for about $20. This article will show you how.

Over 15 years ago, Steve Krug wrote Don't Make Me Think!—a seminal book on usability testing in which he stated, “I believe strongly that everyone … can—and should—be doing their own testing.” Yet to this day, many teams don’t do their own testing, test too late in the design process, or release most features without doing user testing at all.

I believe the reasons for this avoidance are the three insidious “usability testing myths”—myths you must deal with now if your lean usability testing is to be effective:

Myth 1: Mobile Usability Testing Must Be Done in a Lab

In my experience, nothing could be further from the truth. A lab is an artificial environment where cameras and various observation doodads severely affect the posture, ergonomics, and behavior of participants.

You can usually get much better data and more honest feedback by speaking directly with willing, "free-range" customers in their natural habitat, wherever they naturally use their mobile devices: hospital waiting rooms, bus and metro stops, or any other locale particularly appropriate to the use-case your app is specifically meant to address.

My personal favorite is the morning line at coffee shops, where people are bored, decaffeinated, and short-tempered, and will gladly give you brutally honest feedback on your idea in exchange for the price of a cup of coffee. As a bonus, you are by necessity time-constrained to the three to five minutes it takes them to reach the coffee counter, a fairly typical mobile use-case.

Myth 2: You Need a Really Nice Prototype Running on a Mobile Device

Not true. As I wrote in The $1 Prototype, "the level of completion of your prototype should accurately reflect the level of completion of your project." That means you don’t need a slick prototype if you are still struggling with the question of “is this the right product for the marketplace?” or trying to nail your information architecture and copy.

In my experience, sticking (ahem) with simple sticky-note prototypes drawn in pencil for the majority of your early mobile testing yields tremendous savings in time and effort. That's because with sticky-note prototypes, your design and your prototype are one and the same, and the output of your design can be taken directly into testing.

Without trying to sound too pompous, a sticky-note prototype allows you to iterate your design practically at the speed of thought. Any issues you find in your testing can be fixed right in the field with a pencil and an eraser as soon as they are identified.

An additional significant benefit of not having a slick, finished-looking prototype is that your participants feel free to contribute ideas and are much more likely to suggest fundamental improvements to your flows, which is the very reason you are doing your usability testing.

And finally, let’s address the most damaging myth of them all:

Myth 3: You Need a Professional Usability Researcher to Run User Tests

Over 3,000 years ago, people believed that the priestess of the Oracle of Delphi was the only person who could deliver enigmatic prophecies from the mouth of Apollo. Today, many people believe a similar myth: that only a professional usability researcher can deliver the straight dope from the mouths of customers.

Don’t get me wrong: having a professional usability researcher on your team is just fantastic. However, limiting your testing to a few hours done through this one “official channel” is contrary to the entire spirit of user experience work. The entire product team is responsible for the experience your customers will have with your product.

Just look at the great number of poorly designed, irritating products on the market. Or consider the #1 reason most startups fail: they make products no one wants. Even a minimal investment in simple user research could help avoid many of these issues.

It’s true that some UX teams have separate designer and user testing roles. However, in most highly functional UX teams, the roles tend to be much more intertwined: everyone fully participates in user research to the best of their ability. Furthermore, user research is not treated as a separate activity to be undertaken only by the exalted usability researchers, but instead, it becomes the center of the design process and the focus of the entire team.

That means you do not need to wait for a Delphic oracle to test your designs. You can do it yourself, today. In fact, I recommend you jump into testing as soon as you have just two or three screens rendered as sticky notes, then build up the rest of the screens as you get feedback from your potential customers.

5 Lean Mobile Usability Testing Tips to Start You Off Right

Now that we’ve dispelled some of the biggest myths surrounding mobile usability testing, let’s see how we can run an effective mobile usability test.

1. Focus on human connection

While interviewing is simple, it is by no means easy. When I tried to interview people at the start of my UX career, I felt awkward and completely out of my element, like the Incredible Hulk trying to make flower arrangements.

But I persisted, and I got better. You can too.

Interviewing is a serious activity that requires homework, dedication, and lots of practice. But the important thing that will get you through most interviewing challenges is your empathy and human connection with the participant.

Once you learn to approach user testing as human connection first and everything else second, the entire process falls into place. The rest is just details. And details get better with practice.

2. Let them fail

No matter how good your product is, some people still may not get it. And if we’re talking about one of your early designs, moments of confusion and frustration are virtually guaranteed.

One of the essential skills to acquire is simply letting your participants fail and learning from their experiences.

At the beginning of the test, explain the task in as few words as possible, and then sit back and observe. Let them figure out how to do the task on their own using your interface. Just relax and watch carefully. Connect. Stay in the moment.

Don’t give participants any hints unless they become completely stuck, and even then give as little information as possible. An effective technique is to answer their question with a question of your own.

3. Answer questions with questions

Participant interviews get particularly challenging when participants express their confusion in the form of a question. For example, “What do I do next?” or, “What is this thing here?” Many beginner user researchers get caught in the trap of politeness and answer the question, thereby inadvertently leading the participant.

It’s impolite and socially awkward to ignore other people’s questions, so you must be ready to answer every question from a participant with a question of your own. For example, a question like, “What do I do next?” can be answered with a standard response, “What do you think?”

It is even more effective to refocus the participant on the task at hand by emphasizing "action" and "doing" with an answer such as, "What would you do if I weren't here?" Note the subtle emphasis on the word "do"—it's part of the interviewer's "verbal ninjutsu" that helps guide the participant in the desired direction. Because what people actually do is much more valuable than what they say.

4. Focus on behaviors, not opinions

How your participants behave during the test will almost always be much more valuable than any opinions they offer. That’s because behavioral data is highly consistent.

If one participant gets confused about the screen title and "accidentally" taps the wrong button, chances are that thousands of other people of a similar age, experience, and education will do likewise. This behavioral consistency is what makes usability testing worthwhile.

On the other hand, opinions—statements of general preference such as, "I like this app," or, "I don't like the color blue"—often differ wildly from person to person. This makes statements of opinion unreliable as a source of information when usability testing with a small number of participants.

To get the most out of your test, carefully observe how the participant performs the task. After the participant completes the task, follow up with valuable questions to help you further understand their behavior.

5. Ask valuable questions

As an interviewer, asking valuable questions is a critical skill to develop and practice. It’s also one of the hardest skills to master. Fortunately, the concept is fairly easy to grasp—all you need is patience and practice.

Below are some typical questions you might ask in an interview. These questions can be imagined as a scale, starting with the most leading and ending with the least leading (and, therefore, most valuable) alternatives.

Asking a leading question such as “Why did you like this app?” can be great for boosting your ego, but the answer you typically get carries almost no weight because it’s pure speculation. In the end, you don’t know if the participant would download your app, pay for it, or use it if they do download it.

Instead, strive to ask non-leading, valuable questions focused on potentially significant actions that will help reveal critical behavioral information. Start with general questions such as, “How was your experience?” and move to specifics such as, “How was your experience with entering payment information?” when you need additional feedback about the specific feature.

A particularly valuable mobile user research tool is the magic question, “How much would you pay for this app?” This question helps you gauge the level of affinity for your product from potential customers. Price is a sensitive issue, and asking this question often reveals all kinds of useful details that are hard to get otherwise.

To get a better idea of how all this works in practice, you can watch videos of graduate students from my university courses practicing mobile usability testing at my $1 Prototype Book YouTube Channel.

Take Action

You now have all the basic tools you need to test your mobile app or website. Let go of any unproductive usability myths that hold you back. Instead, take action: right now invest just 15 minutes and sketch a few screens on some sticky notes to create your mobile prototype. Then make a plan to do a few short usability sessions at the coffee shop the next morning—after you enjoy that extra-shot vanilla latte.

Don’t stress; just enjoy talking to people about your project for a few minutes. Accept the fact that you will make some mistakes, and focus instead on making a human connection with each participant. Let your empathy and enthusiasm guide you, and keep practicing—you will be well on your way to enjoying a wellspring of incredible insights and fantastic ideas that come from speaking with your customers.

Comments

Yeah, just asking a few people in a coffee shop and letting them play around with your app will give you so many insights. For consumer products in particular, I would always do this.

And yes, regarding leading questions: phrasing your questions right is so important. You are not marketing your app; you need to leave space for the tester to form their own opinion. You are looking for things to improve, not for praise.

Laxman

June 26, 2015

Excellent tips. "The entire team is responsible" ... I know in what context this was said, but it may be a bit misleading! When the job is outsourced, a project team that has no clue about the end-user audience's culture would certainly mess up. In that case, a usability tester who advocates for the user at least puts in the effort to understand the culture of the client and the audience, can put themselves in the shoes of the audience, and can decide better, whereas the other project members do not really worry about the user's problems.