How to run data-driven usability testing sessions

Every product manager and UX designer has their own notions about how a design will work and how people will interact with the product. But until it's been tested and proven, it's merely an opinion.

In today’s crowded, attention-grabbing product landscape, you can’t afford to be led by guesswork. There’s too much competition, and people no longer tolerate a lack of functionality.

“On the Web, usability is a necessary condition for survival. If a website is difficult to use, people leave. If the homepage fails to clearly state what a company offers and what users can do on the site, people leave. If users get lost on a website, they leave. If a website’s information is hard to read or doesn’t answer users’ key questions, they leave. Note a pattern here?” –Jakob Nielsen

Nielsen’s point is that to compete in the jungle that is the internet, you have to provide the best possible experience for your users. And to do that, you have to understand their needs.

You can provide the solution to their problems by testing their experience and constantly optimizing based on their feedback and behavior.

Aren’t analytics enough?

There’s plenty of analytics software that tries to help you understand your users’ behavior. But as good as those tools might be, most of them focus on the user’s actions, not on the experience itself.

They reflect on decisions that users have made but miss out on the entire experience and decision-making process.

User testing, on the other hand, is 100% focused on the experience and process itself.

This is an effective approach, and it gives valuable insights that can’t be captured in numbers. It’s a qualitative, “feelings”-based approach that helps you hone your intuition and better understand your users.

The intuitive, qualitative approach is very much in line with the idea of user experience and product design in a user-centered environment.

To improve the users’ experience, see users as individuals with unexpected behaviors and patterns, not as numbers on a statistical chart.

If we understand that both approaches play an important role in design and product management, we can then harness them to our advantage and fuse them.

Quantitative data is great for discovering trends, but qualitative data can give you a more in-depth look into what motivates users, which can help you uncover the deeper reasoning behind the numeric trend you’re tracking.

Knowing how to analyze both types of data will empower you to get a better understanding of your users and of metrics like lifetime value (LTV).

Physical testing versus user session recording usability testing

The common approach to user testing has 3 main steps:

Find participants that fit your target audience

Get them to perform common tasks on your product

Observe them as they perform these tasks and gather insights on their usability and experience.

The idea behind these 3 steps is to:

Make sure you’re testing the right audience. Your product or website should fit your target audience—there’s no point in optimizing for people who aren’t relevant.

Provide a set of actions everyone has to do in order to find patterns or general notions

Learn and analyze based on what they’ve done

Every user testing method goes through these steps.

In terms of how you test, there are 2 popular ways:

Physical user testing

Digital user testing (user session recordings)

A physical usability test is based on meeting your existing or potential users in person: you observe them as they go through a set of predefined tasks and learn from your observations.

A digital usability test is based on user session recordings. You won’t meet your users, but you’ll be able to see their actions via the recorded video sessions.

A user session recording allows you to watch users as they interact with your product. It differs from traditional user testing since you can’t interact with the user and give them instructions. However, it has a big advantage: The user is unaware, so there’s no distortion of their behavior.

Now let’s talk about how to use data to conduct user testing in both methods.

The data-driven approach to running your usability test

Data-driven user testing is similar to the 2 approaches mentioned above, but now we’ll choose the users based on data.

Find users who fit your target audience
The first step is finding the right user to run your test.

Whether you choose the first or second approach, you first need to create a very comprehensive persona.

By knowing who you want to target, you can segment your audience on both user recording tools and analytics tools in a meaningful way. Of course, you can use analytics to refine your persona, and this is an iterative process of optimization between your persona and your on-site data.

Once you have the persona, start using it. Create a segment based on age, location, gender, interest, online behavior, marital status, etc.
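To make this concrete, a persona can be sketched as a simple filter over whatever attributes your analytics tool exports. The field names and thresholds below are hypothetical, not any particular tool’s schema:

```python
# Hypothetical persona definition -- the attribute names and thresholds
# are illustrative, not a real analytics schema.
PERSONA = {
    "age_range": (25, 40),
    "locations": {"US", "UK"},
    "min_sessions": 5,  # rough proxy for "engaged" online behavior
}

def matches_persona(user):
    """Return True if a user record fits the persona segment."""
    lo, hi = PERSONA["age_range"]
    return (
        lo <= user["age"] <= hi
        and user["country"] in PERSONA["locations"]
        and user["sessions"] >= PERSONA["min_sessions"]
    )

users = [
    {"id": 1, "age": 31, "country": "US", "sessions": 12},
    {"id": 2, "age": 52, "country": "US", "sessions": 40},
    {"id": 3, "age": 28, "country": "UK", "sessions": 3},
]
segment = [u for u in users if matches_persona(u)]
```

As you refine the persona against your on-site data, only the filter changes; the selection process stays the same.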

Now it’s time to choose the right people for your next usability test.

For physical testing
Dig into your data and segment your users based on LTV (lifetime value) and how well they match your definition of a “good user.” The behavioral segmentation can be based on loyalty, or on users who renewed their subscription.

Moreover, you can also choose users who signed up and didn’t convert from free to paying users (if you’re a freemium product) in order to understand why they didn’t convert.

Use your analytics tool to filter those qualified users and then learn more about them. You can dig even deeper and segment by demographics, and so on.

Now you’re able to invite people to run your usability tests based on results matching your persona with actual users segmented from your analytics.
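As an illustrative sketch of that filtering step (the records and the LTV threshold are invented; in practice your analytics tool’s segment builder does this for you):

```python
# Invented user records -- in practice these come from an analytics export.
users = [
    {"id": 1, "ltv": 480.0, "renewed": True,  "plan": "paid"},
    {"id": 2, "ltv": 0.0,   "renewed": False, "plan": "free"},
    {"id": 3, "ltv": 120.0, "renewed": True,  "plan": "paid"},
    {"id": 4, "ltv": 0.0,   "renewed": False, "plan": "free"},
]

# "Good users": high lifetime value, or loyal users who renewed.
good_users = [u for u in users if u["ltv"] >= 300 or u["renewed"]]

# Free users who never converted to paid -- worth inviting to
# understand why they didn't convert.
non_converted = [u for u in users if u["plan"] == "free"]
```

The threshold (300 here) is arbitrary; pick one that matches your own definition of a high-value user.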

For user recording sessions
The process here is reversed. If you’re using session recordings, it means you already have your tests recorded. You now have to segment the sessions you want to review based on the persona you’ve created.

Running the test

You have to reduce your interaction to a minimum and do your best not to interfere with how your users behave.

You can take 2 approaches to running the test:

Create a set of predefined tasks a user needs to follow through, then observe how they perform those tasks and spot potential friction points

Give them a general direction of what they should do and watch them behave within the platform. The good thing here is that you can direct them towards the area/feature you would like to test. The bad thing is that your users will try to perform for you, so they’ll behave unnaturally.

If you’re going for the predefined tasks, use the funnels in the digital tools to create your user testing scenario. By doing so, you’ll make sure you’re watching them as they’re running through the most common flows of your product, thus focusing on the most important aspects.
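One way to decide which flow deserves a test scenario is to rank the funnel steps by drop-off. A minimal sketch, with invented step names and counts:

```python
# Invented funnel: how many users reached each step of a signup flow.
funnel = [
    ("landing", 1000),
    ("signup", 400),
    ("onboarding", 350),
    ("purchase", 70),
]

# Drop-off rate between each consecutive pair of steps.
dropoffs = []
for (step_a, n_a), (step_b, n_b) in zip(funnel, funnel[1:]):
    dropoffs.append((f"{step_a} -> {step_b}", 1 - n_b / n_a))

# The transition losing the most users is the first candidate for a
# usability-test scenario.
worst = max(dropoffs, key=lambda d: d[1])
```

In this made-up funnel, the onboarding-to-purchase step loses 80% of users, so that would be the flow to script tasks around.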

You can also add a scenario that you’d want to see more of but isn’t showing up in your data. This might reveal why that scenario isn’t happening in the real world.

In both cases you’ll need to run the test several times with many different samples, which can be time-consuming.

For user recording sessions
Running the test with user recording sessions is easier since you aren’t really conducting a test, but reviewing existing behaviors and segmenting them by specific needs.

If there’s a certain area in your product that you’d like to test, you just have to look back at the recordings that show the users who engaged with that area.

If some follow-up questions arise, you can view additional sessions and other interactions to get the full picture.

Simply go into your user session recording platform and filter users based on either URLs or specific CTAs. You can even segment them by sources to test the different behaviors on your product based on source—something that you can’t do with physical user testing.
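A sketch of that kind of filter, assuming each recorded session carries the pages it visited and its traffic source (a hypothetical export format, not any specific platform’s API):

```python
# Hypothetical session records exported from a recording tool.
sessions = [
    {"id": "s1", "source": "organic", "pages": ["/", "/pricing", "/checkout"]},
    {"id": "s2", "source": "ads", "pages": ["/", "/features"]},
    {"id": "s3", "source": "ads", "pages": ["/pricing", "/checkout"]},
]

def filter_sessions(sessions, url=None, source=None):
    """Keep sessions that visited a given URL and/or came from a source."""
    kept = []
    for s in sessions:
        if url is not None and url not in s["pages"]:
            continue
        if source is not None and s["source"] != source:
            continue
        kept.append(s)
    return kept

# Sessions from paid traffic that reached the checkout page.
checkout_from_ads = filter_sessions(sessions, url="/checkout", source="ads")
```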

Analyzing your results

When it comes to analyzing your tests, there’s no difference between the 2 methods: in both cases, you’re measuring and gathering results by observing how a user behaves.

Turning qualitative results into insights can be done in 2 ways:

Numeric: Quantifying different parameters

Depth: How frustrating or how great was the experience you’ve witnessed?

Numeric
Create ready-made criteria to fill in as your users perform the tasks, and give each action a score based on difficulty and friction. This way, you can quantify their experience by adding up the points.
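For example (the tasks, participants, and 0–3 friction scale below are made up; define criteria that match your product before the sessions start):

```python
# Friction points per task, per participant.
# Made-up scale: 0 = effortless ... 3 = gave up / failed.
results = {
    "participant_a": {"find_pricing": 0, "start_trial": 2, "invite_teammate": 3},
    "participant_b": {"find_pricing": 1, "start_trial": 1, "invite_teammate": 3},
}

# Total friction per participant (lower is better).
totals = {who: sum(tasks.values()) for who, tasks in results.items()}

# Average friction per task across participants -- this highlights
# which task needs the most attention.
task_names = next(iter(results.values())).keys()
task_avg = {
    t: sum(r[t] for r in results.values()) / len(results) for t in task_names
}
```

In this made-up sample, inviting a teammate averages the maximum friction score, so that flow would be the first thing to investigate.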

If you’re looking for a system usability scale (SUS) to evaluate the usability of your website, the standard 10-item SUS questionnaire is a good starting point. But I recommend that you build your own scale based on your needs and priorities. You can also read Lean Analytics for an in-depth explanation of how to create your criteria and measurement system.
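For reference, standard SUS scoring works like this: ten questionnaire items are answered on a 1–5 scale; odd-numbered (positively worded) items contribute (score − 1), even-numbered (negatively worded) items contribute (5 − score), and the raw sum is multiplied by 2.5 to land on a 0–100 scale:

```python
def sus_score(responses):
    """Compute the System Usability Scale score from ten 1-5 responses.

    Odd-numbered items (1st, 3rd, ...) are positively worded and
    contribute (score - 1); even-numbered items are negatively worded
    and contribute (5 - score). The raw sum is scaled by 2.5 to 0-100.
    """
    assert len(responses) == 10, "SUS has exactly ten items"
    raw = sum(
        (r - 1) if i % 2 == 0 else (5 - r)
        for i, r in enumerate(responses)
    )
    return raw * 2.5

# A respondent giving the "best" answer to every item scores 100.
best = sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1])  # 100.0
```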

Depth
This system is reserved for situations where you can pair user interviews with your usability testing. You can do it by interviewing your users after a physical test, or by contacting users based on a user session recording that fits your needs.

You want to see how passionate or frustrated your users are about a certain design or product feature. Sometimes they can get through the current flow, but you’ll later find out they actually hate it. Or maybe they love a flow that didn’t make sense to you.

It’s not just about “how many people do X”—it’s also about the intensity of the sentiment.

Summary

In a user-centered world, optimizing your product based on the user’s experience and feedback is a necessity. Use data in your user testing and you’ll be making smarter decisions, focusing your questions, and constantly optimizing based on results.

How do you conduct user testing? Which platform are you using? Tell us about it on Twitter: @InVisionApp.

Author

Danni Friedland is the CEO of Jaco analytics, a company that aims to change the way companies analyze and understand their users. Danni’s a hacker at heart, whether it's hardware or software—he loves tinkering and making things a reality. He has vast experience bootstrapping projects from the ground up and scaling them to industrial size, both as a programmer and as a team leader.