Surveys and focus groups aren’t used much in our user-centred design process. Here’s why.

Caption: A user filling out a survey.

You can’t get authentic, actionable insights in a few clicks

Think about the last time you filled in a survey.

As you were filling in that survey, did you feel as though you were really, genuinely able to express to that organisation how you felt about the thing they were asking you? About the actual experiences you’ve had?

If the answer is no, you’re in good company. I ask this question a lot and the answer is always the same.

This is important to remember whenever you’re looking at research reports full of statistically significant graphs. Always make sure you are critically evaluating the quality of the research data you are looking at — no matter how large the sample size or whether it has been peer reviewed.

Also, when you are looking at research outcomes you should think about whether they help you understand what to do next. Surveys and other analytics can be good at telling us what is happening, but less good at telling us why. Understanding the why is critical for service design.

Government services have to work for everyone

As researchers, we have a pretty diverse toolkit of research techniques and it is important that we choose the right tools for the job at hand.

Surveys and focus groups are research techniques widely used in market research, where the aim is to understand the size of a market and how to reach and attract its customers. But most of the time, designing government services is not like marketing.

Randomised controlled trials are widely used in behavioural economics to understand how best to influence behaviour in a desired direction. Most of the time, designing government services is not like behavioural economics either.

The job that multi-disciplinary teams have to do when designing government services is simple but difficult. We need to make sure that the service works for the widest possible audience. Everyone who wants to use that digital government service should be able to.

When we achieve this level of usability in a government service we are more likely to achieve:

desired policy outcomes

increased compliance

reduced error rates

a better user experience for end-users.

It’s not about preference

Government services work when people understand what government wants them to do. Success also means they’re able to use the service as quickly and easily as possible without making errors. These are the outcomes that the user researcher needs to prioritise.

To achieve this we use observational research techniques and iterative processes that predate both the internet and computers — having their foundations in ergonomics and later in human computer interaction.

There are 3 important things our user researchers and their multi-disciplinary teams keep in mind as they do their work to understand whether services are usable and how the team might make them more usable:

We care more about what makes the service work better for more people than about what people (either users or stakeholders) tell us they prefer

We take an evidence-based approach to evaluating whether our design is working better to help people use the service

We know that the more opportunities we have to iterate (test and learn) the greater the chance we have of delivering a service that most people can understand and use.

Setting real-life tasks is more valuable than ‘tell us what you think’

We use task-based usability testing as one of the main research tools when evaluating the design of digital services and iterating to improve them in the Alpha, Beta and Live stages.

To do this we come up with examples of the important tasks that people need to complete to use the service. For example, we might ask them to register for a service and complete a registration form as if they were doing it for real.

When we are testing content, we might provide a real-life scenario that represents a question that people should be able to quickly and easily answer. Using a real-life scenario makes it easier for us to be sure that users are getting the right answer. The worst case scenario is when users think they have the right answer but are actually incorrect.

A scenario might be something like this:

Samantha is 41. She is a single mother of a 14-year-old boy.

The building company she worked for has recently gone out of business and she’s now working part-time at the local supermarket while looking for work.

How much can she earn each fortnight before her payment stops?

We can do task-based testing in a moderated environment. This is where the user researcher is in the room (or on a video conference) with the participant, asking them how they are interpreting the design and information as they move through the task. This helps us understand what people are thinking and why they are making the decisions they do, and lets us understand how to improve the design to work better.

Task-based testing can also be done in an unmoderated environment. This is where the participant is left alone to do the tasks and we use software to measure how long each task takes to complete. We also measure the pathways the user takes, whether they can accurately complete the task and their perception of the effort involved. This can help us to create a baseline for usability which we can try to improve on.
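The metrics described above can be summarised into a baseline with a small script. As a minimal sketch, assuming a simplified session record and a 1-to-7 self-reported effort scale (both illustrative assumptions, not any specific testing tool's output):

```python
from dataclasses import dataclass
from statistics import median


@dataclass
class Session:
    """One unmoderated test session (hypothetical structure)."""
    completed: bool      # did the participant finish the task correctly?
    seconds: float       # time on task
    effort_rating: int   # self-reported effort, 1 (easy) to 7 (hard)


def usability_baseline(sessions: list[Session]) -> dict:
    """Summarise sessions into a baseline to improve on in later rounds."""
    done = [s for s in sessions if s.completed]
    return {
        # share of participants who completed the task accurately
        "completion_rate": len(done) / len(sessions),
        # typical time on task, for completed sessions only
        "median_seconds": median(s.seconds for s in done),
        # average perceived effort across all participants
        "mean_effort": sum(s.effort_rating for s in sessions) / len(sessions),
    }


sessions = [
    Session(True, 182.0, 2),
    Session(True, 240.0, 3),
    Session(False, 600.0, 6),
    Session(True, 205.0, 2),
]
baseline = usability_baseline(sessions)
```

A later round of testing on an improved design can be summarised the same way and compared against this baseline.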

Both of these approaches give the team valuable insights into how well a service is performing. But critically we also learn what we can do to make the service work better for users.

Of course there are times to use surveys and randomised controlled trials: no research method is inherently bad in itself. But if you’re in the business of designing government services and making them work better for users (which means better outcomes for government too), you need to make sure you’re not automatically defaulting to research tools that don’t let you dig as deep as our users deserve.

Leisa Reichelt is the Service Design Lead at the Digital Transformation Agency.