User Testing

Before your organization invests in building a new technology or service, user testing can help you verify whether it will be valuable enough to develop. There are several methods you can use:

Feedback Interview: show your new design prototype to a stakeholder and interview them about how usable, useful, and engaging it is.

Over-the-shoulder observation: give them the prototype and watch as they try to use it. Note down breakdowns, confusions, and payoffs. You can also give them a persona card to help them understand what point of view they are using it from.

Survey instruments: have the tester fill out a short survey, usually with Likert scale responses of 1-7 (levels of agreement). The questions can draw from surveys around Usability, Design for Dignity, and Procedural Justice.

Comprehension Testing: have people use the design prototype, and then after they are done, give them a quiz to measure how much of the important content they have understood and retained.

Idea book: make concept posters or other high-level presentations of your various ideas or features. Put them in a single book, like a catalog. Have the testers look through and rank which of the ideas they’d like and why.

Priority Sort: have people look at a wide variety of high-level ideas and judge their relative value. Have them sort the ideas into buckets and spend pretend money on the ones they value most.

Read up

Here are articles we’ve published at Legal Design Lab that describe the methods we use for human-centered design and testing for access to justice.

Practical Short Book on User Testing New Ideas for justice innovation

Our team wrote this short book, User Testing New Ideas, to walk through exactly how we ran user testing for new traffic court-oriented redesigns. We captured the steps, tools, and ethical considerations we took when doing early-stage testing of new prototypes.


Participatory Design methods for evaluating new justice innovations

Most access-to-justice technologies are designed by lawyers and reflect lawyers’ perspectives on what people need. Most of these technologies do not fulfill their promise because the people they are designed to serve do not use them. Participatory design, which was developed in Scandinavia as a process for creating better software, brings end users and other stakeholders into the design process to help decide what problems need to be solved and how. Work at the Stanford Legal Design Lab highlights new insights about what tools can provide the assistance that people actually need, and about where and how they are likely to access and use those tools. These participatory design models lead to more effective innovation and greater community engagement with courts and the legal system.

How can the court system be made more navigable and comprehensible to unrepresented laypeople trying to use it to solve their family, housing, debt, employment, or other life problems? This Article chronicles human-centered design work to generate solutions to this fundamental challenge of access to justice. It presents a new methodology: human-centered design research that can identify key opportunity areas for interventions, user requirements for interventions, and a shortlist of vetted ideas for interventions. This research presents both the methodology and these “design deliverables” based on work with California state courts’ Self Help Centers. It identifies seven key areas for courts to improve their usability, and, in each area, proposes a range of new interventions that emerged from the class’s design work. This research lays the groundwork for pilots and randomized control trials, with its proposed hypotheses and prototypes for new interventions, that can be piloted, evaluated, and — ideally — have a practical effect on how comprehensible, navigable, and efficient the civil court system is.


Ethical Design Engagement with Your Community

This short book from 2017 encapsulates some of the design training that we give to our students before they go into the field to conduct interviews or testing with members of the community.


We will be adding more details on testing methods. For now, find some of our write-ups here, and please feel free to add more!


Methods

Usability/Dignity Evaluation Instrument

When we ask people for short feedback on our new technology offerings, service designs, or information designs, we use an evaluation instrument that we’ve created. It’s a short survey that incorporates assessments from established instruments to evaluate software’s usability, to get citizens’ feedback on government services, and to assess people’s sense of procedural justice and dignity while using an offering.

For each of these questions, we use a Likert scale of 1 (Disagree Strongly) to 7 (Agree Strongly).

I think that I would like to use this system often to help me [insert objective: communicate with the court, navigate court process, etc.]

I thought the [design name] was easy to use.

I felt very confident using the [design name].

This will help me to get through court more efficiently.

This gave me clear, helpful information.

I felt that I was understood using the tablet’s translations.

I wish I could take [design name] around [place/system name] with me.

I felt the [design name] provided most of the information I was looking for.

I felt that the [design name] could be improved.
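To see how responses to an instrument like this can be tabulated, here is a minimal sketch in Python. The item labels and the response data are hypothetical examples, not results from our testing; the script simply reports the mean, spread, and count for each 1–7 Likert item.

```python
# Illustrative sketch: tabulating 1-7 Likert responses per survey item.
# Item labels and the response data are hypothetical examples.
from statistics import mean, stdev

# Each key is a survey item; each list holds one tester's 1-7 rating.
responses = {
    "easy to use":            [6, 5, 7, 4, 6],
    "felt confident":         [5, 4, 6, 3, 5],
    "clear, helpful info":    [7, 6, 6, 5, 7],
    "provided info I needed": [4, 5, 3, 4, 5],
}

for item, scores in responses.items():
    print(f"{item:25s} mean={mean(scores):.1f} "
          f"sd={stdev(scores):.1f} n={len(scores)}")
```

With small samples like these, the standard deviation is mostly useful for spotting items where testers strongly disagreed with each other, which are good candidates for follow-up interviews.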

Ranking high-level ideas against each other: Priority Ranking

Use Priority Ranking to get a large number of stakeholders’ feedback about which ideas should move forward on the agenda. Put the ideas on cards, and have the group come to consensus about which category each idea should be placed into: High, Medium, Low, or No value.

In addition, you might give the group a ‘persona card’, so they know whose point of view they are looking at the ranking through.
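If you run this exercise with several groups or stakeholders, one quick way to summarize the results is to score the buckets and total across participants. The sketch below is only an illustration: the idea names, the votes, and the 3/2/1/0 bucket scoring are all hypothetical choices, not part of the method as we’ve described it.

```python
# Illustrative sketch: summarizing High-Medium-Low-No value rankings
# across stakeholders. Idea names, votes, and the 3/2/1/0 scoring of
# the buckets are hypothetical.
BUCKET_SCORE = {"High": 3, "Medium": 2, "Low": 1, "No": 0}

# Each stakeholder (or group) assigns every idea card to one bucket.
votes = [
    {"text reminders": "High",   "kiosk map": "Medium", "plain forms": "High"},
    {"text reminders": "High",   "kiosk map": "Low",    "plain forms": "Medium"},
    {"text reminders": "Medium", "kiosk map": "No",     "plain forms": "High"},
]

totals = {}
for ballot in votes:
    for idea, bucket in ballot.items():
        totals[idea] = totals.get(idea, 0) + BUCKET_SCORE[bucket]

# Rank ideas by average score across stakeholders.
for idea, total in sorted(totals.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{idea}: avg {total / len(votes):.2f}")
```

The averages give a rough ordering, but the discussion that produces the rankings is usually more valuable than the numbers themselves.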

Personas to give people to play

Often in very early stage testing we have people test from a different person’s perspective. We give them personas to play, so that they scrutinize the design from these various points of view. We know this is not as good as having a wide range of people from these different backgrounds, but it serves as a test run, letting us spot issues with a design before investing in wider testing.

Here are some example personas that we give to people:

Persona 1: 22-year-old digital native, very confident with technology, prefers to text over phone calls and sometimes even over in-person communication. They feel confident in their ability to figure things out, especially using Google and looking through social media, but feel relatively out of their depth in the legal system.

Persona 2: 65-year-old who is a first-time user of the legal system, but has dealt with lots of other complex social systems like health insurance, social security, and taxes. They are definitely not very confident with technology, but do email a lot, still use AOL, and just moved to the most basic smartphone this year upon the insistence of their kids.

Persona 3: 42-year-old who has been to court several times to deal with divorce, custody, and parenting plans. They have had enough repeat visits to feel confident about how to navigate the system and the relationships. They feel literate, but still want support to get things right.

Persona 4: 31-year-old who has very limited English proficiency. They have been through immigration proceedings with the help of family and friends before, but they definitely don’t feel confident going to court by themselves because of the language barrier and the unfamiliarity of the system.

Persona 5: 18-year-old who is coming with an older family member to help translate for them in court. They are fluent in English and feel confident with technology, but they are not familiar with the legal system at all. They grew up in the US, and feel they can also help with the cultural translation for their family members.