The premise behind the recommendation is that 5 participants will discover 85% of the usability issues. Over 3 rounds of testing, you should pretty much catch all of them with a manageable number of participants.

He suggests that if you want to discover 100% of the issues (yeah right!) then you'll need to test with 15 users. I've seen little published elsewhere to disprove this.

We recruit 5-8 participants, over two days of testing. (Some people don't show no matter how much you pay.)

N.B. as Steve Krug says in Don't Make Me Think, testing one target user is better than testing none at all.

We have moved a long way since the simplistic rule of thumb that 5 users is enough. A very good argument for why even 10 is not enough is Woolrych and Cockton (2001). They point out a problem with Nielsen's formula (1-(1-0.31)^5): it does not take into account the visibility of an issue. They show that using only 5 users can significantly under-count even serious usability issues.

The number of users you need is dependent on how many issues there are, the cultural variance of your user base, and the margin of error you are happy with. Testing 5 users (or even 10) is not enough on a modern well-designed web site.

For example, assume that the designers of a web site have been using good design principles, so that a given issue affects only 2.5% of users. Then a test with 10 users will only discover that issue 22% of the time. If your site attracts 1 million visitors a year, 25,000 people will run into that issue.
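The arithmetic above follows the standard discovery formula, 1-(1-p)^n, where p is the proportion of users an issue affects and n is the number of participants. A minimal sketch (the function name is mine, for illustration; 0.31 is Nielsen's assumed average issue visibility):

```python
# Probability that at least one of n participants encounters an
# issue that affects a proportion p of users: 1 - (1 - p)**n
def discovery_probability(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

# Nielsen's assumption: each issue affects ~31% of users
print(round(discovery_probability(0.31, 5), 2))    # 0.84
# A subtler issue affecting 2.5% of users, tested with 10 people
print(round(discovery_probability(0.025, 10), 2))  # 0.22
```

This makes the dependence on visibility explicit: the rarer the issue, the faster the required sample size grows.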

The easy way to think of a usability test is as a treasure hunt. If the treasure is very obvious then you will need fewer people; if it is less obvious then you will need more. If you increase the area of the hunt, then you will need more people.

For most of the advocates of only testing 5 to 10 users, their experience comes from one country. Behaviour changes significantly country by country, even in Western Europe. See my blog post here:

You may also want to look at the margin of error for the test that you are doing.
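As a rough illustration of that margin of error for an observed proportion (such as task success rate), the normal-approximation formula is z·sqrt(p(1-p)/n). This is a sketch, not a recommendation: the function name is mine, and this simple Wald approximation is crude at very small n (adjusted-Wald or Wilson intervals behave better there):

```python
import math

def margin_of_error(p_hat: float, n: int, z: float = 1.96) -> float:
    """95% margin of error (normal approximation) for an observed proportion."""
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

# With 5 participants and 80% observed task success, the margin is huge:
print(round(margin_of_error(0.8, 5), 2))   # 0.35
# Even with 20 participants it is still wide:
print(round(margin_of_error(0.8, 20), 2))  # 0.18
```

In other words, with 5 users an 80% success rate really only tells you the true rate is somewhere between roughly 45% and 100%.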

The question of how many users comes up often. Five users is often the answer, but as with anything in consulting and behavioural research, it all depends. Depends on what? The objectives of the study, the research question(s), the session's paradigm, distinctive user types and more.

The critical thing to consider is exactly what you are trying to get out of the testing and to design the test and the number of participants to get the most appropriate (and cost effective) data back to inform decisions.

Yes, I know, it's one of those "it depends" answers, but things that affect the number of participants to test with include:

the test material (e.g. single page/dialogue, process, whole site/system), which in turn can affect the likely time that a participant will spend on the task. (Keep it as short and realistic as possible. Often the longer the sessions are, the fewer there are. Also consider the quality and complexity of the material: how much signal-to-noise is likely?)

the tasks, which are ideally realistic and wholly representative rather than a long, unrepresentative mechanical list. If you are running shorter sessions then you can probably fit more of them into a realistic timeframe (and you tend to get more out of many shorter sessions than from flogging a small number of poor participants to death)

the target participants (and how you can get hold of them): some people are harder to find (and often more expensive); some you can literally drag off the street. You tend to want to get your money's worth out of difficult pre-recruited participants, so tests are often longer and there are fewer of them

where in the process you are and what questions you need answering (this is CRITICAL!), which leads to fundamental changes in the design, e.g. fast iterative improvement cycles, exploratory insight/observation, measurement and benchmarking, statistically valid evidence, etc.

The budget and time available (where 'ideal' meets reality — some is better than none and 'small and often' generally better than massive)

There are of course more considerations... but essentially look at what you need out of the testing and design accordingly and practically — the disputed equations do not take all factors into account...

I try and get about 12 participants scheduled since, like Matt said, some won't show up. On average I get about 8-10 to go through the test.

One reason I like to have a few more people than the recommended 5 is that in a lot of my tests I have 2 different prototypes. I'll have half the participants run tasks against version A first, then I'll show version B. For the second half, I'll have participants run the same tasks against version B first, then version A. I'm able to get enough data on both prototypes while seeing which one performs better.
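The counterbalanced ordering described above can be sketched in a few lines (the names are illustrative; it simply alternates A-first and B-first assignments down the participant list so order effects cancel out across the group):

```python
def counterbalanced_orders(participants):
    """Alternate which prototype each participant sees first."""
    return {p: ("A", "B") if i % 2 == 0 else ("B", "A")
            for i, p in enumerate(participants)}

orders = counterbalanced_orders(["p1", "p2", "p3", "p4"])
print(orders)  # {'p1': ('A', 'B'), 'p2': ('B', 'A'), 'p3': ('A', 'B'), 'p4': ('B', 'A')}
```

With an even number of participants, each prototype is seen first by exactly half of them.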

Nielsen's rule of thumb is 5 people for a qualitative user test, but that assumes research is being conducted on one demographic or persona type. If you are targeting multiple persona types, then you might need a much larger sample size.

How many participants you have in a study really depends on the purpose and objectives of the study. If you want to test your wireframes to see whether people can manage some of the basic tasks required of them, then sometimes 5 participants can be enough (however our preference has always been to have 8 as a minimum).

If the purpose of the study is to do a benchmarking exercise where you are, say, comparing the wireframes to the existing website, you will need more than 5 participants. For this type of exercise a quantitative usability study would be more appropriate.

I normally try to get around 5-10 participants for a test based on the actual tasks. But that is only good to correct the usability issues in your design and is not enough if you want to compare two different design candidates for example.

But my rule of thumb is that I continue testing as long as I keep discovering new issues. Sometimes it takes more sessions than anticipated, sometimes fewer.

I've just completed a round of testing with 12 people. We made a few tweaks during the course of the week, so we couldn't have done it with fewer.

We've got another round planned in a few weeks time, I want about 20 users for that one as it will be more detailed and includes their ability to comprehend content not just use the interface.

For a very large, complex site with segmented audience groups, you can usually get away with 8-10 per segment.

I always find, though, that when you're about 75% of the way through the sessions you will have captured most of the things that round of testing will uncover.

All this applies to face-to-face qualitative testing. If you want quantitative results, or if you're doing remote testing, heatmap analysis, goal and pathway tests, A/B tests etc then there is no ideal number, it just needs to be a large one :)