I was recently asked this question:
Do you have a rule of thumb for deciding how much evidence is sufficient for decision-making? I often see decisions made on the basis of a sample of one.
Read on for my answer:
Thanks for the question. I don’t have a general rule of thumb for sample sizes for user research as it depends on a few things, including:
- What level of accuracy you’re looking for from your research (statistical rigour)
- How large your target user persona segments are
- What kinds of questions you’re asking
If you’re doing customer discovery early on to check your assumptions, Steve Blank recommends speaking to 100 people as a good starting point. These are open-ended, in-person conversations centred around the problem you’re trying to solve, rather than sending out surveys, which are a poor way to conduct discovery. He has some great tips on the kinds of questions you should be asking in a handy set of bite-size videos.
Surveys have their place, but they should come later, after discovery. During discovery your level of uncertainty means you don’t even know whether you’re asking the right questions yet. Once you have a better handle on the problem space and the users’ needs, and so have more specific questions (which you now know are the right ones to ask), that’s when surveys can be used, with care taken to avoid introducing bias.
The Government Digital Service (GDS) suggests that most nationally representative surveys (in the UK, ~65 million inhabitants) would involve 1,000 people or more, though samples can be significantly larger if there are a number of distinct user groups being targeted.
They would typically use a surveying company like YouGov to do most of the heavy lifting, but it’s rare for products outside of the public sector to target an entire national population.
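A figure of ~1,000 respondents isn’t arbitrary: for a simple random sample, the margin of error at 95% confidence depends on the sample size, not the population size (once the population is large). A rough sketch, assuming the standard formula and the worst-case proportion p = 0.5:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% confidence margin of error for a simple random sample of size n.

    p=0.5 is the worst case (widest margin); z=1.96 is the 95% z-score.
    """
    return z * math.sqrt(p * (1 - p) / n)

# A sample of 1,000 gives roughly a +/-3 percentage point margin of error,
# whether the population is 65 million or 650 million.
print(f"n=1000: +/-{margin_of_error(1000):.1%}")
print(f"n=100:  +/-{margin_of_error(100):.1%}")
```

This is why a national survey of ~1,000 people is considered representative, and why each distinct user group you want to report on separately needs its own similarly sized sample.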
Usability testing #
For usability testing, I’d want to test with a minimum of 5 people per round from each user segment. Jakob Nielsen says that you uncover ~85% of your usability issues when you test with 5 people, and that you’d need to test with at least 15 people to uncover virtually all of them.
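Those figures come from Nielsen’s model of problem discovery, where each test participant independently reveals a fixed proportion of the remaining problems. A minimal sketch, assuming Nielsen’s published average of L = 0.31 (the proportion of problems a single user uncovers):

```python
def problems_found(n, L=0.31):
    """Nielsen's formula: proportion of usability problems found by n users.

    L=0.31 is the average per-user discovery rate from Nielsen's studies;
    your product's actual rate may differ.
    """
    return 1 - (1 - L) ** n

# 5 users find roughly 84-85% of problems; 15 users find ~99.6%,
# which is why "all the issues" needs around 15 participants.
for n in (1, 5, 15):
    print(f"{n:2d} users -> {problems_found(n):.1%}")
```

The curve flattens sharply after 5 users, which is the basis for Nielsen’s advice to run several small rounds (fixing issues in between) rather than one large one.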
For its federated identity verification service, GOV.UK Verify, GDS has carried out more than 100 rounds of usability testing with over 600 users. Given that Verify is meant to be used as a component in many other government digital services, it’s understandable they’d want to keep testing with as many users as possible, to ensure it meets the needs not just of the integrating services but also of end users.
User exposure hours #
Then, for maintaining your team’s ongoing understanding, Jared Spool recommends that everyone on the team (even those who don’t like talking to people) spend at least two hours every six weeks with users.
This could be actually talking to users, or at the very least observing usability tests or similar. It’s really important for the team to be reminded that their users are people with real needs and problems. It also gives them a necessary first-hand perspective on those needs and problems, rather than receiving everything second-hand from a user researcher, product manager, or another intermediary.
I hope all that helps. Let me know if I can help further.