Here’s a question I was asked recently:
What sources of information do you suggest folks building their product plans today rely on, so that they can get the evidence they need to build the product that will meet the needs of end users?
Read on for my answer:
The answer is always: it depends – I know it’s the ultimate cop-out – but the sources of data you need will be dictated by what it is you’re trying to demonstrate or prove.
So let’s try to make this a bit more tangible.
If, for instance, we wanted to figure out whether the product was allowing people to achieve a particular goal, let’s pick the example of driving licences. If the user’s objective was to successfully apply for and receive their licence the first time of asking, then our measurements would probably involve quantitative web analytics showing that people can move through the conversion funnel: from point of entry, through the process of applying for their driving licence, to the successful outcome of receiving their driving licence at the end of the process.
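In practical terms, that funnel measurement amounts to counting how many users reach each stage in order and dividing by the number who entered. Here is a minimal sketch of that calculation; the stage names and session data are invented for illustration and would come from your analytics tool in practice:

```python
# Hypothetical funnel stages for a driving licence application.
# In a real analytics setup these would be tracked events or page views.
FUNNEL = ["entry", "application_started", "application_submitted", "licence_issued"]

def funnel_conversion(stages_by_user):
    """Given a dict of user id -> set of stages that user reached,
    return the fraction of entrants who got to each stage in order."""
    counts = {stage: 0 for stage in FUNNEL}
    for stages in stages_by_user.values():
        for stage in FUNNEL:
            if stage in stages:
                counts[stage] += 1
            else:
                break  # user dropped out; don't count later stages
    total = counts[FUNNEL[0]] or 1  # avoid division by zero
    return {stage: counts[stage] / total for stage in FUNNEL}

# Invented example sessions: two users complete, one drops out mid-way,
# one bounces at entry.
sessions = {
    "u1": {"entry", "application_started", "application_submitted", "licence_issued"},
    "u2": {"entry", "application_started"},
    "u3": {"entry", "application_started", "application_submitted", "licence_issued"},
    "u4": {"entry"},
}
rates = funnel_conversion(sessions)
```

With this toy data, `rates["licence_issued"]` comes out at 0.5 – half the people who entered received a licence – which is exactly the kind of number you’d compare against whatever threshold you set.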
You’d be able to use the web analytics to find out whether the majority of people – whatever threshold or percentage you were looking for – were able to do that first time around. But you’d also need qualitative data in combination with that, so I’d want to supplement it with feedback from usability tests. That means actually sitting and watching people make this application, ideally in their own environment – at home, at work, or wherever they’re doing this for real – to see how easily they achieve a successful outcome, and to learn what they find easy, what they find difficult, and what they don’t understand in the process.
I’d want to make sure that we’re not just seeing the desired end result in our analytics, but also that people are reaching that result in a way that satisfies their needs, and that they’re happy they understood what was going on. I wouldn’t want a situation where users reach what I think is a good outcome purely by accident, because they’ve misunderstood what’s happening throughout the process.
So, to summarise that slightly long-winded answer: you’re always going to need a mixture of quantitative and qualitative data. It will come from a combination of your analytics, verbatim feedback from users, and your usability testing (and indeed from other places). You’ll have this mixture of different sources, but you always need that balance of the “what” and the “why”, the quantitative and the qualitative.