Writing

The Mistake Almost Everyone Makes When Doing User Research

I’ve shared previous versions of this idea publicly in a number of forums, including Designers & Geeks (2018), UXRConf (2019), Good Research (2022), User Research London (2022), TCV Engage (2022) and PSL’s Growth Shop (2023). It also underpins the User Insights for Product Decisions program that I built with Reforge. I’m sharing it in this form to have a public reference in a more readable format since speaker notes are not ideal for consumption.

The title of this post is taken from a lesson that I’ve shared with the dozens of companies I’ve had a chance to work with over the past few years, as well as the 1000+ people I’ve taught through Reforge.

So many people I talk to, both researchers and people who do research, tell me that “good research is about learning,” and while I think that’s half true, I think it’s the less important half of the sentence.

Good research is about learning in service of making decisions. You are not learning for the sake of learning; you are learning so that you can do something, take some kind of action.

The mistake you’re probably making when it comes to user research is that you are focusing on learning, rather than decision-making, as the ideal outcome of the work.

This sounds like a pretty small, semantic difference, and I imagine some of you are feeling a little let down, but let me walk through how this plays out, because it’s a mistake I’ve watched people make hundreds of times in my career, and it has resulted in bad products, wasted hours of customer time, and bloated research teams.

What this mistake looks like

Let’s imagine that we are a small startup and we’re seeing churn. We might want to figure out why users churn, so we decide we should talk to some users who aren’t happy and figure out why they’re unhappy so that we can get ahead of churn issues.

Deciding to interview these customers will help us gather some kind of evidence, which may (or, honestly, may not) help us make any decisions about what to do about churn.

Yet, this is what most people do. They pick a research approach they know, try to gather some evidence, and hope that it ultimately helps them do something, like make a decision. But this is often misguided, if not outright wasteful. I call this “decision-last” research.

From my perspective, there are at least three problems with this approach:

  1. You choose the wrong research approach.

  2. You gather the wrong data.

  3. You don’t involve the necessary stakeholders.

First, you could be taking the wrong approach. Most people I talk to choose methods out of convenience, rather than conviction. They assume interviews simply mean talking to 5–6 people, or that a survey can be sent to anyone, without thinking about the sample sizes needed for statistical significance. Are interviews the best approach here? It’s unclear at this point, and we’ll get to why in a moment.

Second, even if you did correctly pick an approach that makes sense (i.e. interviews), it’s possible you picked the wrong people to interview or the wrong things to talk about. You run the risk of gathering data that doesn’t help you make your decision, which is a waste of your time and resources, and of your users’ time and goodwill. Bad inputs lead to bad outputs, and we’re trending in that direction here.

The third problem with a decision-last approach is not involving the right stakeholders. Any decision we make requires the buy-in or approval of multiple stakeholders. Without clarity on the decision, we won't involve the right stakeholders in the research process. This means we have to work harder to communicate our insights and get their buy-in to make the decision, assuming that the evidence we gather is even the right evidence to inform the decision at all. 

So if this is what people normally do, what’s a better way to do this?

Decision-First Research

Work backwards. You should be decision-first when you’re doing research. 

Starting with the decision forces clarity about the boundaries of the decision you’re making. You have to talk about what’s possible or not possible, about what you’re willing or not willing to do, and you have this conversation up front, before you spend any customer time on anything.

This doesn’t mean that research is only meant to validate an idea. You could be deciding “How should we best support [Customer Segment]?” which encourages you to understand that segment, their behaviors and needs, and identify the different ways you may be able to address those needs or solve their problems.

Once you have clarity on the decision you’re trying to make, you can focus your attention on the evidence you need to make that decision. This step is a bit like being on Wheel of Fortune and agreeing on the point at which you’re going to try to solve the puzzle.

In some cases, you probably feel okay solving from here. But if you don’t, then you have to have a conversation about the cost/benefit of getting more letters on the board. Except in the world of building products, each letter is the outcome of an experiment, a customer conversation, etc., that gives you more confidence or reduces the risk of your decision.

This conversation about evidence is really valuable because it forces you to agree on where you are in terms of making the decision and what you need to feel comfortable moving forward. From there you can be explicit about the tradeoffs you’re willing or unwilling to make to increase your confidence — is it possible or worth it in the time that we have left? If so, what evidence would we need? Do we need to hear people say something, watch people do a thing, etc.? And who are those people?

From there, you can actually scope research to go and gather the missing evidence. We’re no longer dealing with the issues of the wrong approach, the wrong data, or the wrong stakeholders.

Let’s go back to the churn example. If our decision-last approach was to ask “Why are people churning?” then our decision-first approach is to be clear that the decision we’re making is likely “What should we prioritize to improve retention with our target audience?”

This is bounded in three ways: what we should do (of the things we can), the fact that we’re asking to prioritize, which forces us to have a conversation about prioritization, and our target audience, so we know we don’t need to focus on adjacent users, for now.

In fact, I imagine many of you are already pretty good at being decision-first when it comes to experiments. While we talk about experiments as being hypothesis-driven (and they are), experiments are one of the ways we gather evidence to make decisions. Specifically, good experiments allow us to decide whether or not a specific intervention drives a specific outcome.

But somehow, when we get outside of the world of experimentation, especially into the seemingly “squishy” world of qualitative research, all rigor goes out the window, and we just focus on “learning things.” But it doesn’t have to be this way, and it shouldn’t.

How do you get from a learning goal to a decision statement?

I imagine many of the researchers reading this get asked to go learn “why are people churning?” and other similar questions, and you want to know how I got from there to the decision statement above. The process is very straightforward and mostly involves the question “why?”

When people come to me and say “Why are people churning?” I ask them “Why do you want to know?” They’ll often say something like “we want to figure out why customers are unsatisfied” and then again I’ll ask “why?” I don’t do this because I want unsatisfied customers, but because I want to know what _they will be doing_ with this information. 

They might say something like “we want to address the source of their dissatisfaction” to which I’ll again ask… “why?” Here we finally get to something that looks like “we want to improve our retention.” This allows me to start to talk about the constraints of the decision – what are we willing to do to improve retention? Who do we care about retaining? And so on. 

Note: I hope that this (small) shift helps you in your practice. Feel free to send feedback and questions via email or on Twitter.

Behzod Sirjani