Most people involved in product testing research aren’t looking to stroke their egos by intentionally designing biased studies that validate their ideas and preferences. They’re looking for the cold, hard truth so that their company doesn’t waste money on product ideas that won’t sell.
But biases still creep in somehow. Like bedbugs and bad habits, you might think you’ve gotten rid of them, only to find they’re still going strong when you least expect it.
In this article, we’ll discuss one of the most challenging and multifaceted biases—procedural bias—and how to overcome it with bulletproof study design and implementation.
Procedural bias refers to the ways in which your study design and implementation could potentially alter the responses your study participants give. Sometimes called “administration bias,” it could prevent you from getting fully authentic feedback from surveys, interviews, and the like.
Both researchers and study subjects can contribute to biased studies. Procedural bias is an example of responder bias, where the respondents are the ones giving inauthentic feedback. This is contrasted with researcher bias, where the people running the study unconsciously misinterpret results due to their own biases.
(NOTE: Researcher bias isn’t the same thing as research bias. Examples of researcher bias—also called experimenter bias—include confirmation bias, cultural bias, and halo effect. What is research bias, then? That’s the general term for any skewed results in a study, from any bias-related cause. Procedural bias is a subset of research bias.)
A good study design will minimize the potential for procedural bias in research by conducting choice-based surveys, open-ended interviews, and other study techniques according to best practices.
Product testing research is NOT a time for being told only the things you want to hear, and unfortunately that’s one of the common sequelae of a procedurally biased study.
Exaggerated favorability toward your product or concept could cause you to sink millions of dollars into a massive market failure. In the best of cases, it might just not sell well. In the worst of cases, you might have thousands of people on TikTok making videos about how your product is the dumbest thing they ever bought.
Of course, you can also have an exaggerated lack of favorability toward your product, but most subsets of procedural bias (interviewer effect, bias-inducing incentives) tend to overrepresent positive feelings. We’ll take a look at some of these in the next section.
It’s important to note that you’ll never eliminate biases completely, and you’ll probably always be missing something. But it’s on you and your product research team to minimize the impact of procedural bias as much as you can.
The procedural bias definition indicates that something about how you’re running your study causes subjects to leave important things out or respond inauthentically. How might your study have that level of mind-control over people? Plenty of ways, as it turns out.
The difference between quantitative (countable) and qualitative (unstructured) research results matters when discussing research bias. For example, selection bias is a much bigger deal for quantitative research because when you accidentally exclude a whole category of subjects from your study, your carefully calculated results are essentially bogus.
Qualitative research is more vulnerable to confirmation bias and other examples of researcher bias. This is because there are no cut-and-dried calculation steps to perform on qualitative data, so the researcher’s judgment can have a bigger impact.
For procedural bias, the effect is spread pretty evenly, and it depends on what aspect of the study implementation is causing the bias. Here are a few things that might impact quantitative data:
And a few for qualitative data:
We’ve touched on some of these in the sections above. But bias reduction is so vital to getting good study data that some redundancy is warranted.
Survey design is a burgeoning research area in and of itself. Some easy ways to improve the authenticity of your survey results include:
There’s also the question of monadic vs. comparison testing. Monadic testing refers to showing a single product option to a given set of testers or survey-takers so that they can focus on just that one thing. This can reduce the bias caused by comparing one product to another, a common issue with comparison testing. For example, if your testing audience samples two types of soup and one is extremely spicy, it may be hard for them to evaluate the non-spicy soup on its own merits.
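To make the monadic setup concrete, here’s a minimal sketch (in Python, with hypothetical product and tester names) of how you might randomly assign a participant pool to monadic cells so that each tester evaluates exactly one concept, with no cross-product comparison to muddy their feedback:

```python
import random

def assign_monadic_cells(participants, concepts, seed=42):
    """Randomly split participants into monadic cells:
    one concept per tester, with near-equal cell sizes."""
    rng = random.Random(seed)
    shuffled = participants[:]
    rng.shuffle(shuffled)
    cells = {concept: [] for concept in concepts}
    for i, person in enumerate(shuffled):
        # Round-robin over concepts keeps the cells balanced
        cells[concepts[i % len(concepts)]].append(person)
    return cells

# Hypothetical example: two soup concepts, ten testers
testers = [f"tester_{n}" for n in range(10)]
cells = assign_monadic_cells(testers, ["mild_soup", "spicy_soup"])
```

The fixed `seed` is there only to make the example reproducible; in a real study you’d want a fresh random assignment per fielding.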
Keeping people unaware of your brand being behind the study is a great idea, since you might otherwise run into brand bias or sponsor bias (if you’re a well-known brand, you’ll have both haters and fans, and both will likely be biased). Working with a recruitment service and having your products blinded can help here.
When we think of research, we often see controlled environments as the gold standard. However, a lot of market research analysis depends on people being in their familiar, organic environments where things aren’t necessarily as clean and orchestrated.
In-home usage testing (IHUT) ensures that people are testing your products in a way that’s authentic to their lifestyle. Think of the difference between trying on shoes at Nordstrom back in the day (you’d walk a few meters in front of a store associate who’s actively trying to sell you the shoes) and actually wearing the shoes during your daily activities (it doesn’t always feel the same).
IHUT also provides a convenient way to conduct long-term studies so that you can capture a range of authentic behavior. You can send out multiple short surveys over time to minimize survey fatigue and get real-time data on consumer perception.
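If you’re planning those survey waves programmatically, the scheduling itself is trivial; a sketch like the following (assumed interval and field names, not any particular platform’s API) spreads short check-ins across the study period instead of relying on one long end-of-study survey:

```python
from datetime import date, timedelta

def survey_wave_dates(start, waves, interval_days=7):
    """Generate evenly spaced send dates for short survey waves
    across an in-home usage testing (IHUT) study period."""
    return [start + timedelta(days=i * interval_days) for i in range(waves)]

# Hypothetical example: four weekly check-ins starting Jan 1
schedule = survey_wave_dates(date(2024, 1, 1), waves=4)
```

Shorter, more frequent waves trade a little coordination overhead for fresher recall and less fatigue per sitting.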
Again, it’s about as impossible to completely eliminate bias in product testing research as it is to measure a quantum particle without altering it. But with excellent survey design, a setting that puts testers at ease, and a way to keep participants from knowing who’s behind the study, you can get reasonably unbiased data.
Highlight can help you recruit committed product testers whose survey and task completion rates average more than 90% (well above that of other product testing services) and minimize possible procedural bias in your study through survey best practices and product blinding. You’ll also get your data quickly, so you don’t have to spend much mental energy on product ideas that are duds.
Got a product to test? We’ll help you get the cold, hard truth you’re looking for.