
Getting the whole truth: How to avoid procedural bias in product testing research

Most people involved in product testing research aren’t looking to stroke their egos by intentionally designing biased studies that validate their ideas and preferences. They’re looking for the cold, hard truth so that their company doesn’t waste money on product ideas that won’t sell.

But biases still creep in somehow. Like bedbugs and bad habits, you might think you’ve gotten rid of them, only to find they’re still going strong when you least expect it.

In this article, we’ll discuss one of the most challenging and multifaceted biases—procedural bias—and how to overcome it with bulletproof study design and implementation.

What is procedural bias?

Procedural bias refers to the ways in which your study design and implementation could potentially alter the responses your study participants give. Sometimes called “administration bias,” it could prevent you from getting fully authentic feedback from surveys, interviews, and the like.

Both researchers and study subjects can contribute to biased studies. Procedural bias is a type of response bias, where the respondents themselves give inauthentic feedback. This contrasts with researcher bias, where the people running the study unconsciously misinterpret results due to their own biases.

(NOTE: Researcher bias isn’t the same thing as research bias. Examples of researcher bias—also called experimenter bias—include confirmation bias, cultural bias, and halo effect. What is research bias, then? That’s the general term for any skewed results in a study, from any bias-related cause. Procedural bias is a subset of research bias.)

A good study design will minimize the potential for procedural bias in research by conducting choice-based surveys, open-ended interviews, and other study techniques according to best practices.

How procedural bias can derail your product launch strategy

Product testing research is NOT a time for hearing only what you want to hear, and unfortunately that’s one of the common consequences of a procedurally biased study.

Exaggerated favorability toward your product or concept could cause you to sink millions of dollars into a massive market failure. In the best of cases, it might just not sell well. In the worst of cases, you might have thousands of people on TikTok making videos about how your product is the dumbest thing they ever bought.

Of course, you can also have an exaggerated lack of favorability toward your product, but most subsets of procedural bias (interviewer effect, bias-inducing incentives) tend to overrepresent positive feelings. We’ll take a look at some of these in the next section.

It’s important to note that you’ll never eliminate biases completely, and you’ll probably always be missing something. But it’s on you and your product research team to minimize the impact of procedural bias as much as you can.

Where does procedural bias show up?

The procedural bias definition indicates that something about how you’re running your study causes subjects to leave important things out or respond inauthentically. How might your study have that level of mind-control over people? Plenty of ways, as it turns out.

  • Study setting. Highly controlled or unfamiliar environments may put people in a headspace where they’re less likely to tell the whole truth.
  • Time allotted for the study. If you’ve got too many questions and too little time, people aren’t going to put much thought into it. With the clock ticking, their answers might get less tethered to reality.
  • Types of tools used. Are you administering your study with the help of a sophisticated online interface? This could exclude people who aren’t particularly tech-savvy, making your procedural bias manifest as selection bias.
  • Survey design. Asking leading questions and failing to change the order of choices shown to different participants are two ways to get procedural bias in surveys. Interestingly enough, there’s even a bias that occurs when people are asked to “Select all that apply” as opposed to answering a series of Yes/No questions about each option.
  • Participation incentives. This is a tricky one. You want to compensate people for their time, but you don’t want to create an excessive, response-skewing feeling of goodwill toward your brand. Working with a research recruitment company can help mask your brand identity while still ensuring that participants are motivated to complete the study.

Is procedural bias more likely in qualitative or quantitative research?

The difference between quantitative (countable) and qualitative (unstructured) research results matters when discussing research bias. For example, selection bias is a much bigger deal for quantitative research because when you accidentally exclude a whole category of subjects from your study, your carefully calculated results are essentially bogus.

Qualitative research is more vulnerable to confirmation bias and other forms of researcher bias. This is because there are no cut-and-dried calculation steps to perform on qualitative data, so the researcher’s judgment has a bigger impact.

For procedural bias, the effect is spread pretty evenly, and it depends on what aspect of the study implementation is causing the bias. Here are a few things that might impact quantitative data:

  •   If your study includes a survey that asks people to pick their favorites from a list of options, and that list is always shown in the same order, the data may be biased toward whatever appears first. This is known as option order effect bias, and you can control for it by varying the list order for different participants.
  •   If you make your survey too long, you might wind up with item nonresponse error, in which people simply don’t answer some of the questions. At the very least, this will lead to an overrepresentation of the fast survey-takers in the questions toward the end. However, for study methods like conjoint analysis which depend on systematically presenting sets of product profiles to respondents, it could throw off all the results.

And a few for qualitative data:

  • Asking leading questions in open-ended surveys—something like “How incredibly, phenomenally delicious was our candy bar?”—can nudge people to respond in line with the sentiment of the question.
  • Social desirability bias and the Hawthorne effect (the tendency to act differently when being observed) are common problems with focus groups. For example, if you’re trying to learn whether people would be likely to buy your more sustainable product option at a premium, some focus group participants might feel compelled to show that they care more about sustainability than they really do.
  • If your testing environment isn’t the one in which people would typically use your product, bias can take the form of missing key product interactions. Central location testing (CLT) often fails to uncover issues that arise when people are in a hurry, doing multiple things at once, or using the product over a long period of time.

Tips for minimizing procedural bias-related inaccuracies

We’ve touched on some of these in the sections above. But bias reduction is so vital to getting good study data that some redundancy is warranted.

Survey design is a burgeoning research area in its own right. Some easy ways to improve the authenticity of your survey results include:

  • Giving people a decent amount of time to complete the survey
  • Varying the order of questions to control for survey fatigue
  • Varying the order of choices to control for option order effect bias
  • Avoiding leading questions
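
If your survey runs on a platform you script yourself, varying the option order is a few lines of code. Here’s a minimal sketch in Python (the question options are made up for illustration); it shuffles the choices independently for each participant, seeded by a participant ID so the order stays stable if someone reloads the survey:

```python
import random

# Hypothetical answer options for a single survey question
OPTIONS = ["Chocolate", "Vanilla", "Strawberry", "Matcha"]

def options_for(participant_id: str, options: list[str]) -> list[str]:
    """Return a per-participant shuffle of the answer options.

    Seeding the RNG with the participant ID makes the order
    reproducible for that person while still varying across people,
    which controls for option order effect bias.
    """
    rng = random.Random(participant_id)
    shuffled = options.copy()
    rng.shuffle(shuffled)
    return shuffled

# Each participant sees the same set of options, in a different order.
print(options_for("participant-001", OPTIONS))
print(options_for("participant-002", OPTIONS))
```

The same pattern works for question order: shuffle the question list per participant (or rotate it in blocks) so no question is always last when fatigue sets in.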

There’s also the question of monadic vs. comparison testing. Monadic testing refers to showing a single product option to a given set of testers or survey-takers so that they can focus on just that one thing. This can reduce the bias caused by comparing one product to another, a common issue with comparison testing. For example, if your testing audience samples two types of soup and one is extremely spicy, it may be hard for them to evaluate the non-spicy soup on its own merits.
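
Operationally, a monadic design just means each participant is assigned exactly one concept, with cell sizes kept balanced. A minimal sketch (the concept names are hypothetical; a real study would also randomize within balanced blocks rather than assign in order):

```python
import itertools

# Hypothetical product concepts to test monadically
CONCEPTS = ["Spicy Soup", "Mild Soup", "Creamy Soup"]

def assign_monadic(participant_ids, concepts):
    """Round-robin each participant to a single concept.

    Every participant evaluates exactly one concept (no side-by-side
    comparisons), and cell sizes stay balanced across concepts.
    """
    cycle = itertools.cycle(concepts)
    return {pid: next(cycle) for pid in participant_ids}

assignments = assign_monadic([f"p{i}" for i in range(6)], CONCEPTS)
# With 6 participants and 3 concepts, each concept gets 2 testers.
```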

Keeping people unaware of your brand being behind the study is a great idea, since you might otherwise run into brand bias or sponsor bias (if you’re a well-known brand, you’ll have both haters and fans, and both will likely be biased). Working with a recruitment service and having your products blinded can help here.

How in-home usage testing helps reduce procedural bias

When we think of research, we often see controlled environments as the gold standard. However, a lot of market research analysis depends on people being in their familiar, organic environments where things aren’t necessarily as clean and orchestrated.

In-home usage testing (IHUT) ensures that people are testing your products in a way that’s authentic to their lifestyle. Think of the difference between trying on shoes at Nordstrom back in the day (you’d walk a few meters in front of a store associate who’s actively trying to sell you the shoes) and actually wearing the shoes during your daily activities (it doesn’t always feel the same).

IHUT also provides a convenient way to conduct long-term studies so that you can capture a range of authentic behavior. You can send out multiple short surveys over time to minimize survey fatigue and get real-time data on consumer perception.

Your best bet for getting (almost) nothing but the truth

Again, it’s about as impossible to completely eliminate bias in product testing research as it is to measure a quantum particle without altering it. But with excellent survey design, a setting that puts testers at ease, and a way to keep participants from knowing who’s behind the study, you can get reasonably unbiased data.

Highlight can help you recruit committed product testers whose survey and task completion rates average more than 90% (well above that of other product testing services) and minimize possible procedural bias in your study through survey best practices and product blinding. You’ll also get your data quickly, so you don’t have to spend much mental energy on product ideas that are duds.

Got a product to test? We’ll help you get the cold, hard truth you’re looking for.
