Compare Product Usability Through Competitive Testing

Written by Vicky Frissen | 10/6/25 2:10 PM

Competitive usability testing is hard to get right. You need to strip out any and all bias and brand-identifying elements, while still keeping essential USPs in. It's the new Mission Impossible, but for product researchers. Which is why a lot of companies skip it altogether, or even worse: do it wrong and use the results anyway. 

The concept of this type of research isn't flawed–the execution is. Small details that seem trivial can completely flip competitive rankings. But when competitive testing is done properly, it can reveal golden market opportunities that can genuinely change your competitive position. 

The difference between actionable insights and expensive bias comes down to understanding how methodology determines data quality.

This article isn't about whether you should do competitive testing. It's about how you can do it right.

 

The eye-opening value of blind competitive testing

Competitive usability testing is only as good as its blinding execution. But try redacting the branding on shampoo bottles without removing the design elements that actually affect how people use them. And good luck blind testing beverages while preserving the authentic drinking experience. Don't even get us started on blind testing cars. 

But despite these (huge!) challenges, you still need unbiased data about where your product stands. That means you need to solve those execution problems that often force competitive testing to just be elaborate brand preference surveys. And the good news is: blind testing doesn't mean completely stripping products of what makes them great. It's often more nuanced than that.

 

What to hide and what to keep in blind testing?

You might wonder what elements you'd need to hide for your own product and what you'd keep visible. The truth is, it depends on both your category and your specific testing objectives. 

For instance, if you're doing competitive testing focused on packaging ergonomics, you'd hide brand logos but preserve the actual bottle shape and grip design. 

But if you're testing product formula performance, you might use identical generic containers and focus entirely on the product experience itself. The examples below show how different categories approach this balance:

| Category | Brand Identity Challenge | How to Anonymize | What to Preserve |
| --- | --- | --- | --- |
| Food & Beverage | Branded packaging, bottle shapes, logos | Remove labels, use identical serving containers | Actual taste, texture, consumption experience |
| Personal Care | Brand tubes, pumps, applicators, logos | Generic containers, remove brand applicators | Scent, texture, skin feel, application experience |
| Household Cleaning | Brand bottles, trigger shapes, color schemes | Identical spray bottles, remove brand colors/logos | Cleaning performance, scent, actual product formula |
| Beauty & Cosmetics | Brand compacts, applicators, packaging colors | Remove branded compacts/brushes, neutral containers | Product color, finish, application performance |
| Pet Products | Brand bags, shapes, character imagery | Standard bowls, remove brand graphics | Actual product (taste, texture, pet response) |

Why test in-home instead of in-house?

Why not just have product experts handle competitive testing in a controlled lab? Wouldn't that make it much easier to control the tests, handle the products, and jump over any hurdles along the way?

Sure. But the results wouldn't be nearly as valuable, for two reasons.

First, "in-house" testing produces completely different results from testing in actual consumer homes. Product experts understand how things are supposed to work, so they automatically work around design problems. Real consumers get annoyed, get it wrong, and go get something else.

When someone uses your product wrong or struggles with what seems obvious, that's not user error. That's market research gold. It tells you exactly where your competitive advantages and weaknesses actually exist.

Second, it's not that experts don't have valuable insights, but competitive usability testing should capture the consumer's viewpoint, not that of someone who's paid to develop products that need to score well. 

Sure, real-world consumers might not articulate their opinions quite as well as product experts, but that is kind of the point of all of this. 

And apart from professional bias, experts approach products with completely different expectations and limitations than your actual customers. 

They're not wrong, they're just not your customer.

 

Setting up testing that actually works

Anonymizing products without destroying their unique selling points requires operational expertise that most teams simply don't have. Which is exactly why outsourcing this type of testing often makes perfect sense. Here are the challenges you'll deal with along the way:

Packaging anonymization that doesn't ruin everything: Remove the obvious brand screaming, but keep the functional elements that affect how people actually use your product. Use consistent labeling across all test products. If your packaging design is the main differentiator, focus testing on what happens after people open it.

Sequence and environment controls: Randomize which product people try first - always testing yours last is like asking "don't you think this one's better?" Test everything under identical conditions. Give people enough time between products so they're not just comparing to whatever they just tried.
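
One common way to randomize order without creating weird order effects is a Latin square, where every product appears in every position equally often across participants. Here's a minimal sketch in Python; the product names, participant count, and helper function are all hypothetical, and a full Williams design would balance carry-over effects even further:

```python
import random

def balanced_orders(products, n_participants):
    """Assign each participant a testing order using a cyclic Latin square,
    so every product appears in every position equally often.
    (Illustrative sketch -- product names below are made up.)"""
    n = len(products)
    # Row i of a cyclic Latin square is the product list rotated by i.
    square = [[products[(i + j) % n] for j in range(n)] for i in range(n)]
    random.shuffle(square)  # randomize which ordering comes first
    # Cycle through the rows so positions stay balanced across participants.
    return [square[p % n] for p in range(n_participants)]

orders = balanced_orders(["Ours", "Competitor X", "Competitor Y"], 6)
for participant, order in enumerate(orders, start=1):
    print(f"Participant {participant}: {' -> '.join(order)}")
```

With six participants and three products, each product lands in each position exactly twice, so no product (including yours) systematically goes last.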

Category-specific considerations matter. Food products need different approaches than skincare. Household cleaners need different controls than beauty products. The most effective product testing research accounts for these differences from the start, not as an afterthought.

Track behavior, not just opinions. What people actually do matters more than what they say they prefer. Usage frequency, which product they reach for when given choices, how long it takes them to complete tasks - these behavioral signals cut through the social desirability bias that makes people give "nice" answers.
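
The say/do gap above is easy to surface once you log both signals per session. A minimal sketch, assuming hypothetical session logs (the field names and numbers are illustrative, not from any real study):

```python
from collections import Counter

# Hypothetical session logs: what each participant SAID they preferred
# vs. which product they actually reached for in a free-choice moment,
# plus how long a standard task took them.
sessions = [
    {"stated": "A", "reached_for": "B", "task_seconds": 41},
    {"stated": "A", "reached_for": "A", "task_seconds": 38},
    {"stated": "B", "reached_for": "B", "task_seconds": 29},
    {"stated": "A", "reached_for": "B", "task_seconds": 44},
]

stated = Counter(s["stated"] for s in sessions)         # what people say
behavior = Counter(s["reached_for"] for s in sessions)  # what people do
print("Stated preference:", dict(stated))
print("Actual choice:", dict(behavior))
# A gap between the two counts is social desirability bias made visible.
```

In this toy data, product A wins the stated-preference vote while product B wins the reach-for test; it's exactly that kind of divergence you want your protocol to capture.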

 

What to do when your results…don't make sense

Look, competitive testing isn't just going to reveal what holy grail you should add to your product to make a bazillion dollars. Real competitive testing produces messy data that doesn't always align with your expectations. The holy grail is in there, but it will require some digging. And it's exactly when results surprise you that valuable insights are often hiding nearby.

Time changes competitive dynamics. The product that wows people initially might become their least favorite after a week of use. The one that seems confusing at first might win their loyalty once they get used to it. Single-session testing misses these crucial patterns.

Individual preferences vs. household reality. The person who loves your cleaning product might not be the one who decides what gets purchased. When testing products that multiple people use or when usage and buying decisions involve different family members, make sure you're testing with the actual decision-makers.

When competitive data contradicts your internal assumptions, resist the urge to dismiss it. Those uncomfortable findings often reveal market opportunities or development blind spots that are worth way more than confirmation of what you already believed.

 

Building this into how you actually work

First, let's stop treating competitive testing like an annual research project. In fast-moving CPG categories, yearly competitor studies are outdated before the ink dries on the reports (or, to keep things #relevant, before the WeTransfer link expires).

Build it into development cycles. When testing reveals a competitor weakness, you can immediately add features that capitalize on that gap instead of waiting months for the next research cycle. Speed matters in competitive categories.

Let data drive what you build next. Product benchmarking insights help teams prioritize features based on actual consumer struggles with competitor products rather than internal guesses about market needs.

Regular competitive testing also prevents you from reinventing wheels. When competitors have already solved problems you're working on, learn from their solutions instead of wasting development time on inferior approaches.

 

Making sure your investment actually pays off

PSA: Bad competitive testing is worse than no competitive testing. When you're guessing, you know you're guessing. When your testing is flawed, you're confident about insights that are actually just participant bias in disguise. Then you've not only wasted money and time on the research, you'll likely waste more of both making product decisions that don't hold up.

Execution expertise determines everything. You need systematic ways to anonymize packaging without losing functional elements. You need protocols for randomizing sequences without creating weird order effects. You need methods for controlling environmental factors while keeping real-world context.

Most teams don't have this specialized operational knowledge because competitive testing isn't their day job. Product benchmarking tools that handle the tricky execution logistics while maintaining proper methodology provide clear value by ensuring you get unbiased data instead of expensive confirmation bias.

Methodology determines whether you get intelligence or illusion. Properly executed competitive testing reveals market opportunities and validates strategic decisions. Poorly executed testing makes you confident about strategies that bomb in the real market.

Your competitive testing is only as good as its execution quality. Make sure your approach actually predicts market performance instead of just producing impressive-looking reports that don't drive meaningful decisions. In crowded CPG categories, understanding how your product truly stacks up against alternatives isn't nice-to-have research–it's essential intelligence that determines whether your strategies succeed or fail.