Concept testing is a classic form of market research. To the uninitiated, it may seem straightforward: place multiple concepts in front of a respondent and ask them to choose their favorite.
This, however, is just one more example of how easy it is to go wrong in market research without the input of professionals and research-grade tools. There’s a reason people go to school to specialize in research and development: research is a science, with best practices, and even right and wrong ways, to conduct your experiments and tests.
To obtain results that will actually help guide your future product to success on shelf, it’s imperative that you use the correct inputs. Here’s how it’s done for concept testing.
Why should I run a concept test?
Over 30,000 products are launched every year. 90% fail. In today’s competitive and challenging environments, it’s more critical than ever to nail winning product portfolios.
That’s where concept testing can help play a key role in mitigating this huge risk. Instead of investing money, time, and materials in prototyping–or, god forbid, going to market with–a product that nobody asked for, brands can stop and get a pulse on what their consumers really want, first, all without developing a single physical product.
Concept testing is also a great way to lay a foundation for future physical product testing and success in market. The basic premise is that injecting your development with research at each phase of the product lifecycle mitigates risk and ensures you’re always using the consumer as your North Star. But the benefits run deeper than that. When combining a concept test with physical product testing:
- Include questions to measure purchase intent with your concept test, and then
- Follow that with in-home usage testing, to see if the real product experience effectively brought your concept to life and delivered against consumer expectations in the organic environments of their own homes.
- Remember to measure purchase intent in the IHUT as well through survey questions like likelihood to recommend or repurchase.
This is the kind of data that can serve as the most powerful indicator of future success on shelf.
Learn more about combining concept and physical product testing.
What’s the difference between a monadic and sequential monadic test?
The basic difference between monadic and sequential monadic test methodology is that in a monadic test, each respondent sees only one stimulus, whereas in a sequential monadic test, each respondent sees all stimuli.
(Hint: A “stimulus” is the concept, such as an image of a package design or fabric print, that you are showing to respondents.)
A monadic test design is recommended for concept testing, including pack testing, and is also sometimes used in pricing research. Brands can gather both qualitative and quantitative data by using a monadic design for consumer research.
Pro-tip: If you have a current product, test an image of your current product (i.e., packaging design) with a separate group to act as the control or benchmark.
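For teams scripting their own research pipelines, the cell-assignment logic behind a monadic design can be sketched in a few lines of Python. This is a hypothetical illustration, not Highlight’s tooling; the respondent IDs and cell names are made up.

```python
import random

def assign_monadic_cells(respondent_ids, cells, seed=42):
    """Assign each respondent to exactly one cell (a concept or the
    control), keeping the cells roughly equal in size -- the monadic
    principle that every respondent evaluates a single stimulus."""
    rng = random.Random(seed)
    ids = list(respondent_ids)
    rng.shuffle(ids)  # randomize so cells aren't biased by recruitment order
    return {rid: cells[i % len(cells)] for i, rid in enumerate(ids)}

# Hypothetical example: 300 respondents split across two concepts plus
# a control cell that sees the current packaging as a benchmark.
cells = ["concept_a", "concept_b", "control_current_pack"]
plan = assign_monadic_cells(range(300), cells)
```

With 300 respondents and three cells, each cell gets exactly 100 respondents, and every respondent sees one and only one stimulus.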
Why is a monadic test design the right choice for concept testing?
1. A monadic survey design most closely resembles the real-world shelf
First and foremost, one guiding principle of all market and consumer research is mimicking real-world conditions as closely as possible, so your results reflect real-world behavior. In a real supermarket, big box store, or convenience store, a consumer would only see and evaluate one version of your package on shelf to make their purchase decision–so make sure your concept test follows that same logic.
2. More in-depth feedback
Is it easier to write three 500-word essays, or one 1,000-word essay? The same logic applies to concept testing. When you ask respondents to dedicate all their mental bandwidth to one concept, you get the in-depth qualitative feedback you need on that one, rather than diluted, shallow feedback on many.
3. Reduced respondent fatigue
This same logic goes hand-in-hand with best practices for survey design. The more stimuli you require respondents to evaluate, the more you wear them out–or even risk survey abandonment–and sequential monadic tests are always longer than monadic tests. Keep your respondents fresh to keep your data authentic and accurate.
4. Cleaner, easier-to-analyze data
In any survey design, you want to make sure you’re comparing apples to apples. That’s much easier to do when the results for each concept are independent. That way you can essentially place them side-by-side to directly compare results and make an easier, faster decision.
5. Avoids interaction bias
When you ask respondents to evaluate multiple concepts, you risk receiving less honest and accurate scores. It’s not because respondents start lying to you–it’s because of biases all of us humans can fall prey to. Respondents may rush through the final concept as survey fatigue sets in, or give feedback that’s colored by the order in which they saw each stimulus: the first one may stick in their mind as a favorite simply because they saw it first. Testing multiple concepts in the same survey with the same respondents introduces all of these potential biases.
4 best practices for setting up a monadic concept test
When setting up a monadic concept test, it’s important to follow these four best practices:
1. Choose consistent audiences
If you choose an audience of Gen Z women from the Northeast to test Concept A, you better choose an audience with the same demographic and behavioral parameters for Concept B. Make sure you’ve isolated the variable and you’re not muddying the data by significantly changing the makeup of each audience testing your concepts.
2. Use the same survey for each concept and audience
Remember, the goal is to be able to cleanly and clearly compare apples to apples, so if you ask one question on one concept to one audience, you better ask the same one to your second audience and concept–that way you can accurately compare performance across concepts, choose a winner, and eliminate a loser.
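Taken together, these first two practices amount to an apples-to-apples invariant you can check mechanically before fielding. Here is a minimal sketch with hypothetical cell definitions–the field names are assumptions for illustration, not any real survey platform’s schema.

```python
def check_cells_match(cells):
    """Verify every monadic cell uses identical audience parameters and
    an identical question list, so results stay directly comparable."""
    reference = cells[0]
    for cell in cells[1:]:
        if cell["audience"] != reference["audience"]:
            raise ValueError("audience parameters differ between cells")
        if cell["questions"] != reference["questions"]:
            raise ValueError("survey questions differ between cells")
    return True

# Hypothetical cells: same Gen Z women audience, same survey, for each concept.
cell_a = {"audience": {"gender": "women", "age": "gen_z", "region": "northeast"},
          "questions": ["appeal", "purchase_intent", "open_feedback"]}
cell_b = {"audience": {"gender": "women", "age": "gen_z", "region": "northeast"},
          "questions": ["appeal", "purchase_intent", "open_feedback"]}
check_cells_match([cell_a, cell_b])
```

If any cell drifts–a different region filter, a dropped question–the check fails loudly before the mismatch can muddy your comparison.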
3. Give the opportunity for open-ended feedback
Qualitative data can be the key to insights you never thought you’d uncover, because we don’t know to specifically ask about what we don’t know. When you craft multiple choice questions, consider adding an open text field to an “Other” option. Include open-ended questions that encourage respondents to expound on their thoughts or mention anything they didn’t get a chance to say elsewhere in the survey.
Another popular option for those who run concept testing with Highlight is to include video responses to get the full context of a shopper’s emotional response to your concept.
4. Note what respondents don’t say
Did you have 17 internal meetings about what shade of purple to use on your packaging, only to have none of the respondents even mention the purple part in their open-ended questions?
It’s always great to add qualitative components to your concept testing–whether that be an open-ended question or a video testimonial–but pay particular attention to what your respondents didn’t comment on. Did they miss your B Corp label completely? Were they silent about your protein content claim? Consider iterative testing to see if modifying the elements they didn’t comment on produces different outcomes.
Turning your monadic concept test into action
A monadic concept test done right paves the way for moving forward with confidence. With results on each concept across consistent audiences and surveys, you can compare outcomes, see where each concept wins and loses, and make an educated decision on the best way to proceed–all without investing a penny in a physical product.
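That side-by-side comparison can be as simple as tallying top-two-box purchase intent per cell. Below is a hedged sketch with made-up ratings–the 5-point scale, cell names, and numbers are illustrative assumptions, not real test data.

```python
def top_two_box(ratings, top=(4, 5)):
    """Share of respondents rating 4 or 5 on a 5-point purchase-intent scale."""
    return sum(r in top for r in ratings) / len(ratings)

# Hypothetical monadic results: one independent list of ratings per cell,
# so each concept can be compared side by side against the control.
results = {
    "concept_a": [5, 4, 3, 5, 4, 2, 5, 4],
    "concept_b": [3, 2, 4, 3, 2, 3, 4, 1],
    "control":   [4, 3, 4, 3, 5, 2, 3, 4],
}

scores = {cell: top_two_box(r) for cell, r in results.items()}
winner = max(scores, key=scores.get)
```

In practice you’d want comparable cell sizes and a significance check before declaring a winner, but the monadic structure–independent cells, identical surveys–is what makes this direct comparison valid in the first place.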