The Problem with MaxDiff
MaxDiff is a powerful method, and it is increasingly popular among market researchers. But it is not always the best choice for measuring the importance of attributes, and here’s why.
Suppose you want to measure the importance of 12 attributes for a new product or service. If you know ahead of time that consumers are going to say that all 12 are extremely important to them, then MaxDiff is an excellent method for differentiating among the attributes so you can focus on the top two or three that matter most.
But what if you don’t know that all 12 attributes are extremely important? Maybe none of them are. Maybe they run the gamut from unimportant to extremely important. The problem with MaxDiff is that it only measures the importance of attributes relative to each other; it won’t tell you whether any attribute is important in an absolute sense. The MaxDiff model assigns ratio-level scores that let you rank and quantify each attribute’s importance vis-à-vis the others, but it does not anchor those scores to any meaningful zero point.
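To see why relative scores can mislead, here is a minimal sketch with hypothetical numbers. MaxDiff utilities are commonly rescaled into shares that sum to 100; because that rescaling discards any constant shift in the raw utilities, a respondent who finds everything important and one who finds nothing important can produce identical reported scores. (The utility values and the `maxdiff_shares` helper are illustrative, not from any specific software package.)

```python
import math

def maxdiff_shares(utilities):
    """Convert raw MaxDiff logit utilities into ratio-scaled shares
    that sum to 100 (a common reporting convention)."""
    exp_u = [math.exp(u) for u in utilities]
    total = sum(exp_u)
    return [100 * e / total for e in exp_u]

# Respondent A: every attribute truly matters to them.
# Respondent B: none of the attributes matter much.
# Their raw utilities differ only by a constant shift...
resp_a = [2.0, 1.0, 0.0]
resp_b = [-3.0, -4.0, -5.0]

# ...so their reported shares come out identical, hiding the
# absolute difference in how much the attributes matter:
print(maxdiff_shares(resp_a))
print(maxdiff_shares(resp_b))
```

Both respondents report roughly 67 / 24 / 9, even though one cares deeply about all three attributes and the other cares about none.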
This week we are designing a study in which we want to differentiate among attributes, but we also want to measure the gap between satisfaction and importance for items that are truly important to our target market. We cannot do that with data from a typical MaxDiff study. So we are using an old-fashioned importance rating scale instead.
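A rating scale makes that gap analysis possible because importance and satisfaction sit on the same anchored scale. Here is a sketch with made-up ratings (the attribute names, values, and cutoff are hypothetical) showing the kind of calculation a MaxDiff study cannot support:

```python
# Mean ratings on a common anchored 1-5 scale:
# attribute: (importance, satisfaction)
attributes = {
    "battery life": (4.6, 3.1),
    "screen size": (2.2, 4.0),
    "durability": (4.4, 4.2),
}

IMPORTANCE_CUTOFF = 4.0  # only act on attributes the market says matter

# Gap = importance minus satisfaction, computed only for attributes
# whose absolute importance clears the cutoff.
gaps = {
    name: importance - satisfaction
    for name, (importance, satisfaction) in attributes.items()
    if importance >= IMPORTANCE_CUTOFF
}

# The largest gap flags the biggest opportunity.
print(sorted(gaps.items(), key=lambda kv: kv[1], reverse=True))
```

With these numbers, "screen size" drops out entirely despite its high satisfaction, because respondents say it simply isn’t important; relative-only MaxDiff scores could not have told us that.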
As always, it is critical to think about the story you want to tell with your research data, and then work backwards to the design and the choice of methods. In many cases MaxDiff is the perfect tool. In other cases it will leave you with data that is difficult to apply to critical questions.
Feel free to give us a call if you need some help deciding among the best methods for your research, whether it be MaxDiff, other conjoint techniques, or something else entirely. We’ll help you focus on the story you need to tell and on the research design you need to tell it.
—Joe Hopper, Ph.D.