Please Don’t Mix Your MaxDiffs
Customizing MaxDiff exercises by piping in text from previous survey answers might seem like a good idea, but analyst beware: if you are estimating individual-level scores using hierarchical Bayes or latent class analysis, you must split the data into subsets before you calculate the scores. Your results will be nonsense otherwise.
Here is the scenario I reviewed on one recent questionnaire. The team wanted to test the strength of 10 new concepts, setting up their MaxDiff exercise as follows:
Q. Which of the following products did you recently buy? (Product A, Product B, Product C, … Product Z: there were quite a few choices in this example, upwards of 26.)
Q. Thinking about the [INSERT PRODUCT PURCHASED FROM PREVIOUS QUESTION] that you just bought, which new concept do you like best, and which one least? (Full set of MaxDiff scenarios with 10 concepts shown multiple times in different combinations.)
True, this seems like a nice way to make the concept test more relevant by linking it back to a specific product. It is also a more efficient way to program and deliver the MaxDiff exercise to respondents. But here is the problem: Even before you calculate your MaxDiff scores, you must split your data into 26 subsets.
Why? Because most MaxDiff analysis programs use statistical techniques that “borrow” data from other respondents. With MaxDiff, data for each individual respondent is sparse, so techniques like hierarchical Bayes and latent class analysis do some fancy work of filling in the blanks based on the patterns of responses from others. If those others are evaluating the concepts with regard to different products, that “borrowing” makes no sense.
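To make the split-first rule concrete, here is a minimal Python sketch with made-up data. For simplicity it uses plain best-minus-worst counts as the score; in a real study you would run hierarchical Bayes or latent class estimation separately within each subset, but the structure is the same: group respondents by the product that was piped in, then score only within each group.

```python
import pandas as pd

# Toy MaxDiff responses: one row per choice task, tagged with the product
# that was piped into the exercise for that respondent. (Illustrative only.)
responses = pd.DataFrame({
    "respondent": [1, 1, 2, 2, 3, 3, 4, 4],
    "product":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "best":       ["c1", "c2", "c1", "c3", "c2", "c2", "c3", "c1"],
    "worst":      ["c3", "c3", "c2", "c2", "c1", "c3", "c1", "c2"],
})

def best_minus_worst(subset):
    """Count-based MaxDiff score: times chosen best minus times chosen worst."""
    best = subset["best"].value_counts()
    worst = subset["worst"].value_counts()
    return best.sub(worst, fill_value=0).sort_values(ascending=False)

# Split FIRST, then score: each product's respondents are analyzed
# separately, so nothing is "borrowed" across different piped-in products.
scores_by_product = {
    product: best_minus_worst(subset)
    for product, subset in responses.groupby("product")
}

for product, scores in scores_by_product.items():
    print(f"Product {product}:")
    print(scores)
```

With 26 products you would end up with 26 such subsets, each estimated on its own, which is exactly why the sample size per subset matters so much in this design.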
Looking at this questionnaire, I imagined what was likely to unfold. Somebody would feed the data into their number-crunching machine. It would borrow data from all the other respondents and calculate individual MaxDiff scores. They would build a table showing differences in scores by the product that was piped in, and voilà: nary a difference save the one in 20 that happens by chance!
MaxDiff is a popular (and powerful) technique because it is easy to administer and it helps differentiate long lists in ways that other survey techniques do not. But as always, it is worth knowing at least something about the mathematics behind the calculations to save yourself from bad mistakes like mixing your MaxDiffs.
—Joe Hopper, Ph.D.