Why We Don’t Use Qualtrics
It’s the data. We want it all. And to be fair, this article is less about Qualtrics and more about what makes for truly good and rigorous research. It just turns out that Qualtrics is the one tool that made that kind of research more difficult and more expensive than any other we tested in our recent search for the latest and greatest in survey tools. As such, it is the one tool that provoked a great deal of internal discussion about what counts as good work in the market research industry.
At Versta Research we are obsessed with having all our data. We monitor, save, and evaluate every person (or, yes, bot) who tries to take a survey. We want to know who completes, who doesn’t qualify, who hits quotas, and who quits. Having a complete profile of data from all respondents and would-be respondents is essential. It helps us assess things like non-response bias, panel quality, errors in programming, basic parameters like population incidence, and the need for weighting. At times we apply weighting to the full universe of completed interviews plus non-qualified terminates because there are no good data available to weight on a study’s unique qualifying criteria.
It all sounds complicated, but it’s not. The crucial point is that we can’t truly understand who is IN our survey (and who we are reporting on in the findings) without also knowing who is OUT. And we can’t know who is out unless we have that data to analyze.
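To make the idea concrete, here is a minimal sketch in Python of how a full export of respondent dispositions might be tabulated to estimate incidence. The file name, the "disposition" column, and its category labels are assumptions made for this illustration; they do not represent Qualtrics’s (or any other tool’s) actual export format.

```python
# Illustrative sketch only: assumes a hypothetical CSV export with one row per
# person who started the survey and a "disposition" column taking values such
# as "complete", "screened_out", "over_quota", and "partial". These names are
# assumptions for this example, not any survey tool's actual schema.
import csv
from collections import Counter

def summarize_dispositions(path: str) -> None:
    with open(path, newline="") as f:
        counts = Counter(row["disposition"] for row in csv.DictReader(f))

    total_started = sum(counts.values())
    screened_out = counts.get("screened_out", 0)

    # Incidence: the share of screened respondents who actually qualified,
    # whether or not they went on to finish the survey.
    qualified = (counts.get("complete", 0) + counts.get("partial", 0)
                 + counts.get("over_quota", 0))
    denominator = qualified + screened_out
    incidence = qualified / denominator if denominator else float("nan")

    print(f"Started: {total_started}")
    for disposition, n in counts.most_common():
        print(f"  {disposition:<14}{n}")
    print(f"Estimated incidence: {incidence:.1%}")

if __name__ == "__main__":
    summarize_dispositions("full_export.csv")
```

Nothing here is sophisticated. The point is simply that even a basic summary like this cannot be run at all without rows for the people who screened out, hit quotas, or quit.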
What does this have to do with Qualtrics in particular? For all its popularity, ease of use, and truly top-notch functionality, it is the only survey tool among the half dozen we tested that would not give us access to all of our data without tagging partial completes, over-quotas, and screen-outs as completed interviews. It’s a serious limitation, and it requires an awkward and expensive workaround to overcome.
It was surprising for us to realize that many in our industry never even look at the data we insist upon having, much less consider it an essential piece of their work. It was even more surprising that one of the hottest tools in the industry would bake that approach into the tool itself.
Our search reaffirmed the crucial importance of thinking broadly about research when evaluating and selecting tools. And it offered a useful lesson about why researchers should never ignore or toss out the “bad” or “useless” data. We, and you, should want it all.