Testing the Impact of Low Response Rates
Survey response rates, still a common measure of data quality and survey success in our industry, continue to drop. Most online surveys of customers (that is, one’s own customers) will garner response rates of two to three percent, and that is only if you are lucky and do everything right. Rigorously executed phone surveys achieve miserably low (and expensive) response rates of roughly one percent.
Higher response rates are possible with extreme diligence, time, incentive payments, and expert strategy. But time and money are not always practical solutions. So the question we usually face is not how to fix low response rates, but how to assess their impact on our data.
Recent data from a team of public health researchers at Boston University has added to a growing body of literature showing that higher response rates may not matter. Their research is a particularly welcome addition because it was unique in two important ways. First, their data came from a B2B survey of very high-level (chief officer) executives, who are notoriously difficult to reach for surveys. Second, they achieved a remarkably high 95% response rate, which gave them the perfect vantage point for some clever analysis.
Here is what they did: They compared the final outcome of their extremely high-response-rate survey to earlier snapshots of their data when the response rate was still low. What if they had stopped early? What if they had settled for less than perfect? How would the results have differed from what they found with a near-perfect 95%?
The researchers looked at respondent demographics, characteristics of the firms represented, responses to factual questions, attitudinal measures, and so on. They found:
Across waves, there were no significant differences between responses to two factual report questions or the single- or multi-item scale measures of attitudes. According to a “what-if” analysis of cumulative results by wave, the same conclusions would have been reached if data collection had been halted at earlier points in time.
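For readers who want to try this kind of “what-if” check on their own data, here is a minimal sketch of the logic in Python. It is not the authors’ actual code, and the wave tags, sample sizes, and satisfaction scores are all hypothetical stand-ins; the point is simply to recompute the headline estimate as if data collection had been halted after each wave, then compare it to the final number.

```python
# A minimal sketch (not the researchers' actual code) of a "what-if"
# analysis of cumulative results by wave: assume each respondent record
# carries the wave in which it arrived, then recompute the headline
# estimate as if data collection had stopped after that wave.

import random
import statistics

random.seed(42)

# Hypothetical data: (wave, satisfaction_score) pairs. In a real study
# these would come from your survey platform's export, not a simulation.
records = [(wave, random.gauss(3.8, 0.9))
           for wave in range(1, 6)
           for _ in range(40)]

final_mean = statistics.mean(score for _, score in records)

print(f"{'Stopped after wave':>18} | {'n':>4} | "
      f"{'cumulative mean':>15} | {'delta vs. final':>15}")
for cutoff in range(1, 6):
    # Keep only respondents who had replied by this cutoff.
    subset = [score for wave, score in records if wave <= cutoff]
    cum_mean = statistics.mean(subset)
    print(f"{cutoff:>18} | {len(subset):>4} | "
          f"{cum_mean:>15.3f} | {cum_mean - final_mean:>+15.3f}")
```

If the deltas at early cutoffs stay within your margin of error, a lower response rate would likely have led you to the same conclusions, which is exactly the pattern the researchers report.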
Oddly enough, there are still knowledge gatekeepers out there who refuse to publish results from surveys with low response rates. The Journal of the American Medical Association (JAMA), for example, requires documented response rates of “generally at least 60%,” which betrays a surprising ignorance of the scientific evidence.
When it comes to your surveys, we urge you to focus on the scientific evidence. As the authors cited above conclude from their work, “Results from ‘low’ response-rate surveys should be considered on their merits, as they may accurately represent attitudes of the population. Therefore, low response rates should not be cited as reasons to dismiss results as uninformative.”
—Joe Hopper, Ph.D.