Don’t Waste Your Money Boosting Response Rates
Careful researchers work hard to make sure their surveys reach broad, representative samples of their target populations. That usually means several rounds of outreach: repeated phone calls, e-mails, or letters of request. Because standard models of statistical inference assume that everyone in a random sample actually responds, the goal is always to push response rates as high as possible.
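A standard identity from the survey-methodology literature (a textbook result, not something from the study discussed below) shows why response rates matter in principle. Writing $\bar{y}_n$ for the mean of a variable over the full sample of $n$ people, $\bar{y}_r$ for the mean among the $r$ respondents, and $\bar{y}_m$ for the mean among the $m$ nonrespondents:

$$\operatorname{bias}(\bar{y}_r) \;=\; \bar{y}_r - \bar{y}_n \;=\; \frac{m}{n}\,\bigl(\bar{y}_r - \bar{y}_m\bigr)$$

The bias in a respondents-only estimate is the nonresponse rate multiplied by the gap between respondents and nonrespondents. A low response rate alone does not doom an estimate; it hurts only when the people you fail to reach actually differ on the thing you are measuring.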
But does it really matter? Will our statistical estimates suffer if we simply do our best up front and then call it quits after a single attempt to reach people? New research suggests that in many cases they will not.
The research was conducted by a British team of academics that included statisticians, sociologists, and demographers. They analyzed data from six rigorously designed and executed UK household surveys, examining what happened to the point estimates and variability of 559 variables across all six surveys as the number of contact attempts increased. Here is what they found:
Most variables are surprisingly close to the final achieved sample distribution after only one or two call attempts and before any post-stratification weighting has been applied; the mean expected difference from the final sample proportion across all 559 variables after one call is 1.6 percent, dropping to 0.7 percent after three calls and to 0.4 percent after five calls. These estimates vary only marginally across the six surveys and the different types of questions examined. Our findings add weight to the body of evidence that questions the strength of the relationship between response rate and nonresponse bias. In practical terms, our results suggest that making large numbers of calls at sampled addresses and converting “soft” refusals into interviews are not cost-effective means of minimizing survey error.
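To get a feel for why the estimates settle down so quickly, here is a minimal simulation sketch (my own illustration, not the authors' code or data). It assumes each sampled person needs some number of contact attempts before responding, gives a binary outcome that is mildly correlated with that reluctance, and tracks how far the respondents-only estimate after k attempts sits from the estimate in the final achieved sample. All parameter values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 10_000          # sampled households
max_attempts = 10   # fieldwork stops here; defines the "final" achieved sample

# Reluctance: number of contact attempts each person needs before responding
# (geometric), so some people are never reached within the fieldwork period.
attempts_needed = rng.geometric(p=0.12, size=n)

# Binary outcome, mildly correlated with reluctance: reluctant people are
# assumed to be slightly less likely to answer "yes" (illustrative choice).
p_yes = np.clip(0.55 - 0.02 * (attempts_needed - 1), 0.05, 0.95)
y = rng.random(n) < p_yes

final_resp = attempts_needed <= max_attempts
final_prop = y[final_resp].mean()

for k in (1, 2, 3, 5, max_attempts):
    resp_k = attempts_needed <= k
    prop_k = y[resp_k].mean()
    print(f"after {k:2d} attempts: response rate {resp_k.mean():5.0%}, "
          f"estimate {prop_k:.3f}, gap vs. final sample {abs(prop_k - final_prop):.3f}")
```

Because late responders differ only mildly from early ones in this setup, each additional wave of attempts moves the estimate less than the one before, the same diminishing-returns pattern the researchers report. Make the assumed correlation between reluctance and the outcome stronger and the gaps grow; make it zero and they vanish entirely.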
What is also fascinating in their data is that the efforts to boost response rates worked remarkably well, even in today's environment of very low response rates: the extra contact attempts easily doubled or tripled the initially low rates, and the six surveys ultimately achieved response rates ranging from 54% to 76%.
But was it worth it? Surely some surveys do need the extra accuracy of shrinking the expected error from 1.6 percentage points after one attempt to 0.4 points after five. It is worth it for government surveys, because so many municipalities, agencies, people, and companies rely on the precision of those numbers. But for lots of other surveys, that marginal gain of roughly a percentage point is probably not worth the cost.
If you design your research and sampling plan well, and devise careful ways of ensuring a representative sample (even if not a strictly random one), then a good deal of evidence suggests you can remain confident in your results even with miserably low response rates of one or two percent.