Last quarter we provided some quick facts about response rates and survey accuracy. Here is more food for thought on the issue of response rates and whether response rates really matter.
Plus we offer a few links to new tools, survey satire, and thought-provoking articles about how research is playing out in the world of business.
Take a look. And if you want to discuss whether response rates matter on your project, please give us a call!
The Versta Team
Do Response Rates Really Matter?
When we survey an organization’s customers, members, or prospects, the question always comes up: “What was the response rate? Is that high enough? What is a typical response rate?” Within the polling industry, there has been considerable worry over the last two decades about declining response rates for surveys of all types, including surveys backed by million-dollar budgets for government agencies.
Concerns about response rates have solid theoretical grounding. You can’t make inferences about a larger group if people in the group won’t talk to you. Most of our statistics and margins of error assume that we have information from every member of a random sample.
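To make that assumption concrete, here is a minimal sketch of the standard margin-of-error calculation for a proportion from a simple random sample. The formula is the textbook one; the numbers are hypothetical, and note that it presumes every sampled member responded — the very assumption that low response rates call into question.

```python
import math

def margin_of_error(p, n, z=1.96):
    """Margin of error for a proportion p estimated from a simple
    random sample of size n, at roughly 95% confidence (z = 1.96).
    Assumes a complete response from every sampled member."""
    return z * math.sqrt(p * (1 - p) / n)

# A 1,000-person sample with a 50/50 split gives about +/-3 points.
moe = margin_of_error(0.5, 1000)
print(f"{moe * 100:.1f} points")  # ~3.1 points
```

Nothing in this formula reacts to the response rate itself, which is why a low rate does not automatically widen the error — the risk it creates is bias, not imprecision.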
But the reality for most research is that response rates are not high. And yet our findings are still accurate, and there is evidence that lower response rates sometimes yield more accurate findings. The reason? What matters is not how many people respond to a survey, but how representative they are of the groups to which they belong.
Low Response Rates
The 2000 U.S. Census achieved a response rate of 67%. That’s with ten years of planning, billions of dollars supporting a massive organization, and aggressive efforts to re-contact and encourage every non-respondent. The 2009 National Survey of Substance Abuse Treatment Services, a government survey of organizations rather than individuals, had a response rate of 45%. The Pew Research Center has reported that response rates for its public opinion polling dropped from 36% in 1997 to 27% six years later. A June 2009 discussion among members of AAPOR (the American Association for Public Opinion Research) suggests that most surveys of an organization’s own members will get response rates around 20%. This seems pretty lousy. But does it matter?
Surveys Are Still Accurate
A few years ago the Pew Research Center released findings from a study showing that despite lower response rates, surveys conducted by reputable research organizations are still accurate. They conducted an experiment and showed that a rigorous (and costly) effort to double response rates made no difference in the statistical outcome. A study conducted by Penny Visser and colleagues at the Ohio State University in the late 1990s found that a mail survey with a lower response rate was more accurate than a telephone survey with a higher response rate. Numerous other studies confirm that higher response rates do not necessarily improve accuracy.
Representation Is What Matters
The reason is that lower response rates do not necessarily mean that a sample is skewed. And it turns out that for most studies, the people who will not participate are no different from the people willing to give us their time. That is, they usually have the same attitudes and behaviors, so it does not really matter that they refuse to participate. The most critical thing about sampling, no matter the sample size or the response rate, is that the part must represent the whole. If it does, your conclusions will likely be accurate.
Of course this does not mean that anything goes, or that any hodgepodge of convenient respondents will suffice. It is critical to profile survey respondents on key dimensions such as demographics to ensure that they look like the full population of interest. More often than not one needs to (1) find more of the respondents who are missing, and/or (2) statistically weight the data to adjust for bias.
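The second of those steps can be sketched in a few lines. This is a minimal illustration of post-stratification weighting, where each demographic cell gets a weight equal to its population share divided by its sample share; the age groups and shares below are hypothetical, invented purely for illustration.

```python
def poststratification_weights(sample_shares, population_shares):
    """Weight each demographic cell so the weighted sample matches
    the population profile: weight = population share / sample share."""
    return {cell: population_shares[cell] / sample_shares[cell]
            for cell in population_shares}

# Suppose 18-34s are 30% of the population but only 15% of respondents:
sample = {"18-34": 0.15, "35-54": 0.40, "55+": 0.45}
population = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}

weights = poststratification_weights(sample, population)
# Each 18-34 respondent counts double; over-represented groups count less.
print(weights["18-34"])  # 2.0
```

After weighting, the sample’s demographic profile matches the population’s — which is the “part must represent the whole” principle put into practice, though weighting can only correct for dimensions you can actually measure.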
The bottom line is that response rates matter to the extent that they can indicate response bias, but that even a low response rate can yield robust results. And an unusually low response rate, measured against what is typical for the times, can signal something more seriously wrong with the survey design or implementation.
The lesson of the past two decades of declining response rates is that good research always aims for the highest standards and employs rigorous efforts to ensure maximum participation. But with careful execution and analysis, research can be amazingly resilient to the practical difficulties it encounters.
New Tools, Surveys on Surveys, and More
Results are trickling in from a rigorously designed research study, sponsored by the Advertising Research Foundation, that examined over 100,000 completed survey responses from 17 panels.
This NPR article is an amusing satire on “the outrageous proliferation of surveys in contemporary America.”
If you find yourself spending a lot of time and money digging up answers to questions from data that already exists, this new search tool may help:
“It computes the answers to queries using enormous collections of data . . . It can quickly spit out facts like the average body mass index of a 40-year-old male, whether the Eiffel Tower is taller than Seattle’s Space Needle, and whether it is high tide in Miami right now.”
An article in The New York Times poses the question:
“CAN a company blunt its innovation edge if it listens to its customers too closely? Can its products become dull if they are tailored to match exactly what users say they want? These questions surfaced recently when Douglas Bowman, a top visual designer, left Google. . . . . None of this means that input from users is unimportant. Indeed . . . designers must find a multitude of ways to understand users’ needs at a deeper level.”
The time spent on social network sites now exceeds the time spent on email – a shift in how people use the Internet.
MORE VERSTA NEWSLETTERS