Polling Group Gives Nod to Online Surveys
The purists in the polling industry who have always insisted that only probability sampling is valid may finally be accepting that their methods are dead. This is one happy conclusion I draw from the debate over AAPOR’s recent task force report on non-probability sampling, which is nicely summarized (and debated) in the lead article of this month’s Journal of Survey Statistics and Methodology. If you have any interest at all in whether online panels work, I strongly urge you to read this summary and debate.
The task force (tentatively) concludes that “a non-probability sample can be used to make inferences about a target population, although . . . there are associated risks.” And while the conclusion is tentative and cautious, it is a huge step for a professional association that has steadfastly opposed the use of non-probability samples for many years. AAPOR (the American Association for Public Opinion Research) has always argued that although carefully designed non-probability research seems to work, there is no unifying theoretical basis for it. As recently as 2010, the association issued a report saying that researchers who want to generalize results to larger populations should not use online panels.
Now, at last, we have a more considered assessment of the possibilities and challenges. Here are the task force’s eleven conclusions, quoted from the report:
- Unlike probability sampling, there is no single framework that adequately encompasses all of non-probability sampling.
- Researchers and other data users may find it useful to think of the different non-probability sample approaches as falling on a continuum of expected accuracy of the estimates.
- Transparency is essential.
- Making inferences for any probability or non-probability survey requires some reliance on modeling assumptions.
- The most promising non-probability methods for surveys are those that are based on models that attempt to deal with challenges to inference in both the sampling and estimation stages.
- One of the reasons model-based methods are not used more frequently in surveys may be that developing the appropriate models and testing their assumptions is difficult and time-consuming, requiring significant statistical expertise.
- Fit for purpose is an important concept for judging survey data quality, but its application to survey design requires further elaboration.
- Sampling methods used with opt-in panels have evolved significantly over time, and, as a result, research aimed at evaluating the validity of survey estimates from these sample sources should focus on sampling methods rather than the panels themselves.
- If non-probability samples are to gain wider acceptance among survey researchers there must be a more coherent framework and accompanying set of measures for evaluating their quality.
- Although non-probability samples often have performed well in electoral polling, the evidence of their accuracy is less clear in other domains and in more complex surveys that measure many different phenomena.
- Non-probability samples may be appropriate for making statistical inferences, but the validity of the inferences rests on the appropriateness of the assumptions underlying the model and on how deviations from those assumptions affect the specific estimates.
The commentary on the published summary, provided by five additional industry experts, is particularly revealing: four of the five seem to be pushing the task force and AAPOR to go even further. I agree. The pretense that today’s public surveys rely on true probability samples is no longer tenable, and the task force’s report leaves little doubt that we, as an industry, need to figure out how to make online panels and other non-probability samples work. To be sure, those challenges are huge, but AAPOR has finally acknowledged both the need and the promise of doing so.