Big Fat Margins of Error
Whenever you see those “margin of error” statements on political polls and other published surveys (even in market research industry publications, unfortunately!), take them with a grain of salt. They reflect only one type of potential error in surveys: namely, sampling error associated with sample size. Chances are that the “real” margin of error (if such a thing could ever be calculated) is much larger.
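To see how little that published number actually captures, consider where it comes from: a textbook formula that depends on nothing but the sample size, plus an assumed proportion and confidence level. Here is a minimal sketch in Python, assuming the conventional 95% confidence level, a proportion near 50%, and simple random sampling:

```python
import math

def sampling_margin_of_error(n, p=0.5, z=1.96):
    """The +/- figure typically quoted for a simple random sample of size n,
    assuming a proportion near p and a 95% confidence level (z = 1.96).
    It reflects sampling error only."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (400, 1000, 2000):
    print(f"n = {n:>4}: +/- {100 * sampling_margin_of_error(n):.1f} points")
```

Run it and you get the familiar figures: roughly ±4.9 points for 400 respondents, ±3.1 for 1,000, ±2.2 for 2,000. Those few points are real, but they are only the beginning of the story.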
True, readers need to know a survey’s sample size, and they need to understand that, all else being equal, smaller samples are more prone to error than larger ones. But margin of error statements are usually misleading because they take no account of the other sources of error.
What are the multiple sources of survey error that we ought to be paying attention to? Here are the sources outlined by Robert Groves (former director of the U.S. Census Bureau, and currently a professor of math, statistics, and sociology at Georgetown University) in his book, Survey Methodology:
- Construct invalidity happens when a survey question fails to measure what we presume it to be measuring. For example, asking people how well they feel is not a valid measure of overall health, because they may have a disease that affects their health in ways they do not yet feel.
- Measurement errors occur when the survey scale, tool, or instrument fails to capture the true value of interest. For example, sometimes scales are too blunt, too refined, overly complicated, or labeled inappropriately.
- Processing errors take place at the back end, during the cleaning, coding, and tabulation process after data are collected. For example, failing to assign missing values and inadvertently flipping scale values are all-too-common processing errors.
- Coverage error is a mismatch between the sampling frame from which we select respondents and the true population we hope to measure. If we are surveying physicians, for example, but the list from which we draw our sample is missing some physicians, then we have coverage error.
- Sampling error stems from the potential mismatch between a sample selected for inclusion in a survey and the population it is supposed to represent. Whether we employ random sampling or some other type, there is always a chance that the selection of respondents is skewed.
- Nonresponse error occurs when the people selected and invited to take a survey fail to participate or selectively skip questions we ask of them. Response rates are abysmally low nowadays, increasing the likelihood of serious nonresponse errors (see the simulation sketch after this list).
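To make the nonresponse point concrete, here is a small, purely hypothetical simulation (ours, not Groves’s, with made-up numbers): if the people who hold a particular opinion are less likely to respond, the resulting bias can dwarf the few points of sampling error that the published “margin of error” acknowledges.

```python
import random

random.seed(0)

# Hypothetical setup: 40% of the population would answer "yes",
# but "yes" holders respond at half the rate of "no" holders.
TRUE_YES_RATE = 0.40
RESPONSE_RATE = {"yes": 0.05, "no": 0.10}  # abysmal response rates
N_INVITED = 10_000  # invitations sent per simulated survey

def simulate_survey():
    """Return the observed 'yes' share among those who actually respond."""
    yes_responses = no_responses = 0
    for _ in range(N_INVITED):
        opinion = "yes" if random.random() < TRUE_YES_RATE else "no"
        if random.random() < RESPONSE_RATE[opinion]:
            if opinion == "yes":
                yes_responses += 1
            else:
                no_responses += 1
    return yes_responses / (yes_responses + no_responses)

estimates = [simulate_survey() for _ in range(200)]
avg = sum(estimates) / len(estimates)
print(f"true 'yes' rate:             {TRUE_YES_RATE:.0%}")
print(f"average observed 'yes' rate: {avg:.0%}")
# The gap (roughly 15 points) dwarfs the +/- 3 to 4 point sampling
# margin of error that a survey of ~800 respondents would report.
```

In this made-up scenario each simulated survey ends up with about 800 respondents, so it would be reported with a margin of error of roughly ±3.5 points, yet its average estimate misses the true value by about 15 points. None of that bias shows up in the margin of error statement.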
At Versta Research we caution against stating a sampling margin of error if possible, because few people interpret it as cautiously as they should. We believe a better approach is to fully disclose your sample size, along with all the methods and processes used in designing and executing the research, and then to offer appropriate caveats and cautions about your findings given all the potential sources of survey error beyond sampling alone.