Your Margin of Error Is Probably Wrong
Even if you are not involved in political polling, it is worth paying attention to the methods and best practices of political pollsters. One reason is that few other areas of research offer a way to completely validate one’s methods. Pollsters use sampling and survey methods to predict the behavior of a much larger population. Then, in a single day, that population actually behaves: we get a near-perfect count of exactly how it behaved, and we know whether the methods worked.
Several industry colleagues have recently been debating the merits of calculating and reporting “margins of error” in political polling, and they pointed us to some surprising data from The New York Times:
[The New York Times has compiled] a database consisting of thousands of primary and caucus polls dating back to the 1970s. Each poll contains numbers for several candidates, so there are a total of about 17,000 observations. How often does a candidate’s actual vote total fall within the theoretical margin of error? The answer is, not very often. In theory, a candidate’s actual vote total should fall outside the margin of error only 5 percent of the time [given that political polls report margins of error using a 95% confidence interval]. In reality, the candidate’s vote total was outside the margin of error 65 percent of the time! Part of this is because the database includes some polls conducted months before the actual voting took place. But even if you restrict the analysis to polls conducted within the final week of the campaign, about 40 percent of the vote totals fell outside the margin of error — eight times more often than is supposed to happen if you could take the margin of error at face value.
This does not mean that the polls were wrong, predicting wins for losing candidates and vice versa. Rather, it means that the estimates were not as precise as the stated margins of error would have a reader believe.
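To see where that 5 percent benchmark comes from, here is a minimal simulation sketch in Python. Every number in it is assumed for illustration (the true vote share, the sample size per poll, and the number of polls are all hypothetical): when samples really are simple random draws from the population, only about 5 percent of polls miss their own 95% margin of error.

```python
# Minimal sketch (assumed numbers): under ideal simple random sampling,
# roughly 5% of polls should miss their own 95% margin of error.
import numpy as np

rng = np.random.default_rng(0)
true_share = 0.52      # hypothetical true vote share for one candidate
n = 800                # hypothetical sample size per poll
n_polls = 10_000       # number of simulated polls

# Each poll: draw n respondents, estimate the share, compute the textbook MOE.
estimates = rng.binomial(n, true_share, size=n_polls) / n
moe = 1.96 * np.sqrt(estimates * (1 - estimates) / n)

outside = np.abs(estimates - true_share) > moe
print(f"Polls outside their 95% margin of error: {outside.mean():.1%}")  # ~5%
```

That is the ideal the reported margins of error promise; the 40 to 65 percent miss rates above are the gap between that ideal and actual polling practice.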
The problem is that “margins of error” are based on statistical theory that almost never lines up with the messy reality of our world. Margins of error rest on a number of assumptions that are rarely true in practice (a short sketch after this list shows how much they can matter), including:
- Respondents are selected through simple random sampling
- All those sampled participate in the survey
- Sampling error is the only source of survey error
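Here is a minimal sketch, with assumed numbers, of how much the first two assumptions can matter. The textbook formula treats the sample as a simple random sample; weighting, clustered sampling, and differential nonresponse add variance the formula ignores. Survey statisticians often summarize that extra variance as a design effect, and the 1.8 used below is an assumed, illustrative value, not a measured one.

```python
# Minimal sketch (assumed numbers): the textbook margin of error presumes
# simple random sampling; weighting, clustering, and nonresponse inflate it.
import math

p = 0.50    # estimated proportion (worst case for variance)
n = 1000    # nominal sample size
z = 1.96    # 95% confidence multiplier

textbook_moe = z * math.sqrt(p * (1 - p) / n)

# A design effect (deff) > 1 summarizes the extra variance from weighting or
# clustered sampling; the 1.8 here is an assumed, illustrative value.
deff = 1.8
effective_moe = textbook_moe * math.sqrt(deff)

print(f"Textbook MOE:  ±{textbook_moe:.1%}")   # about ±3.1%
print(f"Effective MOE: ±{effective_moe:.1%}")  # about ±4.2%
```

And no design effect can capture nonresponse bias or other nonsampling errors, which violate the third assumption entirely.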
Indeed, Versta Research usually recommends that clients who publish survey research not report margins of error, because both the concept and the calculations are seriously misleading and flawed.
Calculating margins of error and looking at statistical significance is still worthwhile, not because they give accurate or “scientific” predictions, but because they provide useful summary measures of how much variability there is in the data given the sample size and other critical factors that can affect one’s estimates. At Versta Research, this helps us better interpret data and better assess what matters. That, in turn, allows us to tell a story with the data that does not overreach or misrepresent what is going on.
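As one illustration of using variability as a summary measure rather than a prediction, here is a minimal sketch with hypothetical subgroup results. It asks a simple question: is an observed difference between two subgroups larger than the sampling noise around it?

```python
# Minimal sketch (hypothetical subgroup results): is an observed difference
# between two subgroups bigger than the sampling noise around it?
import math

p1, n1 = 0.46, 400   # subgroup A: 46% agree, n = 400 (assumed)
p2, n2 = 0.41, 350   # subgroup B: 41% agree, n = 350 (assumed)

# Standard error of the difference between two independent proportions
se_diff = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
diff = p1 - p2

print(f"Observed difference:       {diff:.1%}")              # 5.0%
print(f"Sampling noise (±1.96 SE): ±{1.96 * se_diff:.1%}")   # about ±7.1%
# Here the difference is smaller than its own sampling noise, so the data
# cannot cleanly distinguish the two groups.
```

A check like this is not a forecast of anything; it simply keeps the story we tell from leaning on differences the data cannot support.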
—Joe Hopper, Ph.D.