13 Threats to Survey Accuracy
Way back in 1944, W. Edwards Deming published an article in the American Sociological Review that could be required reading for anybody who does research today. He outlined all the potential (and, unfortunately, common) sources of error in survey research.
Apparently our contemporary obsession with sample sizes, random samples, response rates, and margins of error is not so new. In outlining all sources of error, Deming wanted to emphasize that “sampling errors, even for small samples, are often the least of the errors present.”
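Deming's claim that sampling error is "often the least of the errors present" is easy to see with the textbook formula: the 95% margin of error for a proportion shrinks as 1/√n, so even modest samples pin it down to a few points, while the other twelve error sources have no such ceiling. A minimal Python sketch (the function name `margin_of_error` is ours; 1.96 is the standard normal critical value for 95% confidence, and p = 0.5 is the worst case):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for an estimated proportion from a simple random sample."""
    return z * math.sqrt(p * (1 - p) / n)

# Margin of error (in percentage points) at common sample sizes
for n in (100, 400, 1000, 2000):
    print(f"n = {n:4d}: +/- {margin_of_error(n) * 100:.1f} points")
# n =  100: +/- 9.8 points
# n =  400: +/- 4.9 points
# n = 1000: +/- 3.1 points
# n = 2000: +/- 2.2 points
```

Note how quadrupling the sample only halves the margin of error: past a certain point, bigger samples buy little, and the non-sampling errors Deming lists below dominate.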
So despite some old-fashioned language and defunct technologies (Versta Research has never fielded a survey via telegraph!), we feel it is worth reproducing here what Deming called the thirteen factors “affecting the ultimate usefulness of a survey,” as all of them apply as much today as they did 68 years ago:
1. Variability in response
2. Differences between different kinds and degrees of canvass
(a) Mail, telephone, telegraph, direct interview
(b) Intensive vs. extensive interviews
(c) Long vs. short schedules
(d) Check block plan vs. response
(e) Correspondence panel and key reporters
3. Bias and variation arising from the interviewer
4. Bias of the auspices
5. Imperfections in the design of the questionnaire and tabulation plans
(a) Lack of clarity in definitions; ambiguity; varying meanings of same word to different groups of people; eliciting an answer liable to misinterpretation
(b) Omitting questions that would be illuminating to the interpretation of other questions
(c) Emotionally toned words; leading questions; limiting response to a pattern
(d) Failing to perceive what tabulations would be most significant
(e) Encouraging nonresponse through formidable appearance
6. Changes that take place in the universe before tabulations are available
7. Bias arising from nonresponse (including omissions)
8. Bias arising from late reports
9. Bias arising from an unrepresentative selection of date for the survey, or of the period covered
10. Bias arising from an unrepresentative selection of respondents
11. Sampling errors and biases
12. Processing errors (coding, editing, calculating, tabulating, tallying, posting and consolidating)
13. Errors in interpretation
(a) Bias arising from bad curve fitting; wrong weighting; incorrect adjusting
(b) Misunderstanding the questionnaire; failure to take account of the respondents’ difficulties (often through inadequate presentation of data); misunderstanding the method of collection and the nature of the data
(c) Personal bias in interpretation
Technology, by the way, has not eliminated (or even ameliorated) any of these sources of error. At Versta Research, we think about and struggle with these sources of error in nearly every project we do. And there are no magic solutions (no matter what the purveyors of the latest research technologies claim!), only an ongoing commitment to bringing the highest levels of expertise and thoughtfulness to ensure that we, and you, get the data and the story you need.
—Joe Hopper, Ph.D.