6 Top Causes of Measurement Error in Surveys
Having focused intently over the last several months on best practices for writing surveys (see Five Last Steps in Writing a Questionnaire and How to Write Surveys: 25 Best Practices), I have been paying extra attention to the work of research methodologists who codify best practices and develop protocols for implementing them.
One set of protocols was just published in the Journal of Survey Statistics and Methodology. The authors propose a method in which multiple independent coders read questionnaires and score them on criteria related to measurement error. I will not go into the particulars of what they are proposing or why. Unless you're managing gigantic, complicated data collection systems with multiple survey instruments, as federal agencies do, you (and we, at Versta Research) will probably never do these things.
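Still, to give a flavor of what multi-coder scoring involves, here is a toy sketch in Python. This is not the authors' protocol; the coder names, the 0/1 scores, and the simple percent-agreement check are all invented for illustration:

```python
# Toy illustration (not the published protocol): several independent
# coders score each question on the same criterion, and we check how
# often each pair of coders agrees.

from itertools import combinations

# Hypothetical scores from three coders: question -> 0/1 flag for one criterion.
coder_scores = {
    "coder_a": {"Q1": 1, "Q2": 0, "Q3": 1},
    "coder_b": {"Q1": 1, "Q2": 1, "Q3": 1},
    "coder_c": {"Q1": 1, "Q2": 0, "Q3": 0},
}

def percent_agreement(scores_x, scores_y):
    """Share of questions on which two coders gave the same score."""
    shared = scores_x.keys() & scores_y.keys()
    return sum(scores_x[q] == scores_y[q] for q in shared) / len(shared)

# Compare every pair of coders.
for (name_x, x), (name_y, y) in combinations(coder_scores.items(), 2):
    print(f"{name_x} vs {name_y}: {percent_agreement(x, y):.0%} agreement")
```

Real protocols use more sophisticated agreement statistics than raw percent agreement, but the workflow is the same: independent ratings, then a check on whether the criteria can be applied consistently.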
But the interesting thing in the article was their choice of factors for developing their method.
They culled a subset of factors from an extensive literature review of survey characteristics that lead to measurement error, and they identified these six as the top sources of measurement error in general population surveys (a simple checklist sketch in code follows the list):
1. Difficult language in questions: either difficult words (big words, specialized words, advanced vocabulary, many syllables, etc.) or complicated sentence structures (long sentences, multiple clauses … kind of like the sentences in this article).
2. Difficult language in response options: either difficult words (same as #1) or anything that requires complicated cognitive action (like sliding bars or abstract visual representations; we wrote about one such example just a few weeks ago).
3. Non-centrality: the question asks for knowledge or experience that lies outside the daily life of an average respondent (asking about the quality of public transportation among people who always drive, for example, or about school policies among people without children).
4. Sensitive to emotions: the question may arouse negative feelings like anger, distress, sorrow, or despair (surveys about hot-button political issues may do this, as well as surveys about death, like I wrote about in My Dog Died and I Got a Survey).
5. Sensitive information: the question asks about topics viewed as "personal" even if those topics are not emotionally fraught (things like income, sexuality, religious faith, past criminal behavior, and so on).
6. Presumed filter question: any question that an average respondent assumes might trigger follow-up questions based on how they answer (some people will want to avoid follow-up questions, while others hope that follow-ups will qualify them for incentives).
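As a way to internalize the list, here is a minimal sketch of how the six factors might serve as a review checklist for draft questions. This is an illustration only, not a published instrument; the factor identifiers, the data structure, and the flagged examples are assumptions made for the example:

```python
# A minimal sketch: the six factors as a review checklist for draft
# survey questions. Factor names are my paraphrases, not the paper's.

from dataclasses import dataclass, field

FACTORS = [
    "difficult_language_in_question",
    "difficult_language_in_responses",
    "non_centrality",
    "sensitive_to_emotions",
    "sensitive_information",
    "presumed_filter_question",
]

@dataclass
class QuestionReview:
    question_id: str
    # Each factor is flagged True if a reviewer judges it present.
    flags: dict = field(default_factory=lambda: {f: False for f in FACTORS})

    def risk_score(self) -> int:
        """Count how many of the six error-prone characteristics were flagged."""
        return sum(self.flags.values())

# A reviewer flags factors while reading a hypothetical draft question.
review = QuestionReview("Q7")
review.flags["difficult_language_in_question"] = True  # long, multi-clause wording
review.flags["presumed_filter_question"] = True        # answer appears to trigger follow-ups
print(f"{review.question_id} flagged on {review.risk_score()} of 6 factors")
```

A question flagged on several factors is a candidate for rewriting before the survey ever goes into the field.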
If you are not yet a seasoned survey writer, keep these six factors in mind along with the other rules of thumb for writing surveys, like avoiding biased questions or offering balanced answer scales. The list provided by these methodologists suggests that the biggest sources of error are more fundamental: the kinds of questions you ask and the words you use to ask them.
—Joe Hopper, Ph.D.