The Error in Your Smartphone Surveys
Any good researcher should agonize over mode effects in surveys. Mode effects are differences in statistical estimates caused by the “mode” through which respondents take a survey. If there are mode effects, then how the survey is conducted (by telephone, online, through a smartphone app, in person, or by mail) will affect the results, requiring statistical adjustments and important caveats.
A research report just published in the Journal of Survey Statistics and Methodology offers new insights into mode effects for smartphones vs. desktop computers. The authors conducted a survey with a probability-based web panel (recruited through traditional, fully randomized polling methods, such as address-based sampling) and then built a clever experimental design, reaching out in waves via different modes to subgroups of the panel.
They measured the overall mode effect of deploying surveys on smartphones vs. desktop computers, and were then able to decompose that effect into three components. Here is what they found:
Across 19 measures of technology use, lifestyle, and political views, the mode effect was small: just 2.3 percentage points overall. Differences between the two modes were statistically significant on only three of the 19 measures.
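To give a feel for what “statistically significant” means at this scale, here is a minimal two-proportion z-test in Python. The sample sizes and proportions are hypothetical, and the paper’s tests presumably account for the panel’s design and weighting; the sketch only shows why a gap of a couple of points can easily fail to clear the significance bar.

```python
from math import erfc, sqrt

def two_prop_ztest(p1, n1, p2, n2):
    """Two-sided z-test for the difference between two independent proportions."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)   # pooled proportion under H0
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = erfc(abs(z) / sqrt(2))           # two-sided normal p-value
    return z, p_value

# Hypothetical: 52.3% of 800 smartphone respondents vs. 50.0% of 800
# desktop respondents agree with some statement -- a 2.3-point gap.
z, p = two_prop_ztest(0.523, 800, 0.500, 800)
print(f"z = {z:.2f}, p = {p:.3f}")   # roughly z = 0.92, p = 0.36
```

With gaps this small, samples need to be quite large before mode differences register as significant, which is consistent with most of the 19 comparisons failing to reach significance.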
More interesting (to me) was the decomposition of the measured mode effect into three components (a short numerical sketch follows this list):
- Coverage error. This means that not everyone has a mobile phone, so it is impossible to reach some people if you conduct a survey via smartphone only. That may lead to errors in statistical estimates, as it did here: coverage error was the largest component of the overall mode effect, at 1.5 percentage points.
- Nonresponse error. This means that people might be more or less likely to respond to a survey if invited to take it on a phone vs. a desktop. If so, it will lead to errors in statistical estimates, as it did here: nonresponse error amounted to 1.0 percentage point.
- Measurement error. It is possible (and we always worry) that people will respond to questions differently on mobile screens, given all the formatting adjustments required. But in this experiment the overall difference between mobile and desktop was just 1.1 percentage points, and none of the differences on the 19 measures was statistically significant.
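To make the decomposition concrete, here is a minimal sketch in Python. The four group estimates are hypothetical (chosen only to echo the magnitudes above), and the paper’s actual estimators are surely more involved; the point is simply that, for a single measure, the three components telescope to the total mode effect.

```python
# A sketch of how one measure's mode effect decomposes. All four
# estimates below are hypothetical proportions for a single question.
p_full_desktop = 0.500    # full panel, answering on a desktop (benchmark)
p_owners_desktop = 0.515  # smartphone owners only, answering on a desktop
p_resp_desktop = 0.525    # smartphone owners who respond, on a desktop
p_resp_phone = 0.523      # the same respondents, answering on a phone

coverage_error = p_owners_desktop - p_full_desktop     # who can be reached
nonresponse_error = p_resp_desktop - p_owners_desktop  # who actually responds
measurement_error = p_resp_phone - p_resp_desktop      # how they answer

# The components telescope: their sum is exactly the total mode effect.
total_mode_effect = p_resp_phone - p_full_desktop
assert abs(total_mode_effect -
           (coverage_error + nonresponse_error + measurement_error)) < 1e-12

for name, value in [("coverage", coverage_error),
                    ("nonresponse", nonresponse_error),
                    ("measurement", measurement_error),
                    ("total", total_mode_effect)]:
    print(f"{name:>12}: {value * 100:+.1f} points")
```

Because the components are signed, they can partially offset one another, which is why summaries of absolute differences across many measures need not add up to the overall figure.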
The authors conclude that there are, indeed, measurable coverage and nonresponse errors in mobile surveys, but the effects are small. Further, they find no evidence of measurement effects, stating that “conditional on coverage and response to the survey, respondents gave similar responses on smartphones and PCs.”
Given how many people answer surveys on smartphones these days, this newly published research helps me rest easier. Even if there are measurable mode errors built in, they are much smaller than the other sources of error we need to worry about, like robots and other threats to the validity of survey data.
—Joe Hopper, Ph.D.