Finding Fraud in Public Polls: Our AAPOR Presentation
Versta Research is presenting this week at the 76th annual conference of the American Association for Public Opinion Research (AAPOR). The conference is being held virtually from May 11 to 14, 2021.
The presentation is called Finding Fraud in Public Polls: Employing Semantic Network-Based Methods for Identifying Fraud in Online Sampling. It is a reflection of the academic, science-y side of Versta Research.
(For the “just-tell-me-what-it-all-means” side of Versta Research … keep reading … that comes at the bottom of this post!)
Here is the abstract:
Online opt-in panels are increasingly used in political polls and public opinion surveys, with 80% of current public opinion polls using at least some online respondents. However, online data collection suffers from numerous threats to data quality, including acquiescence bias, satisficing, and random responding. These threats increase the error of point estimates and create illusory correlations. Researchers use a variety of approaches to mitigate online data quality threats, including attention checks and other in-survey measures. However, these approaches suffer from multiple methodological limitations, most notably a lack of validation. As a result, typical solutions are either too stringent, preventing good respondents from participating (false positives), or too weak, allowing bad respondents into the survey (false negatives). Here, we present several solutions that mitigate both random responding and satisficing. We use a semantic network model to create sets of words with quantifiable associative similarity. Weights are assigned to word pairs based on analysis of English-language corpora, and specific objective difficulty thresholds are set by varying the weights between targets and response options. These stimuli are then used as measures of attention, engagement, comprehension, and inauthentic behavior in pre-survey screens. The approach was further improved by adding quality measures that check for acquiescence bias. A library of validated stimuli was developed and used to examine respondent quality across several large online surveys. Data across these studies suggest that pre-survey screens prevented over 70% of problematic respondents from entering surveys, had a low false-positive rate, and performed nearly as well as post-survey quality review.
Overall, we propose that this approach can be used at scale to systematically prevent problematic respondents from entering online surveys across the vast network of opt-in panels that supplies respondents for online polls.
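To make the word-pair idea in the abstract concrete, here is a minimal sketch of how a semantic-similarity attention check might work. This is an illustration only, not the presenters' implementation: the similarity weights are hardcoded stand-ins for values that would, per the abstract, be derived from English-language corpora, and the function names (`make_item`, `score_response`) are hypothetical.

```python
# Hypothetical word-pair association weights. In the approach described in
# the abstract, these would come from corpus analysis; here they are
# hardcoded stand-ins for illustration.
SIMILARITY = {
    ("doctor", "nurse"): 0.82,
    ("doctor", "hospital"): 0.74,
    ("doctor", "banana"): 0.04,
    ("doctor", "carpet"): 0.06,
}

def make_item(target, options, threshold=0.5):
    """Build one attention-check item: options whose association weight with
    the target exceeds the threshold count as 'correct'. Raising or lowering
    the threshold varies the item's objective difficulty."""
    correct = {o for o in options
               if SIMILARITY.get((target, o), 0.0) >= threshold}
    return {"target": target, "options": options, "correct": correct}

def score_response(item, chosen):
    """A respondent passes if they picked an associatively similar option;
    random or inattentive responders will often fail."""
    return chosen in item["correct"]

item = make_item("doctor", ["nurse", "banana", "hospital", "carpet"])
print(score_response(item, "nurse"))   # True  (attentive choice)
print(score_response(item, "banana"))  # False (random-looking choice)
```

A pre-survey screen could chain several such items at varying thresholds and admit only respondents who pass most of them, which is the general shape of the screening the abstract describes.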
And here’s what it means for you, and why we hope you can attend: Are millions of Americans really gargling with bleach to fight COVID-19? Of course not. But we’ll show you (and debunk) published(!) research that says they are. And we’ll show you how to avoid such nonsense in your own work.
Please join us at the conference on Wednesday, May 12, from 2pm to 4pm EDT. If you are unable to attend, the full presentation deck with all of the study’s findings is available as a free download.
—Joe Hopper, Ph.D.