The Quintessential Robot Survey
Even the very best survey shops are pressured to do bad work. My recent experience with one awful survey from a fancy survey shop illustrates, in multiple ways, one of the biggest problems facing market research today: surveys answered by robots, including the newest dreaded incarnation of robots: artificial intelligence.
The survey came from a research company that maintains a high-end U.S. probability sample for rigorous polling and market research. Last year my household was selected randomly and invited to join the panel. (Lucky me! The odds of selection are extremely low.)
Along comes a survey five months ago asking me about travel. It was horrible. It asked me to recall specific trips, dates, times of day, costs, lodging … month by month … as if I were filling out a ledger for an expense report. It asked me nothing interesting. It was one of those surveys designed by strategists who love their data grids, forgetting that people, not robots, are taking their surveys.
I quit the survey and ignored the reminders asking me to finish. Then another person in my household got the survey, too. He gave up and ignored it, as well. Three months later, another invitation—same survey—shows up. The fieldwork had failed, and they were trying again. My thought upon seeing it for the third time was this: good luck getting people to fill out this survey.
Without some significant investment in human-led fieldwork and incentive design, the most likely respondent to your survey is going to be a robot. Humans will not stand for this. The sponsor, having failed to finish the survey on this panel, will likely move its survey to another online panel where robots will happily become “respondents” overnight. Congratulations.
But I decided not to just up and quit. I responded to my e-mail invitation by saying that the survey was horrible, and somebody would have to pay me a lot of money to complete it. I got a long response back: “Thank you for writing to us. We appreciate your feedback and understand your concerns about the sensitive nature of some of the questions on your surveys. We ask questions such as your income, gender, age and educational level for two important reasons. . . .” There were four more paragraphs of verbiage unrelated to my complaint.
Translation: Our market-research-firm customer service robot received your note and fired off an irrelevant but common response we send to respondents. It does not address your point, but hey, it is a robot. It came pretty close, right?
I wrote back, saying, “Your response makes no sense. I expressed no concerns about confidential questions.” Here is what I got back: “Thank you for writing to us. Unfortunately, this survey does not offer bonus points and we understand if you don’t want to complete it. We kindly advise you to ignore the reminders regarding this study because they are sent automatically. We apologize for any inconvenience this may cause.”
Translation: I am a real human and you’re right, that robot response was off-topic. You should just ignore it, and by the way, I cannot turn off the next robot that will keep sending reminders asking you to fill out this survey again, so just ignore that one, too.
In the end, everyone gives up — the survey respondent and the employee at the survey firm, and probably soon the client who commissioned this terrible survey. The only way to get “data” for this effort is going to be from robots, and I feel confident they will get that data and be “analyzing” it soon. It is a shame. It does not have to be that way.
Translation: Even excellent, high-end survey firms that can do insightful and successful work may suffer from the scourge of robot research — and many of them succumb to their own efforts to cut and automate. But you can still choose a company like Versta Research that knows the pitfalls of robot research and works intelligently to keep them away and out of your research!
—Joe Hopper, Ph.D.
OTHER ARTICLES ON THIS TOPIC:
When Strategists Write Questionnaires
How To Find and Eliminate Cheaters, Liars, and Trolls in Your Survey