Using MTurk for Market Research
A surprising trend among our academic colleagues in marketing, psychology, and the social and behavioral sciences is that they are crowdsourcing respondents for research studies from Mechanical Turk (MTurk). I am still overcoming my own resistance to the idea, as it rubs against Versta Research's obsessive focus on data quality. Online sampling absolutely can and does work, but years of online experience leave me with little doubt that cheaply sourced data is often terrible data and generally should not be used.
But these are published articles in respectable journals by respectable academics. So what's going on, and should researchers in the market research industry be taking a closer look as well?
It turns out that most of these researchers use crowdsourced respondents in much the same way they use college students recruited for laboratory experiments. When I took Psychology 101, all students were required to give six hours of their time as experimental subjects: signing up for this or that weird study, going into a lab, filling out puzzling questionnaires, engaging in thought experiments, and so on. The rationale in psychology is that a human brain is a human brain. Random or representative samples matter less than universal cognitive functions, so it matters little who you get, as long as you get them.
That is largely the thinking behind the MTurk "sampling" studies we are seeing nowadays. Clinical psychologists want to measure general psychological and behavioral processes. Researchers in the social or business sciences use MTurk to pre-test their research protocols, assessing basic cognitive issues, potential biases, or stumbling blocks in questionnaire design. So, for example, they are:
- Testing specific survey questions, and following up with closed- and open-ended questions to assess comprehension, recall, and interpretation. The idea is to get as much rich feedback as possible, not (yet) to quantify and project results to a population.
- Testing questionnaires, and multiple versions of questionnaires, for sources of potential error such as order effects or measurement biases that would be revealed by distributional differences in the data.
- Testing measures for reliability using test-retest methods, since MTurk allows you to re-hire particular workers to retake your survey weeks or even months later.
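The test-retest idea in the last bullet can be sketched in a few lines. This is a hypothetical illustration, not any researcher's actual workflow: imagine five re-hired workers answering the same 1-to-5 scale question in two waves several weeks apart, and correlating the two sets of scores (a high Pearson correlation suggests the measure is stable over time).

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical 1-5 ratings from five re-hired workers, wave 1 vs. wave 2
wave1 = [4, 2, 5, 3, 4]
wave2 = [4, 3, 5, 2, 4]

r = pearson(wave1, wave2)
print(f"test-retest reliability r = {r:.2f}")  # prints r = 0.81
```

In a real study the scores would come from matched worker IDs across the two survey waves, and one would likely report the correlation alongside the wave-to-wave distribution of individual score changes.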
Crowdsourced respondents are not an appropriate sample for measuring the contextualized attitudes and social behaviors that most marketers care about. So in general, MTurk is not a promising source of sample for most of what we do. But it is ideal for early testing and pre-testing of your research protocols: the respondents are inexpensive and easy to get, and the useful feedback comes fast.