28 Questions to Ask before Buying Online Sample
With all our excitement over the last few months about the accuracy of online polling during the election season—substantially outperforming “gold standard” telephone research—we did not have time to share ESOMAR’s September 2012 updated guide to purchasing online sample. The guide consists of 28 questions that all purveyors of online sample should answer, publish, and make available to every buyer of their products and services. The guide has been updated to reflect rapid changes in online sampling over the last couple of years, including the use of routers, real-time sampling, and blended sample from multiple sources.
Before purchasing online sample for your next research survey, be sure that you know the answers to these 28 questions:
1. What experience does your company have in providing online samples for market research?
2. Please describe and explain the type(s) of online sample sources from which you get respondents. Are these databases? Actively managed research panels? Direct marketing lists? Social networks? Web intercept (also known as river) samples?
3. If you provide samples from more than one source: How are the different sample sources blended together to ensure validity? How can this be replicated over time to provide reliability? How do you deal with the possibility of duplication of respondents across sources?
4. Are your sample source(s) used solely for market research? If not, what other purposes are they used for?
5. How do you source groups that may be hard to reach on the internet?
6. If, on a particular project, you need to supplement your sample(s) with sample(s) from other providers, how do you select those partners? Is it your policy to notify a client in advance when using a third party provider?
7. What steps do you take to achieve a representative sample of the target population?
8. Do you employ a survey router?
9. If you use a router: Please describe the allocation process within your router. How do you decide which surveys might be considered for a respondent? On what priority basis are respondents allocated to surveys?
10. If you use a router: What measures do you take to guard against, or mitigate, any bias arising from employing a router? How do you measure and report any bias?
11. If you use a router: Who in your company sets the parameters of the router? Is it a dedicated team or individual project managers?
12. What profiling data is held on respondents? How is it done? How does this differ across sample sources? How is it kept up-to-date? If no relevant profiling data is held, how are low incidence projects dealt with?
13. Please describe your survey invitation process. What is the proposition that people are offered to take part in individual surveys? What information about the project itself is given in the process? Apart from direct invitations to specific surveys (or to a router), what other means of invitation to surveys are respondents exposed to?
14. Please describe the incentives that respondents are offered for taking part in your surveys. How does this differ by sample source, by interview length, by respondent characteristics?
15. What information about a project do you need in order to give an accurate estimate of feasibility using your own resources?
16. Do you measure respondent satisfaction? Is this information made available to clients?
17. What information do you provide to debrief your client after the project has finished?
18. Who is responsible for data quality checks? If it is you, do you have in place procedures to reduce or eliminate undesired within-survey behaviors, such as (a) random responding, (b) illogical or inconsistent responding, (c) overuse of item non-response (e.g. “Don’t Know”), or (d) speeding (too rapid survey completion)? Please describe these procedures.
19. How often can the same individual be contacted to take part in a survey within a specified period whether they respond to the contact or not? How does this vary across your sample sources?
20. How often can the same individual take part in a survey within a specified period? How does this vary across your sample sources? How do you manage this within categories and/or time periods?
21. Do you maintain individual level data such as recent participation history, date of entry, source, etc., on your survey respondents? Are you able to supply your client with a project analysis of such individual level data?
22. Do you have a confirmation of respondent identity procedure? Do you have procedures to detect fraudulent respondents? Please describe these procedures as they are implemented at sample source registration and/or at the point of entry to a survey or router. If you offer B2B samples what are the procedures there, if any?
23. Please describe the ‘opt-in for market research’ processes for all your online sample sources.
24. Please provide a link to your Privacy Policy. How is your Privacy Policy provided to your respondents?
25. Please describe the measures you take to ensure data protection and data security.
26. What practices do you follow to decide whether online research should be used to present commercially sensitive client data or materials to survey respondents?
27. Are you certified to any specific quality system? If so, which one(s)?
28. Do you conduct online surveys with children and young people? If so, do you adhere to the standards that ESOMAR provides? What other rules or standards, for example COPPA in the United States, do you comply with?
All of these questions deal with essential issues of rigor and quality. They are also the kinds of issues we carefully manage every day for every project we do for our clients. Given the customized nature of our work and the close involvement of senior researchers at every level of detail, Versta Research has become an “expert buyer” of sorts when it comes to online sample. Feel free to give us a call for any advice or recommendations you may need.
–Joe Hopper, Ph.D.