How Polls Pass CNN’s Quality Review
This list of 16 questions that CNN asks before it will publish your polling data is worth reading, because somebody at CNN clearly knows their stuff. It is far better than anything I have seen from the New York Times, and far better than the standard the Associated Press (AP) sets.
It is good because it reflects a keen understanding of the issues we confront in survey research without being methodologically rigid. It addresses survey mode, response weights, representation of a universe, different types of error margins, skews from education bias, and many other crucial issues. Notice also that it never refers to “response rates” (thank you, CNN!).
It is a primer on all the important issues we need to think about all the time whenever we do any type of survey research. In short, every good researcher should be able to answer every question here, for every project she does.
The CNN Transparency Questionnaire
- What survey firm conducted the poll?
- How were respondents interviewed – by live interviewers on the phone, IVR, online, self-administered questionnaire or another method?
- Who paid for the survey and why was it done?
- How many people were interviewed for this survey?
- In what language(s) were respondents interviewed?
- Please provide a copy of the full text and interviewer instructions/programming for all questions included in this survey release.
- When was your survey conducted?
- What is the source of your sample for this survey, and by what method were respondents selected? Please be as specific as possible, and if via web panel(s), please include a description of how the panelists were recruited. If your study was conducted online and included respondents chosen via routers, approximately what percentage of respondents were directed to the survey via routers?
- If any quotas were applied to sampling or interviewing, at what stage were they applied, what variables and targets were used, and what is the source of your estimate of the target quota?
- What is the universe of people you are trying to survey, and what makes you confident that the sample source represents that universe?
- If surveys were conducted by telephone, what percentage of interviews were conducted via calls to cellphones? If surveys were conducted online, were respondents allowed to complete the survey via mobile browsers, and approximately what share of your respondents did so?
- If surveys were conducted by telephone, how many callback attempts did a sampled number receive before being retired?
- If surveys were not conducted by a live interviewer, what do you do to ensure your respondents are real people and are paying attention to the survey?
- What is your estimate of this survey’s error, how is it calculated, and why is this an appropriate error estimation for your survey? If you are reporting a margin of sampling error, has it been adjusted for design effects?
- If your survey has been weighted, please list the weighting variables and the source of the weighting parameters. If your survey has not been adjusted for education, please explain why and provide an unweighted frequency for education distribution among your respondents.
- Is there a minimum unweighted sample size you require before releasing any subset estimates, and if so, what is it?
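The question about error estimation touches on a point that trips up many poll readers: a margin of sampling error computed from the raw interview count understates the uncertainty of a weighted survey. A common correction inflates the margin by the square root of the design effect. Here is a minimal sketch of that arithmetic; the function name and the example design effect of 1.5 are illustrative assumptions, not CNN's method.

```python
import math

def margin_of_error(p, n, deff=1.0, z=1.96):
    """95% margin of sampling error for a proportion p estimated from
    n interviews, inflated by the square root of the design effect."""
    return z * math.sqrt(p * (1 - p) / n) * math.sqrt(deff)

# A 1,000-interview poll at p = 0.5 with no design effect:
simple = margin_of_error(0.5, 1000)              # ~0.031, i.e. +/- 3.1 points
# The same poll with an assumed design effect of 1.5 from weighting:
adjusted = margin_of_error(0.5, 1000, deff=1.5)  # ~0.038, i.e. +/- 3.8 points
```

This is why the questionnaire asks whether the reported margin "has been adjusted for design effects": the unadjusted 3.1 points and the adjusted 3.8 points describe the same poll, and only the latter reflects the precision lost to weighting.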
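The weighting question (including the pointed follow-up about education) refers to the practice of adjusting respondent weights so that weighted sample shares match known population targets. One standard technique is raking (iterative proportional fitting). The sketch below is my own toy illustration with invented respondents and targets, not CNN's or any pollster's actual procedure; it also computes the Kish approximation of the design effect those weights induce.

```python
def rake(respondents, targets, iterations=50):
    """Iterative proportional fitting: repeatedly rescale unit weights so
    the weighted share of each category matches its population target."""
    weights = [1.0] * len(respondents)
    for _ in range(iterations):
        for var, shares in targets.items():
            total = sum(weights)
            for category, target_share in shares.items():
                ids = [i for i, r in enumerate(respondents) if r[var] == category]
                current = sum(weights[i] for i in ids)
                factor = (target_share * total) / current
                for i in ids:
                    weights[i] *= factor
    return weights

def kish_deff(weights):
    """Kish's approximate design effect: n * sum(w^2) / (sum w)^2."""
    return len(weights) * sum(w * w for w in weights) / sum(weights) ** 2

# Hypothetical sample over-representing college graduates (50% vs. a 40% target):
sample = [
    {"educ": "college", "gender": "F"},
    {"educ": "college", "gender": "M"},
    {"educ": "no_college", "gender": "F"},
    {"educ": "no_college", "gender": "M"},
]
targets = {
    "educ": {"college": 0.4, "no_college": 0.6},
    "gender": {"F": 0.5, "M": 0.5},
}
w = rake(sample, targets)
```

After raking, the weighted college share hits the 40% target, and `kish_deff(w)` reports how much the resulting weight variation inflates the variance — which ties directly back to the design-effect question above.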
—Joe Hopper, Ph.D.