Versta Research Newsletter

Dear Reader,

Is your research good enough for The New York Times? Probably not. Unless, that is, you are collecting your data by telephone. By telephone? Yes, as odd as that sounds nowadays. The Times and a handful of other news organizations have guidelines that make it difficult for even the best work and most interesting discoveries to find their way into print if data were collected through online surveys.

In our view, the rationale for their guidelines is outdated; indeed, the majority of polling and research is done online these days. But it is worth understanding the debate because it highlights the issue of probability sampling, one of the most important and interesting methodological issues in research.

Also in this newsletter: recent stories from the Versta Research Blog, Versta Research in the news, and our recently published work.

Need help figuring out the best approach for your research? Please read on, and call us at 312-348-6089 with any additional questions you may have.

Happy Autumn,

The Versta Team

Is Your Research Good Enough for The New York Times?

Whether your goal is getting research into the boardroom to influence top decision makers, or in front of the public to promote your brand, you need to know the standards of rigor against which the research will be judged. Is it good enough to withstand the scrutiny of industry experts? Good enough to stand up to the questions of an executive ready to make a multimillion-dollar investment? Good enough to be quoted as an authoritative source in The New York Times?

Corporate executives and academic journals typically avoid setting up rigid standards against which to judge research because the most insightful research is usually an artful combination of standardized best practices and innovative methods. But news media face a different problem. It is far too easy to conduct biased public opinion polling, especially now with online panels and social networks, and thus editors and reporters are inundated with self-serving news releases based on bad research.

As such, some news organizations like the Associated Press (AP), The New York Times, and ABC News have developed guidelines for what counts as valid and reliable research. It is worth understanding the guidelines even if you do not pitch to the media, because they provide a fascinating glimpse into current methodological debates about probability sampling, inferential statistics, and the rapidly changing world of online data collection and analysis. And if your organization does use research for public relations and marketing, a deeper understanding will help you make sound recommendations about how to conduct PR research.

The Crux of the Issue: Probability Samples

We are often asked by our clients whether it is true that the AP refuses to carry stories from online surveys, and if so, why. Yes, it is true, mostly. Their standard is that for a survey or public opinion poll to be valid and reliable, it must be conducted by telephone. As odd as that seems, the reason is that most online surveys rely on non-probability samples. In contrast, telephone surveys (at least in theory) rely on probability samples.

In a probability sample, every person in the population has a known, non-zero chance of being selected into the sample. In a non-probability sample, not every person has a chance, or the chances cannot be calculated. For purists, this matters because non-zero, calculable probabilities underlie the statistics of projecting from a sample to a population. It sounds complicated but it boils down to this: If you randomly dial numbers on your phone, theoretically it is possible to reach any adult American and have them participate in a survey. On the Internet, there is no comparable way to give every adult a known chance of being selected.
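To make the contrast concrete, here is a minimal sketch in Python, using a hypothetical population list and made-up numbers, of why a simple random sample has a known inclusion probability while an opt-in panel does not:

    import random

    # Hypothetical sampling frame: a list standing in for every adult
    # in the population of interest.
    population = [f"person_{i}" for i in range(10_000)]

    # Probability sample: draw n people at random without replacement.
    # Every member of the frame has the same known, non-zero inclusion
    # probability, n / N, which is what projection statistics rely on.
    n = 500
    probability_sample = random.sample(population, n)
    inclusion_probability = n / len(population)  # 0.05, known for everyone

    # Non-probability sample: an opt-in panel covers only people who
    # happened to join. Everyone outside the panel has zero chance of
    # selection, and the propensity to join (0.02 here, but unknown in
    # practice) cannot be calculated, so inclusion probabilities cannot
    # be calculated either.
    panel = [p for p in population if random.random() < 0.02]
    opt_in_sample = random.sample(panel, min(n, len(panel)))

The names and probabilities here are illustrative only; the point is simply that n / N is knowable for the first sample and unknowable for the second.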

The AAPOR (American Association for Public Opinion Research) Task Force on Online Panels noted in its March 2010 report that “there currently is no generally accepted theoretical basis from which to claim that survey results using samples from nonprobability online panels are projectable to the general population.” As such, The New York Times states in its Polling Standards, published in May 2011:

In order to be worthy of publication in The Times, a survey must be representative, that is, based on a random sample of respondents. Any survey that relies on the ability and/or availability of respondents to access the Web and choose whether to participate is not representative and therefore not reliable.

And the current AP Stylebook says:

Only a poll based on a scientific, random sample of a population – in which every member of the population has a known probability of inclusion – can be considered a valid and reliable measure of that population’s opinions.

In a recent phone interview with Jennifer Agiesta, deputy director of polling for the AP, we confirmed this means the vast majority of surveys conducted online are not considered valid and reliable, and AP advises against reporting them.

In Search of a True Probability Sample

The puzzling thing about this requirement is that very little social scientific, psychological, public opinion, or marketing research actually uses probability samples. They are an ideal rarely attained, and yet our research still produces robust findings. Indeed, there are three strong arguments against the idea that only phone surveys are valid and that all surveys must utilize probability samples:

First, online surveys work. Comparative research conducted over the last few years has shown that rigorously designed, executed, and analyzed online panel surveys differ little from phone surveys in their results. Indeed, the AAPOR report does not say that online surveys are inaccurate, but that the theoretical basis for projecting sample statistics to populations has not been worked out. In our view, that is not a sufficient reason to reject methods that work and that have strong empirical support.

Second, phone samples are not true probability samples. A large majority of the people dialed for phone surveys either do not answer their phones or refuse to participate; these people end up with no calculable, non-zero probability of being included in a survey. Advocates of phone surveys point to empirical research showing that significant levels of non-response do not necessarily affect outcomes. That is true, just as empirical research shows that non-probability samples from online panels do not necessarily affect outcomes.

Third, not all research should be done with probability sampling. A great deal of rigorous academic research, vetted by top experts and published in journals, does not use probability samples. We have done this type of work using a variety of survey modes and sampling strategies, and we have published it in academic and medical journals. In fact, The New York Times and other media outlets cite this type of research all the time, including some of our own work. Even their top polling experts publish research that uses online methods.

What You Need to Pass Muster

But if your goal is to get a story circulating via the AP or The Times, or at least not have them reject it out of hand, their standards rule. Debating those standards with an editor or reporter is not likely to help (yes, we have tried). As such, we offer these five suggestions for planning and pitching survey research:

1. Conduct the survey by phone. Rigorously executed phone surveys rely, at least in principle, on probability samples, which is the key criterion. If possible, sampling should include mobile phones as well (which may double your costs), but for a number of odd reasons, even 100% landline phone surveys are still viewed more favorably than Internet surveys.

2. Use real people for interviewing. Some firms conduct polls and phone surveys with automated interactive voice response (IVR) systems. But there is no way for an IVR system to randomly select a member of the household for interviewing, or to ensure that children are not providing responses instead. Robo-polls and IVR surveys are not considered valid and reliable, so avoid them.

3. Provide methodological details. In your press release or when pitching the story, include information such as survey mode, the number of people interviewed, sampling procedures including any stratified design, weighting procedures, the dates of data collection, and the margin of sampling error both overall and for any key subgroups that are reported (see the sketch of the standard calculation after this list).

4. Identify the sponsor and fieldwork provider. Surveys are usually sponsored by organizations that have a business or political interest in the topic, so it is important to identify that sponsor and, if necessary, to explain their interest. Most sponsors hire third-party research firms to actually conduct the polling. Be sure to identify that firm. It adds credibility to the research, especially if the research firm is a leader in reputable industry organizations such as AAPOR (the American Association for Public Opinion Research) or NCPP (the National Council on Public Polls).

5. Be prepared to provide all survey data. Besides wanting to know the key statistics that support the storyline, reporters and editors may ask for a copy of the questionnaire itself to see how questions were worded and whether the order of questions may have introduced bias. They may also ask for a marginal report that shows responses to each question in the survey.
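On the margin of sampling error mentioned in point 3: the familiar plus-or-minus figure is straightforward to compute for a simple random sample. Here is a minimal sketch, assuming 95% confidence and the conservative p = 0.5 that pollsters typically report:

    import math

    def margin_of_error(n, p=0.5, z=1.96):
        """Margin of sampling error assuming simple random sampling.
        z = 1.96 gives 95% confidence; p = 0.5 is the most conservative
        (widest) assumption about the underlying proportion."""
        return z * math.sqrt(p * (1 - p) / n)

    # A survey of 1,000 respondents: roughly +/- 3.1 percentage points.
    print(round(100 * margin_of_error(1000), 1))

Subgroup margins are always larger because each subgroup's n is smaller, which is why outlets like The Times ask for them separately.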

The research itself should follow a number of best practices as well, including appropriate criteria for sample size, calling frequency, random digit dial protocols, and so on. For additional details on these, see our article on How to Conduct a Telephone Survey for Gold Standard Research.
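As an illustration of one such protocol: random digit dialing appends random digits to known, working area code and exchange combinations, so unlisted numbers have the same chance of selection as listed ones. A minimal sketch, with made-up seed exchanges:

    import random

    # Hypothetical seed list of working area code + exchange combinations.
    SEEDS = ["312-555", "773-555", "847-555"]

    def rdd_number():
        """Build one random-digit-dial number by attaching four random
        digits to a known area code + exchange, giving listed and
        unlisted numbers an equal chance of being drawn."""
        seed = random.choice(SEEDS)
        return f"{seed}-{random.randrange(10_000):04d}"

    # A calling frame of 5,000 randomly generated numbers.
    calling_frame = [rdd_number() for _ in range(5_000)]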

The Story Matters Most

Keep in mind that these are the standards of a small minority of media organizations, and that plenty of other publications do carry stories from online research. Moreover, these standards speak only to methods; they say nothing about content. Even most phone surveys never make it into The New York Times or the AP, because what matters most is that the research be insightful and relevant, and that the statistical data be skillfully turned into a compelling story that readers care about.

That’s where we at Versta Research can help you most. Beyond advising you on the best research mode for your campaign strategy and conducting rigorous research that can withstand the highest levels of scrutiny, we turn data into stories. We guarantee that your research will come to life in a way that will grab whatever audience or internal clients you are trying to reach.

For a fuller exposition of specific standards to pass media muster, we recommend reading The New York Times Polling Standards, and the AP Stylebook (entry “polls and surveys”).

Stories from the Versta Blog

Here are several recent posts from the Versta Research Blog. Click on any headline to read more.

Using Avatars & Robots for Survey Research

Online avatars may one day replace human interviewers to gather survey data, but technology will not likely replace human data analysis and interpretation.

A Better Way to Scale MaxDiff Utilities

MaxDiff scores are often transformed onto a 0 to 100 scale, which makes it hard to compare across related MaxDiff experiments. Do this scaling instead.

Three Mistakes to Avoid on Data Charts

When creating data charts for market research, include only elements that are necessary to the story. We suggest avoiding 3-D, grid lines, and too much data.

Pigeons Beat People on Probability Problems

Market research can be difficult to grasp and communicate because it involves probabilistic reasoning, which research has shown is difficult for most people.

Have a Cookie with Your 401(k)

New research in social psychology suggests that the mental work of making multiple decisions decreases our subsequent ability to make good financial decisions.

Smartphones Matter More than Cell Phones

Smartphones are a crucial turning point for researchers not because of methodological issues, but because they are fundamentally changing consumer behavior.

Cell Phones May Double Your Survey Costs

Including cell phones in RDD phone surveys is now critical, but it increases costs. Research from AAPOR shows that cell phone interviews cost at least double.

The Pitfalls of Auto-Coding Text Responses

Coding responses to open-ended questions requires high-level thinking about how the data answer key questions. Dumping data into “topic” buckets is insufficient.

Research Trends in Cross-Cultural Marketing

Understanding similarities across cultural segments is an important shift for market research, which too often focuses on statistically significant differences.

The Most Persuasive Way to Present Data

This article describes recent research showing that how statistics are presented has a huge effect on how audiences interpret information and make decisions.

Top 5 Picks: Best Articles on Market Research

These five top articles on market research focus on how to design, field, and analyze research data so that the findings get heard and used in the boardroom.

How to Boost Response Rates for Online Surveys

Surprisingly, offering respondents a choice of survey mode (mail vs. Internet) does not improve response rates. But using multiple recruitment modes does.

Fifteen Basics of “Brand Smart” Research

Research is an essential component of effective brand marketing because so much of it relies on understanding your customers and how they relate to your brand.

Entrepreneurial Advice: Rethink Your Research

A recent study comparing how entrepreneurs and corporate executives think about market research offers lessons for smarter approaches to research.

Versta Research in the News

Versta President Elected to American Marketing Association Board

The American Marketing Association in Chicago announced election results and key appointments for its 2011-2012 Board of Directors.

Workers Uninformed about Pension Plans

With more than 42 million Americans participating in corporate pensions today, a Fidelity Investments survey (conducted by Versta Research) of corporate pension plan participants has found a widespread lack of awareness about how those plans work.

Nurses Feel Secure about Jobs, but Not about Retirement

A new study by Versta Research for Fidelity Investments provides insight into how the economy and health care industry changes have affected nurses’ perspectives on their profession and retirement.

Recently Published

Research on Consumer Misperceptions and Smoking Behavior

In 2007 Versta Research’s president led an effort to document misperceptions among smokers about the risks of smoking vs. the risks of quitting with nicotine replacement therapies. Four years later the data are still yielding new findings and being published in academic journals.

Unum Study: Employees Want Love over Money

Unum partnered with Monster.com on a study of HR executives and workers seeking new jobs. Versta Research worked behind the scenes on the analysis, interpretation, and writing to turn all of the survey data into a compelling story.

MORE VERSTA NEWSLETTERS