Versta Research Newsletter

Dear Reader,

Whether you’re a seasoned research professional, a newbie, or someone who does a bit of research alongside other job responsibilities (like marketing or strategy), you will probably find something new to learn in this newsletter, which features an article on 25 Things You Might Not Know About Research.

Here are some of these things you might not know: Why pie charts are a bad choice for your data … why stat testing NPS requires a special formula … why other-specify options are mostly useless … why qualitative research costs more than quantitative … why there is no such thing as a statistically significant sample size … and 20 more!

Other items of interest in this newsletter include Stories from the Versta Blog, plus Versta Research in the News, which showcases some of our recent work for Fidelity Investments about money and divorce, a survey about clergy health (with a video report!), and research for an Ad Council campaign on brain health.

As always, feel free to reach out with inquiries or questions. We would be pleased to consult with you on your next research effort.

Happy winter,

The Versta Team

25 Things You Might Not Know About Research

Whether you’re a seasoned research professional, a newbie, or someone who dabbles in research alongside other job responsibilities (like marketing or strategy), you will probably find something you don’t yet know in Versta’s 2020 list of 25 Things You Might Not Know About Research:

  1. Qualitative costs more than quantitative. The professional time invested in collecting and analyzing qualitative data is far greater than for surveys and statistics. There are few economies of scale when conducting focus groups, ethnographies, or in-depth interviews. For a high-quality qual-to-quant study, two-thirds of the project budget typically goes to the qualitative work and one-third to the quantitative.
  2. Tons of surveys are filled out by robots. There is a huge underworld of people trying to earn money by filling out surveys, and they often set up robots to automate it. The only reliable way to identify and eliminate them is to devote time and money to having a human research analyst (not an automated platform) carefully review and cross-validate data after it is collected.
  3. There is no such thing as a statistically significant sample size. You can conduct tests of statistical significance on any sample size, even very small ones. Quoting from the Reference Manual on Scientific Evidence, an excellent resource compiled by the Federal Judicial Center and the National Research Council: “Statistical significance … applies to statistics computed from the sample, but not to the sample itself, and certainly not to the size of the sample.”
  4. Pie charts are bad. Good charts help you visualize relative quantities and proportions of a whole. Pie charts are bad at that. Our eyes have difficulty comparing the areas and angles of wedge-shaped slices, even when they sit right next to each other. Pie charts should be used only in limited circumstances, when the data are exceptionally simple. Otherwise, it is best to avoid them.
  5. 3-D charts are bad. Few of us in market research work in multidimensional spaces, so 3-D charts have no purpose other than to “Bring more creativity to your presentations!” or “Lift your charts above the ordinary!” In fact, 3-D charts nearly always distort proportions and make it more difficult to compare and contrast relevant data. Always keep your charts in flatland.
  6. Very large sample sizes are a waste. Good statistical point estimates can be calculated with sample sizes ranging from 300 to 2,000. That is why most public polling rarely goes beyond that. If you love big numbers and boost your sample size to 10,000, you gain barely one percentage point in precision, for which you will have easily doubled or tripled your cost (see the sketch after this list).
  7. Large numbers are obvious. Marketing and PR professionals often say their dream is to have a survey showing 95% of their target audience does one thing or another. But we all live in the same world; if 95% of us were doing something, wouldn’t we know it already? Would research be needed to document it? Large numbers are rarely surprising. They are mostly obvious and mostly uninteresting.
  8. Statistical significance is arbitrary. The p-value we usually aim for in research is 5% (from which we construct 95% confidence intervals). But there is no magic about 5% except that it is the mostly-agreed-upon minimum in the scientific community. It represents a consensus—not an irrefutable truth—that this is an acceptable level of uncertainty in drawing conclusions.
  9. You can’t stat test NPS the normal way. Applying the typical z-tests for percentages used in market research and opinion polling is not appropriate. You can do it (and lots of people do), but you will get the wrong answer. Instead, use a special formula for calculating an NPS margin of error (illustrated after this list). And then you probably ought to double that, because your margin of error is bigger than you think.
  10. There is more than one margin of error. In market research we mostly calculate margins of error and confidence intervals on the assumption that data come from a normal sampling distribution. But that is not always true, and sometimes it is a faulty assumption. There are also error margins and confidence intervals for power-law distributions, exponential distributions, Poisson distributions, and more (see the Poisson example after this list).
  11. Your margin of error is bigger than you think. Unfortunately, we have all been taught to reference a survey’s margin of error in terms of sampling error alone. Why? Because it is the one source of error that is easily quantified, although even that calculation rests on flimsy assumptions. In truth, there are many sources of error. A clever study recently published by a statistician and a political scientist at Columbia University shows that most surveys have a margin of error about double what is typically cited.
  12. The best research is done by the government. By “best” we mean that it is super rigorous and gets as close as possible to the ideal of perfect sampling, validated measures, and rigorous analysis. You may think of governments as bureaucracies, but when it comes to social and psychological research, the U.S. government does it better than anyone else, hands down.
  13. The best research in public opinion is done by academics. The U.S. government does tons of great research, but it rightly stays away from measuring public opinion. Academic researchers (not research firms) fill the gap with rigorous, expertly designed studies to dissect what Americans think and how their thinking evolves. They test multiple ways of measuring opinion (with many different ways of asking questions) to ensure that potential biases are eliminated or explained.
  14. The worst research in marketing is done by academics. That’s because academic researchers are exploring big questions about methods, or grand schemes for conceptualizing consumer attitudes and market behavior. Companies like Versta Research, however, answer the specific questions that drive your marketing decisions. Academic research is vital, but applied research—and knowing how to apply academic research in the real world—is quite a different skill.
  15. A lot of research firms don’t do research. What do they do? They manage panels of respondents and sell access to firms like Versta Research. Or they build and manage software platforms and sell subscriptions. The problem (for them) is that research firms represent a small market, so they start pitching broader capabilities to just about everyone, pretending to do research. Skeptical? Take a look at this job ad from one well-known “research” company.
  16. Insights dashboards don’t work. Everyone wants insights; everyone wants dashboards. So if you’re a software company, why not sell insights dashboards? The problem is that posting data via nice-looking graphics does not deliver insight—it delivers only data. Insights come from brains that assess, interpret, and synthesize data within the context of questions that need to be answered.
  17. All vendors use the same suppliers. There are only a handful of software tools and platforms to choose from, and only a handful of panel companies through which to source sample. Every research firm buys from them, and usually these suppliers buy from each other as well. The only special and proprietary goods you can buy from a research firm are their brainpower and their expertise, which truly makes all the difference in the world.
  18. Adding “other” to surveys is mostly useless. Some researchers, strategists, and clients add “other-specify” options to their surveys all the time. It seems to make sense: what if you missed something and none of your answer options fit? The problem is that respondents rarely tell you, and there is not much you can do with the data from those who do. Use these options sparingly.
  19. Probability samples are exceptionally rare. Research methodologists working in market research and opinion polling talk a lot about the importance of probability sampling. But in reality such samples almost never exist. This is because even if you design a perfect sampling method, most respondents will not participate. Without a 100% response rate, it is not a probability sample.
  20. Phone surveys do not use probability samples. Proponents of phone surveys often tout their superiority over online surveys because, theoretically, nearly everyone in this country can be reached by randomly dialing a phone number. That may be true, but 99% of the people you dial will not answer, and most of those who do will refuse to talk with you. Phone survey response rates hover around 1% these days, which means phone samples are not even close to probability-based anymore.
  21. Your chance of being polled is extremely small. True, we all get thousands of annoying surveys from companies that obsessively want to know more about themselves. But if a research firm like Versta Research conducted a nationally representative survey using random sampling, surveying 1,200 adults every single day, what are the chances of you being included over the course of a year? Just 0.2% (the arithmetic appears after this list).
  22. Survey respondents are not like you. In the marketing and research industries, and among the professional clients we work with, most of us have college degrees. But take a look at U.S. Census data: just 33% of U.S. adults have a bachelor’s or postgraduate degree, and one-third (30%) have a total household income of less than $50,000. When interpreting and assessing research data, it is worth remembering that most survey respondents are probably different from you.
  23. Bad data looks frighteningly like good data. This should scare you as much as it scares us at Versta Research. Robots can fill in survey data, and guess what? All the numbers you get back fall between 0% and 100%, just like good data. There is nothing obviously wrong with most bad data, so it takes a lot of manual data forensics, ingenuity, and high-level analysis to ferret it out.
  24. There is no anonymous research. Even if your survey does not ask for personal information like name, phone number, or address, chances are it asks for demographic information. If the survey asks for as few as 15 demographic attributes, a high-powered algorithm can correctly identify specific individuals 99.98 percent of the time. Keeping all your research data confidential and secure is therefore critical no matter how anonymous you believe it to be.
  25. Most survey takers tell the truth. We rightly complain a lot about fraud and data quality in the survey research industry. But once you get beyond the very small number of bad actors who generate disproportionate amounts of bad data, a beautiful reality of survey research is that most people want to share their opinions, and they will tell you the truth. That is something we all should celebrate and honor with research that tells the truth as well.
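
A note on item 6, since the arithmetic is easy to check: the margin of error for a proportion shrinks with the square root of the sample size, so going from 2,000 to 10,000 respondents buys you barely a point of precision. Here is a minimal sketch of that calculation (our own illustration, assuming a simple random sample and the usual normal approximation):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Worst-case margin of error for a simple random sample at a
    95% confidence level (p = 0.5 maximizes the binomial variance)."""
    return z * math.sqrt(p * (1 - p) / n)

for n in (300, 1_000, 2_000, 10_000):
    print(f"n = {n:>6,}: +/- {margin_of_error(n):.1%}")

# n =    300: +/- 5.7%
# n =  1,000: +/- 3.1%
# n =  2,000: +/- 2.2%
# n = 10,000: +/- 1.0%
```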
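
For item 9, one widely used approach (our illustration with made-up counts, not Versta’s exact formula) treats promoters, passives, and detractors as a single multinomial sample, so the variance of the NPS estimate accounts for the covariance between the promoter and detractor shares. Note that the resulting ±5.0% is much wider than the ±3.1% a naive single-percentage z-test would report for the same 1,000 respondents:

```python
import math

def nps_margin_of_error(promoters, passives, detractors, z=1.96):
    """Margin of error for NPS = %promoters - %detractors, treating
    the three groups as one multinomial sample."""
    n = promoters + passives + detractors
    p_pro, p_det = promoters / n, detractors / n
    nps = p_pro - p_det
    variance = (p_pro + p_det - nps ** 2) / n  # covariance-adjusted
    return nps, z * math.sqrt(variance)

nps, moe = nps_margin_of_error(promoters=450, passives=300, detractors=250)
print(f"NPS = {nps:+.0%} +/- {moe:.1%}")  # NPS = +20% +/- 5.0%
```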
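
And to make item 10 concrete: an exact confidence interval for a Poisson count (say, complaints logged in a month) is built from chi-square quantiles and is visibly asymmetric, unlike the familiar plus-or-minus band from a normal approximation. A sketch using scipy, with an illustrative count of 12:

```python
from scipy.stats import chi2

def poisson_ci(count, conf=0.95):
    """Exact (Garwood) confidence interval for a Poisson count."""
    alpha = 1 - conf
    lower = 0.0 if count == 0 else chi2.ppf(alpha / 2, 2 * count) / 2
    upper = chi2.ppf(1 - alpha / 2, 2 * (count + 1)) / 2
    return lower, upper

low, high = poisson_ci(12)
print(f"({low:.1f}, {high:.1f})")  # (6.2, 21.0), not symmetric around 12
```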
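
Finally, the 0.2% in item 21 is easy to verify yourself; the population figure below is our own round approximation of the number of U.S. adults:

```python
# Back-of-the-envelope arithmetic behind item 21.
interviews_per_day = 1_200
us_adults = 255_000_000  # approximate U.S. adult population

interviews_per_year = interviews_per_day * 365  # 438,000 completes
print(f"{interviews_per_year / us_adults:.1%}")  # 0.2%
```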

Stories from the Versta Blog

Here are several recent posts from the Versta Research Blog. Click on any headline to read more.

There Is No Anonymous Research Anymore: How Algorithms Can Find You

Stripping data of names and PII is no longer enough to guarantee anonymity of survey respondents. New algorithms can identify people even in US Census data.

This Is What “Sugging” Looks Like: A Fake Survey that Is Unethical and Hurts Real Research

I received a “survey” invitation last week that offers to pay me $50 for participating. It is an obnoxious attempt to generate sales leads—known as sugging.

Polish and Revise Your Sentences to Make Your Research Reports Better

One secret to writing great market research reports is to revise all sentences so they are direct, simple, and forceful—unlike these lousy examples.

8 Morphing Methods as Market Research Shifts to UX Research

Market research isn’t dead. It just keeps moving around and re-inventing itself as organizational needs for primary research change, as this graphic shows.

Fix Your Jargon-Filled Survey with Ordinary Words Real People Use

Researchers from the academic world recently tried to improve upon current CX measures with a new set of survey questions. Ugh, wait until you read them.

6 How-To Books for UX Research

Here are six recently published books on the methods and processes of conducting UX research, which has strong applicability to what we do in market research.

If You Want Expertise, Forget Artificial Intelligence

DIY survey platforms can be great, but the gimmicks they add and try to sell you are often nonsense. This one uses AI for an “expert review” of your survey.

Respondent Burnout Is Killing NPS

Low response rates are not always a problem in research, but they could be killing your efforts to measure and track NPS, depending upon your customer base.

How Polls Pass CNN’s Quality Review

Surveys for media distribution require rigorous methods that withstand tough scrutiny. CNN has the best checklist of criteria I have seen in recent years.

Research Lessons from a Puppy: Why You Need Balanced Scales

It is shocking to me, but not everyone finds my new puppy adorable (just take a look at his picture!), which is a reminder that survey scales need to capture a range of potential opinions.

Versta Research in the News

New Research for Fidelity Investments on Money & Divorce

Fidelity Investments commissioned Versta Research for a new study on how people navigate the process of divorce financially and emotionally. Findings have so far been featured in stories by Reuters, Prevention, and Financial Advisor Magazine. Full details are available from Fidelity’s press release and Divorce and Money Study Fact Sheet.

Video: Health Research Report from Wespath Benefits and Investments

Wespath Benefits & Investments reported findings from its 2019 Clergy Well-Being Survey conducted by Versta Research. For this fifth wave of research, they brought in a creative team to develop a report of findings with a 4-minute video. Also available are a summary published in the Dimensions newsletter, and a one-page infographic.

Research for Lincoln Financial Group Highlighted for Long-Term Care Awareness Month

November was National Long-Term Care Awareness Month, founded in 2001 by the American Association for Long-Term Care Insurance. Lincoln Financial Group highlighted selected findings from two surveys it commissioned from Versta Research.

Ad Council Campaign Encourages Families to Discuss Brain Health

Versta Research conducted a survey in support of a new public service advertising campaign launched by the Ad Council in partnership with the Alzheimer’s Association. The findings and ad campaign were described in the Washington Post, and the ads can be viewed on the Ad Council website.

MORE VERSTA NEWSLETTERS