Versta Research Newsletter

Dear Reader,

If researchers are so smart, why was the election of Donald Trump such a surprise?

Here are a few answers: because the polls measured the wrong votes; because we were easily swayed by fancy (and suspect) mathematical models; because we relied too much on quantitative instead of qualitative research.

There are many lessons to be learned from the failure of research to prepare us for the outcome of this election. We highlight several such lessons in this edition of the Versta Research newsletter, Survey Says … Trump Won? Research Lessons from the Polling Mess.

Other items of interest in this newsletter include stories from the Versta Research Blog, plus two pieces of news we are delighted to share: our recent survey of worker productivity for Fellowes Brands, and our top-read article published in Quirk’s a few months ago.

As always, feel free to reach out with an inquiry or with questions you may have. We would be pleased to consult with you on your next research effort.

Happy winter,

The Versta Team

Survey Says … Trump Won?
Research Lessons from the Polling Mess

It wasn’t long after the shock of election day that a colleague asked, “What do you think about the validity and accuracy of surveys and polls now? I’d say they’re all hogwash.” She was not alone. “The vitriol targeting pollsters in the last few days has been intense and ugly,” wrote another colleague via the AAPOR online discussion forum.

At Versta Research, we focus on work outside of election polling, but of course any survey research is akin to public opinion polling, and our methods are the same. If election polling provides the proof that survey methods work (as we have maintained in the past), what are we to make of the surprise of Donald Trump’s 2016 win?

Here is our take. This election put research methods to the test and subjected them to public and professional scrutiny like never before. There is much to learn about what works and what does not. There is also much to learn from the ongoing disbelief (and perhaps misperception) that survey methods got it wrong. Some things went right, and some things went wrong, for sure. But what?

Here are five lessons for market research worth contemplating.

1 Surveys Work. And they work extremely well. This may sound ridiculous in the wake of pollsters’ failure to predict Trump winning the White House, but the polls did not fail. It was the attention-hungry people who interpreted, reported, and prognosticated based on the polls who failed, and they failed miserably.

Clinton got 48% of the national popular vote. Trump got 46%. Clinton won the popular vote by a comfortable margin, and nine out of the ten top polls correctly predicted this. On average, the top ten polls had Clinton winning the popular vote by 3 percentage points. She won by 2 percentage points.
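
Here is a back-of-the-envelope sketch in R of what that accuracy amounts to. The ten individual poll margins are invented for the illustration; only their average (+3) and the actual result (+2) come from the figures above:

    # Hypothetical final margins (Clinton minus Trump, in points) from ten
    # top polls, invented so that they average to the +3 cited above
    margins <- c(4, 3, 5, 2, 3, 4, 1, 3, 2, 3)
    mean(margins)      # average predicted margin: +3
    mean(margins) - 2  # average miss versus the actual +2 margin: 1 point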

If you do not find this remarkable, you should. Despite the enormous challenges polling faces today with plummeting response rates and the unattainability of probability samples, the polls—both those conducted online and those conducted by phone—worked.

Suppose you could have a fancy market research tool that predicted, within a percentage point or two, how many of your customers would buy your new product over a competitor’s. Would you want it? You can have it. Well-done and rigorously executed surveys do exactly this.

2 Weight Your Data. The polls were surprisingly accurate, but they got the election wrong. We all know why, right? Because election polling measured the popular vote, but it is the Electoral College that chooses the president. Despite the popular vote, only 42% of electors voted for Clinton, while 57% of them voted for Trump.

Because of the strange ways in which electors are chosen and cast their votes, every popular vote for Clinton was, in effect, down-weighted to .87 and every popular vote for Trump was up-weighted to 1.24. All votes are not created equal.
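
For readers who like to check the arithmetic, here is a minimal sketch in R of where those two weights come from. The vote and elector shares are the ones quoted above; everything else is just division:

    # Shares quoted above: national popular vote vs. share of electors
    popular  <- c(clinton = 0.48, trump = 0.46)
    electors <- c(clinton = 0.42, trump = 0.57)

    # Effective weight of each popular vote = elector share / popular share
    electors / popular
    #  clinton    trump
    # 0.875000 1.239130   (roughly the .87 and 1.24 cited above)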

The inequality of votes is something we know all about in market research, and it is a good reminder of why we weight data and how important it is to think weighting through carefully. Weighting is all about making sure that the people in our data accurately reflect the population of decision-makers we care about. If my survey is about buying cars, I need to ensure my sample matches the car-buying population. If I have too many people from a certain demographic group in my sample, their votes count less. Weighting makes that happen.
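
By way of illustration, a simple post-stratification weight is just each group’s population share divided by its sample share. The car-buyer shares below are invented for the example, not real data:

    # Invented example: our sample is half under-35, but only 30% of
    # actual car buyers are under 35
    sample_share     <- c(under_35 = 0.50, over_35 = 0.50)
    population_share <- c(under_35 = 0.30, over_35 = 0.70)

    # Weight = population share / sample share
    population_share / sample_share
    # under_35  over_35
    #      0.6      1.4

Each under-35 respondent then counts as six-tenths of a person in the tabulations, and each over-35 respondent counts as 1.4 people, so the weighted sample mirrors the car-buying population.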

All pollsters (we hope) weight their data to bring their samples into alignment with the true population of voters. But what if, after weighting their samples to the population of voters, they then weighted to the population of electors? If their samples were big enough (most of them aren’t, but surely they could be), then polling might have better reflected the population of electors.

Easier said than done. And I say this with trepidation, because the fancy election forecasts did try to account for the Electoral College, though in different ways. Which brings us to our third sobering lesson from the 2016 election polling debacle.

3 Beware the Math-Meisters. In the months leading up to the election, I looked at the New York Times’ election forecast only once. It struck me as absurd, and so I never looked again. And it convinced me never to look at Nate Silver’s FiveThirtyEight election forecast either—Mr. Silver being the math-meister inspiration for the NYT’s efforts.

As if polls are not tricky enough, these election forecasts are complicated mathematical models fed by polling data and other “fundamentals” (like economic data) to arrive at probabilistic statements about who will win. On July 19, Clinton was declared to have a 76% chance of winning. On election day, her chances were up to 85%.
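
As a toy illustration of where a number like 85% typically comes from, such models boil down to simulating the election many times under assumed uncertainty. Every input below is invented for the sketch, not taken from any actual forecast:

    # Toy forecast: assume an expected Clinton margin of +3 points with
    # 3 points of uncertainty, then simulate 100,000 "elections"
    set.seed(538)
    simulated_margin <- rnorm(100000, mean = 3, sd = 3)
    mean(simulated_margin > 0)  # share of simulations Clinton wins, ~0.84

Nudge the assumed uncertainty up or down and the “probability” moves with it, which is exactly why such precise-sounding numbers deserve skepticism.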

But what on earth can such numbers mean? Does it mean that if we were to hold the exact same election 100 times, Clinton would win 85 times? No, that’s absurd; the election happens only once. Does it mean that in the history of all our presidential elections (there have been only 58 of them) the Clinton-like candidate won 85% of the time? But wait, we’ve never had a Clinton-like candidate, nor a Trump-like candidate before now.

Fear not, the forecasters gave us helpful guideposts to make sense of it. In July, Clinton’s chance of losing was “about the same probability that an N.B.A. player will miss a free throw.” And on election day her chance of losing was “about the same as the probability that an N.F.L. kicker misses a 37-yard field goal.”

If you’re not laughing at this, you ought to be crying. These numbers, and the precision they communicate, are absurd and misleading. As much as I love building mathematical models—and we do make good use of them in our work when appropriate—it is no wonder that the public feels betrayed and our clients roll their eyes when we talk about margins of error.

4 Qualitative Is Critical. A shortcoming of nearly all surveys is that quantitative research rarely gives us a deep feel for what drives the numbers. This election highlights that more than ever. There will always be disappointed voters and “Don’t Blame Me—I Voted for the Other Guy” bumper stickers, but this one seems different. There is genuine disbelief that Trump won, and genuine disbelief that so many voters could align with a menacing vision as articulated by his campaign.

Unfortunately, survey data doesn’t help much. We know the demographics of who voted for whom, and the geographies and economics of where they live. But none of it gives a deeper sense of who, why, and how. With the right kind of research, we can and should be saying “Of course, all of this polling data makes sense.”

Good qualitative research might look like J.D. Vance’s Hillbilly Elegy, which offers a first-person account of “what a social, regional, and class decline feels like … for a large segment of this country.” Or it might take a deep sociological approach like Arlie Hochschild’s Strangers in Their Own Land. In Hochschild’s words, “Hidden beneath the right-wing hostility to almost all government intervention … lies an anguishing loss of honor, alienation and engagement in a hidden social class war.”

Market research is no different. We are increasingly dazzled by the promise of more and more data, all of it immediately accessible, transformed into “insights” with newer technologies. We have increasingly sophisticated computational models at our fingertips, with free open-source software, no less. This election demonstrated how quantitative data can (and will) fall on its face if that is all we do. We need focus groups, in-depth interviews, design labs, and ethnographies. We need really good, insightful qualitative research, or the numbers just won’t make sense.

5 Do It Only If It Matters. Election polling puzzled me in the years before my research career. It seemed like the constantly shifting numbers meant that polling was baloney, or alternatively, that the polls were measuring Jell-O. Either way, who cares? The only election poll that matters is the election itself. Soon enough, we will all know who won, so what is the value of predicting it ahead of time?

From my vantage point today I can think of lots of reasons that polling might be valuable. If it is commissioned by a campaign for internal use, polling helps candidates understand what matters to voters and how to make their messages resonate. When polling asks about issues beyond picking candidates, it can offer valuable insight that ought to influence public policy. And given that polling works, it can reinforce the validity of elections when losers cry foul, or it can provide evidence of fraud when elections are rigged.

But beyond my professional curiosity and wanting to learn from them as much as I can, I have a hard time seeing much value in the polls we witnessed in the last 12 months. Did they matter? Could we do anything with the results? Are we better off for having that constant view-in-advance of what might happen? I have an even harder time understanding the utility of a mathematical model that aggregates all those polls into an ongoing probability of outcomes. Professionally, all of it is fascinating. Personally, not so much.

Of course most of us in market research, and most of you, dear readers, are not in the business of election polling. But there is a lesson to be learned. No matter what research or survey work you are doing, ask yourself whether it matters. Specify how it will be used. Identify specific decisions that need to be made. Know in advance how decisions are contingent upon the findings you will report.

If you find yourself scratching your head and can’t specify exactly how the research will be used, then shift your budget to something else so that the research you do will matter.

The most important lesson for market research from this year’s election is that our basic methods of inquiry are fundamentally sound. But for sure, we need to be vigilant about who we are measuring and how. We need to triangulate with non-mathematical approaches that help explain the numbers. And we need to think more deeply about what we are doing and why.

However you may feel about this year’s confusion of presidential polling and predictions, we hope you have been giving deep thought to the various implications for your research. We have. If you need help putting all of that into action, Versta Research is here for you.

Stories from the Versta Blog

Here are several recent posts from the Versta Research Blog. Click on any headline to read more.

CASRO+MRA = Bland Insights

Two leading research industry groups (CASRO and MRA) have just merged and taken on the milquetoast name “Insights Association.” Whatever happened to RESEARCH?

How Many Bots Took Your Survey?

Probably more than you think, and we see the problem infiltrating data in new, surprising ways. Here is how to spot it, expunge it, and block it in the future.

The Versta Crystal Ball: Research Trends in 2017

Our predictions for market research in 2017 focus on an influx of design thinkers, strategists, and communicators as research (happily) broadens its influence.

What Rose to the Top in 2016

Versta Research has published over 400 articles about doing research since we opened our doors in 2009. Here are the ones that got the most attention in 2016.

Versta’s “Habits” Tops Quirk’s 2016 List

Versta Research’s “Nine Habits of Great Market Research Vendors” from our recent newsletter was at the top of Quirk’s most widely read articles of 2016.

A Fun and Easy Way to Try R

Sure, you hear everyone talking about R. But have you tried it? Here is a fun and easy way to get your feet wet: 4 easy steps to generate a holiday greeting.

When Strategists Write Questionnaires

Strategists excel at knowing exactly what data will help meet their business objectives, but they need ample input from researchers on how best to get that data.

The Best Way to Stop Survey Cheaters

If you use a survey as a “quiz” to measure factual knowledge, roughly one in seven respondents will cheat. But will it affect your data? And how do you stop it?

Why You Need 4-Point Scales

4-point scales are better than 5-point scales for survey research findings that need to be communicated directly and simply without misleading your audience.

200 Chart Choices with R

This website offers a gallery of more than 200 unique charts (along with the programming code!) that you can easily build with R and adapt to your own data.

How Much Incentive You Should Pay

Recent data published from consumer survey research shows plateaus in response rates as incentives go up. Meaningful boosts happen at $3, $7, $10, $15, and $20.

Google Surveys Stumbles with Sneak Peek Mess

Sometimes data visualization is too compelling for its own good. People are impressed by the beautiful design, and look right past the nonsense being presented.

Versta Research in the News

Survey of Office Workers Highlights Productivity Challenges

Versta Research conducted Fellowes’ biennial “Productivity in the Workplace” survey, which focuses on current productivity challenges. This year’s survey also highlighted important generational differences.

Versta Launches Research for Wespath Benefits & Investments

Versta Research has been working with the benefits and pension group of the United Methodist Church since 2009 to understand and track clergy well-being. The 2017 wave of this biennial effort just launched.

Versta Research’s “9 Habits” Tops Quirk’s 2016 List

Our feature article on the nine habits that make for really great vendors in the market research supply chain was published by Quirk’s in August. It became a top-read article for 2016.

MORE VERSTA NEWSLETTERS