The Magic Numbers . . . . Reappear!
Last quarter we wrote about Magic Numbers in Market Research—those arbitrary rules of thumb and cut-off points we use when quoting things like minimum sample sizes or how many people to include in a focus group. Presto! Like magic, the issue appeared in the New York Times a few weeks ago, this time related to a dispute about the best way to statistically test for the existence of ESP.
The backstory: A respected academic journal in social psychology published an article showing data that suggests ESP exists. Horrified, some researchers argued that psychologists were using old-fashioned inferential statistics when they should be using modern-day Bayesian statistics. Here is a link to the article, if you’re interested. Unfortunately, it does a lousy job explaining what Bayesian statistics is.
But fortunately, in response, the editor in chief of The Annals of Applied Statistics submitted a letter to the New York Times clarifying that all statistics ends up relying on arbitrary magic numbers:
The heart of the dispute is not about Bayesian versus classical statistical methods; if anything, it is an argument against knee-jerk use of the famous .05 criterion, which generally finds the results of an experiment acceptable if the chances are no greater than 5 percent (that is, 0.05) that they could have occurred randomly.
Physicists, for example, don’t trust .05 and prefer much tougher evidential levels. A claimed result that overturns all ideas of causality might well require something stricter than .05. A Bayesian would have to make the same kind of difficult choice as to what “prior probability” to assign to the existence of ESP.
No general formula will free the scientist, or anyone else, from having to use judgment in interpreting evidence. But general formulas, including .05, are valuable in imposing some order on the Wild West world of claimed results.
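The two judgment calls in the quote above can be made concrete with a small sketch. The first function computes an exact one-sided binomial p-value, the kind of number that gets compared against the famous .05 (or a physicist's stricter threshold). The second shows the Bayesian counterpart: updating a prior probability by a Bayes factor. The ESP-style numbers here (100 card guesses at 20 percent chance, a prior of one in a million) are purely hypothetical illustrations, not figures from the study in question.

```python
from math import comb

def binom_p_value(n, k, p):
    """Exact one-sided p-value: probability of k or more
    successes in n trials if the chance rate is p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def posterior(prior, bayes_factor):
    """Bayesian update: convert prior probability to odds,
    multiply by the Bayes factor, convert back."""
    odds = (prior / (1 - prior)) * bayes_factor
    return odds / (1 + odds)

# Hypothetical experiment: 30 correct guesses out of 100 cards
# when chance alone would predict 20 percent.
p_val = binom_p_value(100, 30, 0.2)
print(f"p-value: {p_val:.4f}")  # clears .05, but would it clear .001?

# A skeptic assigning ESP a one-in-a-million prior is barely moved,
# even by evidence 100 times more likely under ESP than under chance.
print(f"skeptic's posterior: {posterior(1e-6, 100):.6f}")
```

The point of the sketch is the quote's point: neither route escapes judgment. The frequentist must choose the threshold (.05? .001?), and the Bayesian must choose the prior; with a prior of one in a million, even strong data leaves the posterior tiny.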
Now there’s a statistician after our own heart. We, including market researchers, all use magic numbers, because in a strange way, magic numbers work. They help us impose order and make sense of the messy reality we are trying to understand. But ultimately there is nothing magic about them, and there is no magic solution that can substitute for seasoned judgment and expertise.
That’s why you come to us, right? It’s easy to push a button that tells you whether data are statistically significant. It’s not so easy to discern the story behind that data, whether that “statistical significance” really matters, and what you should do with it. That’s the magic of the work we do at Versta Research.
—Joe Hopper, Ph.D.