A Statistically Significant Cartoon
In Turning Data into Stories we pointed out that numbers have no inherent meaning. There are no essentially big or small or significant numbers, for example—they only become big or small or significant within the context of specific research questions and a range of possible answers.
The same is true for p-values, by which we measure statistical significance. This means that what counts as “significant” can be open to multiple interpretations, as this cartoon illustrates so well.
In most scientific communities, the standard p-value cutoff for declaring statistical significance is .05. A finding is considered significant only if the probability of seeing a pattern this strong purely by chance, when no real effect exists, is less than 5%. But of course that’s an arbitrary convention. Which means that some researchers start fudging the 5% cutoff for the sake of delivering “highly significant” findings, or for the sake of finding something worth reporting in data that may well be random.
So what’s wrong with that? True, there is nothing magic about 5% as the cutoff versus 7% or 8%. But if we start adopting different p-values from the standard, it is critical that we and our clients understand the implications. When I see p-values of 10% reported in market research reports, I wonder if clients really understand that this doubles the usual chance of reporting an illusion.
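The arithmetic behind that caution is easy to check with a quick simulation. The sketch below (a minimal illustration, not how any particular research firm runs its tests) compares two groups drawn from the same distribution, so any “significant” difference really is an illusion. It uses a simple two-sample z-test with known unit variance purely to keep the example self-contained; the function names and parameters are invented for this sketch.

```python
import math
import random

def two_sample_z_p(a, b):
    # Two-sided p-value for a difference in means, assuming known
    # unit variance in both groups (a simplification for illustration).
    n = len(a)
    z = (sum(a) / n - sum(b) / n) / math.sqrt(2.0 / n)
    phi = 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0)))  # normal CDF
    return 2.0 * (1.0 - phi)

def false_positive_rate(alpha, trials=10000, n=40, seed=1):
    # Both groups come from the SAME distribution, so every
    # "significant" result counted here is a false alarm.
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        a = [rng.gauss(0, 1) for _ in range(n)]
        b = [rng.gauss(0, 1) for _ in range(n)]
        if two_sample_z_p(a, b) < alpha:
            hits += 1
    return hits / trials

print(false_positive_rate(0.05))  # roughly 5% false alarms
print(false_positive_rate(0.10))  # roughly 10%: double the risk
```

Run it and the rates land near the cutoffs themselves, which is exactly the point: under a true null, the p-value cutoff *is* the false-alarm rate, so loosening .05 to .10 doubles the chance of reporting an illusion.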
Our standard at Versta Research is to stick with p=.05. It’s cautious and it’s consistent with scientific convention. It helps ensure that the stories we’re telling with our data are true and useful, not just “interesting.”