How do we know which numbers to trust and which health studies are sound? Healthland faces this dilemma every day, so we spoke with Charles Seife, the rare journalist with an undergraduate degree in mathematics, from Princeton no less.
In his book Proofiness: The Dark Arts of Mathematical Deception, Seife explores the common ways math can be used to mislead people.
Q: What is “proofiness”?
A: It’s the art of using math to tell untruths, the art of using bogus numbers or numbers that are semi-right to mislead.
One example I like is when Quaker Oats had a huge ad campaign to try to convince people that eating oatmeal could lower cholesterol dramatically. The company put a graph on the back [of the package] that showed a dramatic decline in cholesterol levels. And there was a drop, according to a study. But if you look carefully, the Y-axis was manipulated so that a really very tiny drop looked huge, when in fact there were only a few points [of decline] out of 200.
Q: And what is the phenomenon you call “randumbness”?
A: We humans have a hard time recognizing that things can actually have no [discernible] cause. They are random. The roll of the dice isn’t influenced by external factors [like wearing your lucky shoes]. That’s how Las Vegas makes all its money. People think that if they’re winning, they should keep doing [what they're doing]; if they’re losing, they’re due to win soon. The universe doesn’t care whether you win or lose — things are just random.
So, it’s the fallacy that comes when we think something is [causally connected to something else], when in fact there’s no cause behind it. I like to link it to what I call cause-uistry — what happens when, say, there is a cancer cluster or you spot a group of people who have more than the expected number of a certain type of cancer. It may be that there’s a toxin or something causing it. But by the sheer fact that you are looking at the entire country, of course there are going to be some places where there is a more-than-average incidence of cancer. Just through random chance, in some places there will be an increase in cancer, and in some places it will be lower than expected.
So cause-uistry is a glib name for the fallacy that correlation equals causation. Just because two things seem to be related doesn’t mean that one affects the other. [Still] our brains lead us to connect things even if they are not connected. One fun example: when I was doing reporting on something else, a member of Congress tried to pitch me on building more power plants. He said that if you increase power production, then infant mortality drops. It’s true. It’s also true that when Internet use goes up, infant mortality drops. And car driving. Of course, those are all symptomatic of a high-tech society that has good health care.
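Seife's cancer-cluster point can be checked with a small simulation. The numbers below (county count, population, base rate, and the "cluster" threshold) are hypothetical choices for illustration, not figures from the interview: every simulated county has the exact same true cancer rate, yet some still look like alarming clusters by chance alone.

```python
import random

random.seed(1)

# Hypothetical setup: 1,000 identical "counties", each with 2,000 residents
# and the SAME true cancer rate. No county has any real risk factor.
n_counties = 1000
population = 2000
rate = 0.005
expected = population * rate  # 10 cases expected in every county

# Draw each county's case count purely at random (binomial sampling).
cases = [sum(1 for _ in range(population) if random.random() < rate)
         for _ in range(n_counties)]

# Call any county with at least 1.8x the expected cases a "cluster".
clusters = [c for c in cases if c >= 1.8 * expected]
print(f"expected cases per county: {expected:.0f}")
print(f"counties that look like 'cancer clusters': {len(clusters)}")
```

Even though no county is genuinely riskier than any other, a handful clear the "cluster" threshold every run, which is exactly the trap of reading causes into random variation.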
Q: So how can bad math make it look like bogus medicines are effective?
A: The real success of alternative medicine has to do with the fact that [randomness] can make a placebo look like it works. It happens in the pharmaceutical industry, too. But in homeopathy, where the medicines are so dilute that there’s not a single molecule of the active medication, it’s all placebo. [Still], there are clinically controlled double-blind trials that seem to show that it works. [That's because] if you run 20 trials, one by random chance will come up with statistically significant results.
Q: Is this because the definition of “statistical significance” is arbitrary?
A: I think that it is a rule of thumb that has been taken way too seriously and is applied in situations where it is inappropriate. If you look at a typical paper in a medical journal, you will see that the paper tests seven, eight, nine hypotheses. Does this drug work against several symptoms, does it work in high doses, etc.? You’ve got a good half-dozen, even multiple dozens of hypotheses, all tested at [what is known in statistics as the .05 p-value, or significance level].
That makes no sense whatsoever. If you’re testing 100 hypotheses, that cutoff is going to fail, because at the .05 level roughly 1 in 20 true-null hypotheses will appear significant purely by chance — about five spurious “effects” out of 100.
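The 1-in-20 arithmetic can be demonstrated with a sketch simulation. The trial sizes and the z-approximation below are my assumptions for illustration, not anything from the interview: we run 100 drug trials where the drug does nothing at all, and count how many still clear the .05 bar.

```python
import random
import statistics

random.seed(7)

def fake_trial(n=50):
    """One trial of a drug with NO real effect: both arms are drawn
    from the same distribution, so the null hypothesis is true."""
    treatment = [random.gauss(0, 1) for _ in range(n)]
    control = [random.gauss(0, 1) for _ in range(n)]
    diff = statistics.mean(treatment) - statistics.mean(control)
    # Two-sample z-style statistic; with 50 per arm the normal
    # approximation to the t-test is adequate for illustration.
    se = (statistics.variance(treatment) / n
          + statistics.variance(control) / n) ** 0.5
    return diff / se

# |z| > 1.96 corresponds to p < .05 (two-sided).
significant = sum(1 for _ in range(100) if abs(fake_trial()) > 1.96)
print(f"'significant' results out of 100 useless-drug trials: {significant}")
```

On a typical run a handful of the 100 do-nothing drugs come out "statistically significant," which is why a single p < .05 result among many tested hypotheses proves very little.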
Q: So what should be done?
A: I don’t think there’s an easy golden solution. The issue is that people have to take into account the context in which the medical study is presented. A single medical study is rarely by itself definitive. Looking at the literature and what’s expected [before] going into the trial and looking at what the trial adds — that is a more fruitful way of looking at it than saying one study proves or disproves something.
Q: When you start to look closely at medical science, you see all of its flaws and that may lead some people to say, O.K., it’s just another way of seeing things — but it’s not better than just choosing what to believe.
A: This is the big irony of the book. I still believe that quantitative data is the surest way to get at what nature is telling us. Nature speaks in mathematics. We’re very lucky because we have all of this understanding, with incredible precision. But in some ways, we are betrayed by that knowledge. We have immense faith in math, but [it also shows us all of the uncertainties].
Health is particularly tricky because you’re caught between Scylla and Charybdis — Scylla being believing nothing, and Charybdis believing everything. There’s no algorithm to tell you what’s right and wrong; you’ve got to use your brain.