The Joys of Being a Health Care Skeptic (One of a Series)
* Medical Research: A Play in Five Acts
* The True Numbers for Breast Cancer
* A Chart of Death Trends
* Most Research Findings Are Wrong
Dear Readers,
I am a fact junkie when it comes to health care. That means numbers: the hard, non-squishy kind. But while good numbers can guide prudent decisions in our health care, we suffer a daily barrage of fake, hyped numbers.
They come from all directions: advocacy groups who want to persuade us that their issue is really, really important, manufacturers who claim their product is the latest and greatest for healthy living (as long as we take it every day), and researchers who want to convince themselves and us that the project they have labored on for years has generated interesting, important findings.
Whatever the source of the numbers deluge, I think it pays to be skeptical. And I say there is joy in skepticism, because the conclusion from a skeptical inquiry often is that we’re doing just fine as is, thank you very much.
So this month, I offer a few reports about a skeptical approach to new research findings and to hyped statistics. Read on for more details.
As before: feel free to "unsubscribe" using the button at the bottom of this email. But if you find it helpful, pass it along to people you care about.
The Natural Life Cycle of Medical Research: A Play in Five Acts
The news recently had two reminders in a single day of why statisticians are our friends and allies when it comes to getting the right health care and avoiding dangerous and over-hyped treatments.
The headlines:
* Hormone replacement therapy after menopause not only increases the risk of getting breast cancer, but also makes the cancer more deadly. Details here.
* Taking a daily fish oil supplement in pregnancy doesn’t make babies any smarter. Details here.
The arc of both stories is similar, and that’s no coincidence.
Act One: Medical scientists develop a new treatment that, based on then-current knowledge, should work.
With hormone therapy, the idea was that estrogen protected women from heart and blood vessel disease. This rested on a statistical notion, since proven false, that there was a big jump in heart attacks and similar diseases after menopause, which must mean (so it was thought) that the loss of estrogen at menopause was depriving the body of a natural protectant.
With fish oil, the idea came from observations that DHA, a key fish oil ingredient, is naturally transmitted to the fetus in the last half of pregnancy and is important to brain development. And premature babies, born with low supplies of DHA, did better in some studies if they received DHA supplements in their first few months of life.
Act Two: Hopeful “observational” studies are published. These involve dozens to hundreds of patients and have very favorable results for the treatment in question.
Act Three: Manufacturers make big bucks pumping the treatment in question.
Act Four: Medical scientists do the hard work of large-scale studies where patients are “randomized” to the real treatment versus a dummy (placebo) treatment.
This takes years of carefully following patients and comparing outcomes.
Act Five: Enter the statisticians.
They come in, crunch the numbers and discover either that the treatment doesn't work (see the fish oil study) or, worse, that it also causes a lot of harm (the hormone story).
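To give you a flavor of what that number-crunching looks like, here is a bare-bones sketch of the kind of comparison a trial statistician makes: tally the bad outcomes in each randomized group and ask whether the difference is bigger than chance alone would produce. The patient counts below are invented purely for illustration; they are not from either of the studies above.

```python
# Illustration only: comparing outcome rates in a hypothetical randomized trial.
# The counts below are made up, not taken from the hormone or fish oil studies.
from math import sqrt, erf

def two_proportion_z_test(events_a, n_a, events_b, n_b):
    """Compare event rates in two randomized groups (e.g., treatment vs. placebo)."""
    p_a, p_b = events_a / n_a, events_b / n_b
    pooled = (events_a + events_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the normal approximation.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_a, p_b, p_value

# Hypothetical trial: 120 bad outcomes among 5,000 patients on the real treatment
# versus 118 among 5,000 patients on placebo.
rate_treated, rate_placebo, p = two_proportion_z_test(120, 5000, 118, 5000)
print(f"Treated: {rate_treated:.1%}   Placebo: {rate_placebo:.1%}   p-value: {p:.2f}")
```

A p-value that large says the small gap between the groups is exactly what chance alone would produce, which is the statistician's polite way of saying the treatment shows no detectable benefit.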
What’s the lesson for the rest of us? It pays to be skeptical of medical research findings, particularly when hyped by commercial interests.
Most people hear about research in the Act One, Two or Three stages.
If you wait till the story plays out in Acts Four and Five, you’ll be less disillusioned, and safer and wiser too.
The Non-Hyped True Facts about Breast Cancer
Among the most flagrantly misleading health statistics currently floating around is the "one in eight" number about breast cancer risk.
A PSA on 60 Minutes the other Sunday said that one in eight women will be diagnosed with breast cancer this year.
The actual number is one in 800. (This comes from National Cancer Institute data on the number of new breast cancer cases each year.)
The one in eight figure does have an anchor in reality, but just barely. It refers to the accumulated lifetime risk for a woman who reaches her eighties, when it's really no longer relevant (since a woman in her 80s has presumably lived a full life and no longer needs to fear dying young from cancer or anything else).
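For readers who like to see the arithmetic, here is a rough sketch of how a small yearly risk piles up into a scary-sounding lifetime number. I've assumed, purely to keep the math simple, a flat one-in-800 risk every year from age 20 to age 85; the real NCI figures vary a great deal by age, so treat this as an illustration, not the official calculation.

```python
# Back-of-the-envelope illustration: how a small annual risk accumulates over a lifetime.
# Assumes (unrealistically) a flat risk of about 1 in 800 per year from age 20 to 85;
# the real risk is much lower for young women and much higher for older women.
annual_risk = 1 / 800
years = 85 - 20

# Chance of never being diagnosed across all those years, then its complement.
prob_never = (1 - annual_risk) ** years
lifetime_risk = 1 - prob_never

print(f"Annual risk:   about 1 in {round(1 / annual_risk)}")
print(f"Lifetime risk: about 1 in {round(1 / lifetime_risk)}")
```

Even this crude version lands in the same neighborhood as the official lifetime figure; the published number climbs all the way to one in eight only because annual risk rises steeply with age. Either way, the lesson is the same: "one in eight" is a number you approach only by piling up risk over an entire long life. It is nothing like the risk any woman faces this year.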
The relevant risk that people should care about is premature death. The chance of premature death from breast cancer — i.e., dying before you reach your life expectancy of 80-plus years — ranges from one in 13,000 for women in their 30s to one in 1,300 for women in their late 60s.
I recently documented all these numbers on my blog site with data from the National Cancer Institute’s SEER project. The link is here.
Breast cancer deserves to be respected, but not irrationally feared. We need to watch out for fake, scary statistics.
An Interesting Chart: Historical Trends in Leading Causes of Death
This chart comes from the government’s most recent annual report. You can see that since 1950, we’ve made a lot of progress cutting deaths from heart disease and stroke, but very little on cancer, and some gains on diabetes have been reversed.
Most Published Research Findings Are Wrong (and Here’s Why)
Dr. John Ioannidis, an internationally respected skeptic of medical research, wrote an elaborate mathematical proof, published in the online journal PLoS Medicine, of his provocative point that most published research findings are wrong.
Here is Dr. Ioannidis's own summary, from his PLoS Medicine essay, of the factors that make so much published research wrong:
There is increasing concern that most current published research findings are false. The probability that a research claim is true may depend on study power and bias, the number of other studies on the same question, and, importantly, the ratio of true to no relationships among the relationships probed in each scientific field. In this framework, a research finding is less likely to be true when the studies conducted in a field are smaller; when effect sizes are smaller; when there is a greater number and lesser preselection of tested relationships; where there is greater flexibility in designs, definitions, outcomes, and analytical modes; when there is greater financial and other interest and prejudice; and when more teams are involved in a scientific field in chase of statistical significance. Simulations show that for most study designs and settings, it is more likely for a research claim to be false than true. Moreover, for many current scientific fields, claimed research findings may often be simply accurate measures of the prevailing bias. In this essay, I discuss the implications of these problems for the conduct and interpretation of research.
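The heart of that argument can be boiled down to a single calculation: the chance that a "statistically significant" finding reflects a real effect (what he calls the positive predictive value) depends on how plausible the hypothesis was before the study and on how powerful the study is. Here is a small sketch of that calculation; the formula is the one from his essay (before any allowance for bias), but the plausibility and power numbers plugged in at the bottom are my own illustrative assumptions.

```python
# Sketch of the core calculation in Ioannidis's essay: the probability that a
# "statistically significant" finding is actually true (the positive predictive
# value, PPV), before accounting for bias.
#
#   PPV = (1 - beta) * R / (R + alpha - beta * R)
#
# R     = pre-study odds that a probed relationship is true
# alpha = type I error rate (conventionally 0.05)
# beta  = type II error rate, i.e. 1 minus the study's statistical power

def ppv(R, alpha=0.05, power=0.8):
    beta = 1 - power
    return (1 - beta) * R / (R + alpha - beta * R)

# Illustrative scenarios (the R and power values are assumptions, not his):
print(f"Plausible hypothesis, well-powered study: {ppv(R=0.5, power=0.8):.0%}")
print(f"Long-shot hypothesis, well-powered study: {ppv(R=0.05, power=0.8):.0%}")
print(f"Long-shot hypothesis, small weak study:   {ppv(R=0.05, power=0.2):.0%}")
```

Once researchers start testing lots of long-shot hypotheses with small, underpowered studies, even findings that clear the usual statistical bar are more likely to be wrong than right, which is exactly his point.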
A more accessible discussion of Dr. Ioannidis's work was published in The Atlantic last month, by David Freedman. An excerpt:
Studies have gone back and forth on the cancer-preventing powers of vitamins A, D, and E; on the heart-health benefits of eating fat and carbs; and even on the question of whether being overweight is more likely to extend or shorten your life. How should we choose among these dueling, high-profile nutritional findings? Ioannidis suggests a simple approach: ignore them all.
For starters, he explains, the odds are that in any large database of many nutritional and health factors, there will be a few apparent connections that are in fact merely flukes, not real health effects – it's a bit like combing through long, random strings of letters and claiming there's an important message in any words that happen to turn up. …
Even if changing that one factor does bring on the claimed improvement, there’s still a good chance that it won’t do you much good in the long run, because these studies rarely go on long enough to track the decades-long course of disease and ultimately death. Instead, they track easily measurable health “markers” such as cholesterol levels, blood pressure, and blood-sugar levels, and meta-experts have shown that changes in these markers often don’t correlate as well with long-term health as we have been led to believe.
On the relatively rare occasions when a study does go on long enough to track mortality, the findings frequently upend those of the shorter studies.
…And so it goes for all medical studies, he says. Indeed, nutritional studies aren’t the worst. Drug studies have the added corruptive force of financial conflict of interest.
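If you want to see how easily those "flukes" turn up, here is a tiny simulation in the spirit of the random-letters analogy. It invents 100 dietary "factors" that are pure random noise, checks each one against an equally random "health score," and counts how many clear the usual significance bar anyway. Everything here is simulated; no real data is involved.

```python
# Simulation only: shows how pure noise yields "significant" findings when you
# test many factors at once. All of the data below is randomly generated.
import random
import statistics
from math import sqrt

random.seed(1)

n_people = 500
n_factors = 100  # 100 made-up dietary "factors", every one of them pure noise
outcome = [random.gauss(0, 1) for _ in range(n_people)]  # a made-up "health score"

def correlation(xs, ys):
    """Plain Pearson correlation coefficient."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    sx, sy = statistics.pstdev(xs), statistics.pstdev(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) * sx * sy)

# Rough 5% significance cutoff for a correlation measured on this many people.
cutoff = 1.96 / sqrt(n_people)

false_alarms = 0
for _ in range(n_factors):
    factor = [random.gauss(0, 1) for _ in range(n_people)]  # pure noise
    if abs(correlation(factor, outcome)) > cutoff:
        false_alarms += 1

print(f"'Significant' links found among {n_factors} meaningless factors: {false_alarms}")
```

Run it and a handful of "significant" connections typically show up, even though every factor was random by construction. Now imagine each of those flukes getting its own headline.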
To your continued health!
Patrick Malone
Patrick Malone & Associates