Our friends at HealthNewsReview.org are devoted to helping consumers understand what is real and what is puffery in claims about health-care interventions. The HNR website is a wonderful resource for learning how to evaluate the often breathless claims the popular media like to call “news.”
If you’re the kind of health news hound who wants to read a research study and understand not only the science but also how to judge the quality of the study itself, a recent essay in the New York Times is your go-to teacher.
Written by Austin Frakt, a regular contributor and health economist, “How to Know Whether to Believe a Health Study” aims to sharpen your critical thinking when you hear or read claims like drinking coffee protects against colon cancer, or that taking a daily aspirin is good for your heart. Often, such advice is just attention-grabbing bluster, but as Frakt pointed out, “we may not know how to distinguish the research duds from the results we should heed.”
Any research published in a reputable journal is supposed to be fully vetted by experts in that field. Most lay people don’t have the scientific background to understand everything covered in a research study, and most members of the media who take on the responsibility of translating it for popular consumption don’t either. “Yet, if you’re not an expert,” Frakt wrote, “you can do a few simple things to become a more savvy consumer of research.”
Here are a few important things Frakt advises readers to consider:
- If the study examined the effects of a therapy only on animals or through a piece of lab equipment, it offers only limited insight into how it will work in humans. If a study did use human subjects, think about how widely applied the results can be: “What method did the researchers use? How similar am I to the people it examined?”
- Did the study analyze the harms as well as the benefits of the drug/device/procedure?
- Analyze the basis for what researchers call “causal claims” – X leads to or causes Y – and whether the claim pertains to a general population or only to a specific group defined by sex, age, pre-existing conditions, etc.
The perfect study, Frakt noted, would rely on identical subject groups experiencing exactly the same conditions, except that one group gets the therapy and the other doesn’t, so that comparing outcomes would identify a definitive causal effect. That study doesn’t exist; science, like everything else, isn’t perfect, and the real world is always complicated.
Good research addresses this reality, Frakt explained, by refining methods “to infer what would happen to people who might be like you in two different circumstances, such as taking or not taking a drug.” The gold standard for these comparisons is the randomized controlled trial, in which subjects are randomly assigned either to receive the treatment or to serve as controls, who get a placebo (an inert or fake therapy) or nothing.
When each cohort is large enough, the two groups are likely to be statistically similar, so any differences that emerge can reasonably be attributed to getting or not getting the treatment. Again, to understand how the treatment might affect you, pay attention to how well the study groups represent your age, gender, environment, medical history, etc.
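For readers who like to see the statistics in action, Frakt’s point about cohort size can be illustrated with a quick simulation (our own sketch, not from his essay; the “age” values and group sizes are made up): when subjects are split into two groups at random, a baseline trait such as average age lines up between the groups more and more closely as enrollment grows.

```python
import random
from statistics import mean

random.seed(0)  # fixed seed so the sketch is reproducible

def simulate_trial(n):
    """Randomly split n subjects into treatment and control groups and
    return the gap between the groups' average baseline trait."""
    # Hypothetical baseline trait (say, age) for n enrolled subjects
    subjects = [random.gauss(50, 12) for _ in range(n)]
    random.shuffle(subjects)  # random assignment, as in an RCT
    treatment, control = subjects[: n // 2], subjects[n // 2:]
    return abs(mean(treatment) - mean(control))

# The baseline gap between groups tends to shrink as enrollment grows,
# which is what lets a large trial attribute differences to the treatment.
for n in (20, 200, 20000):
    print(f"n={n:>6}: baseline age gap = {simulate_trial(n):.2f}")
```

Run it a few times with different seeds and the pattern holds: small trials can end up with lopsided groups by chance, while large ones rarely do.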
So if a drug is being tested for how well it treats allergies and the groups didn’t include children or the elderly, you can’t reasonably conclude that people of those ages would experience the same benefits, harms or side effects as the study groups did. And most drug trials, Frakt noted, focus on narrow populations researchers believe are most likely to benefit from the medicine.
Frakt offered an example of what can happen when a drug with a proven trial benefit is given to patients who weren’t like the study subjects. “Based on the results of randomized trials that included only adults,” he wrote, “prescriptions of drugs known as proton pump inhibitors to infants with gastroesophageal reflux disease grew sevenfold between 2000 and 2004. Only later, in 2009, a direct study of infants found that those drugs caused them harm, with no benefit.”
A study that doesn’t involve randomized subjects might be called “observational” or “nonexperimental.” Such studies draw on existing data sets, such as Medicare records or very large surveys. Some are large enough to validate treatment results across many groups. Because they don’t generate new data, Frakt noted, these kinds of studies are less expensive to conduct and produce results more quickly.
“People like you are more likely to be represented in a nonexperimental database study, so your top concern might be whether the findings are valid,” said Frakt. “… it often compares groups of people who could have self-selected into receiving treatment or not. Maybe those who opted to receive it are systematically different – healthier, sicker, more careful, for example – and that’s what drives the findings. If so, what might appear causal isn’t, giving rise to the familiar ‘correlation does not imply causation.’” (See our blog, “Confusing Correlation with Causation.”)
Frakt believes that most news reports tell you when a study is nonexperimental, but we’re not as certain. Pay attention, and if nothing is said about the study’s design and how its researchers adjusted for differences and tested assumptions, you’ll have to read the study to know. News reports also should include input from experts not involved in the research about whether the adjustments and tests were sufficient. That’s a judgment call, but made from an informed position. But as Frakt cautioned, “There is always room for doubt.”
So whether you hear about a study from the media or read it yourself, you can never be sure that its conclusions are absolutely valid, or that they apply to you. Science is cumulative: the body of knowledge builds, and a single study is almost always meaningless until its results are replicated in additional well-designed studies conducted free of conflicts of interest.
As Frakt concluded, “Few things are miracle cures, but when one shows up, we’ll see its signature in not just one study, but in many. Yes, that can take time. But if you want solid evidence you can count on, you cannot also be impatient.”