A few sound methods can protect you from the rising tide of published medical bunk
Medical hype flourishes in the media-saturated modern world, with the internet testing consumer gullibility 24/7. But for a dozen years, an expert collective, based at the University of Minnesota School of Public Health, has battled the rising tide of health-related bunk with the online watchdog site healthnewsreview.org.
The careful, insightful work by the site and its contributors would be well served if patients stepped up to become more skeptical consumers. This isn’t hard. It can make a difference, improving not only medicine but also the care that we and our loved ones get.
Infographic credit: Science magazine, based on the Retraction Watch database of increasing retractions of published scientific studies
Do we smell a rat? Not all published medical studies can pass sniff tests.
How unquestioning can you afford to be about medical research when, in recent days, reports have cropped up about incidents like these at elite institutions:
Harvard Medical School and Brigham and Women’s, one of Boston’s leading hospitals, have been forced to retract 31 studies led by Piero Anversa, a once-celebrated cardiologist. His cardiac stem-cell research sparked a huge but unsupported shift in clinicians’ thinking about heart care. But it turns out, investigators said, the much-disputed works from Anversa’s labs contained false or fabricated data.
Dartmouth College investigated Dr. H. Gilbert Welch, described by the New York Times as “one of the country’s most influential researchers in cancer screening” and in the risks of its aggressive and excessive use, which results in over-diagnosis and over-treatment, and disciplined him for plagiarism. Welch disagreed with the university’s findings and quit.
The National Institutes of Health, one of the nation’s leading funders of medical-science research, was forced to shut down a $100-million study on alcohol use. The embarrassing collapse occurred after the New York Times reported that Dr. Kenneth J. Mukamal, an associate professor of medicine at Harvard Medical School who served as a key advocate for and then lead investigator of the planned research, had worked with Big Alcohol in unacceptable ways, letting the industry help pay for and shape the research, which critics said leaned from the outset toward finding health benefits in moderate drinking.
What’s going on here? It’s simple: Americans spend more than $3 trillion annually on health care-related costs, making medicine a big, lucrative business. Not only do doctors, hospitals, and academic medical centers compete to build patient volumes, they do so by emphasizing their caregivers’ expertise. They promote not only credentials but also their work to advance clinical care and medical science through prestige-building studies published in medical journals. Research is a prized quality and commodity at medical schools, colleges and universities, and specialized facilities — they’re jammed with Ph.D.s and M.D.s who live by the “publish or perish” mantra. This isn’t necessarily bad if it means that medicine and science advance due to all this energy. But the field also has been flooded with medical and scientific papers — and more of these than ever are found to have problems so serious that they’re subject to recalls, like junk cars (see graphic).
Meantime, doctors and patients race to keep up with the latest reported developments in drugs and treatments, because no one wants to miss out on something that could change or save lives.
And the ravenous health PR machinery has become a beast clamoring for food. But savvy consumers can protect themselves from the many research clunkers flying around — and the articles on them — that can harm patients and their health and wellbeing.
Pore over the invaluable material on the healthnewsreview.org site (and some other important such resources), and helpful information leaps up on avoiding dubious health and medical stories and the studies on which they’re based. A more comprehensive list, with nifty hyperlinks to detailed information on each item, is available by clicking here. But let’s spotlight a few, too:
Words to be wary of: Mental alarms should sound as soon as certain terms appear in health or medical articles and the studies they’re based on, warns Gary Schwitzer, the founder and publisher of healthnewsreview.org and a journalist who has written in the field for four decades. Be wary of terms like cure, miracle, breakthrough, promising, dramatic, hope, and victim. These can be warnings of hype and of claims lacking rigorous, scientific substantiation. Other staples of the promotion thesaurus should also put consumers on alert: stories about exciting, groundbreaking, or game-changing drugs or therapies. What’s the harm in inflated words and descriptions? They can be hurtful to already sick patients, as the site noted recently of a breast cancer treatment that it deemed promoted in excessive fashion: Suzanne Hicks, an active member of the National Breast Cancer Coalition, said hyping a treatment for a grabby headline is “simply cruel” to ill patients who are “often willing to do anything to survive.”
Problems to the Nth degree, including heart-tugging people stories: If your neighbor rubs motor oil on his big belly and claims this has led to his losing seven pounds in a day, would you race to get a can of his magic elixir? If your boss’s wife swears she never gets colds because she wears a faux fur wrap from October to April, would you make your spouse don one, too? Medical-scientific studies typically disclose the number of participants studied, the N value for their data set. Be wary of tiny N values. These may be reported in case notes, which doctors publish because the cases may be helpful, intriguing, or outliers. But these and other low-N studies too often get “interpreted” beyond what common sense allows. For example, three patients who fast intermittently see sudden improvement in their diabetes. So, should all diabetics stop eating several times a week to lose weight and reduce their insulin use? When eight patients a year get sick from bacteria commonly found in cats and dogs, should all pet owners recoil from a rare friendly lick from Fido or Tabby? When one metastatic breast cancer patient among 332 in a clinical trial goes into remission, should cancer experts across the country drop everything and adopt the therapy that one woman received? With the way the news grinds these days, we all appreciate “good” stories. But one or two instances do not add up to an accepted treatment or medical-science advance. The same caution applies to “patient anecdote” reporting. Yes, science and data may need humanizing to be more comprehensible. But doctors, specialists, academic medical centers, and hospitals all scour for “perfect” patients, those around whom a full story can be built — though it’s often nothing more than a pitch for business for a specific drug, surgery, or therapy. Does one hospital really do it better than another? Is the treatment safe, effective, affordable, and medically required?
Rigorous and right questions don’t always get answered by emotional appeals focused on a scant few patients and their experiences.
Observe closely but conclude rigorously: Five buff guys start working out in your gym, all wearing tight lime T-shirts. They seem to know each other. But they work out separately, setting top fitness standards, as recorded on charts posted near your gym’s elaborate weight and aerobic devices. So, can you conclude that you could be as healthy as these role models, if only you, too, wore a green top? Crazy? Great. You’ve mastered a distinction that eludes too many patients who read reports on “observational” research, a study type exploding in science and medicine. It is an invaluable approach, helping scientists, for example, conclude that cigarettes cause cancer or that cars could be made safer. But scrupulous researchers use great care with findings based on observations rather than controlled experiments (a.k.a. rigorous clinical trials). As Schwitzer has written: “[A]n observational study cannot prove cause and effect. Statistical association is not proof of cause-and-effect. It is not unimportant. But no one should make it more than what it is.” It is tough to sift through the numerous variables that might affect such studies, including how factors interact. Cornell’s Brian Wansink somehow made complex studies on kids, for example, seem simple and easy: Plunk a sticker of a popular character like Sesame Street’s Elmo on an apple and youngsters will choose this more healthful option over others presented. But his experiments didn’t hold up, not least because he claimed to work with 8- to 11-year-olds when his subjects were ages 3 to 5. There’s a big difference in how tots versus older children react to cartoon-based stimuli. Nutrition research, in particular, is tough to do well and seems too easy to misinterpret, with its observational studies extrapolated in excess into hard, fast, and dubious dicta: Certain foods get deemed evil and bad while others, magically, are good or super.
Money — the corrupting influence of interested parties — plays an unfortunate role here.
A solid Rx: Lots of skepticism about role of $$$ in medical research
Cash plays a corrosive role in medical-science research, and it may be hard for many consumers to detect the exact role it plays. Its detriments, though, are clear, as the New York Times noted, for example, of Big Pharma’s meddling through consulting and speaking fees, gifts, and other payments.
“Decades of research and real-world examples,” the newspaper editorialized recently, “have shown that such entanglements can distort the practice of medicine in ways big and small. Even little gifts have been found to influence doctors’ habits and their perceptions of a given company’s products. Larger payments have been shown to affect the design of clinical trials and the reporting of trial results, among other things. And such financial entanglements have proved devastating to individual patients — and to society at large. The opioid epidemic, to take one recent example, was partly spread by doctors who were persuaded to ignore warning bells and prescribe these drugs liberally by companies that showered them with gifts and consulting fees.”
Medical journals, which are supposed to vet studies before publishing them, subjecting them, for example, to peer review, long have sought declarations of potential conflicts of interest from researchers. These have been toothless demands, as Baselga’s painful lapses at Sloan Kettering demonstrated. That does not mean that the journals and, indeed, federal regulators could not step up their conflict-of-interest reporting requirements in a big way. They should.
For patients, a byproduct of the go-go federal approval system has shown up in published research’s increasing reliance on surrogate measures, markers, or end points, a reliance that gets referred to only in shorthand or vague terms in news coverage. Surrogates may be faster and easier to build data on. Cancer drugs, for example, can get the green light from regulators because tests show they may shrink tumors or delay their growth. That doesn’t mean that patients who take these drugs, with their sky-high prices, considerable side effects, and potential risks, live better or longer. Some diabetes medications now target lower hemoglobin A1c (HbA1c), a measure of average blood sugar levels over the preceding three months. But just because they hit that mark doesn’t mean they’re more beneficial than existing meds, particularly because the new drugs may have harmful heart effects. Patient advocates have criticized the FDA for allowing more surrogate-measure studies, saying they may benefit drug- and device-makers more than patients.
Patients also can be hard-pressed to delve into published research, and, frankly, many news stories about it, to tease out individual and institutional conflicts of interest, especially possible financial issues with prominent medical practitioners. ProPublica has tried to act as a watchdog in this area, especially with its user-friendly database of payments to doctors from Big Pharma and medical device makers. You can punch in your own practitioners’ names to see if you think the gifts they have declared, as they legally must, compromise your care. The data comes from the federal Centers for Medicare and Medicaid Services, which also offers online search capacities on its site (click here).
Healthnewsreview.org has stressed how important it is for consumers always to try to determine who funded any given piece of research, and whether or how the money might have influenced it. But many articles lack that information, and it may be tough to discover, even by going back to medical journals or online publications. As discussed, the conflicts may not be disclosed, as is ethically required. They also may be hidden. Medical historians and patient advocates have dug into the recent past to find evidence of how Big Tobacco, Big Sugar, Big Pharma, and medical device makers have gamed the research process and system, funding and manipulating purportedly objective studies to promote their products. This practice has only spread, with critics seeing it done, with variations, by the National Football League, the National Hockey League, and others. In recent times, journalists also have dug into the finances of patient-advocacy groups, finding that they receive hefty sums from Big Pharma and, as a result, may be compromised in their representation of the interests of patients with specific diseases or conditions. Drug makers also have created faux advocacy groups, so-called “astroturf” proponents, to push products. And, of course, the rise of online outlets has meant that just plain bad, wrong, and ridiculous health and medical information floods the internet, especially from bogus sites and “publications” that mimic bona fide medical journals. Fie on trolls!
I hope you and yours stay so healthy, however, that you have no reason to consider medical science research. Here’s hoping you’re so well that all you’re doing online is looking at cute kittens and puppies and finding delicious new food ideas!
Figuring out those numbers in studies
Numbers can scare the best of us, but they needn’t be confusing when reading medical-science research or stories about it. Knowing how to deal with some key figures can carry you far and well.
Look for expressions of risk, not just with relative but also absolute numbers, as healthnewsreview.org recommends. A little math shows why. Let’s say a treatment reduces heart attacks in a group of women from 2 per 100 of them to 1 per 100. That’s an expression of absolute numbers, and it might make the decision clearer if, for example, you were weighing whether to take a cholesterol-reducing statin to cut your absolute risk of a heart attack. Now this same data also can be discussed in percentages, allowing a calculation of relative risk (which is typically expressed in percentages). The treatment reduces heart attacks from 2 percent to 1 percent, a change of 50 percent. That figure might be used in a news article to report the drug cut heart attacks by … 50 percent. Sounds significant, though it may be less so in actual numbers, right?
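The arithmetic above can be sketched in a few lines of Python. The 2-per-100 and 1-per-100 figures are this article’s hypothetical example, not real trial data, and the function names here are invented for illustration:

```python
def absolute_risk_reduction(untreated_rate, treated_rate):
    """Difference in event rates: a change in percentage points."""
    return untreated_rate - treated_rate

def relative_risk_reduction(untreated_rate, treated_rate):
    """Proportional change in the event rate: the headline-friendly figure."""
    return (untreated_rate - treated_rate) / untreated_rate

untreated = 2 / 100  # 2 heart attacks per 100 untreated women
treated = 1 / 100    # 1 heart attack per 100 treated women

arr = absolute_risk_reduction(untreated, treated)
rrr = relative_risk_reduction(untreated, treated)

print(f"Absolute risk reduction: {arr:.1%}")  # one woman in a hundred helped
print(f"Relative risk reduction: {rrr:.0%}")  # the same data, sounding far bigger
```

The same one-in-a-hundred benefit reads as a modest “1 percentage point” in absolute terms and as a dramatic “50 percent” in relative terms, which is exactly why skeptical readers should ask which expression a story is using.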
The number needed to treat (NNT) asks the question: How many people need to get this particular drug/test/treatment for one person to benefit? The lower the number, the better. If the NNT of a treatment is one, that means everyone treated is helped: one person treated equals one person’s life made better. But that’s true only for imminently life-threatening conditions in which everyone who is not treated dies: an appendix about to burst, or a heart that has stopped beating and needs to be shocked back into rhythm.
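The NNT connects directly to the absolute risk numbers discussed earlier: it is the reciprocal of the absolute risk reduction. That formula is the standard epidemiology definition rather than something this article spells out, so take this as an illustrative sketch:

```python
def number_needed_to_treat(untreated_rate, treated_rate):
    """NNT = 1 / absolute risk reduction (the standard definition)."""
    arr = untreated_rate - treated_rate
    if arr <= 0:
        raise ValueError("these data show no benefit from treatment")
    return 1 / arr

# A stopped heart shocked back into rhythm: everyone untreated dies,
# everyone treated is helped, so the NNT is one.
print(number_needed_to_treat(1.0, 0.0))

# The statin example from the risk section: risk falls from 2% to 1%,
# so about 100 women must be treated to prevent one heart attack.
print(number_needed_to_treat(0.02, 0.01))
```

Note how quickly a seemingly impressive “50 percent relative reduction” turns into treating a hundred people so that one benefits, which is the comparison the NNT is designed to surface.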
For every other medical condition, the NNT is higher than one, sometimes much higher. Screening tests for early detection of cancer may have NNTs in the thousands: one person’s life saved for every few thousand tested. That can be worthwhile, as long as there is little harm inflicted on the thousands tested. But the reason the PSA test for prostate cancer was nixed by the U.S. Preventive Services Task Force is that it had a very high NNT — 5,000 or even higher — and it inflicts a lot of harm in the downstream consequences when a man learns he may have early prostate cancer. For every life that may be saved, dozens of men are killed or maimed by the surgery.
Many drug treatments have NNT numbers that show they’re great in some circumstances, not so great in others. This story is often repeated in American medicine, especially for lucrative drugs that are still patent-protected from generic competitors. A drug gets tested and proven to work for one condition, and then it gets used for many more conditions without good evidence of usefulness.
Finally, it might seem so basic that it shouldn’t need to be mentioned. But too many news articles on medical studies fail to discuss how much a therapy or drug will cost. These can be the most important figures in a piece. With medical prices soaring these days, it’s unacceptable to omit financial information, and for patients not to think hard about this key aspect of their potential care.
Worse than bad science? None at all …
Although medicine may have its struggles with science-based research, for patients a far greater menace lurks in public information channels: Let’s be polite and call it utter humbug.