IN THIS ISSUE
Do we smell a rat? Not all published medical studies can pass sniff tests.
A solid Rx: Use lots of skepticism about role of $$$ in medical research
Figuring out those numbers in studies
Worse than bad science? None at all …
BY THE NUMBERS
2.5 million
Estimated number of scientific papers published globally each year.
~1,000
Retraction Watch estimate of number of papers so flawed they were pulled from public view in 2014 by science and medical journals.
17
Number of years it typically takes findings from a valid, published study to become part of accepted medical practice.
A few sound methods can protect you from rising tide of published medical bunk
Dear Reader,
Medical hype flourishes in the media-saturated modern world, with the internet testing consumer gullibility 24/7. But for a dozen years, an expert collective, based at the University of Minnesota School of Public Health, has battled the rising tide of health-related bunk with the online watchdog site healthnewsreview.org.
At a time, though, when Americans are inundated by a “tsunami of not ready for prime time” health news, based on a growing body of problematic medical-science research and aggressive PR tactics by big medical institutions, fiscal support for healthnewsreview.org is fading. The site soon will cease daily publication of new content, notably its debunking of bad studies and dubious reports about them.
The careful, insightful work by the site and its contributors would be well served if patients stepped up to become more skeptical consumers. This isn’t hard. It can make a difference, improving not only medicine but also the care that we and our loved ones get.
Infographic credit: Science magazine, based on the Retraction Watch database of increasing retractions of published scientific studies
Do we smell a rat? Not all published medical studies can pass sniff tests.
How can you take medical research at face value when, in recent days, reports have cropped up about incidents like these at elite institutions:
- Harvard Medical School and Brigham and Women’s, one of Boston’s leading hospitals, have called for the retraction of 31 studies led by Piero Anversa, a once-celebrated cardiologist. His cardiac stem-cell research sparked a huge but unsupported shift in clinicians’ thinking about heart care. But it turns out, investigators said, the much-disputed works from Anversa’s labs contained falsified or fabricated data.
- Memorial Sloan Kettering Cancer Center, one of the nation’s leading oncology centers, has been engulfed by controversies caused by the failure of Dr. José Baselga, the institution’s chief medical officer, to disclose the beaucoup bucks he received for flacking cancer drugs. He got millions of dollars in Big Pharma payments while publishing studies on drugs and cancer care in top medical journals. He quit after ProPublica, an investigative news site, disclosed his ethical lapses.
- Dartmouth College investigated Dr. H. Gilbert Welch, described by the New York Times as “one of the country’s most influential researchers in cancer screening” and in the risks of its aggressive and excessive use, which can result in over-diagnosis and over-treatment, and disciplined him for plagiarism. Welch disputed the findings and quit.
- Cornell University, after much criticism, dug into the work of Brian Wansink, who directed the school’s Food and Brand Lab and became one of its much-quoted nutrition experts. Journals retracted six of his studies in a single day — 15 in all on eating and behavior — after the school found “misreporting of research data, problematic statistical techniques, failure to properly document and preserve research results, and inappropriate authorship.” He disagreed with the investigation and quit.
- The National Institutes of Health, one of the nation’s leading funders of medical-science research, was forced to shut down a $100-million study on alcohol use. The embarrassing collapse occurred after the New York Times reported that Dr. Kenneth J. Mukamal, an associate professor of medicine at Harvard Medical School who served as a key advocate and then lead investigator of the planned research, had worked with Big Alcohol in unacceptable ways to pay for and shape the research, which critics said leaned from the outset toward finding health benefits in moderate drinking.
What’s going on here? It’s simple: Americans spend more than $3 trillion annually on health care-related costs, making medicine a big, lucrative business. Not only do doctors, hospitals, and academic medical centers compete to build patient volumes, they do so by emphasizing their caregivers’ expertise. They promote not only credentials but also their work to advance clinical care and medical science through prestige-building studies published in medical journals. Research is a prized quality and commodity at medical schools, colleges and universities, and specialized facilities — they’re jammed with Ph.D.s and M.D.s who live by the “publish or perish” mantra. This isn’t necessarily bad if it means that medicine and science advance due to all this energy. But the field also has been flooded with medical and scientific papers — and these more than ever are found to have problems so serious that they’re subject to recalls like junk cars (see graphic).
Meantime, doctors and patients race to keep up with the latest reported developments in drugs and treatments, because no one wants to miss out on something that could change or save lives.
And the ravenous health PR machinery has become a beast clamoring for food. But savvy consumers can protect themselves from the many research clunkers flying around — and the articles on them — that can harm patients and their health and wellbeing.
Pore over the invaluable material on the healthnewsreview.org site (and some other important resources like it), and helpful guidance leaps out on how to avoid dubious health and medical stories and the studies on which they’re based. A more comprehensive list, with nifty hyperlinks to detailed information on each, is available by clicking here. But let’s spotlight a few, too:
Words to be wary of: Mental alarms should sound as soon as certain terms appear in health or medical articles and the studies they’re based on, warns Gary Schwitzer, the founder and publisher of healthnewsreview.org and a journalist who has written in the field for four decades. Be wary of terms like cure, miracle, breakthrough, promising, dramatic, hope, and victim. These can be warnings of hype and claims lacking rigorous, scientific substantiation. The same promotional thesaurus should put consumers on alert for stories about exciting, groundbreaking, or game-changing drugs or therapies. What’s the harm in inflated words and descriptions? They can be hurtful to already sick patients, as the site noted recently of a breast cancer treatment that it deemed to be promoted in excessive fashion: Suzanne Hicks, an active member of the National Breast Cancer Coalition, said hyping a treatment for a grabby headline is “simply cruel” to ill patients who are “often willing to do anything to survive.”
Problems to the Nth degree, including heart-tugging people stories: If your neighbor rubs motor oil on his big belly and claims this has led to his losing seven pounds in a day, would you race to get a can of his magic elixir? If your boss’s wife swears she never gets colds because she wears a faux fur wrap from October to April, would you make your spouse don one, too? Medical-scientific studies typically disclose the number of participants studied, the N value for their data set. Be wary of tiny N values. These may be reported in case reports, which doctors publish because they may be helpful, intriguing, or outliers. But these and other low-N studies too often get “interpreted” beyond what common sense allows: For example, three patients who fast intermittently see sudden improvement in their diabetes. So, should all diabetics stop eating several times a week to lose weight and reduce their insulin use? When eight patients get sick each year from bacteria commonly found in cats and dogs, should all pet owners recoil from a rare friendly lick from Fido or Tabby? When one metastatic breast cancer patient among 332 in a clinical trial goes into remission, should cancer experts across the country drop everything and adopt the therapy that one woman received? With the way the news grinds these days, we all appreciate “good” stories. But one or two instances do not morph into an accepted treatment or medical-science advance. The same caution applies to “patient anecdote” reporting. Yes, science and data may need humanizing to be more comprehensible. But doctors, specialists, academic medical centers, and hospitals all scour for “perfect” patients, those around whom a full story can be built — though it’s often nothing more than a pitch for business for a specific drug, surgery, or therapy. Does one hospital really do it better than another? Is the treatment safe, effective, affordable, and medically required? Rigorous and right questions don’t always get answered by emotional appeals focused on a scant few patients and their experiences.
Smelling a rat — or understanding limits of mouse studies: The stories shot out from hospitals and research centers sound too good to be true. They tell of important studies and their results on memory loss, obesity, vision loss, aging, infections, blood clots, heroin addiction, fertility, and more. But patients need to smell a rat — or, to be more accurate, to be clear about animal-based research and its limits. White lab mice and humans are alike enough that it’s a key step for many drugs, surgeries, and therapies to undergo mouse trials before human testing. But the species also differ so much that not every drug or procedure that seems good in rodents has similar favorable results in us. So, dig into the litter of daily health and medical journalism and know that skepticism is a must with stories on mouse-based studies on olive oil and Alzheimer’s, or hibernation-like sleep and cancer, or antibodies that reduce fat and slim subjects down. By the way, animal tests typically come before a drug or treatment marches through the tougher standards of human clinical trials, which usually occur in three distinct phases. Companies and journalists can jump the gun, reporting on results from early phases of such trials. That’s its own problem because products that start fast can fail before they finish all the needed steps to show their safety and effectiveness. It raises false hopes to talk about incomplete research, and savvy patients should know to avoid this research trap.
Observe closely but conclude rigorously: Five buff guys start working out in your gym, all wearing tight lime T-shirts. They seem to know each other. But they work out separately, setting top fitness standards, as recorded on charts posted near your gym’s elaborate weight and aerobic devices. So, can you conclude that you could be as healthy as these role models are, if only you, too, wore a green top? Crazy? Great. You’ve mastered a difference that eludes too many patients who read reports on “observational” research, a study type exploding in popularity in science and medicine. It is an invaluable approach, helping scientists, for example, conclude that cigarettes cause cancer or that cars could be made safer. But scrupulous researchers take great care with findings based on observation rather than controlled experiments (a.k.a. rigorous clinical trials). As Schwitzer has written: “[A]n observational study cannot prove cause and effect. Statistical association is not proof of cause-and-effect. It is not unimportant. But no one should make it more than what it is.” It is tough to sift through the numerous variables that might affect such studies, including how factors interact. Cornell’s Wansink somehow made complex studies on kids, for example, seem simple and easy: Plunk a sticker of a popular character like Sesame Street’s Elmo on an apple and youngsters will choose this more healthful option over others presented. But his experiments didn’t hold up, not least because he claimed to work with 8- to 11-year-olds when his subjects were ages 3 to 5. There’s a big difference in how tots versus older children react to cartoon-based stimuli. Nutrition research, in particular, is tough to do well, and it seems too easy to misinterpret, with its observational studies extrapolated in excess into hard, fast, and dubious dicta: Certain foods get deemed evil and bad while others, magically, are good or super. Money — the corrupting influence of interested parties — plays an unfortunate role here.
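To make that trap concrete, here is a minimal, made-up simulation in Python, in the spirit of the green T-shirt example. Every number and variable name here is invented for illustration; the point is only to show how a hidden third factor can manufacture a strong “association” with no cause and effect behind it:

```python
# Hypothetical sketch: a hidden factor ("health consciousness") drives both the
# exposure (wearing a fitness tracker) and the outcome (good blood pressure),
# so the two look linked even though neither causes the other.
import random

random.seed(0)
rows = []
for _ in range(10_000):
    health_conscious = random.random() < 0.5                              # hidden confounder
    wears_tracker = random.random() < (0.7 if health_conscious else 0.2)  # exposure
    good_blood_pressure = random.random() < (0.6 if health_conscious else 0.3)  # outcome
    rows.append((wears_tracker, good_blood_pressure))

with_tracker = [bp for tracker, bp in rows if tracker]
without_tracker = [bp for tracker, bp in rows if not tracker]
print(f"Good blood pressure among tracker wearers: {sum(with_tracker) / len(with_tracker):.0%}")
print(f"Good blood pressure among non-wearers:     {sum(without_tracker) / len(without_tracker):.0%}")
# Roughly 53% vs. 38% -- a striking association, yet the tracker does nothing here.
```

That gap is exactly the kind of statistical association an observational study can report but cannot, on its own, turn into proof of cause and effect.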
A solid Rx: Lots of skepticism about role of $$$ in medical research
Cash plays a corrosive role in medical-science research, and it may be hard for many consumers to detect the exact role it plays. Its detriments, though, are clear, as the New York Times noted, for example, of Big Pharma’s meddling through consulting and speaking fees, gifts, and other payments.
“Decades of research and real-world examples,” the newspaper editorialized recently, “have shown that such entanglements can distort the practice of medicine in ways big and small. Even little gifts have been found to influence doctors’ habits and their perceptions of a given company’s products. Larger payments have been shown to affect the design of clinical trials and the reporting of trial results, among other things. And such financial entanglements have proved devastating to individual patients — and to society at large. The opioid epidemic, to take one recent example, was partly spread by doctors who were persuaded to ignore warning bells and prescribe these drugs liberally by companies that showered them with gifts and consulting fees.”
Medical journals, which are supposed to vet studies before they publish them, subjecting them, for example, to peer review, long have sought declarations of potential conflicts of interest from researchers. These have been toothless demands, as Baselga’s painful lapses at Sloan Kettering demonstrated. That does not mean that the journals — and, indeed, federal regulators — could not step up their conflict-of-interest reporting requirements in a big way. They should.
The Food and Drug Administration, in particular, under the Trump Administration has leaned hard toward Big Pharma and medical device makers, acceding to industry demands to expedite product oversight and testing to hurry purportedly beneficial prescription drugs and medical products to market. The agency itself relies more and more on fees from the industry to pay for speedier reviews — itself a big concern to critics.
For patients, a byproduct of the go-go federal system has shown up in an increasing reliance in published research — often referred to only in shorthand or vague terms in news coverage — on surrogate measures, markers, or endpoints. They may be faster and easier to build data on. Cancer drugs, for example, can get the green light from regulators because they show in tests that they may shrink tumors or delay their growth. That doesn’t mean that patients who take these drugs, with their sky-high prices, considerable side effects, and potential risks, live better or longer. Some diabetes medications now target lower hemoglobin A1c (HbA1c), a measure of average blood sugar levels over the preceding three months. But just because they hit that mark doesn’t mean they’re more beneficial than existing meds, particularly because the new drugs may have harmful heart effects. Patient advocates have criticized the FDA for allowing more surrogate-measure studies, saying they may benefit drug- and device-makers more than they do patients.
Patients also can be hard-pressed to delve into published research — and, frankly, into many news stories about it — to tease out individual and institutional conflicts of interest, especially possible financial issues with prominent medical practitioners. ProPublica has tried to act as a watchdog in this area, especially with its user-friendly database of payments to doctors from Big Pharma and medical device makers. You can punch in your own practitioners’ names to see if you think the gifts they have declared, as they legally must, compromise your care. The data comes from the federal Centers for Medicare and Medicaid Services, which also offers online search capacities on its site (click here).
Healthnewsreview.org has stressed how important it is for consumers always to try to determine who funded any given piece of research and whether that funding might have influenced it. But many articles lack that information, and it may be tough to discover, even by going back to medical journals or online publications. As discussed, the conflicts may not be disclosed, as is ethically required. They also may be hidden. Medical historians and patient advocates have dug into the recent past to find evidence of how Big Tobacco, Big Sugar, Big Pharma, and medical device makers have gamed the research process and system, funding and manipulating purportedly objective studies to promote their products. This practice has only spread, with critics seeing it done, with variations, by the National Football League, the National Hockey League, and others. In recent times, journalists also have dug into the finances of patient-advocacy groups, finding they receive hefty sums from Big Pharma and, as a result, may be compromised in their most robust representation of the interests of patients with specific diseases or conditions. Drug makers also have created faux advocacy groups to push products, so-called “astroturf” proponents. And, of course, the rise of online outlets has meant that just plain bad, wrong, and ridiculous health and medical information floods the internet, especially with bogus sites and “publications” that mimic bona fide medical journals. Fie on trolls!
Just to remind: Medical-science research plays a crucial part in shaping clinical care. It can determine the course for drugs and treatments that save and change lives. But it isn’t a central concern for most of us — and it often won’t become one until we see the direct link to our own ill or injured loved ones, friends, or work colleagues. Then we may be tempted to download and print out sheaves of studies and articles, hoping to squeeze in some discussion of their findings in the limited time that doctors typically allot to patients in a routine office visit (roughly 15 minutes or so). To be honest, your doctor may take a deep breath and pause for a moment or two, but then she likely will spend a bit of time with your top concern. So, research well and thoughtfully, perhaps taking a strategy or tactic or two not only from this newsletter but also from healthnewsreview.org and other resources it may lead you to. If you’ve parsed the information you’ve found skeptically and in a smart fashion, you may find that your doctor, who also can be overwhelmed by the struggle to keep up to date, appreciates you as an especially great patient with key information to share politely — and your care benefits accordingly.
I hope you and yours stay so healthy, however, that you have no reason to consider medical science research. Here’s hoping you’re so well that all you’re doing online is looking at cute kittens and puppies and finding delicious new food ideas!
Figuring out those numbers in studies
Numbers can scare the best of us, but they needn’t be confusing when you read medical-science research or stories about it. Knowing how to deal with some key figures can carry you far and well.
Look for expressions of risk not just in relative but also in absolute numbers, as healthnewsreview.org recommends. A little math shows why. Let’s say a treatment reduces heart attacks in a group of women from 2 per 100 to 1 per 100. That’s an expression of absolute numbers, and it might clarify your decision, for example, about whether to take a cholesterol-reducing statin to cut your absolute risk of a heart attack. Now, this same data also can be discussed in percentages, allowing a calculation of relative risk reduction (which typically is expressed in percentages). The treatment reduces heart attacks from 2 percent to 1 percent — a relative change of 50 percent. That figure might be used in a news article to report that the drug cut heart attacks by … 50 percent. Sounds significant, though it may be less so in actual numbers, right?
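For readers who like to check the arithmetic, here is a minimal sketch in Python of that hypothetical statin example. The numbers and the function are illustrative only, not drawn from any actual trial:

```python
def risk_reduction(events_without, events_with, group_size):
    """Return absolute and relative risk reduction for a simple two-group comparison."""
    risk_without = events_without / group_size   # e.g., 2 heart attacks per 100 women
    risk_with = events_with / group_size         # e.g., 1 heart attack per 100 women
    absolute = risk_without - risk_with          # drop in percentage points
    relative = absolute / risk_without           # drop as a share of the starting risk
    return absolute, relative

arr, rrr = risk_reduction(events_without=2, events_with=1, group_size=100)
print(f"Absolute risk reduction: {arr:.1%}")  # 1.0% -- one fewer heart attack per 100 women
print(f"Relative risk reduction: {rrr:.0%}")  # 50% -- the figure headlines tend to favor
```

Same data, two very different-sounding numbers — which is exactly why it pays to ask for both.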
If it’s available, look hard for the Number Needed to Treat, or NNT, which turns absolute risk reduction into an even more intuitive figure.
The NNT asks the question: How many people need to get this particular drug/test/treatment for one person to benefit? The lower the number, the better. If the NNT of a treatment is one, that means everyone treated is helped. One person treated equals one person’s life made better. But that’s true only for imminently life-threatening conditions in which everyone who goes untreated dies: an appendix about to burst or a heart that has stopped beating and needs to be shocked back into rhythm.
For every other medical condition, the NNT is higher than one, sometimes much higher. Screening tests for early detection of cancer may have NNTs in the thousands: one person’s life saved for every few thousand tested. That can be worthwhile, as long as there is little harm inflicted on the thousands tested. But the reason the PSA test for prostate cancer was nixed by the U.S. Preventive Services Task Force is that it had a very high NNT — 5,000 or even higher — and it inflicts a lot of harm in the downstream consequences when a man learns he may have early prostate cancer. For every life that may be saved, dozens of men are killed or maimed by the surgery.
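Because the NNT is simply the inverse of the absolute risk reduction, you can work it out yourself. Here is a brief sketch in Python using the hypothetical numbers above; the screening figures are illustrative stand-ins, not taken from any specific study:

```python
def number_needed_to_treat(risk_untreated, risk_treated):
    """NNT = 1 / absolute risk reduction: how many must be treated for one to benefit."""
    absolute_reduction = risk_untreated - risk_treated
    if absolute_reduction <= 0:
        raise ValueError("No benefit shown by these numbers")
    return 1 / absolute_reduction

# The statin example above: risk falls from 2 per 100 to 1 per 100.
print(round(number_needed_to_treat(0.02, 0.01)))      # 100 women treated per heart attack prevented

# A hypothetical screening test: risk falls from 3 per 10,000 to 1 per 10,000.
print(round(number_needed_to_treat(0.0003, 0.0001)))  # 5,000 people screened per life saved
```

Notice how a benefit that sounds dramatic in relative terms can still translate into treating or testing a great many people for each person helped.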
Many drug treatments have NNT numbers that show they’re great in some circumstances, not so great in others. This story is often repeated in American medicine, especially for lucrative drugs that are still patent-protected from generic competitors. A drug gets tested and proven to work for one condition, and then it gets used for many more conditions without good evidence of usefulness.
More information and lots of NNTs can be found by clicking here to get to a site dedicated to this metric. Consumers also may want to visualize how medical interventions might affect them, and some Maryland researchers have found a nifty way to make the NNT even more useful to them.
Finally, it might seem so basic that it shouldn’t need to be mentioned. But too many news articles on medical studies fail to discuss how much a therapy or drug will cost. These can be the most important figures in a piece. With medical prices soaring these days, it’s unacceptable to omit financial information, and for patients not to think hard about this key aspect of their potential care.
Worse than bad science? None at all …
Although medicine may have its struggles with science-based research, for patients a far greater menace lurks in public information channels: Let’s be polite and call it utter humbug.
To be sure, outright medical quackery has thrived forever, leading patients to suffer from everything from taking pills and rubbing on salves with poisonous mercury to receiving tobacco-smoke enemas after nearly drowning.
But now, in a time of supposed scientific modernity, how can rational consumers still flock to nostrums peddled, free of scientific evidence of their value, by homeopaths and naturopaths?
Do big-name academic medical centers and big hospitals — in their crush to compete for patients — make medicine smarter or just more venal by throwing elbows to get in front of the pack with trendy treatments like those involving stem cells or far-edge cancer therapies? Do they help or harm the public in their controversial and aggressive embrace — again, with great business potential — of unproven “alternative therapies” that lack rigorous, scientific evidence as to their safety and effectiveness?
Meantime, celebrities keep playing outsized roles in health and medical concerns: as Good Samaritans advocating for respected, well-researched causes; as exemplars, both positive and negative, of problems that need addressing; and, sadly, as promoters and profiteers of nonsense.
In my practice, I see not only the harms that patients suffer while seeking medical services but also their struggles to afford and access safe, efficient, and excellent medical care, especially as they confront the huge uncertainty and complexity about varying drugs and treatments as well as information gaps and overloads.
Rigorous, thorough, responsible, safe, and outstanding medical science takes time, and it isn’t always easy and fast. Patients who take the time to understand this and who are able to research and think through what’s best for them and their families deserve great credit and support. Caveat emptor, too: Look hard at anything health- or medical-related that seems too good to be true. It won’t be.
HERE’S TO A HEALTHY 2018!
Sincerely,
Patrick Malone
Patrick Malone & Associates