We always advise people to understand the risks as well as the benefits of any procedure or treatment they are considering, but getting your arms around the idea of “risk” – much less being able to quantify it – is a challenge all its own. Last month, “Risk and Reason,” a multipart series on NPR, looked at several ways people assess risk, with an eye toward helping medical consumers apply it to their own circumstances.
One part tells the story of Brian Zikmund-Fisher, a professor at the University of Michigan School of Public Health who teaches about risk and probability. As a graduate student, Zikmund-Fisher was studying behavioral decision theory when he was forced to become his own lab rat.
Diagnosed with myelodysplastic syndrome, a disorder that inhibits the production of blood cells and makes victims vulnerable to bleeding and infection, Zikmund-Fisher was told that if he didn’t have treatment he would have about 10 years to live. The treatment was a bone marrow transplant, which he was told would have a 70% chance of cure.
But bone marrow transplants involve chemotherapy, which leaves patients susceptible to infection by impairing immune function. Zikmund-Fisher was told the transplant/chemotherapy treatment had a 25% to 30% chance of killing him within six months to a year.
As NPR summarized his situation, “As it turns out, making decisions based on the odds can be an extremely difficult thing to do, even for people who study the science of how we make decisions.”
Ultimately, Zikmund-Fisher made his decision based on factors even he couldn’t quantify. He chose the transplant, and had a positive outcome. He found that discussing probabilities with his medical team was useful, but limited in its ability to predict what would happen to any one person.
But fixating on individual outcomes is not a productive way for doctors to think; Zikmund-Fisher says they should focus on overall numbers, not single cases.
“A doctor doesn’t see one patient. They see hundreds of patients – thousands of patients – over their career,” he told NPR. “We want doctors to make choices that give all of their patients the best possible outcomes regardless of whether that particular choice turned out well in the last time they tried it, or turned out poorly,” he says. “We want doctors to take the long view, to give us the best chances of success, knowing that sometimes it’s going to work well, and sometimes it’s not.”
Another part grapples with how people understand risk through numbers versus words. In a survey of primary care doctors about how they discuss risk with their patients, only 1 in 5 said they were very comfortable using numbers and explaining probabilities to patients. Most preferred words or phrases, such as “very small risk,” “very unlikely,” “very rare,” “very likely” or “high risk.”
But such words can be unclear to a patient, or interpreted differently from how the doctor would interpret them. “People may hear ‘small risk,’ and what they hear is very different from what I’ve got in my mind,” one doctor told NPR. “Or what’s a very small risk to me, it’s a very big deal to you if it’s happened to a family member.”
Some patients better understand how a given medicine might affect them through a combination of charts, calculators and words. One patient considering a statin for his heart disease declined it after seeing a decision aid developed by the Mayo Clinic: a calculator that displays colored dots on a grid, each dot symbolizing a person.
When a patient’s individual information is entered, some of the green dots turn yellow, showing how many people with that profile are expected to have a heart attack in the next 10 years. For this patient, 12 dots turned yellow and 88 remained green, indicating that 12 in 100 men like him would have a heart attack within 10 years. “It looks like my chances are slim,” he commented.
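The idea behind the dot grid is simple enough to sketch in code. This illustrative Python snippet is not the Mayo Clinic’s actual tool; it just shows the principle, filling a 100-icon grid where “Y” marks the people expected to have a heart attack and “g” marks those expected to be spared:

```python
# Illustrative sketch of an icon-array risk display (not the Mayo Clinic tool).
# "Y" = person expected to have a heart attack; "g" = person expected to be spared.

def icon_array(events, total=100, cols=10):
    """Return a text grid of `total` symbols, `events` of them marked 'Y'."""
    symbols = ["Y"] * events + ["g"] * (total - events)
    rows = [" ".join(symbols[i:i + cols]) for i in range(0, total, cols)]
    return "\n".join(rows)

# The patient in the story: 12 of 100 men with his profile
# expected to have a heart attack within 10 years.
print(icon_array(12))
```

Seeing 12 marked icons scattered among 88 unmarked ones conveys “12 in 100” far more concretely than the bare percentage does, which is why this patient read his odds as slim.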
Others need a more experience-based discussion to visualize their risks.
One orthopedic patient was considering a hip replacement. He was 59, and physically very active. His doctor assessed his risk of infection from such a surgery to be less than 1 in 100. But that wasn’t his concern – his overarching calculation was how the procedure was likely to positively affect his quality of life, which, for him, depended on being able to pursue the activities that were starting to give him trouble.
He had read a booklet and watched a DVD about hip replacement, and had talked to people he knew who were very happy with their outcome after having one. That was more compelling information for him.
The FDA, the story notes, requests that drug companies use numerical values to indicate risk and avoid vague terms such as “rare,” “infrequent” and “frequent” to describe the chances of side effects. But the European Union’s Medicines Agency matches terms such as “very common,” “common,” “uncommon,” “rare” and “very rare” with numerical values for each level of frequency. By that measure, a “very common” side effect occurs in more than 1 in 10 cases. A “very rare” side effect occurs in fewer than 1 in 10,000 cases.
Such explanations speak to both kinds of patients.
Another part looks at the work of Drs. Steven Woloshin and Lisa Schwartz, who are working on how best to communicate to doctors and patients the uncertainty of assessing benefits and risks of pharmaceuticals. They want the FDA’s drug information to be more useful and more readable.
That agency approves not only drugs, but also the prescribing instructions and patient information material that accompany them. We’ve all gotten drugs, both prescription and over the counter, with wads of paper stuffed into the packaging that are about as inviting to read as the fine print on a credit card application.
Woloshin and Schwartz have designed a drug facts box that simplifies all that fine-print stuff to, essentially, explain how a drug compares to a placebo, or sugar pill.
That’s in contrast to what usually happens, Schwartz says. “The prescribing info is written by industry, and then negotiated with FDA, and then FDA ultimately approves it. And we have documented examples where important info – like how well the drug works – is not in the label.”
They showed people ads for two competing heartburn drugs, one plainly more effective than the other. They also showed people two of their drug facts boxes, one for each of the heartburn drugs, showing how each fared against a placebo in testing.
“When people are presented with the standard information they see – like a drug ad – about 30 percent of people chose the better drug,” Woloshin told NPR. “But when we showed them information in the drug facts box form, 68% of people were able to choose the objectively better drug. So that’s a really dramatic improvement. It just shows you that if you show people information in a way that’s understandable, they can use it, and it can improve their decision.”
Then they used a real drug, the insomnia medicine Lunesta, and made a drug facts box with FDA data. Two columns compare people with insomnia who took Lunesta with people who, unknowingly, took a sugar pill.
The people who used Lunesta took 30 minutes to fall asleep. The sugar-pill users took 45 minutes. The Lunesta users stayed asleep 37 minutes longer than the others.
Insomnia drugs have very real risks, such as mental fuzziness. (See our blog, “FDA Cuts Lunesta Dose in Half.”) So seeing those comparisons would be enormously helpful for deciding whether to take one or not.
“That’s the whole point of the drugs facts box,” Woloshin said, “to let people look at the evidence and come to their own judgments. But you can’t make those judgments without the facts.”
He and Schwartz believe that patients can handle numbers, including percentages, but that too often the information supplied is incomplete or misleading. And when you see the small difference between Lunesta and a fake pill, you see why drug companies use information to confound rather than explain.