Prof. Oster is an economist who read a lot about pregnancy. That information, taken together with her expertise in statistics, transformed her into a (very) informed patient. She is not a medical doctor, but she acquired much of the same information medical doctors had. That became a problem. And the fact that it was a problem is exactly what we should discuss.
The pregnant professor thought a medical decision would work like any economic decision. First, there would be an actual choice—not a decree. Then, accurate data would frame the decision and multiple paths would be presented, each with their pluses and minuses. In the end, her preferences would guide her choice.
Was she wrong.
Read the following quote. Don't get stuck on the word amniocentesis. Substitute mammogram, cardiac cath, ablation, or stent, for instance. (The italics are mine.)
Take something like amniocentesis. I thought my doctor would start by outlining a framework for making this decision—pluses and minuses. She'd tell me the plus of this test is you can get a lot of information about the baby; the minus is that there is a risk of miscarriage. She'd give me the data I needed. She'd tell me how much extra information I'd get, and she'd tell me the exact risk of miscarriage. She'd then sit back, Jesse [Oster's husband] and I would discuss it, and we'd come to a decision that worked for us. This is not what it was like at all.

In the paragraphs that followed, the expert in decision making went on to describe the state of healthcare decisions as they exist today. Four themes stood out:
Misinformation: Medicine overflows with arbitrary rules. Rules become dogma without any basis in evidence. (Think low-fat diets.) Oster writes that, in the worst case, the advice of doctors runs counter to the evidence. In other cases, advice is born from shoddy evidence.
Misthink about risk: Oster laments the failure to consider risk on a continuum. It's easier to make risk a yes-or-no deal (dichotomize). The risk of sudden cardiac death, for instance, does not disappear when the heart's ejection fraction goes from 35% (the cutoff for an ICD) to 36%. Continuous risk is harder to explain, but failing to do so tilts the discussion toward certainty—which is an illusion. (A rough numerical sketch of this point follows the list.)
Failure to consider patient preferences: Oster discusses how equally educated people feel differently about risk. Consider the decision to take a clot-preventing drug in a patient with atrial fibrillation. For some, a small decrease in stroke risk is not worth taking a drug that increases the risk of bleeding. They fear bleeding more than stroke; they see themselves in the majority of those who will get no benefit from the preventive drug. Others fear stroke; for them, it's worth taking the drug. The point is that when a doctor says, "You need to take this pill or you will have a stroke," it's the doctor's preference that stands out. (The sketch after this list also puts rough numbers on this tradeoff.)
Imbalance of power: Power tilts the doctor's way because we are the presumed experts. Oster raises an issue I have gradually come to know: When you educate yourself about the methodology of science and look at medical studies with an eye for critical appraisal, it's less clear we doctors know as much as we think we know. It's sobering to think how little of what I decreed to patients early in my career would have stood up to critical appraisal...
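To make those two quantitative themes concrete, here is a minimal Python sketch. Everything in it is an assumption chosen for illustration: the sudden-cardiac-death risk curve is an invented smooth function of ejection fraction, and the stroke and bleeding rates are round numbers, not figures from any trial or guideline.

```python
import math

# Illustrative only: a hypothetical smooth risk curve for sudden cardiac death
# as a function of ejection fraction (EF). The shape and numbers are assumptions
# made for this sketch, not published estimates.
def assumed_annual_scd_risk(ef_percent: float) -> float:
    """Risk that falls gradually as EF rises (an invented logistic-style curve)."""
    return 0.12 / (1.0 + math.exp(0.10 * (ef_percent - 30.0)))

for ef in (30, 34, 35, 36, 40):
    print(f"EF {ef}%: assumed annual risk ~{assumed_annual_scd_risk(ef):.3f}")
# The curve moves only slightly between EF 35% and 36%, yet a dichotomized rule
# (ICD below 35%, none at or above) treats those two patients very differently.

# Illustrative number-needed-to-treat arithmetic for an anticoagulation decision.
# The rates below are assumed round numbers, not trial results.
annual_stroke_risk_untreated = 0.04   # assumed 4% per year without the drug
relative_risk_reduction = 0.66        # assumed two-thirds reduction on the drug
annual_extra_major_bleeds = 0.01      # assumed 1% per year added bleeding risk

absolute_risk_reduction = annual_stroke_risk_untreated * relative_risk_reduction
nnt = 1.0 / absolute_risk_reduction    # patients treated for one stroke prevented
nnh = 1.0 / annual_extra_major_bleeds  # patients treated for one extra major bleed

print(f"NNT ~{nnt:.0f} per year; the other ~{nnt - 1:.0f} get no stroke benefit that year")
print(f"NNH ~{nnh:.0f} per year for one additional major bleed")
```

Under these made-up numbers, roughly 37 of every 38 patients take the drug for a year without it preventing a stroke in that year, which is one reason two equally informed people can reasonably decide differently.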
I see four basic concepts that, if doctors embraced them, would increase the odds of getting to a good medical decision. None were emphasized in my medical training. I learned them through practical experience.
Know the evidence: Doctors should know the evidence—not the translation of the evidence from key opinion leaders or pharma reps, but the actual evidence. If we did, we would be less likely to conflate the benefits of acute-care medicine (the easy kind) with the benefits of treating chronic disease. Think cardiac stents. We would also be less apt to waste resources on low-value care. Medical literature and its media coverage are expanding; critical appraisal has never been more important.
Embrace uncertainty: Doctors train a long time. This brings knowledge, specific skills, and experience. Yet we cannot know the future. We have the history of medical reversals to keep us clear-minded. The best doctors I know say they don't know when they don't know—which is often. They treat the medical decision as the gamble that it is.
See the person: The third concept is the notion of seeing our patients—not their diseases. In training, we were taught to put diseases in silos—cardiac, pulmonary, orthopedic, etc. The problem with that view is it leads to framing errors. A man who can't remember what he had for lunch or how he got to your office has bigger problems than his aortic valve or ejection fraction. A person without a ride to the doctor's office will have trouble taking warfarin, a drug that requires regular trips for blood monitoring.
Control is an illusion: A seminal moment in my career occurred when a wise old doctor sat me down in the doctor's lounge and explained how little we controlled. "John," he said, "in the ICU, I treat two patients with the same problem in the same way. One dies and one lives. We do our best with what we know at the moment, but outcomes are mostly out of our control."
http://www.medscape.com/viewarticle/849689#vp_2
I used to think Medicine would get easier over time. It makes sense, right? You see patterns, you learn how treatments work, and you just get to know stuff. Experience should make it easier to diagnose and treat.
That’s not been the case for me. In fact, it’s closer to the opposite. In the exam room, as I look up to the patient from my stool, and before I stand at the white board to explain, I often find myself pausing for a moment to think: Is this really the right course? Does the evidence support doing it this way? Do I know the science, or is it “just the way things are done?” I have the same problem in the hospital—perhaps worse, as there, dogma permeates most of what we do.
What keeps popping into my head is the hubris of Medicine. As I grow older, the excessive pride and confidence of the medical establishment becomes more obvious. Why didn’t I see it before?
In many cases, medical and surgical treatments that were once thought to be beneficial turn out to be not so. Often, these therapies were backed by expert guidelines and taught to young students as law. Think of that for a moment. We do things to people; we monitor, we medicate, and we even cut, all with the aim of helping. But then further study proves that we were actually providing no benefit and in some cases, causing harm.
This is sobering.
The most important study in decades:
A recent article (see citation below), published in the journal Mayo Clinic Proceedings, provides chilling evidence that many well-established medical practices are wrong. Researchers from the National Institutes of Health looked at 10 years of clinical investigations from the New England Journal of Medicine. Over that decade (2001-2010), they found 363 published studies that evaluated an established therapy.
In 146 of the 363 studies (40%), the scientific evidence caused a reversal of established medical practice. That's a sterile way of saying that 40% of the time the prevailing wisdom was wrong. It is worth going over some examples. Not one branch of Medicine was spared a reversal.
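For what it's worth, the headline figure is easy to verify. Here is a minimal Python check, using only the counts quoted above; the confidence interval is my back-of-the-envelope addition, not a number reported in the paper.

```python
import math

# Counts quoted above: 146 reversals among 363 studies of established practice.
reversals, total, z = 146, 363, 1.96

p = reversals / total                      # point estimate of the reversal rate
denom = 1.0 + z**2 / total                 # Wilson score interval (approximate)
center = (p + z**2 / (2 * total)) / denom
half_width = z * math.sqrt(p * (1 - p) / total + z**2 / (4 * total**2)) / denom

print(f"Reversal rate: {p:.1%}")                                                  # ~40.2%
print(f"Approx. 95% CI: {center - half_width:.1%} to {center + half_width:.1%}")  # ~35% to ~45%
```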
http://www.drjohnm.org/2013/07/changing-the-culture-of-american-medicine-start-by-removing-hubris/
Prasad V, Vandross A, Toomey C, Cheung M, Rho J, Quinn S, Chacko SJ, Borkar D, Gall V, Selvaraj S, Ho N, Cifu A. A decade of reversal: an analysis of 146 contradicted medical practices. Mayo Clin Proc. 2013 Aug;88(8):790-8.