Recently, Dr. Peter Kramer published an intriguing, well-written, but poorly reasoned and potentially dangerous "thought piece" in The New York Times. His article, "Why Doctors Need Stories," contains several logical flaws and erroneous arguments, but the overarching concept is a classic "straw man" argument.
He creates a false and highly misleading notion of what "evidence-based medicine" (EBM) is and then proceeds with a screed against EBM in order to extol the virtue of the anecdote. This sort of argument works particularly well when the reader has little or no knowledge of the term being misrepresented, so I suspect it has been quite effective even with the generally well-informed Times readership, most of whom would have no reason to know what EBM is.
So let's start with what Kramer says about EBM in his piece. He notes that his preferred approach, "giving weight to the combination of doctors' experience and biological plausibility, stands somewhat in conflict with the principles of evidence-based medicine. The [EBM] movement's manifesto, published in the Journal of the American Medical Association in 1992, proclaimed a new era that would see near-exclusive reliance on systematic clinical research -- the direct assessment of treatments in patients."
Kramer allows himself some wiggle room by saying "somewhat" and adding "near" to "exclusive reliance," but the point is crystal clear: these doctrinaire EBMers, manifesto in hand, are preventing us warm and caring docs from talking with our patients, forcing us into a mindless and soulless practice of cookbook medicine wherein we follow protocols and algorithms and ignore the heartfelt pleas of our patients seeking succor and support. If only doctors were trained to listen to their patients, to understand the power of stories, we'd all be happier and healthier...
Looking at this "manifesto," I found it almost amusing to see what the authors wrote 22 years ago in describing how wrong-headed doctors make false arguments against EBM: "Misinterpretation 1 -- Evidence-based medicine ignores clinical experience and clinical intuition." (Straw man, anyone?)
Now let's look at what EBM really is. As defined nearly 20 years ago by David Sackett, one of the founders of this discipline: "Evidence-based medicine is the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients. The practice of evidence-based medicine means integrating individual clinical expertise with the best available external clinical evidence from systematic research."
Patients' stories -- case histories -- are evidence. All of the EBM "manifestos" acknowledge that, and case reports have always been part of the EBM landscape. (Indeed, my own first publication was a case report.) But there is an important caveat: case studies (and patients' stories) are weaker evidence than carefully designed and conducted (and appropriately interpreted) research studies. There is a hierarchy of evidence, and case reports are at the lowest level of that hierarchy. There are excellent reasons for this, but that's a separate discussion...
A more serious critique from a leading thinker in clinical epidemiology is provided in a frequently downloaded paper from the Public Library of Science. In a provocative essay titled "Why Most Published Research Findings Are False," John Ioannidis argues that small sample and effect sizes, large numbers of tested relationships, limitations in study designs, and financial and other conflicts of interest all contribute to a greater likelihood of a study's conclusions being false (see Research reproducibility 8/27/15). Given these factors, a positive predictive value greater than 50% is difficult to obtain, he adds. This, of course, doesn't suggest that stories have primacy over evidence; rather, that evidence requires thoughtful interpretation to reach reasonable conclusions...
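Ioannidis's claim is easy to check with his own framework. He models the positive predictive value (PPV) of a claimed finding in terms of the pre-study odds R that a probed relationship is true, the significance threshold alpha, the power 1 - beta, and a bias term u. Here is a minimal sketch in Python -- the formula is from the 2005 paper, but the parameter values below are mine, chosen only for illustration:

def ppv(R, alpha=0.05, power=0.8, bias=0.0):
    # Positive predictive value of a claimed research finding (Ioannidis 2005).
    # R: pre-study odds that the probed relationship is true
    # bias: fraction u of analyses that would not have been "findings"
    #       but get reported as such anyway
    beta = 1.0 - power
    num = (1.0 - beta) * R + bias * beta * R
    den = R + alpha - beta * R + bias - bias * alpha + bias * beta * R
    return num / den

print(round(ppv(R=0.1), 2))                       # well-powered, no bias: 0.62
print(round(ppv(R=0.1, power=0.2), 2))            # underpowered study: 0.29
print(round(ppv(R=0.1, power=0.2, bias=0.3), 2))  # underpowered plus bias: 0.12

With unremarkable prior odds and the small, underpowered studies Ioannidis describes, the PPV falls well below 50% -- most such "positive" findings would be false.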
The persuasive power of the piece is enhanced by the image of the lone wolf crying out for justice in an unjust world: "I have long felt isolated in this position, embracing stories." Nice image, but a quick look at the world around him would have shown Dr. Kramer that he is not alone. Indeed, the growth of what has come to be known as "narrative medicine" began in the 1990s, paralleling (and perhaps partly in response to) the development of EBM. Now, in my opinion, the valuing of stories over evidence is highly dangerous, and if you're looking for acolytes rather than scientists, you're more likely to find them preaching narrative medicine than practicing EBM... But look at me: Kramer has got me mirroring his false dichotomy.
Any good doctor knows that both listening to stories (in the context of clinical experience and good judgment) and applying research studies (judiciously and competently) are required to practice medicine. Caring for patients using one without the other is a fool's errand. (And by the way: my medical school, like most medical schools, puts a lot of effort into teaching communication skills -- including through "narrative medicine" -- to its students.) Blending the two approaches effectively and seamlessly isn't easy, but the goal of medical education -- and doctoring -- is to strike the right balance.
http://www.medpagetoday.com/Blogs/Doctor'sTablet/48593
Ioannidis JPA (2005) Why Most Published Research Findings Are False. PLoS Med 2(8): e124. doi:10.1371/journal.pmed.0020124
Abstract
There is increasing concern that most current published research findings are false. The probability that a research claim is true may depend on study power and bias, the number of other studies on the same question, and, importantly, the ratio of true to no relationships among the relationships probed in each scientific field. In this framework, a research finding is less likely to be true when the studies conducted in a field are smaller; when effect sizes are smaller; when there is a greater number and lesser preselection of tested relationships; where there is greater flexibility in designs, definitions, outcomes, and analytical modes; when there is greater financial and other interest and prejudice; and when more teams are involved in a scientific field in chase of statistical significance. Simulations show that for most study designs and settings, it is more likely for a research claim to be false than true. Moreover, for many current scientific fields, claimed research findings may often be simply accurate measures of the prevailing bias. In this essay, I discuss the implications of these problems for the conduct and interpretation of research.
A few weeks ago, I received an email from the Danish psychiatrist Per Bech that had an unexpected attachment: a story about a patient. I have been writing a book about antidepressants — how well they work and how we know. Dr. Bech is an innovator in clinical psychometrics, the science of measuring change in conditions like depression. Generally, he forwards material about statistics.
Now he had shared a recently published case vignette. It concerned a man hospitalized at age 30 in 1954 for what today we call severe panic attacks. The treatment, which included “narcoanalysis” (interviewing aided by a “truth serum”), afforded no relief. On discharge, the man turned to alcohol. Later, when sober again, he endured increasing phobias, depression and social isolation.
Four decades later, in 1995, suicidal thoughts brought this anxious man back into the psychiatric system, at age 70. For the first time, he was put on an antidepressant, Zoloft. Six weeks out, both the panic attacks and the depression were gone. He resumed work, entered into a social life and remained well for the next 19 years — until his death.
If the narrative was striking, so was its inclusion in a medical journal. In the past 20 years, clinical vignettes have lost their standing. For a variety of reasons, including a heightened awareness of medical error and a focus on cost cutting, we have entered an era in which a narrow, demanding version of evidence-based medicine prevails. As a writer who likes to tell stories, I’ve been made painfully aware of the shift. The inclusion of a single anecdote in a research overview can lead to a reprimand, for reliance on storytelling...
The contributors write: “Data are important, of course, but numbers sometimes imply an order to what is happening that can be misleading. Stories are better at capturing a different type of ‘big picture.’”
How far should stories inform practice? Faced with an elderly patient who was anxious, withdrawn and never medicated, a well-read doctor might weigh many potential sources of guidance, this vignette among them. Often the knowledge that informs clinical decisions emerges, like a pointillist image, from the coalescence of scattered information.
My recent reading of outcome trials of antidepressants has strengthened my suspicion that the line between research and storytelling can be fuzzy. In psychiatry — and the same is true throughout medicine — randomized trials are rarely large enough to provide guidance on their own. Statisticians amalgamate many studies through a technique called meta-analysis. The first step of the process, deciding which data to include, colors the findings. On occasion, the design of a meta-analysis stacks the deck for or against a treatment. The resulting charts are polemical. Effectively, the numbers are narrative.
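To make that last point concrete: in a standard fixed-effect meta-analysis, each study's effect estimate is weighted by the inverse of its variance, so the pooled number is entirely a function of which studies are allowed into the pool. A minimal sketch in Python, with invented effect sizes (these are not data from any real antidepressant trial):

import math

def pooled_effect(studies):
    # Fixed-effect, inverse-variance meta-analysis.
    # studies: list of (effect_estimate, standard_error) pairs
    weights = [1.0 / se ** 2 for _, se in studies]
    estimate = sum(w * eff for (eff, _), w in zip(studies, weights)) / sum(weights)
    std_err = math.sqrt(1.0 / sum(weights))
    return estimate, std_err

trials = [(0.45, 0.15), (0.30, 0.20), (0.05, 0.10), (-0.10, 0.25)]
print(round(pooled_effect(trials)[0], 2))      # all four trials: ~0.17
print(round(pooled_effect(trials[:2])[0], 2))  # drop the two least favorable: ~0.40

Excluding two unfavorable trials more than doubles the apparent effect; the inclusion decision, not the arithmetic, drives the conclusion.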
Because so little evidence stands on its own, incorporating research results into clinical practice requires discernment. Thoughtful doctors consider data, accompanying narrative, plausibility and, yes, clinical anecdote in their decision making. To put the same matter differently, evidence-based medicine, properly enacted, is judgment-based medicine in which randomized trials, carefully assessed, are given their due.
http://opinionator.blogs.nytimes.com/2014/10/18/why-doctors-need-stories/?_r=0
Smith GC, Pell JP. Parachute use to prevent death and major trauma related to gravitational challenge: systematic review of randomised controlled trials. BMJ. 2003 Dec 20;327(7429):1459-61.
Abstract
OBJECTIVES:
To determine whether parachutes are effective in preventing major trauma related to gravitational challenge.
DESIGN:
Systematic review of randomised controlled trials.
DATA SOURCES:
Medline, Web of Science, Embase, and the Cochrane Library databases; appropriate internet sites and citation lists.
STUDY SELECTION:
Studies showing the effects of using a parachute during free fall.
MAIN OUTCOME MEASURE:
Death or major trauma, defined as an injury severity score > 15.
RESULTS:
We were unable to identify any randomised controlled trials of parachute intervention.
CONCLUSIONS:
As with many interventions intended to prevent ill health, the effectiveness of parachutes has not been subjected to rigorous evaluation by using randomised controlled trials. Advocates of evidence based medicine have criticised the adoption of interventions evaluated by using only observational data. We think that everyone might benefit if the most radical protagonists of evidence based medicine organised and participated in a double blind, randomised, placebo controlled, crossover trial of the parachute.