The executive had been very smart to seek more information,
and now, by coming to Brown, he was very lucky, too. Brown is part of the
RightCare Alliance, a collaboration between health-care professionals and
community groups that seeks to counter a trend: increasing medical costs
without increasing patient benefits. As Brown put it, RightCare is “bringing
medicine back into balance, where everybody gets the treatment they need, and
nobody gets the treatment they don’t need.” And the stent procedure was a classic
example of the latter. In 2012, Brown had coauthored a paper that examined
every randomized clinical trial that compared stent implantation with more
conservative forms of treatment, and he found that stents for stable patients
prevent zero heart attacks and extend the lives of patients a grand total of
not at all. In general, Brown says, “nobody that’s not having a heart attack
needs a stent.” (Brown added that stents may improve chest pain in some
patients, albeit fleetingly.) Nonetheless, hundreds of thousands of stable
patients receive stents annually, and one in 50 will suffer a serious
complication or die as a result of the implantation procedure…
When you visit a doctor, you probably assume the treatment
you receive is backed by evidence from medical research. Surely, the drug
you’re prescribed or the surgery you’ll undergo wouldn’t be so common if it
didn’t work, right?
For all the truly wondrous developments of modern
medicine—imaging technologies that enable precision surgery, routine organ
transplants, care that transforms premature infants into perfectly healthy
kids, and remarkable chemotherapy treatments, to name a few—it is distressingly
ordinary for patients to get treatments that research has shown are ineffective
or even dangerous. Sometimes doctors simply haven’t kept up with the science.
Other times doctors know the state of play perfectly well but continue to
deliver these treatments because it’s profitable—or even because they’re
popular and patients demand them. Some procedures are implemented based on
studies that did not prove whether they really worked in the first place.
Others were initially supported by evidence but then were contradicted by
better evidence, and yet these procedures have remained the standards of care
for years, or decades.
Even if a drug you take was studied in thousands of people
and shown truly to save lives, chances are it won’t do that for you. The good
news is, it probably won’t harm you, either. Some of the most widely prescribed
medications do little of anything meaningful, good or bad, for most people who
take them.
In a 2013 study, a dozen doctors from around the country
examined all 363 articles published in The New England Journal of Medicine over
a decade—2001 through 2010—that tested a current clinical practice, from the
use of antibiotics to treat people with persistent Lyme disease symptoms
(didn’t help) to the use of specialized sponges for preventing infections in
patients having colorectal surgery (caused more infections). Their analysis,
published in Mayo Clinic Proceedings, identified 146 studies that proved or
strongly suggested that a current standard practice either had no benefit at
all or was inferior to the practice it replaced; 138 articles supported the
efficacy of an existing practice, and the remaining 79 were deemed
inconclusive. (There was, naturally, plenty of disagreement with the authors’
conclusions.) Some of the contradicted practices possibly affect millions of
people daily: Intensive medication to keep blood pressure very low in diabetic
patients caused more side effects and was no better at preventing heart attacks
or death than milder treatments that allowed for a somewhat higher blood
pressure…
A brand new review of 48 separate studies—comprising more
than 13,000 clinicians—looked at how doctors perceive disease-screening tests
and found that they tend to underestimate the potential harms of screening and
overestimate the potential benefits; an editorial in American Family
Physician, co-written by one of the journal’s editors, noted that a “striking
feature” of recent research is how much of it contradicts traditional medical
opinion…
So, while Americans can expect to see more drugs and devices
sped to those who need them, they should also expect the problem of therapies
based on flimsy evidence to accelerate. In a recent Stat op-ed, two Johns
Hopkins University physician-researchers wrote that the new 21st Century Cures
Act will turn the label “FDA approved” into “a shadow of its former self.”...
Steven Galson, a retired rear admiral and former acting
surgeon general under both President George W. Bush and President Barack Obama,
has called the strengthened approval process created in 1962 the FDA’s “biggest
contribution to health.” Before that, he said, “many marketed drugs were
ineffective for their labeled uses.”…
A 2007 Journal of the American Medical Association paper
coauthored by John Ioannidis—a Stanford University medical researcher and
statistician who rose to prominence exposing poor-quality medical science—found
that it took 10 years for large swaths of the medical community to stop
referencing popular practices after their efficacy was unequivocally vanquished
by science.
According to Vinay Prasad, an oncologist and one of the
authors of the Mayo Clinic Proceedings paper, medicine is quick to adopt
practices based on shaky evidence but slow to drop them once they’ve been blown
up by solid proof…
So he [Adam Cifu] and Prasad coauthored a 2015 book, Ending
Medical Reversal, a call to raise the evidence bar for adopting new medical
standards. “We have a culture where we reward discovery; we don’t reward
replication,” Prasad says, referring to the process of retesting initial
scientific findings to make sure they’re valid…
Thanks to such guidelines, the frequency of clearly
inappropriate stent placement declined significantly between 2010 and 2014.
Still, the latest assessment in more than 1,600 hospitals across the country
concluded that about half of all stent placements in stable patients were
either definitely or possibly inappropriate…
Nissen thinks removing financial incentives can also help
change behavior. “I have a dozen or so cardiologists, and they get the exact
same salary whether they put in a stent or don’t,” Nissen says, “and I think
that’s made a difference and kept our rates of unnecessary procedures low.”…
Almost to a person, the cardiologists, including those whose
incomes were not tied to tests and procedures, gave the same answers: They said
that they were aware of the data but would still send the patient for a stent.
The rationalizations in each focus group followed four themes: (1)
Cardiologists recalled stories of people dying suddenly—including the highly
publicized case of jogging guru Jim Fixx—and feared they would regret it if a
patient did not get a stent and then dropped dead. The study authors concluded
that cardiologists were being influenced by the “availability heuristic,” a
term coined by the psychologists Amos Tversky and Daniel Kahneman (the latter a Nobel laureate)
for the human instinct to base an important decision on an easily recalled,
dramatic example, even if that example is irrelevant or incredibly rare. (2)
Cardiologists believed that a stent would relieve patient anxiety. (3)
Cardiologists felt they could better defend themselves in a lawsuit if a
patient did get a stent and then died, rather than if they didn’t get a stent
and died. “In California,” one said, “if this person had an event within two
years, the doctor who didn’t [intervene] would be successfully sued.” And there
was one more powerful and ubiquitous reason: (4) Despite the data,
cardiologists couldn’t believe that stents did not help: Stenting just made so
much sense. A patient has chest pain, a doctor sees a blockage, how can opening
the blockage not make a difference?
In the late 1980s, with evidence already mounting that
forcing open blood vessels was less effective and more dangerous than
noninvasive treatments, cardiologist Eric Topol coined the term “oculostenotic
reflex.” Oculo, from the Latin for “eye,” and stenotic, from the Greek for
“narrow,” as in a narrowed artery. The meaning: If you see a blockage, you’ll
reflexively fix a blockage. Topol described “what appears to be an irresistible
temptation among some invasive cardiologists” to place a stent any time they
see a narrowed artery, evidence from thousands of patients in randomized trials
be damned. Stenting is what scientists call “bio-plausible”—intuition suggests
it should work. It’s just that the human body is a little more Book of Job and
a little less household plumbing: Humans didn’t invent it, it’s really
complicated, and people often have remarkably little insight into cause and
effect.
Chances are, you or someone in your family has taken
medication or undergone a procedure that is bio-plausible but does not work…
A 2004 analysis of clinical trials—including eight
randomized controlled trials comprising more than 24,000 patients—concluded
that atenolol did not reduce heart attacks or deaths compared with using no
treatment whatsoever; patients on atenolol just had better blood-pressure
numbers when they died…
Replication of results in science was a cause célèbre last
year, due to the growing realization that researchers have been unable to
duplicate a lot of high-profile results. A decade ago, Stanford’s Ioannidis
published a paper warning the scientific community that “Most Published
Research Findings Are False.” (In 2012, he coauthored a paper showing that
pretty much everything in your fridge has been found to both cause and prevent
cancer—except bacon, which apparently only causes cancer.) Ioannidis’s
prescience led his paper to be cited in other scientific articles more than 800
times in 2016 alone. Point being, sensitivity in the scientific community to
replication problems is at an all-time high. So Jacobs and his coauthors were
bemused when the NEJM rejected their paper…
At the same time, patients and even doctors themselves are
sometimes unsure of just how effective common treatments are, or how to
appropriately measure and express such things. Graham Walker, an emergency
physician in San Francisco, co-runs a website staffed by doctor volunteers
called the NNT that helps doctors and patients understand how impactful drugs
are—and often are not. “NNT” is an abbreviation for “number needed to treat,”
as in: How many patients need to be treated with a drug or procedure for one
patient to get the hoped-for benefit? In almost all popular media, the effects
of a drug are reported by relative risk reduction. To use a fictional illness,
for example, say you hear on the radio that a drug reduces your risk of dying
from Hogwart’s disease by 20 percent, which sounds pretty good. Except, that
means if 10 in 1,000 people who get Hogwart’s disease normally die from it, and
every single patient goes on the drug, eight in 1,000 will die from Hogwart’s
disease. That is two deaths averted for every 1,000 people treated, or one for
every 500. So, for every 500 patients who get the drug, one will be spared death
by Hogwart’s disease. Hence, the NNT is 500. That might sound fine, but if the
drug’s “NNH”—“number needed to harm”—is, say, 20 and the unwanted side effect
is severe, then 25 patients suffer serious harm for each one who is saved.
Suddenly, the trade-off looks grim…
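To make the arithmetic concrete, here is a minimal sketch in Python that recomputes the article’s hypothetical Hogwart’s-disease example; the figures are the article’s invented numbers, not data about any real drug.
```python
# Illustrative sketch only: recomputes the article's invented "Hogwart's disease"
# example, not data about any real drug.

def number_needed_to_treat(baseline_risk, relative_risk_reduction):
    """NNT = 1 / absolute risk reduction."""
    absolute_risk_reduction = baseline_risk * relative_risk_reduction
    return 1 / absolute_risk_reduction

baseline_risk = 10 / 1000           # 10 in 1,000 untreated patients die
relative_risk_reduction = 0.20      # the advertised "20 percent" reduction

nnt = number_needed_to_treat(baseline_risk, relative_risk_reduction)
print(nnt)                          # 500.0 -> treat 500 people to prevent 1 death

nnh = 20                            # hypothetical number needed to harm
print(nnt / nnh)                    # 25.0 -> 25 patients seriously harmed per life saved
```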
But consider the $6.3 billion 21st Century Cures Act, which
recently passed Congress to widespread acclaim. Who can argue with a law
created in part to bolster cancer research? Among others, the heads of the
American Academy of Family Physicians and the American Public Health
Association. They argue against the new law because it will take $3.5 billion
away from public-health efforts in order to fund research on new medical
technology and drugs, including former Vice President Joe Biden’s “cancer
moonshot.” The new law takes money from programs—like vaccination and smoking-cessation
efforts—that are known to prevent disease and moves it to work that might,
eventually, treat disease. The bill will also allow the FDA to approve new uses
for drugs based on observational studies or even “summary-level reviews” of
data submitted by pharmaceutical companies. Prasad has been a particularly
trenchant and public critic, tweeting that “the only people who don’t like the
bill are people who study drug approval, safety, and who aren’t paid by
Pharma.”
Perhaps that’s social-media hyperbole. Medical research is,
by nature, an incremental quest for knowledge; initially exploring avenues that
quickly become dead ends is a feature, not a bug, of the process. Hopefully
the new law will in fact help speed into existence cures that are effective and
long-lived. But one lesson of modern medicine should by now be clear:
Ineffective cures can be long-lived, too.
https://www.theatlantic.com/health/archive/2017/02/when-evidence-says-no-but-doctors-say-yes/517368/
Courtesy of Doximity
See: http://childnervoussystem.blogspot.com/2015/08/number-needed-to-treat.html
http://childnervoussystem.blogspot.com/2015/08/research-reproducibility.html
Prasad V, Vandross A, Toomey C, Cheung M, Rho J, Quinn S, Chacko SJ, Borkar D, Gall V, Selvaraj S, Ho N, Cifu A. A decade of reversal: an analysis of 146 contradicted medical practices. Mayo Clin Proc. 2013 Aug;88(8):790-8.
Abstract
OBJECTIVE:
To identify medical practices that offer no net benefits.
METHODS:
We reviewed all original articles published in 10 years (2001-2010) in one high-impact journal. Articles were classified on the basis of whether they addressed a medical practice, whether they tested a new or existing therapy, and whether results were positive or negative. Articles were then classified as 1 of 4 types: replacement, when a new practice surpasses standard of care; back to the drawing board, when a new practice is no better than current practice; reaffirmation, when an existing practice is found to be better than a lesser standard; and reversal, when an existing practice is found to be no better than a lesser therapy. This study was conducted from August 1, 2011, through October 31, 2012.
RESULTS:
We reviewed 2044 original articles, 1344 of which concerned a medical practice. Of these, 981 articles (73.0%) examined a new medical practice, whereas 363 (27.0%) tested an established practice. A total of 947 studies (70.5%) had positive findings, whereas 397 (29.5%) reached a negative conclusion. A total of 756 articles addressing a medical practice constituted replacement, 165 were back to the drawing board, 146 were medical reversals, 138 were reaffirmations, and 139 were inconclusive. Of the 363 articles testing standard of care, 146 (40.2%) reversed that practice, whereas 138 (38.0%) reaffirmed it.
CONCLUSION:
The reversal of established medical practice is common and occurs across all classes of medical practice. This investigation sheds light on low-value practices and patterns of medical research.
Tatsioni A, Bonitsis NG, Ioannidis JP. Persistence of contradicted claims in the literature. JAMA. 2007 Dec 5;298(21):2517-26.
Abstract
CONTEXT:
Some research findings based on observational epidemiology are contradicted by randomized trials, but may nevertheless still be supported in some scientific circles.
OBJECTIVES:
To evaluate the change over time in the content of citations for 2 highly cited epidemiological studies that proposed major cardiovascular benefits associated with vitamin E in 1993; and to understand how these benefits continued being defended in the literature, despite strong contradicting evidence from large randomized clinical trials (RCTs). To examine the generalizability of these findings, we also examined the extent of persistence of supporting citations for the highly cited and contradicted protective effects of beta-carotene on cancer and of estrogen on Alzheimer disease.
DATA SOURCES:
For vitamin E, we sampled articles published in 1997, 2001, and 2005 (before, early, and late after publication of refuting evidence) that referenced the highly cited epidemiological studies and separately sampled articles published in 2005 and referencing the major contradicting RCT (HOPE trial). We also sampled articles published in 2006 that referenced highly cited articles proposing benefits associated with beta-carotene for cancer (published in 1981 and contradicted long ago by RCTs in 1994-1996) and estrogen for Alzheimer disease (published in 1996 and contradicted recently by RCTs in 2004).
DATA EXTRACTION:
The stance of the citing articles was rated as favorable, equivocal, and unfavorable to the intervention. We also recorded the range of counterarguments raised to defend effectiveness against contradicting evidence.
RESULTS:
For the 2 vitamin E epidemiological studies, even in 2005, 50% of citing articles remained favorable. A favorable stance was independently less likely in more recent articles, specifically in articles that also cited the HOPE trial (odds ratio for 2001, 0.05 [95% confidence interval, 0.01-0.19; P < .001] and the odds ratio for 2005, 0.06 [95% confidence interval, 0.02-0.24; P < .001], as compared with 1997), and in general/internal medicine vs specialty journals. Among articles citing the HOPE trial in 2005, 41.4% were unfavorable. In 2006, 62.5% of articles referencing the highly cited article that had proposed beta-carotene and 61.7% of those referencing the highly cited article on estrogen effectiveness were still favorable; 100% and 96%, respectively, of the citations appeared in specialty journals; and citations were significantly less favorable (P = .001 and P = .009, respectively) when the major contradicting trials were also mentioned. Counterarguments defending vitamin E or estrogen included diverse selection and information biases and genuine differences across studies in participants, interventions, cointerventions, and outcomes. Favorable citations to beta-carotene, long after evidence contradicted its effectiveness, did not consider the contradicting evidence.
CONCLUSION:
Claims from highly cited observational studies persist and continue to be supported in the medical literature despite strong contradictory evidence from randomized trials.
Ioannidis JP. Why most published research findings are false. PLoS Med. 2005 Aug;2(8):e124. http://dx.doi.org/10.1371/journal.pmed.0020124
Abstract
There is increasing concern that most current published research findings are false. The probability that a research claim is true may depend on study power and bias, the number of other studies on the same question, and, importantly, the ratio of true to no relationships among the relationships probed in each scientific field. In this framework, a research finding is less likely to be true when the studies conducted in a field are smaller; when effect sizes are smaller; when there is a greater number and lesser preselection of tested relationships; where there is greater flexibility in designs, definitions, outcomes, and analytical modes; when there is greater financial and other interest and prejudice; and when more teams are involved in a scientific field in chase of statistical significance. Simulations show that for most study designs and settings, it is more likely for a research claim to be false than true. Moreover, for many current scientific fields, claimed research findings may often be simply accurate measures of the prevailing bias. In this essay, I discuss the implications of these problems for the conduct and interpretation of research.
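The summary above compresses the paper’s central calculation. In its simplest, bias-free form, Ioannidis expresses the chance that a statistically significant finding is actually true as a positive predictive value, PPV = (1 − β)R / (R − βR + α), where R is the pre-study odds that a probed relationship is true, 1 − β is statistical power, and α is the significance threshold. A minimal sketch of that calculation follows; the example numbers are illustrative assumptions, not figures from the paper.
```python
# Sketch of the basic, bias-free positive predictive value from Ioannidis (2005):
#   PPV = (1 - beta) * R / (R - beta * R + alpha)
# R = pre-study odds that a probed relationship is true, 1 - beta = statistical
# power, alpha = significance threshold. The numbers below are illustrative
# assumptions, not values taken from the paper.

def ppv(R, power, alpha=0.05):
    beta = 1 - power
    return (power * R) / (R - beta * R + alpha)

# A small, underpowered field probing mostly null hypotheses:
# 1 true relationship per 10 tested, 30% power, alpha = 0.05.
print(round(ppv(R=0.10, power=0.30), 3))   # 0.375 -> a "positive" finding is more likely false than true
```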
It is, of course, hard to get people in any profession to do the right thing when they’re paid to do the wrong thing. But there’s more to this than market perversion. On a recent snowy St. Louis morning, Brown gave a grand-rounds lecture to about 80 doctors at Barnes Jewish Hospital. Early in the talk, he showed results from medical tests on the executive he treated, the one who avoided a stent. He then presented data from thousands of patients in randomized controlled trials of stents versus noninvasive treatments, and it showed that stents yielded no benefit for stable patients. He asked the doctors in the room to raise their hands if they would still send a patient with the same diagnostic findings as the executive for a catheterization, which would almost surely lead to a stent. At least half of the hands in the room went up, some of them sheepishly. Brown expressed surprise at the honesty in the room. “Well,” one of the attendees told him, “we know what we do.” But why?...
Researchers writing in Lancet questioned the use of atenolol as a comparison standard for other drugs and added that “stroke was also more frequent with atenolol treatment” compared with other therapies. Still, according to a 2012 study in the Journal of the American Medical Association, more than 33.8 million prescriptions of atenolol were written at a retail cost of more than $260 million. There is some evidence that atenolol might reduce the risk of stroke in young patients, but there is also evidence that it increases the risk of stroke in older patients—and it is older patients who are getting it en masse. According to ProPublica’s Medicare prescription database, in 2014, atenolol was prescribed to more than 2.6 million Medicare beneficiaries, ranking it the 31st most prescribed drug out of 3,362 drugs. One doctor, Chinh Huynh, a family practitioner in Westminster, California, wrote more than 1,100 atenolol prescriptions in 2014 for patients over 65, making him one of the most prolific prescribers in the country. Reached at his office, Huynh said atenolol is “very common for hypertension; it’s not just me.” When asked why he continues to prescribe atenolol so frequently in light of the randomized, controlled trials that showed its ineffectiveness, Huynh said, “I read a lot of medical magazines, but I didn’t see that.” Huynh added that his “patients are doing fine with it” and asked that any relevant journal articles be faxed to him.
https://www.theatlantic.com/health/archive/2017/02/when-evidence-says-no-but-doctors-say-yes/517368/