Tuesday, August 28, 2018

Tide of lies


Sato's fraud was one of the biggest in scientific history. The impact of his fabricated reports—many of them on how to reduce the risk of bone fractures—rippled far and wide. Meta-analyses that included his trials came to the wrong conclusion; professional societies based medical guidelines on his papers. To follow up on studies they did not know were faked, researchers carried out new trials that enrolled thousands of real patients. Exposing Sato's lies and correcting the literature had been a bruising struggle for Avenell and her colleagues.

Yet they could not understand why Sato faked so many studies, or how he got away with it for so long. They puzzled over the role of his co-authors, some of whom had their names on dozens of his papers. (“Do we honestly believe they knew nothing at all about what was going on?” Avenell asked.) They wondered whether other doctors at his hospital read Sato's work—and whether the Japanese scientific community ever questioned how he managed to publish more than 200 papers, many of them ambitious studies that would have taken most researchers years to complete.

The tools of science that the group had used—analyzing studies, calculating statistics, writing papers—could reveal fraud. But they could not expose the personal and cultural factors that drove it, or assess its emotional toll. So I set off on a quest that would eventually lead me to the Mitate Hospital in Tagawa, a small town on the island of Kyushu, where Sato had worked for the last 13 years of his life…

Avenell's own quest began in 2006, when she was combing through dozens of papers for a review evaluating whether vitamin D reduces the risk of bone fractures. In two papers by Sato, she stumbled on a weird coincidence. They described different trials—one in stroke victims, the other in Parkinson's disease patients—but the control and study groups in both studies had the exact same mean body mass index. Looking further, she quickly found several other anomalies. She decided not to include Sato's studies in her analysis.

She wasn't the first to notice something was off. In a 2005 Neurology paper, Sato claimed that in women who have had a stroke, a drug named risedronate reduces the risk of hip fractures by a stunning 86%. In a polite letter to the journal, three researchers from the University of Cambridge in the United Kingdom noted that the study was “potentially of great importance,” but marveled that the authors had managed to recruit 374 patients in just 4 months.

Two years later, a letter in what was then the Archives of Internal Medicine was less polite. A study of male stroke patients published by Sato had managed to enroll 280 patients in just 2 months; another one, of women with Alzheimer's disease, recruited a staggering 500 in an equally short period. Sato claimed to have diagnosed all of the Alzheimer's patients himself and done follow-up assessments of all 780 patients every 4 weeks for 18 months. Both studies had very few dropouts, and both showed risedronate, again, to be a resounding success. “We are deeply concerned whether the data provided by Sato et al are valid,” Jutta Halbekath of Arznei-Telegramm, a Berlin-based bulletin about the drug industry, and her co-authors wrote. Sato apologized in a published response and claimed the study had been conducted at three hospitals, not one. “The authors did not describe this fact, the reason being that these hospitals were reluctant to have their names in the article,” he wrote. He didn't name the other hospitals or explain why they wanted to remain anonymous. The journal apparently accepted the explanation.

The letter's authors also spotted a troubling pattern. In addition to the two papers in the Archives of Internal Medicine, they found 11 further studies by Sato, published elsewhere, that tested whether sunlight, vitamin D, vitamin K, folate, and other drugs could reduce the risk of hip fractures. All but two reported “extremely large effects with significant results,” they noted. But the Archives of Internal Medicine didn't want to point fingers at other journals. “You may allude to your concern that other papers have similar concerns,” its editors warned Halbekath, “but we cannot allow you to mention those other papers by journal name.”

By now, several researchers had raised red flags and waved them for everyone to see—and then everybody moved on. “The trail just went cold,” Avenell says.

Mark Bolland had never heard of Sato when Avenell first mentioned him in late 2012. She and Bolland, a clinical epidemiologist at the University of Auckland in New Zealand, have never met in person, but they joined forces to write meta-analyses on calcium supplements in 2008, together with Andrew Grey and Greg Gamble, both also at the University of Auckland. One topic the quartet discussed frequently was why meta-analyses on the same topic sometimes reach different conclusions. Avenell mentioned Sato's studies and noted that the effects they reported were so strong that they might swing meta-analyses if they were included.

Intrigued, Bolland looked up the papers. He, too, was stunned by the large cohorts, the low number of dropouts, and the big effects of almost any treatment tested. “There is nothing that I can think of that produces a 70% to 80% reduction in hip fractures, yet Sato was able to do it consistently in all his trials,” he says.

To follow up on his suspicions, Bolland turned to statistics. When scientists compare a treatment and a control group, they usually report “baseline characteristics” for each—things like age, weight, and sex, or, in osteoporosis studies, bone density and calcium intake. From these values, scientists can calculate p-values that measure how similar the two groups are for a given characteristic; the closer the value is to one, the more the groups resemble each other. Because the groups are randomly assigned, the p-values should normally be uniformly distributed between zero and one; the value for age or weight is just as likely to fall between 0 and 0.1 as between 0.9 and 1.0, for example.

Bolland extracted the baseline characteristics from the 33 clinical trials Sato had published at the time, more than 500 variables in all, and calculated their p-values. More than half were above 0.8, he found. “That just shouldn't happen,” he says. “The randomized groups were incredibly similar.” There was just one plausible explanation, he says: Sato had fabricated data for both groups and had made them more similar than they would ever be in real life.
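To make the logic concrete, here is a minimal sketch of the idea (in Python, with hypothetical numbers; it is not the team's actual code, and the exact tests Bolland used are not described in the article). It assumes each baseline variable is compared with a two-sample t-test computed from the published group means, standard deviations, and sizes, and shows why genuine randomization yields roughly uniform p-values, so that a glut of values above 0.8 stands out.

```python
# Illustrative sketch only: hypothetical data, not the published trials.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def baseline_p_value(mean1, sd1, n1, mean2, sd2, n2):
    """Welch t-test p-value from summary statistics for one baseline variable."""
    result = stats.ttest_ind_from_stats(mean1, sd1, n1, mean2, sd2, n2,
                                        equal_var=False)
    return result.pvalue

# Simulate many genuinely randomized comparisons: both arms are drawn from
# the same population, so baseline differences arise only by chance.
p_values = []
for _ in range(500):
    n = 100                                              # hypothetical arm size
    control = rng.normal(loc=22.0, scale=3.0, size=n)    # e.g. body mass index
    treated = rng.normal(loc=22.0, scale=3.0, size=n)
    p_values.append(baseline_p_value(control.mean(), control.std(ddof=1), n,
                                     treated.mean(), treated.std(ddof=1), n))
p_values = np.array(p_values)

# Under genuine randomization the p-values are roughly uniform on [0, 1]:
# about 10% should fall in each tenth of the interval.
counts, _ = np.histogram(p_values, bins=10, range=(0.0, 1.0))
print("p-values per decile:", counts)

# The red flag described in the article: only about 20% of baseline p-values
# should exceed 0.8; a much larger fraction means the groups are "too similar".
print("fraction above 0.8:", np.mean(p_values > 0.8))
```

In a genuinely randomized body of trials, roughly a fifth of baseline p-values should exceed 0.8; that more than half of Sato's did is what made the pattern so implausible.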

The team felt it had a damning indictment. “I thought: ‘This is so convincing. Everybody is going to believe this,’” Avenell says. Still, “It needed detailed statistical refereeing, and it needed to be published by a journal so that other affected journals would take note,” she adds. So they wrote their accusation as a scientific paper. All they had to do was publish it and wait for researchers, journals, and institutions to react, investigate, and retract. Or so they thought.

In March 2013, the team submitted the manuscript to The Journal of the American Medical Association (JAMA), the highest-profile journal Sato had published in, and one it felt might have the resources for an in-depth investigation. After reviewing the evidence, JAMA Editor-in-Chief Howard Bauchner told the team the editors would ask Sato and, if necessary, his institution to respond.

Two years later, in April 2015, JAMA told the researchers the hospital had not responded, and it would publish an “expression of concern”—a short note to flag Sato's JAMA paper as suspicious. It would not publish the whistleblowers' paper, however; if the team had concerns about other papers, it should contact the journals that had published them, Bauchner said.

The four researchers were shocked. “To find out after waiting 2 years that in fact nothing much had really happened and, other than an expression of concern, was going to happen in JAMA, was quite frustrating,” Bolland says. (Bauchner declined to answer Science's questions about the case.)…

Next, the paper was rejected by JAMA Internal Medicine, which had also published Sato's work. The Journal of Bone and Mineral Research, a highly rated journal in the osteoporosis field, said it would investigate Sato's papers, but would not publish the manuscript either. The editors of Trials, which had not published Sato's work, said it would not be appropriate to get involved.

Bolland became demoralized. The other three persuaded him not to give up. “If you ever embark on something like this, make sure you have a good support team,” he says now. Avenell, too, was sometimes despondent. Whereas the other three researchers at least saw each other in Auckland, she was on her own, frustrated, in the dreary, gray city of Aberdeen. Sometimes, she says, she would just sit in a corner of her open-plan office and cry.

Then, in June 2015, came a small success: The Journal of Bone and Mineral Research retracted one of the 33 trials the team had analyzed. A few other journals followed suit in the months after. But some seemed irritated by the group's persistence. “It is apparent that the responses to the JAMA investigation by Dr. Sato and his institution have been either inadequate or not forthcoming,” Grey wrote to Bauchner in December 2015. “At what point will JAMA consider more decisive action, such as retraction?” “We will consider your opinion about how you think it best we should conduct the investigation,” Bauchner responded. “We often hear from people how they think we should perform our responsibilities as editors.”

In what Bolland calls “really just the last throw of the dice,” that same month the group submitted the paper to Neurology, where Sato had published three papers about bone fractures in patients with neurological disease. When it was accepted 8 months later, Avenell cried again. “I'm not one usually given to showing such emotion, especially when all I have is a computer screen and emails to look at,” she says…

By the time Neurology published the investigation in December 2016, 10 of the 33 trials had been retracted, all but one by journals the team had contacted. Three months later, Avenell received an email from an editor with troubling news. Sato was dead…

Today, 21 of Sato's 33 trials have been retracted by the journals or Sato himself; using a red marker, Avenell has crossed them off a list taped next to her computer. But now the team is following the ripples that the studies caused, focusing, for the time being, on a dozen papers published in the journals with the highest impact factors. Together, these studies reported results for 3182 participants. They have been referenced more than 1000 times, and 23 systematic reviews or meta-analyses have included one or more of the 12 trials…

The letter does not mention fraud, however. “I couldn't force him to confess,” Ogawa says. “I think he had a mental illness.” His emails were not logical, he says. “To tell the truth, I predicted that he would commit suicide.”

Suicide. Is he sure that's what happened?

“I received the information from the lawyer of Mr. Sato,” Ogawa says. Sato also left a note, he says, and he paraphrases it: “I am very sorry for Mr. Iwamoto. I decided to commit suicide.”

When I call Avenell after my return from Japan and tell her what I have learned, there is stunned silence at first. “That's what we were dreading,” she says. “That's horrible, really horrible.” Exposing the misconduct was important, she says. “Could we have done it without Sato committing suicide? So that he felt less guilty? I just don't know.”

Later she follows up with an email, still astonished at “how such a small piece of data analysis a long time ago can end up with someone dying.” As a clinician and a researcher, Avenell wrote, she knows her work can eventually make the difference between life and death. “But seldom is the connection between a clinician and another human being's death so obvious.”


Courtesy of a colleague

2 comments:

  1. The fraud has also drawn attention to the two co-authors whose names appear on Sato's papers most often. One is Kei Satoh, president of Hirosaki University, in a small town at the northern tip of Japan's main island, Honshu...

    Satoh—whose name, confusingly, is sometimes spelled Sato—did not respond to Science's emails. In a short letter to Grey, Hirosaki University Vice President Chizuko Kohri wrote last November that the university had asked “three outside experts” to investigate after the Neurology paper was published. The committee investigated 38 papers, Kohri wrote. Of these, Sato had already retracted seven and wanted to retract another seven. The committee “concluded that there was research misconduct in these 14 papers,” Kohri wrote, but that Sato alone was responsible. According to Japanese press reports, Satoh maintains that he only corrected the English in the papers. As a sign of contrition, he gave up 10% of his salary for 3 months.

    Sato's most important collaborator, however, was Jun Iwamoto. A board member of the Osteoporosis Society of Japan, Iwamoto was a senior lecturer at Keio University in Tokyo—one of the country's most prestigious—until 2017, when his contract wasn't renewed in the wake of the Sato affair. He and Sato collaborated for more than a decade and published more than 130 papers together, including 25 of the 33 clinical trials.

    A panel at Keio University has been investigating Iwamoto's clinical trials. Iwamoto told the panel that he first contacted Sato in 1998, when Iwamoto was working at the New York University Winthrop Hospital in Mineola. In 2002 they started to put each other's name on every paper they authored. Still, Iwamoto claims he was unaware of Sato's practice. “We talked to Dr. Iwamoto and in most of the papers which Dr. Sato published, which included Dr. Iwamoto's name, Dr. Iwamoto did not know that his name was included,” says cancer researcher Hideyuki Saya, who heads the investigation. The panel was “very shocked” by this, Saya says. At the same time, he says, “For Dr. Iwamoto it was an honor to put his name on Dr. Sato's [papers] even though he did not know much about the content.”

    Although considered highly irregular today, such “gift authorships” were common in the recent past, Saya argues. A 2014 study in the International Journal of Japanese Sociology found they are particularly common in Japan.

    http://science.sciencemag.org/content/361/6403/636

  2. Sato's fraudulent work has propelled him to No. 6 on Retraction Watch's list of researchers who have racked up the most retractions. At the top is Japanese anesthesiologist Yoshitaka Fujii, with 183 retractions; his frequent co-author Yuhji Saitoh, also from Japan, is at 10th place, while Japanese endocrinologist Shigeaki Kato is No. 8. Iwamoto is at No. 9. That means half of the top 10 are Japanese researchers. Yet only about 5% of published research comes from Japan. What explains the number of prolific Japanese fraudsters?

    Michiie Sakamoto, who is leading another investigation at Keio University, into Iwamoto's studies in animals, says it has to do with respect. “In Japan, we don't usually doubt a professor,” he says. “We basically believe people. We think we don't need strict rules to watch them carefully.” As a result, researchers faking their results may be exposed only after they have racked up many publications.

    Outside researchers may also be less likely to question anomalous results from Japan. Several early critics of Sato's work say they thought at first that his unusual results might be due to something uniquely Japanese. One case in point: In 2003, Sato published a study on data from 40 patients with a very rare affliction named neuroleptic malignant syndrome, collected over 3 years. In a letter to the journal, a U.K. neurologist said he and his colleagues “could only recall two such cases in living memory”—but instead of casting doubt on the study, they said it was interesting that the syndrome seemed so prevalent in Japan.

    But none of that explains why Sato decided to embark on his fraud—and nobody seems to be able to shed much light on that question. “Given the number of papers he published, he must have spent a very large amount of time on them,” Bolland says. “I don't understand what his gain was. … There must have been some reason to do it.” The Keio University panel is just as puzzled. “We discussed this a lot in the committee,” Saya says. It might have been like a hobby, he suggests. A thrill. Saya uses the word “otaku,” a Japanese term often applied to people who read manga obsessively.

    http://science.sciencemag.org/content/361/6403/636
