A prominent person’s fall from grace often signals a healthy environment able to identify and address threats. Mark Tessier-Lavigne’s resignation suggests that leaders may now be held more accountable for meeting standards of research integrity that go beyond merely not lying about their work. Ultimately, his resignation may signal — or establish — higher public expectations for research integrity and encourage us to build structures to support them.
By the usual metrics of funding, publications, and recognition, Tessier-Lavigne was clearly a leader in his field. But the panel investigating the accusations was tasked with assessing his “approach to correcting issues or errors in the scientific record” and his “management and oversight of his scientific laboratories.” It concluded that he “failed to decisively and forthrightly correct mistakes in the scientific record.” Moreover, it noted that given the “unusual frequency of manipulation and/or substandard scientific practices” in his labs across many years and different locations, “there may have been opportunities to improve laboratory oversight and management.”
To put it simply, he failed to foster a culture of research integrity and model it for his trainees and collaborators by confronting allegations quickly and openly.
Tessier-Lavigne’s resignation is an unusual consequence of accusations of research misconduct. The closest example of this kind of consequence for an academic leader may be Terry Magnuson, former vice chancellor for research at the University of North Carolina at Chapel Hill, who resigned in 2022 after admitting to plagiarism in federal grant applications.
However, Magnuson’s actions fit the standard federal policy definition of research misconduct, defined narrowly as encompassing only fabrication, falsification, and plagiarism. When someone is accused of and found to have committed misconduct, possible consequences include employment termination, debarment from grant funding, or even civil liability. When found not to have committed misconduct, they typically return to their previous life.
Thus, one might have expected Tessier-Lavigne to be in the clear with the report’s conclusion that there is no evidence he committed misconduct or clearly knew about misconduct in his labs. Instead, he lost his job for behavior that, until this point, has not typically carried consequences.
For instance, it seems that there was pressure on researchers in Tessier-Lavigne’s lab to perform, but not unusually so. One of Tessier-Lavigne’s former postdocs told STAT, “I would say categorically that I think there was no more pressure in Marc’s lab than a lot of other labs.” Stories of toxic lab cultures, competitive researchers, and intense pressure for results that lead to grant funding and publications are widespread. This does not excuse his failure to address the numerous questions raised about his research over the years, or what some reporting described as his preferential treatment of students who produced results. As STAT reported previously, an anonymous former student observed, “When you didn’t please him, you didn’t get any attention.” Yet he has faced consequences, and this is the most conspicuous recent example of a high-profile researcher being held to account for failing to prevent such a culture.
Exacerbating these issues of research culture is the challenge of assigning responsibility in multi-author publications. Modern research is more expensive, interdisciplinary, international, and collaborative than it has been historically, with the consequence that the number of authors on publications has proliferated. It is unrealistic to think that one person can adequately oversee all work in a project. But if no individual actually can be completely responsible, isn’t everyone off the hook?
In many research collaborations, not all authors see raw data. That happens for many good reasons — for example, they might lack the appropriate training to understand it, or the data may include identifying details that limit who can view them.
But for science to work, someone must accept that responsibility. In his resignation letter, Tessier-Lavigne endorsed this expectation: “Although I was unaware of these issues, I want to be clear that I take responsibility for the work of my lab members.”
Leaders are the only people in a research project who can create a microclimate that supports rigorous, honest research. This includes: cultivating a research culture in which expectations for scientific rigor and ethical action are clear and supported; being open, transparent, and responsive when problems arise; and otherwise modeling high standards in research. Tessier-Lavigne failed to do this, and if the panel had found otherwise, he might not have needed to resign.
But individuals alone can only do so much. Knowing that humans are fallible, imperfect, and prone to temptation, we should also create and support good practices with institutional, disciplinary, and national structures to foster research integrity.
In some ways, this is a story about how such structures, built in the past decade or so precisely to improve scientific rigor, helped to identify and draw attention to cases like this. For example, PubPeer, where the problems with Tessier-Lavigne’s research were initially identified, was created “to improve the quality of scientific research by enabling innovative approaches for community interaction.” Data sleuths have taken it upon themselves to support good science by calling out problematic practices, and the Open Science movement makes it easier to identify problematic data, methods, or conclusions.
But these grassroots efforts are not enough. Even the toppling of a high-profile researcher does little to support structural change, and in fact can misdirect our focus toward individual solutions alone. For years there have been calls for data auditing at the institutional level, less focus on the metrics that reduce a researcher’s success to dollars or citations, training in good practices of mentoring, and the creation of a federal research integrity agency. These would be excellent steps toward publicly emphasizing the importance of research integrity and assigning responsibility to institutions to do more to support it.
This case emphasizes the importance of both individual and institutional efforts to improve research rigor and reliability. When looking for leaders, we should seek and select not only those with the most research funding or highest citation counts, but also those who know how to foster an ethical research culture, including rapidly and transparently addressing anything that might affect research integrity. At the same time, because we can’t reasonably expect that all researchers will behave optimally, we must consider structural tools to foster research integrity.
https://www.statnews.com/2023/07/21/marc-tessier-lavigne-stanford-president-scientific-misconduct/
The panel found numerous issues, however, with five studies in which Tessier-Lavigne was a major contributor, including evidence of data manipulation in scientific images. While the report concluded that it would not have been reasonable to expect Tessier-Lavigne to catch these errors prior to publication, he failed to promptly correct or retract studies once problems were later flagged. In light of his discussions with the panel, Tessier-Lavigne’s statement and the report indicate that he is now planning to retract three studies and to correct two others.
“The Scientific Panel has concluded that Dr. Tessier-Lavigne did not personally engage in research misconduct for any of the twelve papers about which allegations have been raised,” the report notes. “However, several of these papers do exhibit manipulation of research data.”
At multiple points throughout his career, the panel added, Tessier-Lavigne “failed to decisively and forthrightly correct mistakes in the scientific record.”
https://www.statnews.com/2023/07/19/marc-tessier-lavigne-stanford-president-resignation/
Science is a team sport. And while the lab leaders, or principal investigators, often get most of the attention for biomedical breakthroughs, they seldom run experiments. That work is left to postdoctoral researchers, graduate students, and others who peer through microscopes, labor away in mouse rooms, and spend countless hours pipetting. Instead, professors play more of a managerial role, supervising, critiquing, and offering big-picture feedback on the work that goes on in their lab.
It’s a longstanding arrangement, but one that leads to a thorny question when research is challenged: Who’s to blame? STAT put that question to Stanford faculty and students while reporting earlier this year on allegations of image manipulation and other misconduct related to research in Tessier-Lavigne’s lab, as well as researchers at other institutions. The answer varied depending on whom you asked.
Graduate students stressed that since faculty have no qualms using work done in their labs to land grants, win prizes, and build prestige, they should ultimately be accountable for problems with the science done under their supervision. Faculty generally agree, at least in principle, but are often quick to say that there are limits to how closely they can scrutinize their lab’s work, and that team science is in part based on trust. It’s an argument Tessier-Lavigne cited in his own statement about his resignation.
It’s also one he said wasn’t good enough.
https://www.statnews.com/2023/07/19/stanford-president-marc-tessier-lavigne-resignation-analysis/