
Good practices

Risky behavior

Researchers suggest parameters for judging cases of scientific misconduct involving recklessness


An article published in the journal Accountability in Research analyzed the difficulties of punishing recklessness, a type of research misconduct that is cited in US Public Health Service regulations but tends to cause controversy when specific cases are investigated. Recklessness occurs when a scientist fails to prevent misconduct in a laboratory or project under their responsibility: even if they were unaware of or did not participate in the transgressions, by failing to fulfill their obligations they contributed to the problem.

The challenge lies in determining unequivocally whether the accused did everything they could to prevent the misconduct or whether they acted carelessly. The situations in question range from fraudulent manipulation of data that were not properly checked by lead researchers before publication to plagiarism by students whose supervisors did not give them adequate training in scientific integrity.

There are other obstacles, too. The authors of the article, pediatrician Barbara Bierer of the Center for Bioethics at Harvard Medical School and three lawyers from the American firm Ropes & Gray, which specializes in regulatory issues in sectors such as science and health, found that funding agencies interpret the concept of recklessness inconsistently. The National Science Foundation (NSF), the USA's leading funding agency for basic science, classifies researchers as reckless if they "use data or materials with a lack of proper caution and/or showed indifference to the risk that they may be false, fabricated, or plagiarized." NASA, the US space agency, defines it as misconduct that shows "reckless disregard for accepted practices." According to the authors of the article, the NASA standard makes the assessment of every case subjective, since it would "compel a decision-maker to identify an accepted practice and then determine, by a preponderance of the evidence, that the falsification, fabrication, or plagiarism constitutes reckless disregard of that particular practice."

The USA's Office of Research Integrity (ORI), responsible for overseeing federally funded studies, does not officially define recklessness, but it has addressed the issue several times in misconduct cases it has investigated. One case rich in detail was that of American neuroscientist Christian Kreipke, who was fired for misconduct in 2012 by Wayne State University and the Veterans Affairs Medical Center, both in Detroit, USA (see Pesquisa FAPESP issue nº 271). Kreipke included fabricated data in three grant applications and in four published articles and posters on which he was listed as an author. After the ORI banned him from receiving federal funding for 10 years, he appealed the decision and his case was reviewed by an administrative law judge. In his defense, he argued that he had simply used images produced by researchers he trusted and that some of the accusations were fabricated by Wayne State University, which he claimed was persecuting him for political reasons.

Judge Keith Sickendick deemed Kreipke reckless based on two parameters. The first was that he had an obligation to verify the accuracy of the data he used, but he made no attempt to do so. The second was the fact that Kreipke knew that his team was experiencing organizational problems. “Based on his knowledge of the state of his laboratory and personnel situations, it was reckless for Kreipke to simply assume that materials placed in his grant applications, articles, and posters were reliable,” the judge said.

The authors of the Accountability in Research article analyzed this case and others investigated by the ORI and used them as the basis of a proposed new classification system for cases of recklessness. They suggest that two questions be asked. The first is whether the accused verified the accuracy of data collected or provided by subordinates, as is their obligation. The second is whether they took appropriate measures to ensure the integrity of the data and to mitigate the risk of falsification, fabrication, or plagiarism. While the first question has an objective answer, the second is subject to interpretation, and the article suggests that investigators analyze six different parameters to determine whether the accused behaved recklessly.

The first factor to consider is whether there was a perceived increase in the risk of misconduct, due either to the culture of the laboratory where the case occurred or to the characteristics of the field in question. The second is whether behavior that could lead to transgressions had previously been repeatedly permitted, tolerated, or ignored in the lab environment. The third is whether there was a failure to use appropriate tools and systems to confirm data accuracy, to manage, maintain, and store research records, or to train students and supervisees in integrity practices. The fourth is whether proper attention was paid to practices such as reviewing raw data in meetings or having data validated by multiple lab members before submitting results for publication. The fifth is whether remedial action was lacking in response to previous allegations of falsification, fabrication, or plagiarism. And the sixth is whether the organization of the environment was adequate, with frequent meetings, well-established oversight mechanisms, and rigorous procedures for collecting, managing, and sharing data.

Another study published in Accountability in Research, by researchers from the National Institutes of Health and North Carolina State University, reached a different conclusion after examining the case of Frank Sauer, a biochemistry professor at the University of California, Riverside, who was accused of falsifying or fabricating images in seven funding applications and three scientific articles. Sauer initially defended himself by stating that he had made mistakes in good faith. He later gave other explanations: he claimed the data had been falsified by an individual who blamed him for the loss of their job, and that he had been hacked by a German activist group seeking to sabotage his results.

The investigation took place in 2012. The ORI found Sauer responsible for manipulating, reusing, and falsely labeling images in his epigenetics research and barred him from obtaining federal funding until 2020. The scientist tried to reverse the decision on appeal. In 2017, administrative law judge Leslie Rogall concluded that even granting Sauer the benefit of the doubt that he might have been a victim of sabotage, the researcher acted recklessly by failing to verify the accuracy of the data he presented in several grant applications. "The repeated publication and submission of applications containing utterly false information shows, at a minimum, indifference to the truth," said Rogall.
