
GOOD PRACTICES

Journals and universities form an alliance

A document suggests notifying the institution first, and then the accused, but only if there is strong suspicion of misconduct

In May 2017, a group of researchers and managers proposed new guidelines to coordinate the work of scientific journals and institutions in dealing with cases of misconduct. The document includes recommendations that aim to identify the roles universities and editors should play. It drafts a new set of good practices to supplement the rules established in 2012 by the Committee on Publication Ethics (COPE), an international forum of editors that discusses problems related to ethics in research. The proposal was presented for discussion at the 5th World Conference on Research Integrity, held in Amsterdam, the Netherlands, from May 28 to 31, 2017.

Titled “Cooperation & Liaison between Universities & Editors (CLUE),” the document’s most novel feature is the idea of setting up national registers of the offices in charge of investigating suspected fraud, falsification, and plagiarism, along with their contact information. The recommendation may look like a merely bureaucratic measure, but its purpose is to assist editors in a complex mission: determining who should be contacted to shed light on evidence of problems in a paper that has already been published. Until now, journals’ first step has often been to contact the author of the article in question. This routine has been criticized, however, because it gives ill-intentioned authors an opportunity to obstruct the subsequent investigation by their institution.

CLUE suggests that journals establish in-house rules allowing them to alert institutions before researchers are notified, albeit only in specific situations. British zoologist Elizabeth Wager, COPE president from 2009 to 2012, told the Retraction Watch website that “this would only be in rare cases where the journal had strong suspicions of data fabrication or falsification.” Wager is one of the authors of CLUE, written together with experts such as Zoë Hammatt, director of the Division of Education at the Office of Research Integrity (ORI), which oversees research conducted under the United States Department of Health and Human Services, and Chris Graf, director of research integrity for publisher Wiley.

The establishment of national registers of research integrity offices and their administrators should help editors home in on the right person in these extreme cases. According to the document, editors routinely attempt to contact universities informally before they officially report a suspicious article. This practice is not universally accepted, however. In the United States, for example, such contacts need to be documented, and legal action can be taken against universities if they reveal to third parties that one of their researchers is under internal investigation. One alternative to national registers, according to Wager, would be to require authors, when submitting an article for publication, to provide contact information for the head of their institution’s research integrity office. “It wouldn’t even be necessary to mention these contacts in the article because they would only be used if needed,” she suggests.

Another innovative suggestion is to have universities set up in-house offices charged with promptly answering questions from editors and able to determine whether the results of a suspicious article are reliable. These offices would operate independently from the inquiry committees that investigate whether authors are guilty of misconduct. The idea is to resolve a chronic mismatch: a journal’s top priority is to determine whether an article’s results are robust in order to decide whether it should be retracted, but many universities and research integrity offices are prepared only to ascertain whether there was misconduct, which requires drawn-out and costly inquiry procedures. The document’s proposed structure seeks to ensure that unintentional errors or results compromised by negligence are investigated promptly, regardless of whether misconduct is later established. “Such a system would assist journals in quickly establishing whether there are problems with published articles and alerting readers,” Wager says.

Collaboration
The CLUE recommendations began to take shape at a European Molecular Biology Organization (EMBO) event held in Heidelberg, Germany, in July 2016, where editors and university officials discussed how to improve collaboration between scientific institutions and journals in cases of misconduct. The recommendations were published in May 2017 in the bioRxiv preprint repository and resonated immediately. Molecular biologist and scientific journalist Leonid Schneider, who runs the For Better Science blog, argued that establishing an entity to check the reliability of research is no guarantee that the institution’s investigation will be impartial. He cited the case of German pharmacologist Kathrin Maedler, who was accused of duplicating images in scientific articles and cleared in an investigation conducted by the University of Bremen, on the grounds that the results of the research, even though the images were manipulated, were correct and had been confirmed by other groups. “An institution that has reason to review the quality of a manipulated paper may be biased and limit itself to stating that the results are reliable, and refrain from asking to have the article retracted. This has happened in the past,” Schneider says.

Bioengineer Nikolai Slavov, a professor at Northeastern University in the United States, suggested adding to the CLUE guidelines an idea he defended in 2015 in the journal eLife: that journal editors take into account criticisms of recently published articles posted by researchers on online platforms (a type of peer review performed after papers are released) and that they require a public response from authors within 30 days when something amiss is found.

The document raises other topics for discussion. For editors, it recommends retaining raw research data and reviewers’ comments on manuscripts for at least 10 years (in the United States, the retention period currently required for biomedical articles is six years). For universities, the recommendation is to routinely share the reports of their misconduct investigations with journal editors. This is common practice at institutions in a number of countries, but it is not a rule.
