
Good practices

When misconduct goes unpunished

Survey finds that only one in five articles containing errors or suspected misconduct leads to a reaction by the publishing journal


A pair of Spanish researchers has shed new light on the willingness and ability of scientific journals to respond to reports of errors or misconduct in published articles. They found that editorial notices—public statements through which journals inform readers that an article is under investigation, publish errata, or, in serious cases, announce a retraction—were published for only one in every five papers flagged as problematic or suspicious on an academic social network.

In a study published in the journal Information Professional in January, information scientist José-Luis Ortega and data scientist Lorena Delgado-Quirós analyzed the comments on thousands of articles on PubPeer, a US-based website created in 2012 through which users give their opinions on scientific papers and highlight possible errors and inconsistencies—a practice described as post-publication peer review. Because articles can be critiqued and reviewed anonymously, the platform has become a popular channel for reporting misconduct.

The two researchers examined a sample of 17,244 articles that had received comments on PubPeer as of 2020. In 14,290 of the papers, equivalent to 82.9%, the comments pointed to evidence of fraud, manipulation, methodological failures, or honest mistakes. The analysis excluded reports filed by bots, such as the set posted in 2016 by statcheck, a program that detected statistical inconsistencies in more than 50,000 psychology papers and automatically published alerts in the PubPeer comments of each one (see Pesquisa FAPESP issue nº 253).

The next step was to determine whether the complaints led to any consequences, by checking databases such as PubMed, which stores the abstracts of biomedical articles, and the website Retraction Watch, which monitors retractions. The result: only 21.5% of the articles deemed problematic drew any response from the journals that published them. The number of editorial notices varied with the type of error or misconduct. In relative terms, notices were most frequent when the complaint involved fraud, a category that included plagiarism, ghost authorship, and compromised peer review: of 1,698 articles that received comments related to these types of misconduct, 499 (29.4%) resulted in some form of response from the journal. In absolute numbers, the most frequent complaint was manipulation or fabrication of data or images, reported for 10,989 papers—of these, 2,256 (20.5%) resulted in editorial notices. “Journals have to improve their response to problematic articles,” wrote Ortega and Delgado-Quirós, both from the Institute of Advanced Social Studies in Córdoba, which is linked to the Spanish National Research Council.

The authors acknowledge a significant limitation of the study: complaints submitted to PubPeer are not always evidence-based, and a hard-to-measure share of them may be unfounded or unfair. It is possible that many accusations were investigated and then dropped; since journals do not openly record or announce this type of investigation, there is no way to know how often this occurred.

Some journals, however, are far more rigorous than others when it comes to transparently investigating errors and misconduct, a disparity that exposes shortcomings in the self-correction of the scientific record. The study presented data on the 10 journals with the most problematic articles in the sample. The Journal of Biological Chemistry, published by the American Society for Biochemistry and Molecular Biology since 1925, topped the table with 5.3% of the sample’s problematic papers, followed by PLOS ONE with 3.7%. But both also published editorial notices and corrections more frequently than any of the others, responding to 38.2% and 36.3% of cases, respectively. At PLOS ONE, a team dedicated to investigating ethical problems was formed in 2018 after a wave of accusations that images had been manipulated in papers published between 2014 and 2016 (see Pesquisa FAPESP issue nº 319).

At the opposite extreme, journals such as Oncotarget and Oncogene reacted least often, issuing notices for just 13% and 14.3% of reported articles, respectively. The fields with the most suspicious articles were the life sciences (56.6%) and health sciences (19.6%). Multidisciplinary journals (28.6%) published more editorial notices than journals in the life sciences (22.5%) or the social sciences and humanities (21%). “This suggests that multidisciplinary journals have more control over problematic publications,” the authors wrote.

The response rate was higher for papers published more recently: editorial notices were issued for 34% of suspicious articles published in 2019. According to the authors, this is a sign that journals’ responses have been improving over time, albeit slowly. Another curious finding was that prestigious journals (those with higher citation rates) received more accusations of image manipulation, while papers in low-impact journals were more likely to be accused of plagiarism.

The idea that journals are not always able to self-correct is not new and has been addressed in other studies. In some, the number of problematic articles that escaped retraction was low, which has been attributed to the fact that those cases involved formal complaints filed with investigative bodies. A 2007 article in the journal Science and Engineering Ethics analyzed misconduct in biomedical articles described in the annual reports of the US biomedical research funding agency, the National Institutes of Health (NIH), and the country’s Office of Research Integrity (ORI), finding that 83% of the papers cited as flawed were retracted. In other studies, the number of suspect articles that were never retracted was much higher.

In 2016, microbiologist Elisabeth Bik manually inspected western blot images—a method used in molecular biology to identify proteins—in 20,621 biomedical studies published in 40 journals between 1995 and 2014 (see Pesquisa FAPESP issue nº 245). She found altered images in 782 papers (3.8% of the total) and noted that in many cases, modifications made in Photoshop appeared intentional and fraudulent in nature. She informed the journal editors and wrote to 10 institutions that employed the scientists responsible for recurrent problems. The result of the initiative, as she revealed at the time, was limited: just six articles were retracted and 60 corrected.
