Thed van Leeuwen, a researcher at the Centre for Science and Technology Studies at the University of Leiden, Netherlands, published a paper in September that gives a comprehensive overview of scientific articles retracted due to errors, bias, fraud, or plagiarism. Van Leeuwen analyzed 3,729 retracted papers originally published in journals indexed in the Web of Science (WoS) database. On the encouraging side, the survey found that the number of problematic papers is small (equivalent to 0.008% of all WoS articles) and that retractions are running below the levels of the previous decade. In 2015, fewer than 250 articles were retracted, about 100 fewer than the annual average from 2005 to 2010. The study also showed that the retraction process acts quickly on questionable articles: 90% of suspect papers are retracted within six years of publication.
Retractions were more frequent in fields with strong international competition and rapid publication mechanisms, such as clinical medicine (27% of the total), physics and materials sciences (20%), chemistry (15%), and life sciences (12%). They are also more common in countries that reward authors for publishing in journals of international impact. The United States leads, with 26% of retractions. Next are China (10%), Germany and the United Kingdom (7% each), Japan (6%), and France (5%).
In more than half of the cases, the cause is some form of misconduct, such as plagiarism and self-plagiarism (12%), duplicate publication (10%), data falsification and fabrication (10%), and irregularities in the review process or authorship problems (3% each), among others. Only 10% of retractions were due to errors. In 24% of cases, it was not possible to identify why the paper was retracted: the retraction notice contained no explanation. For Van Leeuwen, many journals and authors still see retractions as taboo, leading them to avoid stating the reasons. Another challenge faced by the Dutch researcher was understanding the reasons for retractions requested by the authors themselves. According to him, vague and ambiguous language sometimes made it impossible to tell whether a case involved fraud or error. The study was presented at a seminar hosted in the Netherlands by the Committee on Publication Ethics, an international forum in which editors of scientific journals discuss topics related to scientific integrity.