
Good practices

A keen eye for detecting manipulation

In 2012, in an effort to improve the quality of the scientific articles it publishes, the Journal of Clinical Investigation began requiring that all authors provide the raw data produced by their experiments, such as unedited images from molecular biology tests, known as blots. The aim was to allow the journal’s editors to check for image duplication or manipulation and thus prevent the need to later publish corrections. In 2016, the journal also started to evaluate the robustness of the statistical analysis of each paper it received, again to reduce the publication of errors and distorted results.

In an editorial published in March, editors Corinne Williams, Arturo Casadevall, and Sarah Jackson presented an overview of the process. They evaluated 200 papers received between July 2018 and February 2019 that were highly likely to be accepted for publication. They identified issues in a significant proportion of the manuscripts: 28.5% had statistical inconsistencies, 21% had anomalies in the blots, and 27.5% had problems with other images. The papers were returned to their authors, who were given the opportunity to correct the errors. According to the editorial, most of the issues were unintentional and of minor importance. Some articles had more than one mistake; four were flagged for concerns in all three dimensions: statistics, blots, and other images. In at least two manuscripts, the image manipulation did not appear accidental. The authors were asked to explain what had happened, and after they failed to provide a satisfactory justification, the papers were rejected. The journal notified the institutions with which the authors were affiliated of the suspected data fabrication or manipulation.

According to the trio of editors, none of the problematic images were caught by their “excellent reviewers” during peer review; they were only detected thanks to the scrutiny of a member of the editorial team “with a keen eye and excellent pattern-recognition skills.” Most authors, they say, would be surprised to learn that there is still no automated tool capable of efficiently screening all papers for anomalies or manipulations. “We recognize that our screening methods are not perfect and subject to human error. Thus far, none of the papers included in the tracking period have had issues brought to our attention after publication,” they wrote.