In September, a group of scientific communication companies released a manual of good practices to help editors deal with a type of misconduct that can be hard to detect: the alteration or duplication of images in articles and academic papers. The document, available on the OSF repository, is currently open to suggestions for improvement, with a final version due in December.
The manual provides precise guidelines for three levels of manipulation. Level 1 refers to photos that have been duplicated or slightly modified in ways that do not influence the conclusions of the paper. This includes, for example, contrast or color adjustments made to highlight a finding, or the same image being used twice in the same manuscript to illustrate different experiments. Journal editors are advised to accept a corrected version of the images and close the case if the problem is discovered before the article is published and the authors can show that they did not act in bad faith. All coauthors of the manuscript, however, must be informed of and agree to the correction. If the altered images are identified after publication and the alteration was not malicious, the article should receive a correction.
Level 2 covers significant modifications that are contrary to normally acceptable standards and alter critical characteristics of an image. One example is the inversion or repositioning of bands in a western blot, a test used to identify proteins in molecular biology. In cases like these, the authors’ institution must be notified so that it can investigate potential misconduct, and any messages or information exchanged between the authors and the editor must be recorded in the peer review file. If the authors are able to show that it was a mistake and not fraud, editors can accept a corrected version of the image. If the work has already been published, it can be retracted and republished, or simply retracted if there is no justification for the manipulation.
Level 3 includes articles in which multiple images have been modified using editing software, the intention to doctor the images is evident, and the original data are no longer represented due to selective presentation. Unless the researchers can provide a convincing explanation, the manuscript should be rejected and the authors’ institutions informed so that the case can be investigated. If the fraud is discovered after publication, an “expression of concern” should be issued for the paper, stating that it may contain errors, is under review, and could be retracted.
The manual was written by representatives from publishers Elsevier, Wiley, Springer Nature, EMBO Press, and Taylor & Francis and journal collections JAMA, Cell, and the American Chemical Society, coordinated by the scientific publishers association STM, based in the United Kingdom. It offers more detailed advice than the guidelines created in 2018 by the Committee on Publication Ethics (COPE), a UK-based group that discusses issues related to integrity in science. COPE designed a flowchart for editors to follow, a step-by-step guide of what to do when suspicion is aroused, but it does not define separate categories of image manipulation.
Microbiologist Elizabeth Bik, an expert at identifying altered scientific images, believes the new manual is more advanced. According to her, it is not uncommon for institutions to put off investigating complaints about their researchers, neglect to punish them even when falsification is clear, or fail to inform editors about the results of internal investigations. “The recommendations state that journals can act by themselves, even if they disagree with the institutions’ conclusions,” Bik told the journal Nature. “The rules will not prevent science misconduct, but they provide stronger scrutiny both at the submission stage, as well as after publication.”
The guide advises that strong accusations of tampering be investigated even if they are made anonymously, and says editors are responsible for protecting the identity of accusers. It is common for suspicions to be reported on online scientific discussion forums and even on social networks. Editors may respond to these comments at their discretion.
Image alteration and duplication are recurrent problems faced by scientific journal editors. In 2016, Elizabeth Bik manually analyzed over 20,000 biomedical articles and found some form of adulteration in 4% of them (see Pesquisa FAPESP issue nº 245). Detecting this type of fraud still relies largely on the human eye, although software to help is under development.
Misconduct cases involving doctored images can be complex and do not always feature blatant changes. One of the main challenges editors face today is identifying manuscripts produced by “paper mills”: illegal services that sell scientific papers on demand, often containing falsified data (see Pesquisa FAPESP issue nº 296). A group of researchers recently identified 400 articles with images so similar that they must have come from a common origin, a paper mill. The human eye alone is not enough to detect this type of misconduct; all of the images in an article need to be analyzed by a computer and compared against databases of images from other papers. “There are various ways to accomplish this systematic, universal screening, including algorithmic methods that are now coming online and need to be vetted for effectiveness by comparing them to visual screening,” wrote Mike Rossner, former editor of the Journal of Cell Biology, in the comments section of the draft manual. Rossner is president of Image Data Integrity, an American consultancy that advises research institutions, funding agencies, and scientific journals on image manipulation in biomedical science studies (see Pesquisa FAPESP issue nº 287). “The editors’ working group might consider creating recommendations for this screening process.”
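The draft manual does not specify how this algorithmic screening should work, but one common family of techniques is perceptual hashing: each image is reduced to a short fingerprint, and fingerprints that lie close together flag candidate duplicates for human review. The sketch below is purely illustrative, assuming a simple "average hash" over synthetic grayscale arrays; the function names and test images are hypothetical, not taken from any tool named in the manual.

```python
def average_hash(pixels, size=8):
    """Hypothetical illustration of an average hash: downscale a grayscale
    image (2D list of 0-255 values) to size x size by block averaging,
    then emit one bit per cell (1 if the cell is above the global mean)."""
    h, w = len(pixels), len(pixels[0])
    bh, bw = h // size, w // size
    cells = []
    for i in range(size):
        for j in range(size):
            block = [pixels[r][c]
                     for r in range(i * bh, (i + 1) * bh)
                     for c in range(j * bw, (j + 1) * bw)]
            cells.append(sum(block) / len(block))
    mean = sum(cells) / len(cells)
    return [1 if v > mean else 0 for v in cells]

def hamming(h1, h2):
    """Number of differing bits; small distances suggest near-duplicates."""
    return sum(a != b for a, b in zip(h1, h2))

# Synthetic 64x64 "images": a vertical gradient, a slightly brightened
# copy (a Level-1-style tweak), and an unrelated horizontal gradient.
original   = [[min(255, r * 4)      for c in range(64)] for r in range(64)]
brightened = [[min(255, r * 4 + 10) for c in range(64)] for r in range(64)]
unrelated  = [[min(255, c * 4)      for c in range(64)] for r in range(64)]

h_orig, h_bright, h_other = map(average_hash, (original, brightened, unrelated))
```

In this toy setup the brightened copy hashes close to the original (brightness shifts all cells and the mean together), while the unrelated image lands far away, which is why such fingerprints work as a coarse first-pass filter before visual inspection.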