

System designed to detect manipulated images in scientific articles

Artificial intelligence can both generate fake images and help to identify them

The title of this report, “When the evidence is a mirage,” as interpreted by an AI image generator (image: DALL-E)

A consortium of researchers from Brazil, the USA, and Italy is developing a set of computational tools to automatically detect when images in scientific articles have been tampered with or duplicated, a common type of misconduct that today is largely only identifiable by an experienced human eye. In a paper published in the Nature group journal Scientific Reports at the end of October, the team presented a preliminary assessment of SILA (Scientific Image Analysis). The software, designed to support academic journal reviewers and editors, processes articles in PDF format, extracts the images automatically—also making use of any high-resolution copies made available by the authors or publications—and then uses AI algorithms trained to identify manipulations.
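SILA’s code is not reproduced in this article, but the first stage of such a pipeline, pulling the embedded figures out of an article PDF so they can be analyzed, can be sketched in a few lines. The example below is a minimal illustration using the PyMuPDF library; the file name and output folder are arbitrary placeholders, not part of SILA.

```python
# Minimal sketch of the extraction step in a SILA-like pipeline:
# save every image embedded in an article PDF for later forensic analysis.
# Uses PyMuPDF (imported as "fitz"); paths are illustrative only.
import pathlib
import fitz  # PyMuPDF

def extract_images(pdf_path: str, out_dir: str = "extracted_figures") -> list[str]:
    """Save each embedded image in the PDF and return the saved file paths."""
    out = pathlib.Path(out_dir)
    out.mkdir(exist_ok=True)
    saved = []
    doc = fitz.open(pdf_path)
    for page_number, page in enumerate(doc, start=1):
        for img_index, img in enumerate(page.get_images(full=True), start=1):
            xref = img[0]                    # reference to the embedded image object
            info = doc.extract_image(xref)   # raw bytes plus metadata (extension, etc.)
            path = out / f"p{page_number}_img{img_index}.{info['ext']}"
            path.write_bytes(info["image"])
            saved.append(str(path))
    return saved

if __name__ == "__main__":
    for figure in extract_images("article.pdf"):
        print(figure)  # each extracted figure is now ready for analysis
```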


The objective is to identify evidence of “post-processing,” such as signs that parts of an image have been cloned or moved or that it resembles previously published images, as well as tracing each image’s origins and presenting them in graph form. SILA follows a cooperative model that combines artificial and human intelligence: even when the AI highlights something suspicious, the result is only indicative, and a human expert must then verify the evidence and confirm whether or not tampering has occurred. The assessment was based on content collected from 988 retracted scientific articles that involved manipulation or reuse of images. “The tool is based on principles of advanced image processing, forensic techniques, computer vision, and artificial intelligence, providing analyses that help human specialists decide if the potential issues it discovers are legitimate or not,” says one of the authors, Anderson Rocha, a professor and director of the Institute of Computing at the University of Campinas (UNICAMP), where he heads the Artificial Intelligence Laboratory (Recod.ai).
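The paper’s own detection algorithms are not detailed here, but the general idea behind flagging cloned regions can be illustrated with a much simplified sketch: match an image’s keypoints against themselves and look for near-identical features appearing in clearly different places. The example below uses OpenCV’s ORB features rather than SILA’s actual methods, and the thresholds are arbitrary assumptions; anything it flags would still need confirmation by a human expert.

```python
# Illustrative copy-move (cloning) check, not SILA's actual algorithm:
# near-identical keypoints found at distinct locations in the same image
# may indicate that part of a figure was duplicated. Thresholds are arbitrary.
import cv2
import numpy as np

def find_cloned_regions(image_path: str, min_pixel_distance: float = 40.0,
                        max_descriptor_distance: float = 30.0):
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    orb = cv2.ORB_create(nfeatures=5000)
    keypoints, descriptors = orb.detectAndCompute(img, None)
    if descriptors is None:
        return []

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    # Match the image against itself; k=2 lets us skip the trivial self-match.
    matches = matcher.knnMatch(descriptors, descriptors, k=2)

    suspicious_pairs = []
    for pair in matches:
        if len(pair) < 2:
            continue
        first, second = pair
        match = second if first.queryIdx == first.trainIdx else first
        p1 = np.array(keypoints[match.queryIdx].pt)
        p2 = np.array(keypoints[match.trainIdx].pt)
        # Very similar descriptors at clearly separated positions suggest cloning.
        if (match.distance < max_descriptor_distance
                and np.linalg.norm(p1 - p2) > min_pixel_distance):
            suspicious_pairs.append((tuple(p1), tuple(p2)))
    return suspicious_pairs

pairs = find_cloned_regions("figure_1.png")
print(f"{len(pairs)} suspicious keypoint pairs found")  # a human expert must still confirm
```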

In the assessment described in the paper, the modules that make up the system performed to varying degrees. In content-classification tasks, for example, the tool’s algorithms proved ineffective. In other areas, such as detecting manipulated regions within images and capturing evidence of duplication, it was more sensitive than human observation, although it also generated some false positives. “We are still refining SILA. There is plenty of room for improvement,” explains Rocha.

The next step is to create algorithms that recognize articles produced by paper mills

The algorithms and databases used were made available in open-access repositories—the aim is that other researchers will help test and refine the tool. The consortium is funded by the USA’s Defense Advanced Research Projects Agency (DARPA) and Office of Research Integrity (ORI). In Brazil, it is funded by FAPESP through a thematic project led by Rocha. “We designed the system thinking about the needs of the ORI, who asked us to develop it,” he says. The idea is for the ORI and DARPA to evaluate the toolkit at public institutions and establish partnerships with the publishers of academic journals.

Daniel Moreira, the lead author of the study, did his PhD at UNICAMP under Rocha’s supervision and is a researcher at the University of Notre Dame and Loyola University Chicago. All stages of the study were overseen by Edward Delp, a professor of computer science at Purdue University. Researchers from Italian institutions, including the University of Naples Federico II and the Polytechnic University of Milan, are also part of the consortium. The initiative is part of a broader project seeking to create AI algorithms that detect “synthetic reality” by identifying manipulated images, audio, and video (see Pesquisa FAPESP issue nº 321).

One of SILA’s advantages is that it encompasses the complete cycle of image-analysis tasks, from extraction to analysis, and the plan is for it to be made openly available to all. Various technological solutions of this nature have appeared on the market in recent years, but each addresses specific tasks and is sold by a private company. One is ImageTwin, a program created by a startup from Vienna, Austria, that detects image reuse and duplication in scientific articles. Ten journals edited by the American Association for Cancer Research (AACR) and two journals linked to the American Society for Clinical Investigation (ASCI) have been testing a program developed by Proofig, an Israeli company based in Rehovot, designed to identify images that have been duplicated or that have had parts rotated, inverted, or stretched. The results have been promising. “We are very happy with the results so far,” Daniel Evanko, director of operations for the AACR, told the journal Nature.

Surrealist-style images created by an AI program based on the prompt: “forged image of a western blot test” (image: DALL-E)

Since the advent of photography, images have been used as evidence of scientific results. In some cases, they demonstrate the result directly: a photograph taken by British chemist Rosalind Franklin in 1952 that revealed the structure of DNA was the basis for the resulting article by James Watson and Francis Crick. With the transition from analogue to digital photography, editing software like Photoshop has allowed researchers to easily retouch images, highlighting areas of interest such as the bands in western blots, a method used to identify proteins in molecular biology. “Most of these edits are legitimate. There is nothing wrong with modifying intensity or contrast to make the result easier to visualize,” says Rocha. “But some alterations compromise integrity, and some are clearly intended to mislead the reader. In these cases, we have to amend the article or even ask for it to be retracted.”

These problems are more frequent than often imagined. In 2016, microbiologist Elisabeth Bik manually analyzed over 20,000 biomedical articles and found some form of image manipulation in 4% of them (see Pesquisa FAPESP issue nº 310). Major changes are usually easy to spot, but more sophisticated manipulations can be difficult to identify. One of the biggest challenges faced by editors is recognizing manuscripts produced by paper mills (illegal services that sell scientific papers on demand, often containing falsified data, including fake images). Bik recently identified 400 articles with images so similar that they must have come from the same origin: a paper mill. In some, there were indications that a single standard image had been used for numerous experiments, adapted to the content of each fraudulent document by changing colors or repositioning certain elements.

One obstacle to automatically detecting manipulations is the lack of large image databases, which are needed to determine the origin of reused photos, and of databases of frequent manipulations, which could be used to identify patterns of fraud in manuscripts submitted for publication. Similar resources are readily available for analyzing the text of scientific articles, providing a solid foundation for antiplagiarism software. Computer engineer João Phillipe Cardenuto, who is doing his PhD under Rocha’s supervision, published an article in the journal Science and Engineering Ethics in August last year presenting examples of image manipulation and an open-access algorithm library for reproducing and identifying duplications, edits, and removals. The images in the library are not real; they are reproductions created to simulate the most common types of manipulation, because Cardenuto was not permitted to use the fraudulent images obtained from retracted studies for legal and copyright reasons.
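Cardenuto’s library itself is described in his paper; purely to illustrate the kind of synthetic example it relies on, the sketch below fabricates a simple within-image duplication by copying one patch of a clean figure onto another location. The file names, coordinates, and patch size are arbitrary placeholders.

```python
# Illustration only (not the library's own code): fabricate a simple
# copy-move "forgery" by pasting a patch of an image onto another region,
# the kind of synthetic manipulation used to develop and test detectors.
import numpy as np
from PIL import Image

def simulate_copy_move(src_path: str, dst_path: str,
                       patch_box=(50, 50, 114, 114), paste_at=(200, 200)):
    """Copy the patch at patch_box (left, top, right, bottom) to paste_at (x, y)."""
    img = np.array(Image.open(src_path).convert("RGB"))
    left, top, right, bottom = patch_box
    patch = img[top:bottom, left:right].copy()
    x, y = paste_at  # assumes the pasted patch fits inside the image
    img[y:y + patch.shape[0], x:x + patch.shape[1]] = patch
    Image.fromarray(img).save(dst_path)

simulate_copy_move("clean_blot.png", "forged_blot.png")
```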

“Manipulated image of a cell, in minimalist style,” as interpreted by DALL-E

Now, he is working on identifying the characteristics of manipulated images in manuscripts generated by paper mills and training algorithms to find them in the academic literature. Automated tools for detecting this type of article already exist, such as Papermill Alarm, developed by UK-based academic data services company Clear Skies. The software uses deep learning to analyze article titles and abstracts and compare the language against works known to be produced by illegal services. It does not indicate manuscripts as definitely fabricated, but flags them as deserving of further investigation before publication. “Since the content seems to make sense, there is always doubt as to whether they were really manufactured. If we can add further evidence related to the images, it will be easier to know for sure,” says Cardenuto.
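Papermill Alarm’s deep-learning models are proprietary, but the underlying idea of comparing a manuscript’s language against texts known to come from paper mills can be sketched with a much simpler technique, TF-IDF similarity. In the example below, the reference abstracts, the threshold, and the decision to flag rather than accuse are all illustrative assumptions.

```python
# Simplified illustration of the idea behind tools like Papermill Alarm
# (which actually uses deep learning): compare a new abstract's wording
# against abstracts already known to come from paper mills.
# The reference texts and the threshold are arbitrary placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

KNOWN_MILL_ABSTRACTS = [
    "LncRNA XYZ promotes proliferation and invasion of cancer cells via miR-000.",
    "Silencing of gene ABC inhibits migration of tumor cells through pathway DEF.",
]

def papermill_similarity(abstract: str, references=KNOWN_MILL_ABSTRACTS) -> float:
    """Return the highest cosine similarity between the abstract and known paper-mill texts."""
    vectorizer = TfidfVectorizer(stop_words="english")
    matrix = vectorizer.fit_transform(list(references) + [abstract])
    scores = cosine_similarity(matrix[-1], matrix[:-1])
    return float(scores.max())

score = papermill_similarity("LncRNA QRS promotes invasion of cancer cells via miR-111.")
if score > 0.5:  # arbitrary threshold; a flag means "look closer," not "fraud"
    print(f"Flag for human review (similarity {score:.2f})")
```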

The future is likely to be even more complex. The technological boundaries of synthetic image generation are being pushed further than ever by AI. “It can already be difficult to tell whether a photo contains elements from another, but soon we will have to create new parameters just to confirm fraud in images that are completely fabricated,” says Rocha. There is still no evidence that this is happening in academic articles, but the technology is a palpable reality. Programs like OpenAI’s DALL-E 2 are capable of producing images about any topic based on prompts presented in the form of text. “Artificial intelligence might help detect duplicated data in research, but it can also be used to generate fake data,” Elisabeth Bik wrote in The New York Times in October. “It is easy nowadays to produce fabricated photos or videos of events that never happened, and AI-generated images might have already started to poison the scientific literature. As AI technology develops, it will become significantly more difficult to distinguish fake from real.”

It is not impossible to identify articles whose text was generated by AI programs like ChatGPT. “There is a premise in forensic science that any intervention leaves a trace,” says Anderson Rocha. “An AI algorithm would be able to identify a paper generated by another AI algorithm, but it will likely need to be continually improved upon to keep up with the increasing sophistication of the offending program. It will be like a game of cat and mouse to see who is more advanced: us or the fraudsters.”

Projects
1. Déjà vu: Coherence of the time, space, and characteristics of heterogeneous data for integrity analysis and interpretation (nº 17/12646-3); Grant Mechanism Thematic Project; Principal Investigator Anderson Rocha; Investment R$1,912,168.25.
2. Filtering and provenance analysis (nº 20/02211-2); Grant Mechanism Doctoral (PhD) Fellowship; Beneficiary João Phillipe Cardenuto; Supervisor Anderson Rocha; Investment R$130,935.02.
