
Good practices

Copy, paste, and embarrass yourself

Website lists almost a hundred scientific articles that contain phrases copied from ChatGPT responses


The website Retraction Watch, which maintains a database of thousands of scientific articles retracted due to errors or misconduct, has begun compiling a list of papers, published in dozens of academic journals, that were produced with the help of generative AI without their authors openly declaring the use of such software, as the journals require. The list is updated whenever new questionable papers emerge and featured nearly 100 articles in mid-June. The papers demonstrate gross negligence both by the authors, whose use of the tool was exposed by the fact that they simply copied and pasted ChatGPT’s responses, and by the reviewers and editors responsible for assessing the quality and robustness of manuscripts, suggesting improvements, and recommending them for publication or rejecting them.

One case that gained notoriety on social media was an article on the performance of lithium batteries, published by Chinese scientists in the Elsevier journal Surfaces and Interfaces in March. The first line of the introduction contains a phrase typical of the standardized language ChatGPT uses when interacting with users: “Certainly, here is a possible introduction for your topic.” The anomaly prompted an in-depth analysis of the entire paper, which soon revealed several other problems, such as images copied from another paper by the same group. The article was subsequently retracted.

ChatGPT’s repertoire of stock phrases is varied, and other authors have also failed to spot them. A 2022 article on sustainable urban design in slums included a justification given by the AI program for not analyzing recently published literature. “As an AI language model, I don’t have real-time access to the internet or the ability to browse recent studies,” noted the paper, written by a Tunisian researcher and published in the International Journal of Advances Engineering and Civil Research, a title published by an Egyptian engineering institute.

The same journal also published an article by an Algerian researcher on the use of the Internet of Things in civil engineering, which also bore signs of AI use. In the middle of the paper was the warning “knowledge limited to before September 2021,” which ChatGPT typically issues to inform users of the cutoff date of its training data. Another variation of the warning, “The last time my knowledge base was updated was in 2023,” appeared in a literature review on graphene applications in the oil & gas industry, published by two Kuwaiti researchers in the Elsevier journal Geoenergy Science and Engineering in 2023.

Almost two-thirds of the papers on the list made the same mistake, reproducing two words that make no sense in the context of the articles but give away the use of generative AI: “regenerate response.” The phrase labels a clickable button displayed alongside every ChatGPT answer. The expression was included in an article on the acceptance of distance learning among undergraduate computer science students, published in the Journal of Researcher and Lecturer of Engineering in 2023 by researchers from Indonesia and Malaysia. The case had no consequences: the two words simply disappeared from the text on the journal’s website, in a process known in editorial circles as stealth revision, in which content is quietly corrected without a published correction notice.

The same thing occurred in the journal PLOS ONE, but the outcome was very different. An article on the effects of hybrid learning (in-person and online) on the motivation of Pakistani students also included ChatGPT’s “regenerate response” phrase and was retracted after the journal’s integrity team carried out an in-depth analysis. Written by Pakistani researchers affiliated with Chinese universities, the paper also had problems with its bibliographical references: the existence or content of 18 of them could not be verified, likely because AI models sometimes invent references. Another problem was that the documents certifying that the experiments, conducted in Pakistan, had received ethical approval from the Pakistani authorities were dated after the participants had been recruited, a sign that the rules were not followed. The authors acknowledged using only the Grammarly platform, which applies AI to check language.

These articles were only published, however, because reviewers and journal editors were negligent in evaluating the manuscripts. A recent case demonstrated that even journals committed to scientific integrity do not deal with this problem uniformly. Last year, Jacqueline Ewart, a professor of communication at Griffith University in Queensland, Australia, was asked by the Journal of Radio and Audio Media to review an article about community radio stations. She recommended that the manuscript be rejected, believing it had been written with the help of AI. Ewart was unable to verify the existence of several of the bibliographical references and was certain that at least one had been invented, because it cited her as the author of a study she had never written.

In April, she was surprised to see the paper had been published in another journal, World of Media, issued by Moscow State University in Russia. She told Retraction Watch that the authors had made a single change from the version she reviewed, swapping the word “progression” for “development” in the title, even though they had been warned about the problem with the references. Ewart informed the editors of World of Media, who opened an investigation. One of the authors, Amit Verma of Manipal University, India, claimed that he used AI tools only to review the English and that the unverifiable references came from Indian institutional repositories, whose indexing is known to be poor. He did not attempt to explain the origin of the cited article that Ewart never wrote. The paper is no longer available on the World of Media website, replaced by a notice that it is under investigation. Verma told Retraction Watch that the journal had promised to republish a corrected version.
