The Cochrane Library, a collection of databases based in the UK, has released a new set of guidelines to help researchers identify scientific articles containing errors, biases, or evidence of fraud. The aim is to prevent problematic articles from compromising the credibility of the library’s 7,500 reviews of the scientific literature on health topics, which are designed to provide doctors and nurses with high-quality information.
The specialists in charge of compiling and comparing scientific evidence from different papers and clinical trials now have formal guidelines on what to do if an article included in their review is retracted (depending on the scale of the problem, the review may also be removed and rewritten) or is subject to a published “expression of concern,” a sign that it may contain errors and is being reevaluated. In the latter case, a note must be attached to the review informing the reader of the concern and warning that the review may later be updated.
The task becomes more complex and challenging when there are reasons to doubt the reliability of a result but there is no formal comment or expression of concern about the paper. The recommendation, in this case, is to contact the authors for clarification, using neutral language and without making any accusations of misconduct. The editor of the journal in which the paper was published should also be contacted.
There are other actions that can be taken. One suggestion is to use checklists to determine whether a study was carried out properly. One example is the REAPPRAISED checklist for evaluating publication integrity, which comprises 58 items already used by journal editors to assess manuscripts. The name is an acronym for the 11 areas of review: Research governance, Ethics, Authorship, Productivity, Plagiarism, Research conduct, Analyses and methods, Image manipulation, Statistics and data, Errors, and Data duplication and reporting. The checklist covers formal aspects, such as confirming that a study was approved by an ethics committee and that a contributorship statement is presented for each author. It also recommends examining the consistency of the presented data. In the case of a clinical trial, for example: is the number of participants plausible within the stated timeframe for recruiting them? Are the results statistically significant? Are there any discrepancies between percentages and absolute values? Is the volume of work involved in the study plausible given the size of the research group?
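Some of these consistency questions can be partly automated. The sketch below is not part of REAPPRAISED itself; it is a minimal illustration, with hypothetical function names, thresholds, and figures, of two such checks: whether a reported percentage agrees with the underlying counts, and whether a claimed recruitment rate is plausible.

```python
# Minimal sketch of two data-consistency checks of the kind the checklist
# describes. Function names, thresholds, and figures are illustrative only.

def percent_matches_counts(count: int, total: int,
                           reported_percent: float,
                           tolerance: float = 0.5) -> bool:
    """Does a reported percentage agree with the stated absolute values?"""
    actual_percent = 100.0 * count / total
    return abs(actual_percent - reported_percent) <= tolerance


def recruitment_is_plausible(participants: int, recruitment_days: int,
                             max_per_day: float = 5.0) -> bool:
    """Is the claimed recruitment rate plausible? The ceiling of five
    participants per day is an arbitrary value chosen for the example."""
    return participants / recruitment_days <= max_per_day


# 73 of 536 trials reported as "14%": consistent (73/536 is about 13.6%)
print(percent_matches_counts(73, 536, 14.0))   # True
# 536 participants said to be recruited in 30 days: about 17.9 per day
print(recruitment_is_plausible(536, 30))       # False
```

Automated checks of this kind can only flag candidates for the manual scrutiny the checklist ultimately calls for.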
In an article about the changes, three Cochrane Library editors highlighted the difficulties of making a fair assessment. One is how exactly to define a “problematic” study, a judgment that involves an inherent degree of subjectivity. “For the purposes of the Cochrane policy, we have defined a problematic study as ‘any published or unpublished study where there are serious questions about the trustworthiness of the data or findings, regardless of whether the study has been formally retracted,’ but we know that such terms mean different things to different individuals, denoting greater or lesser degrees of ‘seriousness,’” wrote Stephanie Boughton and Lisa Bero, research integrity editors at the library, and Jack Wilkinson, statistics editor.
Another problem is that current assessment strategies have not been thoroughly tested and validated—the editors stress that more studies are needed on their effectiveness at identifying errors and biases. “Use of unvalidated methods risks over‐ or under‐detection of problematic studies. Caution is needed as misclassification of a genuine study as problematic could result in erroneous review conclusions and reputational damage to authors.”
In an opinion piece published in The British Medical Journal (BMJ) in July, Richard Smith, former editor of the journal and professor emeritus at the University of Warwick, UK, commented on the Cochrane Library’s initiative and pointed out a fundamental problem: the existence, in the scientific literature, of reports on clinical trials that were never carried out. He cited a study by anesthetist John Carlisle of the UK’s National Health Service that analyzed 536 clinical trials submitted for publication in the journal Anaesthesia between 2017 and 2020. The results were published in Anaesthesia itself, a prestigious journal linked to a professional association in the UK and Ireland that wanted to identify flaws in its review process. This may be unsurprising given that it has in the past published a number of fraudulent papers by two scientists who now rank among the researchers with the most retracted articles in the world: Yoshitaka Fujii from Japan and Joachim Boldt from Germany (see Pesquisa FAPESP issue no. 272). Carlisle’s study found that 73 trials (14% of the total) contained manipulated data and 43 (8%) were “zombies,” meaning their results were completely fabricated. Most of the fraudulent cases involved trials carried out in Egypt, China, India, Iran, Japan, South Korea, and Turkey.
Without access to the primary data, authors of literature reviews can be misled. In his article, Smith also talked about the case of Ian Roberts, a colleague from the London School of Hygiene and Tropical Medicine, who wrote a literature review on a brain injury treatment that he later discovered was based on trials whose data was probably fabricated.
Smith, who was a member of the Cochrane Library’s oversight committee for many years, sees the new policy as a paradigm shift that, in his opinion, should guide the review of clinical trials. “The time may have come to stop assuming that research actually happened and is honestly reported, and assume that the research is fraudulent until there is some evidence to support it having happened and been honestly reported,” he says.