
Good practices

Integrity in the midst of a public health emergency

Study examines recommendations for ensuring the quality of peer reviews during the pandemic


A study published in the journal Nature Human Behaviour by researchers from Spain, Denmark, and Canada showed that scientific journals took an average of only six days to review and accept articles on COVID-19 in the first 12 weeks of the pandemic, part of an unprecedented effort to quickly publish results that could help mitigate the effects of the health crisis. Before the pandemic, this review process, which involves analysis by editors and researchers with specialist knowledge of the subject in question, took an average of 100 days. The research group looked at papers added to the PubMed database between January 30 and April 23, finding that an average of 367 articles on the disease were published in journals each week.

Led by hepatologist Jeffrey Lazarus, from the Institute for Global Health at the University of Barcelona, the study highlights the damage that such a fast review process can do to the credibility of science when articles containing errors or fraud are inadvertently published. The Retraction Watch website says 30 COVID-19 studies have been retracted by scientific journals or removed from preprint repositories, a small fraction of the more than 40,000 published, but one with the potential to cause significant harm. To reduce the risks and potential damage, Lazarus and his colleagues made a series of recommendations aimed at researchers, journal editors, and the authorities. One suggestion is that editors use checklists to assess the robustness of a study’s methodology and statistical analysis, and to verify whether the results make sense and are compatible with the original proposal. Such checklists are not a novel idea, but the authors found evidence that they are often not being properly applied in the fast-track review processes adopted during the pandemic.

The STROBE initiative (Strengthening the Reporting of Observational Studies in Epidemiology) should be followed when possible, according to Lazarus’s group. Created in 2009 by the University of Bern, Switzerland, it provides a list of information that should be reported in epidemiological studies to give reviewers confidence in the quality of the results, such as participant selection criteria, a description of the statistical methods, and the efforts made to prevent bias. Another example is CONSORT (Consolidated Standards of Reporting Trials), designed to guide the reporting of clinical trial results.

In April, the European Association of Science Editors released a public statement on ensuring proper care is taken when reviewing articles related to the pandemic. One of its suggestions is that papers include a statement from the authors declaring the limitations of their findings—when they are based on computational models rather than on studies of living beings, for example, or when they are supported by a small number of patients. It also recommends providing access to the raw data behind the study.

The study by Lazarus and his team also highlights the reviewer selection process as a point of vulnerability: those responsible for analyzing manuscripts must be prepared to perform a quick but rigorous review. The good news is that there are tools that have proven effective at helping reviewers with little experience, such as COBPeer, which uses checklists formulated by the CONSORT initiative. “The challenge of disseminating a large volume of research during a global health crisis must be recognized as a call for innovative thinking and solutions that guarantee continued confidence in the scientific publication process,” wrote Lazarus and his colleagues. “The lessons learned will help to enrich scientific communication in the coming years.”

The group also suggests investments in curating scientific information on COVID-19, listing a number of approaches that should be encouraged, such as large databases of articles on the novel coronavirus. One such example is LitCovid, created by the United States National Library of Medicine, which holds more than 30,000 published papers and can be searched by category (case studies, scenarios, prevention, etc.) and by country mentioned. Another is the novel coronavirus database run by the World Health Organization (WHO), which has more than 40,000 articles. There are also initiatives endeavoring to evaluate and summarize our accumulated knowledge of the disease. The Cochrane Library, for example, has created a COVID-19 section with analyses and reviews of published articles, while the Johns Hopkins Bloomberg School of Public Health brought together a team of 40 experts to perform an in-depth analysis of studies that show promising results or receive a lot of attention from the press and on social media.

Other initiatives are in development. The MIT Press has announced a new scientific journal called Rapid Reviews: COVID-19, dedicated to reviewing preprints on the novel coronavirus and highlighting important studies, as well as pointing out those with errors or biases. Preprints are papers that have not yet undergone peer review, but whose preliminary results are shared on public repositories for analysis and critique by other scientists—during the pandemic, thousands of preprints on the novel coronavirus have been shared online. “Preprints have been a tremendous boon for scientific communication, but they come with some dangers, as we’ve seen with some that have been based on faulty methods,” Nick Lindsay, director of journals at the MIT Press, told the STAT News website. The new journal will use an artificial intelligence system developed at the Lawrence Berkeley National Laboratory to categorize preprints by discipline and degree of novelty.
