
Good practices

Preventing further crises

Report proposes changing research practices to reduce the publication of studies whose results have not been replicated


The Royal Netherlands Academy of Arts and Sciences (KNAW) has released a document proposing changes to research practices to address what has been called the “reproducibility crisis”: a succession of scientific papers have fallen into disrepute because their results could not be confirmed in subsequent experiments. The January report, entitled Replication Studies—Improving Reproducibility in the Empirical Sciences, makes recommendations that aim to improve the rigor of scientific research and support researchers interested in verifying results obtained in previous studies. One proposal is to offer dedicated funding for replication studies, following an example set by the Netherlands Organization for Scientific Research (NWO), which last year allocated €3 million to a pilot program for projects of this kind. The report also recommends putting greater emphasis on training scientists and students in areas such as research design and statistical analysis, and encouraging scientific journals to publish studies that obtained null results or could not confirm the hypotheses they tested.

“Scientific knowledge can only grow if researchers can trust the results of earlier studies,” José van Dijck, a media and culture researcher at Utrecht University and president of the academy, wrote in the preface to the report. According to the academy, producing reliable data is essential to avoiding wasted research resources and ensuring public confidence in science. “The report concludes that replication studies should be conducted more frequently and systematically, and that this requires a joint effort from funding agencies, scientific journals, institutions, and researchers,” said Van Dijck.

The picture of the “reproducibility crisis” presented in the report illustrates the scale of the problem. In the search for new cancer drugs, pharmaceutical company Amgen tried to confirm the findings of 53 published preclinical studies that appeared to show great potential; it was able to corroborate only 11% of the results. Bayer made a similar effort to validate data on potential drug targets obtained by 67 research projects and succeeded in only 25% of cases. An international collaboration formed to scrutinize studies in experimental psychology, a field that has been hit by a number of scandals involving data manipulation and fraud, was able to substantiate the results of only 36 of the 100 articles it evaluated. At the end of last year, the US National Academies of Sciences, Engineering, and Medicine created a 15-member committee to study strategies for preventing the publication of unconfirmed studies; its conclusions are expected in 2019.

Although the crisis is well-known in medicine, the life sciences, and psychology, the report suggests that other fields should investigate the extent of the problem within their own communities. “When we look at the existing analyses of what causes these reproducibility problems, it’s quite clear that the same causes must occur elsewhere,” Johan Mackenbach, a public health researcher at the Erasmus Medical Center in Rotterdam and head of the panel that produced the report, told Science magazine. The generic causes highlighted by the document include the pressure on researchers to publish novel or high-impact findings as quickly as possible, so as not to put themselves at a disadvantage in the competition for funding and jobs.

The report lists 20 distinct reasons why a researcher may arrive at non-reproducible results. Most are methodological, such as failures in bias control, conclusions drawn from small samples, or a lack of rigor in statistical analysis. Also at the root of the problem are issues in how results are reported, such as selecting only the data that support the research hypothesis, omitting negative results, and rewriting the original proposal to fit the conclusions.
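The small-sample problem, in particular, is easy to demonstrate. The sketch below, which is purely illustrative and not drawn from the KNAW report, simulates thousands of underpowered two-group experiments, “publishes” only the statistically significant ones, and then attempts an independent replication of each. The sample size, effect size, and the assumption that half the hypotheses are true nulls are all hypothetical parameters chosen for illustration.

```python
# Illustrative simulation (not from the KNAW report): underpowered studies
# plus a publish-only-if-significant filter produce a literature in which
# most "positive" findings fail to replicate. All parameters are assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

N_STUDIES = 5000     # hypothetical original studies
N_PER_GROUP = 15     # small sample per group (assumed)
TRUE_EFFECT = 0.3    # modest true effect, present in half the studies (assumed)
ALPHA = 0.05         # conventional significance threshold

def run_study(effect: float, n: int) -> float:
    """Simulate one two-group experiment and return the t-test p-value."""
    control = rng.normal(0.0, 1.0, n)
    treatment = rng.normal(effect, 1.0, n)
    return stats.ttest_ind(treatment, control).pvalue

published = replicated = 0
for _ in range(N_STUDIES):
    # Half the hypotheses are true nulls, half carry a real but small effect.
    effect = TRUE_EFFECT if rng.random() < 0.5 else 0.0
    if run_study(effect, N_PER_GROUP) < ALPHA:       # original gets "published"
        published += 1
        if run_study(effect, N_PER_GROUP) < ALPHA:   # independent replication
            replicated += 1

print(f"Published originals: {published} of {N_STUDIES}")
print(f"Replication rate among them: {replicated / published:.0%}")
```

With these assumed parameters, each study has only roughly 13% power to detect the true effect, so even genuine findings rarely survive both the original test and the replication, and the replication rate among published results lands in the same low range as the industry figures cited above. Increasing N_PER_GROUP in the sketch is the simplest way to watch the replication rate climb.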

Fraud is the most extreme way of generating invalid results, but routine aspects of scientific work can also threaten reproducibility, such as unexpected human or technical errors, or undetected changes in sample conditions. Not all unconfirmed studies are wrong: in some cases results cannot be replicated because the original researchers failed to provide essential details of the experiment. To avoid such situations, the report recommends that journals and funding agencies require researchers to deposit their raw data and methodologies in public repositories.

Practical proposals to prevent non-reproducibility include asking researchers to preregister their hypothesis, research protocol, and analysis plan before a study begins, a precaution already required by some funding agencies. In a joint initiative involving the UK’s Royal Society, several journals systematically publish registered reports, a form of journal article in which the methods and proposed analyses are preregistered and peer-reviewed before the research is conducted. The journal then commits to publishing the results, even if they are null.

The KNAW report is explicit about the importance of publishing studies with null results. It suggests that funding agencies should encourage researchers to report such findings and journals to publish them. “Rather than reward researchers mainly for ‘high-impact’ publications, ‘innovative’ studies and inflated claims, institutions, funding agencies, and journals should also offer them incentives for conducting rigorous studies and producing reproducible research results,” the document states.
