
GOOD PRACTICES

Double-checking

Initiative to fund extra verification step for biomedical research results aims to prevent publication of studies that undermine trust in science


A pilot program by the National Institutes of Health (NIH), the USA's leading funding agency for biomedical research, is giving scientists working on studies funded by the institution the opportunity to contract an independent team to verify the validity of their research results. Leaders of projects with a "potentially major impact on health" were invited to participate in the Replication to Enhance Research Impact Initiative, which offered each selected study up to US$50,000 to pay for the services of a contract research organization (CRO). A CRO is a company that provides specialized research services, such as managing clinical trials, to the pharmaceutical, biotechnology, and medical device industries. According to a report by American consultancy Frost & Sullivan, more than one thousand CROs were operating worldwide in 2020, in a market worth US$45.8 billion per year.

The CROs will be paid directly by the NIH and will be responsible for double-checking the results and conclusions of the chosen studies using the same reagents, methods, and protocols adopted in the original experiments, then reporting whether the conclusions hold and whether they were able to achieve the same results. The call for proposals was open until November 2024 and the selected projects will be announced in February. Interest was modest: only 32 people signed up for the webinars announcing the terms of the initiative, out of more than 30,000 researchers funded by the agency. Even so, the organizers hope to select at least six projects, considered a sufficient number to assess the pilot program's design and to identify problems in studies that would otherwise only be detected when other scientists tried to reproduce them.

The initiative is a response by the NIH to demands from the US Congress that the agency invest more broadly in replication studies. The aim is to avoid a repeat of recent episodes in which research with promising results on diseases such as cancer and Alzheimer's fell into disrepute after subsequent studies were unable to replicate the findings due to errors or fraud in the original work. The House of Representatives recommended an outlay of US$50 million in the 2024 fiscal year, while the Senate suggested US$10 million. The final bill establishing the agency's budget, passed early last year, did not set a specific amount but gave the NIH 180 days to implement the measures.

The program is expected to offer several benefits, according to the rationale presented by the NIH. Above all, it should save time and money for other groups interested in building on the research results, as well as validate and accelerate the adoption of new tools and technologies and increase public trust in science. The scientists behind the original work also have the opportunity to receive additional data that could help them better understand the scope of their findings and develop applications based on them. The initiative will fund efforts to reproduce or validate preclinical trials or studies seeking therapeutic applications for knowledge generated by basic science. Because the CROs have only 12 months to complete the verification process, research involving human subjects, which is more complex and costly, is not covered by the pilot program.

The NIH does not plan to release the results of the replication attempts, citing the need to protect intellectual property, since some of the studies have not yet been published. The original researchers can, however, choose to release them if they wish. Douglas Sheeley, deputy director of the agency, told the journal Science that the goal is to build a relationship of trust with scientists. “Right now, we’re focused on just making sure that we do the pilot well and learn as much as we can from it,” he said.

Sheeley points out that there are multiple reasons why an experiment may succeed for one researcher but not for another, such as differences in the specifications and quality of materials, laboratory infrastructure and conditions, errors and inaccuracies in data collection and recording, and even the way sensitive equipment is handled. Such problems, he notes, are very different from those caused by ill intent or misconduct. "It can be hard to predict where challenges with replication will pop up, and there remains much to learn about how to enhance research reproducibility." In recent years, the agency has adopted several new procedures to improve reproducibility, including a policy for managing and sharing research data so that it can be easily reused, and a requirement for more rigorous and transparent experimental protocols and designs.

The obstacles are not easy to overcome, as explained to Science by Sean Morrison, a biologist who specializes in stem cell research at the University of Texas Southwestern Medical Center. Morrison took part in the "Reproducibility Project: Cancer Biology," which ran from 2014 to 2021 and aimed to replicate oncology studies with promising results. Funded by the Laura and John Arnold Foundation, the project, a collaboration between the Center for Open Science and the CRO network Science Exchange, faced a series of setbacks. Only 23 of the 50 initially selected studies could be verified: poor cooperation from the authors and a lack of detail about experimental protocols made it impossible to check some of the work. In the new NIH program, the laboratories responsible for replicating the studies will maintain close contact with the original researchers, who will commit to accurately reporting all methods, protocols, and materials.

Among the cancer biology studies that could actually be repeated, fewer than half (46%) produced results consistent with the originals, and only five fully confirmed the findings. Morrison warns that some of the ambiguous or unclear results obtained in the replication attempts may stem from the contracted laboratories lacking the infrastructure and capacity to accurately reproduce the original research conditions. "They do not have the same expertise as academic laboratories, especially when it comes to advanced or specialized techniques," he explained.
