A study published on the PsyArXiv platform shows that the proportion of studies that fail to confirm their hypotheses is greater than previously estimated, and suggests that ignoring or omitting these results can overemphasize positive findings, leading to bias. Psychologists Chris Allen and David Mehler, of Cardiff University, UK, have for the first time evaluated a practice that has been gaining ground in the push for greater transparency in the scientific process: the publication of so-called registered reports, in which research methods and analysis plans are peer-reviewed before the study begins. The journals that publish these papers commit to doing so even if the results are null or inconclusive, allowing scientists to compare the research objectives with the outcome.
The researchers analyzed 113 such reports in the biomedical and psychological sciences and found that 61% of the 296 hypotheses tested were not confirmed by the results. According to the two authors, the proportion generally cited in the existing literature is much lower, between 5% and 20%. There are now about 140 journals that publish papers with preregistered research objectives and release the results whether they are positive or not. Many of these journals document the protocols used in clinical trials of new drugs and therapies, which are required by US law to be registered before the research is conducted. Officially registering the objectives of a scientific experiment can help prevent misconduct, such as deceitfully adapting hypotheses to fit the results, and keeps null results from being forgotten.
Failing to publish unexpected results is not in itself considered a dishonest practice; authors often find it difficult to locate journals willing to publish findings of little impact. “Registered reports not only guard against questionable practices, but can also increase the chance of publication, as they offer a path to publication irrespective of null findings,” Allen and Mehler wrote in their study, which was published as a preprint. The two authors now want to broaden their research on registered reports by examining a larger number of protocols. Although the omission of null data is not considered misconduct, overemphasis or selective publication of positive results can make scientific findings less trustworthy and increases the number of studies whose results, despite appearing promising, cannot be reproduced by other researchers.
An article published in the journal Psychological Medicine in November described some of the main aspects of this problem. The authors, from the universities of Groningen in the Netherlands and Bristol, UK, analyzed 105 antidepressant clinical trials registered with the US Food and Drug Administration (FDA). Of this total, 53 reached conclusions considered positive by the FDA, while 52 led to null or inconclusive results. Despite this nearly even split, 90% of the trials with positive results were published in scientific journals, compared to just 48% of those with null results. “Even thorough reviews of the literature would find that nearly all studies were positive, and those that were negative were ignored,” American pediatrician Aaron Carroll, a professor at Indiana University’s School of Medicine, wrote in an opinion piece published in The New York Times in September. Other problems were also identified in some of the published studies with null results, such as the omission of unfavorable data and alteration of the original hypothesis to match the outcome. In some cases, secondary results with no statistical significance were presented as positive, and unfounded trends in the data were highlighted.
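The arithmetic behind this skew can be sketched directly from the percentages reported above. The published counts below are an assumption obtained by rounding (90% of 53 positive trials, 48% of 52 null trials); they illustrate how a roughly even body of evidence can look predominantly positive in the published literature.

```python
# Trial counts reported for the 105 antidepressant trials registered with the FDA
positive_trials = 53
null_trials = 52

# Assumed published counts, derived by rounding the reported publication rates
published_positive = round(0.90 * positive_trials)  # 90% of positive trials -> 48
published_null = round(0.48 * null_trials)          # 48% of null trials -> 25

total_published = published_positive + published_null
share_positive = published_positive / total_published

print(published_positive, published_null)  # 48 25
print(f"{share_positive:.0%}")             # 66% -> about two-thirds of published trials appear positive
```

So even before citation biases compound the effect, a reader of the published record alone would see roughly two positive trials for every null one, rather than the near 50/50 split actually registered.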
Selective publication of favorable findings can compromise attempts to reproduce research
Registered reports are also used in reproducibility studies, a type of experiment performed solely to verify the robustness of findings obtained in previously published research. One such study won the inaugural preclinical network data prize, created by the European College of Neuropsychopharmacology (ECNP) to recognize the contribution of research that has produced negative results. Laura Luyten and Tom Beckers, researchers at KU Leuven in Belgium, were awarded the €10,000 prize at an ECNP conference held in Barcelona in October. In an article published in the journal Neurobiology of Learning and Memory last year, they showed that a type of behavioral training performed on rats to attenuate fear memories was based on flawed parameters and inflated data. The training procedure, proposed in 2009 by a group led by psychologist Marie Monfils, of the University of Texas at Austin, appeared to have potential for treating anxiety in humans, but no researcher since has been able to achieve the same results. The Belgian researchers attempted, without success, to reproduce three key experiments conducted by Monfils using the same original procedures, finding no significant differences between the responses of animals submitted to the training and those in the control group.
Thomas Steckler, a researcher at pharmaceutical company Janssen, which is a member of the ECNP, argues that null results are very important. “Science is historically self-correcting. This process is most effective when both positive and negative results are published,” he said, according to the ECNP website. “Unpublished data is effectively a waste of valuable real and human capital, particularly in the face of the reproducibility challenge currently discussed in various fields of science. Over 50% of published biomedical data cannot be reproduced.”
ALLEN, C. and MEHLER, D. Open Science challenges, benefits and tips in early career and beyond. PsyArXiv preprints. Online. Nov. 12, 2018.
VRIES, Y. A. et al. The cumulative effect of reporting and citation biases on the apparent efficacy of treatments: The case of depression. Psychological Medicine. Vol. 48, no. 9, pp. 1552–69. Nov. 2018.
LUYTEN, L. and BECKERS, T. A preregistered, direct replication attempt of the retrieval-extinction effect in cued fear conditioning in rats. Neurobiology of Learning and Memory. Vol. 144, pp. 208–15. Oct. 2017.