
Good practices

High sensitivity

Social psychologists say that controversial research topics may be targeted by covert censorship, calling for more transparency in science


An international group led by social psychology experts published a paper in PNAS, the journal of the US National Academy of Sciences, discussing the harmful consequences of obstructing the communication of controversial or potentially dangerous academic ideas that are based on scientific evidence obtained with methodological rigor. The authors define this practice as “scientific censorship.” In some cases, publishing restrictions may be justified: there is a strong argument, to take two extreme examples, for not publishing results that could pose a concrete threat to life or society, such as data that would facilitate the manipulation of deadly pathogens or descriptions of how to manufacture nuclear weapons.

But these are not the cases of most concern to the authors, who are affiliated with universities in the USA, Canada, Italy, and Australia. They argue that the scientific community and the public are growing increasingly aware of a subtler and more covert type of censorship. The problem can manifest in difficulties publishing well-founded articles that propose eccentric ideas or theories contrary to consensus; in the labeling of a study as “pseudoscience” before it has been duly refuted with counterevidence; or even through mentors and department heads advising junior researchers about the risks of publishing papers on politically sensitive topics or subjects that could offend minority groups. The authors classify the latter as “benevolent censorship,” the aim of which is to protect the careers of colleagues.

“Our analysis suggests that scientific censorship is often driven by scientists, who are primarily motivated by self-protection, benevolence toward peer scholars, and prosocial concerns for the well-being of human social groups,” states the article, published in the Perspectives section of PNAS, which focuses on critical problems in science. The study was led by Cory Clark, a social psychologist at the University of Pennsylvania and director of the Adversarial Collaboration Project, which promotes dialogue and cooperation between researchers with conflicting theoretical and ideological perspectives, with the goal of producing more accurate research results, free of bias and ambiguity. One of the project’s aims is to study and curb confirmation bias: a person’s tendency to lend credence to statements that support their prior beliefs without subjecting them to the same rigor they apply to ideas that differ from their own.

Even the retraction of articles, one of science’s primary self-correction mechanisms, can result in the perception of censorship, according to the authors. One example mentioned by Clark et al. was a 2020 article in the journal Nature Communications that claimed that female students and scientists perform better academically when advised or supervised by male mentors. The study, by researchers at New York University’s Abu Dhabi campus in the United Arab Emirates, was harshly criticized as sexist and was ultimately retracted by the journal due to methodological issues.

According to the retraction note, the conclusions were based on an analysis of senior-junior author pairs, and it is unreasonable to extrapolate the performance of coauthors to the relationship between mentors and their students. The authors of the article accepted the retraction and apologized, while reaffirming the influence that female colleagues have had on their own careers: “Many women have personally been extremely influential in our own careers, and we express our steadfast solidarity with and support of the countless women who have been a driving force in scientific advancement.”

An editorial in Nature Communications addressed the case, reinforcing the journal’s commitment to gender equality in science and announcing a review of publication policies for manuscripts related to certain sensitive topics. “We recognize that it is essential to ensure that such studies are considered from multiple perspectives, including from groups concerned by the findings. We believe that this will help us ensure that the review process takes into account the dimension of potential harm, and that claims are moderated by a consideration of limitations when conclusions have potential policy implications.” For the authors of the PNAS article, the change introduces discretionary policies that go beyond what is needed to preserve research ethics: “They concern possible, unspecified harms that could result from dissemination of findings. Editors are granting themselves vast leeway to censor high-quality research that offends their own moral sensibilities.”

It is true that controversial research findings can trigger reactions from minority or vulnerable groups and create problems for editors, but the claim that robust results are going unpublished demands closer examination; the social psychologists themselves acknowledge that studies on the topic are scarce and that more evidence is needed. Some recent examples show that it is possible to disseminate controversial results rigorously without hurting anyone’s sensibilities. In 2019, a large study on the influence of genetics on human homosexual behavior was published in the journal Science, indicating the existence of thousands of genetic variants common to individuals attracted to people of the same sex (see Pesquisa FAPESP issue nº 284). The results were described carefully to prevent misinterpretation: a press release accompanying the article warned about the limitations of the research, and the authors discussed strategies for presenting the findings with experts in scientific communication and with LGBTQ+ advocacy groups.

The concerns of the social psychology experts may sound overblown, but their recommendations for combating the perception of covert censorship are easier to accept: the key is to drastically improve transparency in scientific processes. One suggestion is to make peer review as open as possible and to give the authors of rejected articles detailed explanations; authors who feel discriminated against or censored would then have a solid basis on which to defend their points of view. The same applies to retracted articles: the more detailed the explanation of why an article was retracted, the less room there is for feelings of injustice. Another proposal is to create audit systems that check whether article-review processes are free from bias. Journals were also advised to create channels through which manuscript authors can evaluate the work of editors and reviewers and report any concerns related to censorship.
