An article published in August in the Taylor & Francis journal Journalism Practice revealed biases in how science reporters select topics for their stories, which may shape the public’s perception of academic research. The aim of the study was to map the procedures a group of 23 journalists used to identify articles published in so-called predatory journals (those that publish low-quality papers in exchange for money) and to understand how these professionals decide whether or not a journal is a reliable source of information.
The conclusion was that most follow strategies better described as conservative than rigorous. Rather than evaluating the robustness of a study or the quality standards of the journal, the journalists sought to minimize their own risk by favoring articles published in established journals they already knew, overlooking work published in newer titles. One participant said he would never encounter predatory journals because “the ones that I consult are pretty well established.” Another explained: “I don’t need to go that far to get information… I don’t look beyond the mainstream journals.”
The study also revealed that journalists can find it difficult to distinguish predatory journals from legitimate open-access ones, expressing skepticism about publications that make their content freely available online and fund themselves by charging author fees. Respondents were also hesitant about journals based in the Global South (which includes developing countries in Latin America, Asia, and Africa), preferring titles from more developed nations.
The qualitative research involved 23 journalists reporting on medicine, science, and the environment in six countries: Switzerland, Denmark, England, Mexico, Canada, and the USA. The authors of the article—Alice Fleerackers of the University of Amsterdam, Netherlands, Juan Pablo Alperin of Simon Fraser University, Canada, and Laura Moorhead of San Francisco State University, USA—selected journalists from Europe and North America only. Most of them were freelancers with 10 or more years of experience in the profession.
The three authors warn that the criteria adopted by the reporters could have negative impacts. One potential issue is the perception that it is perfectly safe to report on content from traditional titles. “These journals publish lots of high-quality, important research that the public should know about. But they also publish problematic studies, like any journal can,” Fleerackers said in an interview with the British periodical Technology Networks.
Another problem is linked to the assumption that journals charging fees to publish content online, such as open-access titles, are at greater risk of being predatory. It is true that dishonest journals often adopt aggressive fee-charging strategies, but many open-access titles follow good editorial practices. As the study notes, this selection bias could deprive readers of research published in high-quality open-access journals. It also ignores the fact that a growing number of journals, including some of the most traditional titles, are migrating to business models that charge authors publication fees rather than charging readers for subscriptions.
The survey participants said that when assessing the reliability of a journal, one factor they consider is the presence of spelling errors, grammatical mistakes, and typos in published articles, on the assumption that high-quality journals carefully edit everything they publish. “But sometimes, the responsibility for proofreading falls on the authors of the study, not the journal editor,” Fleerackers points out. “Since the dominant language of journal publishing remains English, this is a disadvantage for scholars writing in a language that is not their first (or even their second), like many scholars in the Global South.” She argues that reporters need to be trained in “critical research literacy” so that they can base their choices on the quality of the science rather than the prestige of the journal in which it is published. “All of us can be impacted by biases, like reputation and prestige, when making decisions. What’s important is that journalists do not let these biases overrule their ability to make more careful, critical decisions,” she said.
German designer Andreas Siees, who studies science communication at Bonn-Rhein-Sieg University of Applied Sciences (Hochschule Bonn-Rhein-Sieg) in Germany, has sought indicators beyond spelling errors to distinguish predatory journals from reliable ones. In August, he published an article in Scientometrics comparing the visual characteristics of papers published in reputable and predatory journals, with the goal of helping readers, including journalists, tell one from the other. His analysis, which encompassed 443 legitimate open-access publications and 555 predatory ones, assessed metadata, layout elements (typography, white space, page size, and figures), and other visual attributes.
Siees found several differences. The average length of potentially predatory articles was 35,300 characters, just over half the 66,800-character average of legitimate ones. Trustworthy papers also used smaller typefaces and a wider variety of fonts than predatory ones, which generally relied on preinstalled system fonts such as Arial, Times New Roman, Calibri, and Cambria. It is not always easy, however, to spot the suspicious signs with the naked eye, because predatory journals try to mimic the visual identity of established publishers. In the dataset Siees examined, Elsevier’s layout was the most frequently copied, followed by Springer’s. “The principal visual distinctions between predatory and legitimate publications that our study identified lie in subtle design characteristics,” he wrote in his paper.