The weight and influence of predatory journals—publications that publish articles in exchange for money, without assessing their quality—have become a subject of research on scientific integrity. A number of papers published in recent months have highlighted efforts to properly identify these publications and to understand their impact on scientific communication. In September, researchers from the Ottawa Hospital Research Institute in Canada reported on two studies that aimed to establish a universally accepted definition of what constitutes a predatory journal. Both papers, posted on the medRxiv preprint repository, were written by Samantha Cukier of the institute’s Centre for Journalology, a body set up to encourage good publishing practices among its researchers.
One of the studies analyzed 93 different lists of deceitful journals: 53 from online databases, 30 from university websites, and 10 from YouTube videos. It may not seem like it, but reliably classifying these journals is a major challenge. Although some journals appear on all of the lists, only three of the lists were found to have been based on empirical research. The most famous list was compiled by librarian Jeffrey Beall of the University of Colorado, USA, and was based on 54 different criteria, including flawed publication practices, publisher profile, and factors related to ethics and integrity. Even this level of rigor was not enough to save Beall’s work, which listed more than 8,000 publications and publishers and was taken down by the author after he received threats of legal action. Another list of predatory journals, created by American academic analytics firm Cabell’s International and available only to subscribers, uses 65 criteria and gives objective reasons for each publication it includes, in order to avoid legal issues.
The second study by the Canadian group attempted to reach a consensus on what makes a journal predatory. Forty-five researchers attending a conference on predatory publications at the University of Ottawa in April answered a questionnaire about the characteristics of these journals. There was a shared understanding regarding abusive commercial strategies, such as sending persuasive emails encouraging people to submit work, and editorial flaws, such as the absence of an article retraction policy. On some aspects there was broad agreement but no consensus, such as an amateurish look to the journal’s website and a lack of affiliation with the Committee on Publication Ethics (COPE), a UK-based forum of journal editors that provides guidance on good practices.
Difficulty verifying whether publishers really are associated with the institutions they claim to represent is another factor that can help identify predatory journals, according to the respondents. The problem is common. Physician and educator Selcuk Besir Demir, a researcher at Firat University in Turkey, published an article in the Journal of Informetrics in late 2018 in which he studied the characteristics of 735 predatory journals from 52 countries on Jeffrey Beall’s list. Demir found that while most are officially based in countries such as India, the USA, Turkey, and the UK, many lie about their actual location—the Internet Protocol (IP) address of 119 of the journals did not match the country from which they claimed to operate. Demir also investigated the identities of the journal editors and found that although some of them do exist and work at universities, at least 80 were completely fictional. He noted as well that most of the researchers who published in journals on the list were from countries with weaker reputations in academic research, such as Nigeria, Turkey, Botswana, Jordan, Malaysia, Pakistan, and Saudi Arabia.
What impact do predatory publications have on scientific communication? A study by information scientist Richard Anderson, a researcher at the University of Utah, USA, found that the influence is small but should not be overlooked. Anderson evaluated the extent to which articles published in predatory journals are being cited in legitimate scientific papers indexed in international databases. He selected seven predatory biomedical publications and analyzed whether their articles were cited in the Web of Science database, which contains over 90 million documents, in ScienceDirect, which offers access to some 15 million articles, and in the 200,000 articles published in the open-access journal PLOS ONE. According to the author, there is no doubt about the predatory nature of the chosen publications. Four of them—the American Journal of Medical and Biological Research, the International Journal of Molecular Biology: Open Access, the Austin Journal of Pharmacology and Therapeutics, and the American Research Journal of Biosciences—failed a 2017 test that involved the submission of a paper with no scientific meaning or basis. The content of the article in question was laughable: it described so-called midi-chlorians, intelligent life forms that live symbiotically within the cells of some living beings. These fictional microscopic entities are actually part of the Star Wars series, in which they are responsible for the power of the Jedi Knights. The author of the paper was a certain Lucas McGeorge, an allusion to filmmaker George Lucas. The four journals published the article without requesting any corrections. All they asked for was a publication fee.
In the conclusions of his study, Anderson showed that predatory journals have a very low ability to “contaminate” articles subjected to rigorous peer review. In total, he found 100 citations on the Web of Science and eight on ScienceDirect. In PLOS ONE, there were 17 citations of articles from one of the seven predatory publications—the International Archives of Medicine—but all were from before 2014, when the journal had a different owner and was more respected. Of the seven journals analyzed, two were not cited in the databases at all.
“The predatory journals under examination have rarely been cited in legitimate publications,” Anderson wrote in a recent article on The Scholarly Kitchen website—he had already presented the paper at the 6th World Conference on Research Integrity, held in Hong Kong in July (see Pesquisa FAPESP issue no. 281). The bad news, says the researcher, is that many articles from these publications have been cited in scientific papers that are not included in the major databases. One of the journals analyzed had 36% of its articles cited in this literature outside the mainstream databases.