Some 3,000 computer science researchers and students have signed a manifesto pledging not to submit articles to a new journal due to launch in early 2019, nor to participate in its peer review process. The journal in question is Nature Machine Intelligence, from the Springer Nature group, which plans to publish studies on artificial intelligence, machine learning, and robotics. There is a simple reason behind the proposed boycott: computer science, and artificial intelligence in particular, has its own publication model, in which findings are shared and debated at conferences and then posted to open-access repositories such as arXiv. This system stands in opposition to the traditional model of scientific publication, where access is available only to those willing to pay, and it also differs from the more modern, increasingly popular practice of offering open access but charging the author a fee to cover publication costs.
“We see no role for closed-access or author-fee publication in the future of machine learning research,” the manifesto states. “In contrast, we would welcome new zero-cost open-access journals and conferences.” The statement was endorsed both by scientists from research institutions, including Harvard University and the Massachusetts Institute of Technology (MIT) in the US, and by experts associated with companies such as Google, Microsoft, Facebook, and IBM. Apple’s name, however, was conspicuously absent from the list.
Signatories include computer scientists Yann LeCun, a professor at New York University and director of artificial intelligence research at Facebook, and Yoshua Bengio, from the University of Montreal, Canada, who together pioneered research on neural networks. “The machine learning community has been great about open access to published research in many venues, and reversing this trend to have work appear in new closed-access forums does not seem like a great idea,” tweeted Jeff Dean, director of Google AI, who also signed the statement. According to Scopus data compiled by the journal Science, the number of scientific papers on artificial intelligence grew tenfold between 1996 and 2016, above the average for computer science as a whole, which grew sixfold over the same period. Brazil is following the same trend: researchers there have been publishing in the field since the 1990s, and the country’s scientific output more than doubled between 2010 and 2016 (see graph).
Thomas Dietterich, professor emeritus at Oregon State University, USA, used his Twitter account to invite colleagues to endorse the statement. He received a response from the journal itself, which struck a conciliatory tone: “We feel Nature Machine Intelligence can coexist, providing a service—for those who are interested—by connecting different fields, providing an outlet for interdisciplinary work, and guiding a rigorous review process.” Susie Winter, a spokeswoman for Springer Nature in London, issued a statement defending the model: “We believe that the fairest way of producing highly selective journals like this one and ensuring their long-term sustainability as a resource for the widest possible community is to spread the associated costs among many readers.”
The protest manifesto has had a significant impact, mobilizing an emerging research field in defense of open access against one of the largest scientific publishing groups in the world. But the move came as no surprise to those following the progress of artificial intelligence research and the environment in which it is produced. The same debate raged in 2001, when computer scientist Leslie Kaelbling, from MIT, launched the open-access Journal of Machine Learning Research (JMLR) as an alternative to the prestigious subscription journal Machine Learning, published by Kluwer Academic Publishers. At the time, many members of Machine Learning’s editorial board resigned and transferred to JMLR.
A perception that the traditional scientific publishing process is too slow for such a dynamic field of knowledge led computer scientists to adopt a model built around open-access conferences and repositories, in which peer review is performed by any researcher who wishes to contribute after results are shared. The model is still evolving. In 2013, computer scientist Andrew McCallum, from the University of Massachusetts Amherst, USA, created the website OpenReview, where authors can post manuscripts presented at conferences and invite colleagues to comment on them. Major artificial intelligence conferences soon started using the website.
Rui Seabra Ferreira Júnior, president of the Brazilian Association of Scientific Publishers, observes that scientific publication models are going through a transitional phase, and it is still unclear how they will evolve. “Peer review has existed for over 300 years and there is no evidence that it will lose its importance any time soon. But in some of the more theoretical disciplines, such as certain areas of physics and computer science, it is more feasible to share new concepts on an open-access repository and open them up to discussion and validation among colleagues,” he says. “In applied fields, peer review is essential—in biology or medicine, for example, researchers would need to reproduce the experiments published on these repositories in order to validate them, which would be unworkable.”
Physicist Paul Ginsparg, a researcher at Cornell University and founder of the arXiv repository, applauded the computer scientists’ manifesto, but was skeptical of its ability to influence the scientific publishing system as a whole. “I personally have no animus towards the subscription model,” he told the journal Science. He believes that those who signed the statement have an unrealistic view of zero-cost publishing. “Repository servers are cheap, but systematic quality control is labor-intensive, and that costs real money.”
System gathers information on articles published by Brazilian researchers in subfields of computer science
By Rodrigo de Oliveira Andrade
Researchers from the federal universities of Minas Gerais (UFMG) and Uberlândia (UFU) have created a system that provides comparative data on the scientific output of computer science departments at higher education institutions in Brazil. Launched in January, CSIndex provides information that can be used to assess the performance of research groups and to help students choose master’s and PhD courses. The service gathers and organizes data on each department’s scientific output across 16 subfields, generating a score for each department in every subfield. The Institute of Mathematics and Statistics at the University of São Paulo (IME-USP) has the highest score in artificial intelligence. In software engineering, the Federal University of Pernambuco (UFPE) ranks highest, while the computer science department at UFMG has the highest score for databases.
The information is collected by monitoring articles presented at conferences indexed in international databases. In computer science, more importance is placed on publishing papers at conferences than in scientific journals. CSIndex monitors articles published in the proceedings of 162 conferences covering a range of subfields, such as software engineering, programming languages, computer architecture, and databases.
The system uses an algorithm to retrieve information from the Digital Bibliography & Library Project (DBLP), one of the leading computer science bibliographic repositories, hosted at the University of Trier, Germany. The DBLP indexes more than 3 million articles from 1.7 million authors and is updated monthly. “CSIndex filters the studies by Brazilian authors, classifying them according to the subfields,” explains computer scientist Marco Tulio Valente, from UFMG, who created the system with computer scientist Klérisson Paixão, from UFU. It then calculates a score for each department. CSIndex monitors the scientific output of 818 computer scientists from public and private universities in Brazil, as well as federal institutions.
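To illustrate the filtering and classification step Valente describes, here is a minimal sketch in Python, under the assumption that DBLP records have already been parsed into (authors, venue) pairs. The author names, conference acronyms, and subfield mapping below are hypothetical stand-ins, not CSIndex’s actual data or code.

```python
# A minimal sketch of CSIndex-style filtering and classification, assuming
# DBLP records have been parsed into (authors, venue) pairs. All names
# below are hypothetical illustrations, not CSIndex's real data.

MONITORED_AUTHORS = {"Ana Silva", "Joao Souza"}  # hypothetical monitored researchers
VENUE_TO_SUBFIELD = {                            # hypothetical acronym-to-subfield map
    "SIGMOD": "databases",
    "ICSE": "software engineering",
}

def classify(records):
    """Keep papers with at least one monitored author and bucket them by subfield."""
    by_subfield = {}
    for authors, venue in records:
        if venue in VENUE_TO_SUBFIELD and MONITORED_AUTHORS & set(authors):
            by_subfield.setdefault(VENUE_TO_SUBFIELD[venue], []).append(venue)
    return by_subfield

records = [
    (["Ana Silva", "X. Chen"], "SIGMOD"),  # kept: monitored author, known venue
    (["P. Jones"], "SIGMOD"),              # dropped: no monitored author
]
print(classify(records))  # {'databases': ['SIGMOD']}
```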
The most prestigious conferences in computer science have a very rigorous selection process and generally accept fewer than 20% of the articles submitted. Papers published at the 34 most respected conferences receive a weight of 1, while those published at the others are assigned a weight of 0.33. “The score given to each department is calculated by combining the number of articles published by its researchers at the ‘top’ conferences and others monitored by CSIndex,” explains Valente.
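Based solely on the weights described above, the scoring step might look like the sketch below; the conference names and the department’s paper list are invented for illustration and are not CSIndex’s actual top-34 list.

```python
# A minimal sketch of the scoring rule described in the article: weight 1
# for papers at the 34 'top' conferences, 0.33 for the other monitored
# ones. The venue names below are hypothetical examples.

TOP_WEIGHT, OTHER_WEIGHT = 1.0, 0.33

def department_score(venues, top_conferences):
    """Sum the weights of a department's papers, given each paper's venue."""
    return sum(
        TOP_WEIGHT if venue in top_conferences else OTHER_WEIGHT
        for venue in venues
    )

top_34 = {"ICSE", "SIGMOD", "IJCAI"}                 # a few hypothetical 'top' venues
papers = ["ICSE", "SIGMOD", "IJCAI"] + ["SBES"] * 6  # 3 top papers, 6 others
print(round(department_score(papers, top_34), 2))    # 3*1.0 + 6*0.33 = 4.98
```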
The idea, he says, is for CSIndex to act as an independent platform that could complement current assessment systems, such as the evaluation carried out every four years by the Brazilian Coordination for the Improvement of Higher Education Personnel (CAPES), which monitors the quality of graduate programs.
Computer scientist André Sampaio Gradvohl, from the School of Technology at the University of Campinas (UNICAMP), believes it is important for CSIndex to consider other repositories as well as the DBLP. “I would recommend expanding the number of repositories to include papers published in scientific journals,” says the researcher, who was not involved in designing the system.
For computer scientist Filipe Saraiva, from the Federal University of Pará, who was also not part of the group that created CSIndex, the ability to check the research quality of other departments opens up opportunities for new collaborations. “It is interesting to observe metrics related to computer science research conducted in Brazil, and even more so when they are based on a filter that prioritizes high-impact publications,” he says.