
Scientometrics

Who Performs Better?

Study shows that grant recipients whose proposals are selected through a peer review process publish more articles in higher impact journals

A study published in the January 2015 issue of the journal Scientometrics, conducted by the Study Group on Organization of Research and Innovation (Geopi) at the Geosciences Institute of the University of Campinas (Unicamp), measured the impact of the different selection methods used by funding agencies in Brazil on the performance of grant recipients. The study’s main conclusion, based on data from a project that assessed FAPESP’s grant programs, is that Brazilian grant recipients at the basic science, master’s and PhD levels whose proposals were approved after an individual evaluation conducted by members of the scientific community – the peer review system, as practiced at FAPESP – published more articles in higher-impact journals than those who, having had their grant applications rejected by FAPESP, received funding through grant quotas made available to universities by the National Council for Scientific and Technological Development (CNPq) and the Brazilian Federal Agency for the Support and Evaluation of Graduate Education (Capes). The study also found, however, that the performance of the two groups tended to converge during the five years following completion of their PhD studies, as they began to produce research independently.

The study shows that PhD students supported by FAPESP published on average 37% more articles than candidates whose projects were rejected by FAPESP and who received funding from another source during the same period. “One hypothesis is that the peer review model acts as a classic academic filter for evaluating research proposals,” explains Sérgio Salles-Filho, one of the article’s authors and assistant coordinator of FAPESP’s Special Programs. The study evaluated almost 55,000 articles published by more than 8,500 researchers who had received funding at the basic science, master’s and PhD levels from FAPESP, CNPq and Capes between 1995 and 2009. To conduct their analysis, the authors traced the academic careers of grant recipients using CNPq’s Lattes Platform and responses to online questionnaires prepared specifically for the study. The researchers were divided into two groups: the first consisted of candidates whose proposals were approved by FAPESP evaluators, and the second was a control group of students whose grant requests had been rejected by the Foundation but who had received awards from a federal agency. Comparing the two groups was possible thanks to a methodology that matches the characteristics of individuals in the first group with those of individuals in the control group, in a quasi-experimental design (see Pesquisa FAPESP Issue No. 224).
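As a rough illustration of this kind of matched comparison, the sketch below pairs each funded researcher with the most similar rejected-but-funded-elsewhere researcher and compares average article counts within the matched sample. All data, covariates and numbers here are hypothetical, and the nearest-neighbor matching shown is only one simple variant of the approach; the actual study drew on far richer career profiles from the Lattes Platform and questionnaires.

```python
# A minimal sketch of a matched-group comparison, in the spirit of the
# quasi-experimental design described above. All numbers are hypothetical.
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical matching covariates (e.g., PhD year, prior publications,
# field of study encoded numerically), standardized.
treated = rng.normal(size=(200, 3))   # recipients selected by peer review
control = rng.normal(size=(500, 3))   # rejected applicants funded elsewhere

# Hypothetical outcome: number of articles published by each researcher.
pubs_treated = rng.poisson(5.5, size=200)
pubs_control = rng.poisson(4.0, size=500)

# Nearest-neighbor matching: for each treated unit, find the control unit
# with the smallest Euclidean distance over the covariates.
dists = np.linalg.norm(treated[:, None, :] - control[None, :, :], axis=2)
matched = dists.argmin(axis=1)

# Compare mean publication counts within the matched sample.
diff = pubs_treated.mean() - pubs_control[matched].mean()
print(f"Matched difference in mean article counts: {diff:.2f}")
```

Matching on observable characteristics is what makes the comparison meaningful: it controls for the fact that funded and rejected applicants may differ systematically before any grant is awarded.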

The impact of the peer review system on scientific output was noted in different fields of knowledge. At the master’s level, FAPESP grant recipients published 24% more articles in agrarian sciences and 25% more in engineering; there was no statistically significant difference in the number of publications in the other subject areas. With regard to journal impact, former FAPESP master’s grant recipients published 13% more articles in higher-impact periodicals, notably in agrarian sciences (24% more) and biology (16% more).

At the PhD level, former FAPESP grant recipients also published more articles in almost all subject areas (see Infographic). With respect to the impact of the journals in which those articles appeared, however, only the humanities showed an increase (87% more articles). There was no significant difference in the other areas, with the exception of the social sciences, in which former FAPESP grant recipients published about 67% fewer articles in lower-impact journals than the control group.

The study found that the professional careers of PhD students in the social sciences followed an unusual path. “As we saw in the study, these researchers seemed less inclined to pursue postdoctoral studies and found employment more readily than students in other areas, demonstrating less involvement in the world of research,” says Adriana Bin, also a professor at Unicamp and the article’s principal author.

The study’s authors observed that after receiving their PhDs, the two groups of researchers tended to publish at about the same level, both in the number of articles published and in the impact of the scientific journals in which they appeared. To reach this conclusion, the study looked at the five years prior to the year of the PhD dissertation defense and the five years after it (see Infographic). One fact that stood out was that FAPESP grant recipients began to publish at a higher rate soon after receiving their PhDs, while the control group saw a small decline in the number of published articles. “One hypothesis that explains this finding is that FAPESP grant recipients get involved more quickly in research activities after they receive their PhDs,” explains Bin. Of the researchers who received their grants from FAPESP, almost 40% joined a post-doc program as soon as they received their PhDs; among those who received grants from other sources, the rate was 30%. Rogério Meneghini, science coordinator of the virtual library SciELO Brasil, was surprised by this outcome. “In Brazil, the vast majority of researchers do not do a post-doc right after their PhD program, and with the increase in the number of public universities established in recent years, most PhDs are putting their efforts into teaching,” he says.

Counterweight
Known as the institutional model, the system used by the federal agencies is based on the performance of each institution in a national classification of graduate programs: institutions whose programs receive the highest marks get more grants. It is up to the graduate program or the university itself to decide how the grants will be distributed, using criteria that vary case by case, such as the candidate’s résumé, social and economic status, the quality of the proposed project, or some combination of the three. Peer review, as adopted by FAPESP, can also take performance indicators into account, but it makes a thorough assessment of the candidates as individuals and of their proposals, in addition to the experience of their advisors or supervisors. The basic difference between the two systems, according to Salles-Filho, is that in the peer review approach the agency that issues the grants controls the proposal evaluation process, using qualified researchers who make recommendations as to whether or not to award the grant; in the other system, the decision is decentralized. The institutional model also relies on a specific kind of peer review. “It is not a given that peer review necessarily ensures that the best candidate will be chosen. However, this selection method is a time-honored rule of science and continues to be the most respected model,” says Salles-Filho.

The first post-doc grant programs began to appear in the United States and Europe after World War II, at a time when governments were becoming increasingly involved in financing scientific research, particularly through large investments in technology and innovation. From the outset, the main method used to award grants was peer review. An article published in the journal Science in 1977 by researchers at Columbia University stressed the importance of this model for the National Science Foundation (NSF), then the main agency funding basic research in the United States. The article refuted a criticism common at the time – that assessors gave preference to proposals from well-known researchers with more publications to their name – arguing that there was no empirical evidence that the peer review method used by the NSF was subjective.

The peer review method continued to be favored within the scientific community and was adopted by other important funding agencies, such as the National Institutes of Health (NIH), the main biomedical research funding agency in the United States, and the United Kingdom’s Research Councils. Some of these institutions included in their internal evaluations studies to verify the impact of peer review on the scientific output of their grant recipients. In 2002, for example, the NSF evaluated its post-doc grant programs and concluded that the peer review system had a positive impact on the academic output of its grant recipients. In some fields, such as mathematics and economics, students who were awarded NSF grants published about three more scientific articles than did the control group, which consisted of recipients of grants from agencies that did not employ a peer review system. Other studies, however, demonstrate the difficulty of confirming the relationship between the method used to select grant recipients and their output in scientific publications. John Rigby, a professor at the University of Manchester in England, published one such study in 2013, finding that whether a project proposal is accepted by a funding agency does not predict the research’s future impact.

Maintaining an army of evaluators is another challenge faced by funding agencies. In 2009, the United Kingdom replaced the Research Assessment Exercise (RAE), its large-scale evaluation of research quality based largely on peer review, with a new system, the Research Excellence Framework (REF), which, although it does not entirely abandon peer review, makes greater use of bibliometric indicators, such as the number of citations of publications authored by scientists (see Pesquisa FAPESP Issue No. 156). The government’s objective was to reduce costs and make the assessment system more flexible. The change in approach split the British scientific community. “Taken in isolation, citations have repeatedly shown themselves to be a dismal measure of research quality,” argued an editorial published at the time in the journal Nature, citing a 1998 study that compared two assessments of the same set of physics articles: one used metrics such as citations, while the other relied on peer review. The two assessments disagreed on 25% of the articles reviewed. “Policy makers don’t have any option but to acknowledge that review by experts plays an indispensable role in the evaluation process,” the Nature editorial concluded.

Scientific Article
BIN, A. et al. What difference does it make? Impact of peer-reviewed scholarships on scientific production. Scientometrics, v. 102, n. 2, p. 1167-88, 2015.
