The Brazilian graduate education system has advanced over the last four years, both in the number of course places and in quality indicators. Between 2013 and 2017, there was a 25% increase in the number of stricto sensu programs (the Brazilian term for master's and doctoral courses subject to recognition and authorization by the Ministry of Education). Today there are 4,175 such programs, compared with 3,337 four years ago, according to the quadrennial evaluation of graduate studies conducted by CAPES (Brazilian Coordination for the Improvement of Higher Education Personnel), which published its results on September 19. In total, there are now 2,202 doctorates, 3,398 master's degrees, and 703 professional master's degrees on offer. The number of programs receiving scores of 6 and 7, the highest on the CAPES scale and an indication of international standing, rose from 412 to 465, representing 11% of the total. At the other extreme, 119 programs (3% of the total) received scores of 1 or 2, meaning their courses will no longer be accredited by CAPES. Requests for scores to be reconsidered will be reviewed before the end of the year. "These results show that the higher education system is growing and improving. Our model has shown itself capable of identifying the progress made by graduate courses and highlighting areas where institutions and programs need to improve," CAPES president Abílio Baeta Neves said when announcing the results.
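As a quick check, the headline percentages follow directly from the counts quoted above. The short Python snippet below reproduces them using only figures from the text:

```python
# Figures reported in the 2017 CAPES quadrennial evaluation, as quoted above.
programs_2013 = 3337
programs_2017 = 4175
top_scores = 465   # programs scoring 6 or 7
low_scores = 119   # programs scoring 1 or 2

print(f"growth 2013-2017:  {(programs_2017 - programs_2013) / programs_2013:.1%}")  # 25.1%
print(f"share scoring 6-7: {top_scores / programs_2017:.1%}")                       # 11.1%
print(f"share scoring 1-2: {low_scores / programs_2017:.1%}")                       # 2.9%, rounded to 3% in the text
```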
Programs were evaluated in all 26 Brazilian states—from Amapá, with only four, to São Paulo, with 894—but there are six states with a notably higher concentration of courses achieving scores of 6 and 7: São Paulo (171), Rio de Janeiro (78), Rio Grande do Sul (61), Minas Gerais (56), Paraná (20), and Santa Catarina (20). In relative terms, Paraná performed particularly well: in 2013, there were 11 programs in the state with scores of 6 and 7, and this year there are 20.
There are 10 states where no programs achieved the two highest scores, most of which are in North and Midwest Brazil—but the list also includes Espírito Santo, which is in the Southeast. Luiz Eduardo Bovolato, dean of the Federal University of Tocantins (UFT), traveled to Brasília to talk to the CAPES president about the performance of universities in the North—22 UFT programs were evaluated, seven of which received a score of 4, with the rest scoring 3. “We are looking for more sensitivity from the evaluation committee regarding our local circumstances, and asking CAPES to broaden their perspective on universities in the North,” said Bovolato, according to the UFT website.
The University of São Paulo (USP) excelled in several indicators. It had 265 programs evaluated, almost twice as many as São Paulo State University (UNESP), in second place with 135. The Federal University of Rio de Janeiro (UFRJ) is in third, with 116 (see graph on page 35). USP alone accounts for 18% of the programs achieving scores of 6 and 7, with 83 such programs. There is a second grouping, composed of UFRJ with 39 programs, the Federal University of Rio Grande do Sul (UFRGS) with 36, and the University of Campinas (UNICAMP) with 32.
The quadrennial evaluation has a significant impact on the academic community, because it is used to measure the importance of graduate programs and their associated research groups, and to guide the distribution of scholarships and grants. Programs scoring 6 and 7 are given more autonomy and receive funding directly from CAPES, while funding for those with scores of 5 or below is allocated by the directors of each university.
For all these reasons, it is only natural that universities object when they are not satisfied with the results. UNICAMP's performance remained similar to 2013, with 70% of courses scoring 5 to 7, but there was an unpleasant surprise: the number of programs with a score of 7 dropped from 16 to 14. "We were harshly evaluated," says economist André Tosi Furtado, dean of graduate studies and a professor at UNICAMP's Institute of Geosciences. "UNICAMP is a standout institution in Brazilian graduate studies. We are the second-best university in the country according to several rankings and indicators, but in number of programs scoring 7, we are in fourth place this year," says Furtado, who is appealing some of the results. The mechanical engineering program, whose score fell from 7 to 5, was a particular source of confusion. Furtado notes that the evaluation committees made recommendations on how to increase the scores of some programs. "But in the final evaluation stage, the score remained unchanged."
Rita Barradas Barata, CAPES evaluation director, explains that the criteria are not static. “Scores are defined based on known criteria, but the weights assigned to each criterion can be modified by the coordinators at the end of the evaluation, to reflect the status of each set of programs. We are not comparing programs with their own performances from four years ago, but with each other in 2017,” she says. She cites the example of scientific output by students. “With the advance of publishing culture, students are publishing more and more. If the criterion specifies a minimum of two articles per student in a given field, but most students published four articles, the weight assigned to this criterion can be adapted to reflect the new status quo,” she says.
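To make the mechanism concrete, here is a minimal sketch of the cohort-relative logic Barata describes, using her example of student publication counts. CAPES does not publish its method in this form; the function, the median rule, and all numbers are hypothetical illustrations:

```python
from statistics import median

def score_publication_criterion(articles_per_student: float, cohort: list[float],
                                floor: float = 2.0) -> float:
    """Score a program's student output against the current cohort rather than
    a fixed historical threshold (hypothetical sketch, not the CAPES algorithm)."""
    # The nominal minimum (e.g., two articles per student) is raised when the
    # cohort as a whole now publishes more, as in the example Barata cites.
    effective_threshold = max(floor, median(cohort))
    return min(articles_per_student / effective_threshold, 1.0)

# Hypothetical field where most students now publish about four articles:
cohort = [3.5, 3.8, 4.0, 4.2, 4.5]
print(score_publication_criterion(2.5, cohort))  # 0.625: once adequate, now below par
print(score_publication_criterion(4.0, cohort))  # 1.0: keeping pace with the cohort
```

The point of the sketch is that a program can publish exactly as much as it did four years ago and still lose ground, because the comparison is with the 2017 cohort, not with its own past.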
Dissatisfaction aside, UNICAMP's results were very positive, with programs in dentistry and botany rising to scores of 7, and several others improving from 5 to 6. "The CAPES evaluation has been important in guiding the evolution of the system and standardizing criteria on teaching qualifications and scientific output," says Furtado.
CAPES has supported and monitored Brazilian graduate studies since 1976, and for nearly 20 years has been following an evaluation model where those running graduate programs periodically complete a questionnaire on various topics: the program curriculum, teaching staff qualifications, student profiles and intellectual output, as well as the international position held by the courses and their influence on other programs (see graphic on page 31). This data is first analyzed by committees of experts from 49 fields of knowledge, who are responsible for determining the results and recommending scores. Next, the CAPES Higher Education Technical and Scientific Council, composed of coordinators and assistant coordinators from the major fields of academic and professional programs, reevaluates the results and decides on the final scores.
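Schematically, the process reads like a two-stage pipeline: field committees recommend scores, and the council confirms or revises them. The sketch below is a hypothetical illustration of that structure only; none of the types, names, or values come from CAPES:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ProgramEvaluation:
    program: str
    field: str
    committee_score: Optional[int] = None  # recommended by the field committee (1-7)
    final_score: Optional[int] = None      # set by the Technical and Scientific Council

def field_committee_review(ev: ProgramEvaluation, recommended: int) -> ProgramEvaluation:
    """Stage 1: experts from one of the 49 fields analyze the questionnaire
    data and recommend a score."""
    ev.committee_score = recommended
    return ev

def council_decision(ev: ProgramEvaluation, revised: Optional[int] = None) -> ProgramEvaluation:
    """Stage 2: the council reevaluates the results and decides the final score,
    confirming the committee's recommendation unless it chooses to revise it."""
    ev.final_score = revised if revised is not None else ev.committee_score
    return ev

ev = field_committee_review(ProgramEvaluation("example program", "health sciences"), 6)
print(council_decision(ev).final_score)  # 6: recommendation confirmed
```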
These steps took place between August and September and lasted six weeks. "We assessed academic programs in the first four weeks, and professional master's courses and those offered by institutional networks in the last two," explains Rita Barradas Barata. The evaluation involved 1,550 members of the scientific community. Between 1998 and 2013, the evaluations were triennial; as the graduate system grew, the cycle was extended, and the survey is now conducted on a quadrennial basis.
The evaluation performed by CAPES is practically unparalleled worldwide. "In the United States, scientific associations assess and accredit programs, but in a decentralized manner. In other countries in Latin America, research councils evaluate information provided by the programs and decide whether or not they are permitted to continue," says the CAPES evaluation director. The UK university evaluation system, which takes place every five years, is similar in the scale of the task; however, the UK approach involves analyzing research quality as well as teaching, and the distribution of resources to institutions over the following five years is determined by the analysis of indicators and, largely, by the results of peer reviews (see Pesquisa FAPESP, issue No. 156).
The regular nature of the CAPES evaluation has helped shape the Brazilian graduate system. UNESP celebrated an increased number of programs scoring 7—it previously had three and now has six—while the number scoring 6 rose from 15 to 21. The growth was the result of a new policy of monitoring programs through annual reports, with special attention given to those scoring 3. It is no coincidence that no UNESP programs were recommended for de-accreditation. Good quality programs have also been given incentives. "This was not a trivial effort during a national recession, at a time when finding new researchers was difficult," says geographer João Lima Sant'Anna Neto, dean of graduate studies at UNESP and professor at the Presidente Prudente School of Science and Technology.
Accreditation of a new graduate course by CAPES depends on a process similar to the evaluation: the curriculum for each program is analyzed by a committee of experts in the field, which recommends a score. If it is equal to or higher than 3, the committee sends its opinion to the Higher Education Technical and Scientific Council, which makes the final decision. The quadrennial evaluation does not give scores to newly created courses. There are no shortcuts to reaching the highest scores—the process, in general, is slow and cumulative.
The graduate system at the Federal University of ABC (UFABC), an institution created just 11 years ago, has been gradually evolving. Of the 22 programs evaluated by CAPES, six received an improved score, one saw its score lowered, and the other 15 remained at the same level. Three earned a score of 5: nanosciences and advanced materials, chemical science and technology, and physics. In the previous evaluation, only physics scored 5. According to neuroscientist Alexandre Kihara, dean of graduate studies at the university, the evaluation favors well-established institutions. "Higher scores are partly related to the influence and connections a program has, whether it works in conjunction with newer programs, and whether an alumnus becomes a professor at other universities. Newer institutions face more difficulties in these areas," he says. Kihara believes that UFABC's performance in university rankings and in certain indicators, such as internationalization and research impact, is better than the CAPES evaluation suggests.
Diverse criteria
Rogério Mugnaini, a professor at the USP School of Communication and Arts (ECA), studied the criteria used from 1998 to 2012 by Qualis Periodicals, a system CAPES uses to evaluate the scientific output of graduate programs in terms of articles published in journals. In a study presented in 2015, he showed that program output quality is measured in varying ways depending on the field of knowledge, and highlighted vulnerabilities in the model.

In fields with a tradition of publishing results in indexed foreign journals, such as physics and chemistry, article quality is based on citation indicators, especially the Impact Factor of the journal. "But there is a wealth of scientific output published elsewhere that is not being evaluated," he says. In fields such as geoscience and nutrition, for which there are a limited number of journals indexed in international online databases such as Web of Science and Scopus, the citation indices do not allow output to be distinguished according to its impact. In these cases, associations between journals and online databases are accepted as a criterion, on the assumption that the journal underwent some form of assessment to establish the link. "But such databases need to be audited to check that they actually apply their quality criteria. There are no guarantees that they will be free of bias," he says.

There are also fields of knowledge, such as the arts and architecture and urbanism, for which journal quality is measured by criteria such as diversity of origin among the authors or members of the editorial board, since these journals are rarely linked to established publications or databases. "It is an important criterion for preventing endogeny—this type of criterion is commonly required by databases—but it is not enough to guarantee the quality of articles published by graduate programs." In lectures at science events and at CAPES itself, Mugnaini has suggested measuring the output of programs from a broader perspective, using indicators that combine several databases. "Such a method would provide a more reliable view of Brazilian scientific output."
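Mugnaini's proposal of indicators that combine several databases can be sketched simply: average whatever normalized citation scores are available for an article across the databases that index it, so output invisible to one index is not discarded. The database names below are real, but the function, weights, and values are hypothetical assumptions:

```python
def combined_indicator(article_metrics: dict[str, float],
                       weights: dict[str, float] | None = None) -> float | None:
    """Weighted average of an article's normalized citation scores across the
    databases that index it (hypothetical sketch of Mugnaini's suggestion)."""
    weights = weights or {"web_of_science": 1.0, "scopus": 1.0, "scielo": 1.0}
    indexed = {db: v for db, v in article_metrics.items() if db in weights}
    if not indexed:
        return None  # indexed nowhere: flag for qualitative review instead
    total_weight = sum(weights[db] for db in indexed)
    return sum(weights[db] * v for db, v in indexed.items()) / total_weight

# Hypothetical article indexed in Scopus and SciELO but not Web of Science:
print(combined_indicator({"scopus": 0.7, "scielo": 0.9}))  # 0.8
```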
One of the most controversial aspects of the quadrennial evaluation is the autonomy each committee has in assessing the quality of programs in its field. "There are committees that deduct points from a program if its master's degree or doctorate exceeds 24 months or 48 months, respectively, while others are more tolerant," says João Lima, from UNESP. "I understand that CAPES uses these maximum terms as a reference for scholarship durations, but why penalize a program for exceeding them? It has no impact on program quality." Lima, who was CAPES geography coordinator from 2008 to 2014, believes that the evaluation in general takes a very technical perspective. "There is a process to be followed that can raise a program's score over time by improving its more quantitative aspects, but there are cases where excellent programs have refused to follow this process, preferring to take a more humanistic approach, and they suffer for it," explains Lima, referring to established programs at traditional universities.
Physician Carlos Gilberto Carlotti Junior, dean of graduate studies at USP and a professor at USP's Ribeirão Preto School of Medicine (FMRP-USP), believes the evaluation system needs to evolve to include more elements of academic excellence. "It is important to know not only how much was produced, but also what impact the knowledge generated has on Brazilian science, the formulation of public policy, and the country's development," he suggests. "The end product of a graduate program is not the dissertation or the thesis, but the graduate student. Today, several aspects of the student's training are not measured. The quality of their thesis is not even evaluated."
Despite having many excellent programs, USP also faced some problems in the evaluation. Its surgical practice doctorate, for example, received a score of 2 and will be discontinued. Another six programs received a score of 3 for their master’s degrees and 2 for their doctorates, meaning the doctorates will lose their accreditation. “We are going to hold a meeting for each program and think about what to do next,” says Carlotti.
Rita Barradas Barata, from CAPES, agrees that the evaluation criteria need to evolve. Among the changes being considered for the next evaluation, in four years' time, is a modification of the scoring system to introduce half scores, such as 6.5, instead of full scores only. "This would show when a program has changed in some way from one evaluation to the next, even when it has simply kept pace with other programs," she says. But the changes could be bigger. "The key is to decrease the emphasis on standardization and increase the value placed on quality. We need to promote relevant criteria, value flexibility in graduate courses, and be able to recognize different forms of organization," she says.