{"id":543732,"date":"2025-03-20T18:49:31","date_gmt":"2025-03-20T21:49:31","guid":{"rendered":"https:\/\/revistapesquisa.fapesp.br\/?p=543732"},"modified":"2025-03-20T19:47:52","modified_gmt":"2025-03-20T22:47:52","slug":"brazilian-universities-discuss-how-to-regulate-the-use-of-artificial-intelligence","status":"publish","type":"post","link":"https:\/\/revistapesquisa.fapesp.br\/en\/brazilian-universities-discuss-how-to-regulate-the-use-of-artificial-intelligence\/","title":{"rendered":"Brazilian universities discuss how to regulate the use of artificial intelligence"},"content":{"rendered":"<p>Scientific and higher education institutions in Brazil have started to draft recommendations for the use of artificial intelligence (AI), particularly generative AI, in teaching, research, and extension. The popularization of software such as ChatGPT, capable of generating text, images, and data, has raised doubts over the ethical limits of these technologies, primarily in academic writing. Professors and teachers have sought new ways to assess student work in an attempt to mitigate the risks of improper AI use. Generally speaking, the guidance asks that AI use be transparent, while warning of the dangers of copyright infringement, plagiarism, disinformation, and the discriminatory biases these tools may reproduce.<\/p>\n<p>In February, the Brazilian Industrial Education Service (SENAI) CIMATEC University Center, in Bahia State, published a set of guidelines on generative AI for its academic community, based on three principles: transparency; \u201chuman-centrality,\u201d i.e., preserving human control of AI-generated information, given that it should be used for the benefit of society; and attention to data privacy, chiefly in activities involving contracts with companies through partnerships for the development and transfer of technologies. 
Any information shared on AI platforms can be stored by the tool, potentially breaching data confidentiality. \u201cWe mustn\u2019t forget that if we learn from these tools, they can also learn from us,\u201d points out civil engineer Tatiana Ferraz, Associate Dean (administrative-financial) of SENAI CIMATEC, and coordinator of the guide. The institution\u2019s disciplinary regulations have been updated to include sanctions for students breaking the rules.<\/p>\n<p>The guide authorizes teaching staff to use plagiarism-detection software as they deem necessary, although such tools are not fully reliable at identifying AI-produced content. AI tools cannot be cited as coauthors of academic papers, but using them to assist research processes and fine-tune academic writing is permitted. To this end, all commands used\u2014the questions and instructions input into the tool, also known as prompts\u2014and the original information generated by AI must be described in the working methodology and attached as supplementary material.<\/p>\n<p>Other Brazilian institutions are following this lead. \u201cThe university should not prohibit, but rather create guidelines for responsible use of these tools,\u201d observes computer scientist Virg\u00edlio Almeida, coordinator of a commission at the Federal University of Minas Gerais (UFMG), which proposed recommendations for AI-technology use at the institution, presented to the academic community in May. The suggestions will serve as a basis for the creation of an institutional policy with rules and standards, and a permanent governance committee.<\/p>\n<p>The recommendations cover education, research, extension, and administration, and include principles of transparency in the use of these tools, as well as attention to data protection and privacy, disinformation, and the discriminatory biases these technologies may reproduce. 
\u201cOne of the proposals is that the university invest in AI literacy courses for teaching staff, researchers, employees, and students,\u201d explains Almeida. In education, one of the recommendations is that UFMG graduate and postgraduate course syllabuses should state what is permitted when using these technologies. In research, the emphasis is on transparency: details must be given on how AI was used in the scientific process, and what biases may have been brought into it. Careful analysis of AI-generated results is another recommendation for avoiding false data.<\/p>\n<\/div><div class='overflow-responsive-img' style='text-align:center'><picture data-tablet=\"\/wp-content\/uploads\/2025\/02\/RPF-ia-2023-07-info-ING-DESK.jpg\" data-tablet_size=\"1140x450\" alt=\"\">\n    <source srcset=\"\/wp-content\/uploads\/2025\/02\/RPF-ia-2023-07-info-ING-DESK.jpg\" media=\"(min-width: 1920px)\" \/>\n    <source srcset=\"\/wp-content\/uploads\/2025\/02\/RPF-ia-2023-07-info-ING-DESK.jpg\" media=\"(min-width: 1140px)\" \/>\n    <img decoding=\"async\" class=\"responsive-img\" src=\"\/wp-content\/uploads\/2025\/02\/RPF-ia-2023-07-info-ING-MOBILE.jpg\" \/>\n  <\/picture><span class=\"embed media-credits-inline\">Alexandre Affonso \/ Revista Pesquisa FAPESP<\/span><\/div><div class=\"post-content sequence\">\n<p>The University of S\u00e3o Paulo (USP) published a dossier on artificial intelligence in scientific research in <em>Revista USP<\/em> (USP Magazine) in May, and has held meetings and debates on the matter. 
\u201cThe overarching suggestion is to study how to incorporate this use into education and research, and to examine the ethical limitations and the potential,\u201d observes lawyer Cristina Godoy, of USP\u2019s Ribeir\u00e3o Preto Law School (FDRP-USP), a member of a group of researchers that drew up proposals for the university. These include creating guidelines for graduate and postgraduate students, with warnings about preserving the privacy of sensitive or original research data, such as that in theses and dissertations, and finding new ways to assess written work in the classroom. The recommendations were made during an event in March 2023 and are being analyzed by a working group.<\/p>\n<p>The use of AI in research is nothing new. Machine learning and natural-language processing tools have long been used to analyze patterns in large volumes of data, but with the advance of generative AI platforms, many universities in the United States and Europe have created instructions on the subject, which have served as inspiration for Brazilian institutions. At SENAI CIMATEC, guidelines from the University of Utah, USA, from July 2023, and the University of Toronto, Canada \u2014 still in the preliminary stages \u2014 have guided the working group, along with the quick guide <em>ChatGPT and Artificial Intelligence in Higher Education<\/em>, published by the United Nations Educational, Scientific and Cultural Organization (UNESCO) in April 2023.<\/p>\n<p>In this context, some professors have asked their students to give oral presentations, write papers in the classroom, or even write them by hand. 
Godoy, of USP, who used to assign written work on the articles studied, began asking students to present diagrams with mind maps showing the connections between the texts studied in class.<\/p>\n<p>In 2024, she developed a research activity within the Artificial Intelligence Center (C4AI) at USP, backed by IBM and FAPESP, with scientific initiation, master\u2019s, and PhD students in computing, political science, and law, in which they were required to develop prompts for ChatGPT to perform sentiment analysis \u2014 a technique that classifies opinions as positive, negative, or neutral \u2014 among users of X (formerly Twitter) discussing AI. The data will be presented in an article currently being written. One aspect of the work will be to detail the methodology and prompts used. As it was not necessary to create a specific algorithm for this task, the tool accelerated the data analysis process: according to Godoy, doing everything from scratch would have taken eight months, while with ChatGPT it took two.<\/p>\n<p>\u201cSome Judicial Branch institutions are studying the use of AI to optimize stages of case analysis. Thus, not allowing its use is not advantageous for students, who need to be prepared critically and responsibly,\u201d says the lawyer. In her experience, the students who perform best in essays and exams are those who develop the best prompts. 
\u201cIn order to ask the AI platform good questions and achieve the desired outcome, the problem to be addressed needs to be identified clearly,\u201d says Godoy.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" width=\"1140\" height=\"687\" class=\"size-full wp-image-547493 aligncenter\" src=\"https:\/\/revistapesquisa.fapesp.br\/wp-content\/uploads\/2025\/03\/RPF-ia-2023-07-ING-IMG2-1140.jpg\" alt=\"\" srcset=\"https:\/\/revistapesquisa.fapesp.br\/wp-content\/uploads\/2025\/03\/RPF-ia-2023-07-ING-IMG2-1140.jpg 1140w, https:\/\/revistapesquisa.fapesp.br\/wp-content\/uploads\/2025\/03\/RPF-ia-2023-07-ING-IMG2-1140-250x151.jpg 250w, https:\/\/revistapesquisa.fapesp.br\/wp-content\/uploads\/2025\/03\/RPF-ia-2023-07-ING-IMG2-1140-700x422.jpg 700w, https:\/\/revistapesquisa.fapesp.br\/wp-content\/uploads\/2025\/03\/RPF-ia-2023-07-ING-IMG2-1140-120x72.jpg 120w\" sizes=\"auto, (max-width: 1140px) 100vw, 1140px\" \/><span class=\"media-credits-inline\">L\u00e9o Ramos Chaves\u2009\/\u2009Revista Pesquisa FAPESP<\/span><\/p>\n<p>In the first semester of 2024, computer scientist Rodolfo Azevedo, of the University of Campinas (UNICAMP), taught a pilot course introducing AI-assisted programming in collaboration with UNICAMP colleague and electronics engineer Jacques Wainer. The students, all from the food engineering course, learned to program for the first time with ChatGPT as an assistant in the classroom, using the Python language. \u201cThe aim was to teach programming concepts, with a focus on developing the ability to break a problem down into smaller steps and solve them. The means have been in constant transformation since the first punch cards,\u201d says Azevedo.<\/p>\n<p>In his opinion, using generative AI encourages students to think about more complex problems. 
\u201cWhereas before we had to program from scratch, which took more time, now AI writes code based on the instructions provided by the student, who can analyze errors \u2014 which always occur \u2014 in more depth and propose more elaborate solutions and improvements,\u201d says the computer scientist, who stresses that the technology is already used by programmers at many corporations to optimize their processes. He also envisages other uses in the academic field. \u201cThese tools can reduce inequality for non-native English-speaking researchers, who can ask AI to improve translations of their articles into that language,\u201d he says.<\/p>\n<p>Business administrator Ricardo Limongi, of the Federal University of Goi\u00e1s (UFG), strives to teach students how to use these tools with a critical outlook. \u201cI explained that I use them, and that they can also use them. That\u2019s not cheating,\u201d he observes. In the classroom, he opens AI platforms and shows the students how to compose a prompt, for example. In statistics, one of the subjects he teaches, Limongi uses generative AI to help explain concepts that students have difficulty understanding, always reviewing the responses. \u201cYou get great analogies,\u201d he says.<\/p>\n<p>Author of an article on the application of AI in scientific research, published in <em>Future Studies Research Journal<\/em> in April 2024, Limongi has been invited to lecture on the topic at universities, where he gives workshops presenting generative AI tools he has used with his students to optimize research processes.<\/p>\n<p><strong>Scientific journals<br \/>\n<\/strong>In addition to universities, the world\u2019s most prominent scientific publishing houses have published rules on generative AI. 
In a September 2023 <em>Nature<\/em> survey of more than 1,600 scientists, almost 30% stated they had used this technology to help draft manuscripts, and some 15% had employed it to draw up funding requests. A study published in the <em>British Medical Journal (BMJ)<\/em> in January 2024 indicated that, among the 100 most prominent scientific journal publishers, 24% provided guidance on the use of AI, and among the 100 highest-ranked journals, 87% set out rules. Of the publishers and journals with guidelines, 96% and 98%, respectively, prohibited crediting AI as an author of articles.<\/p>\n<p>The <em>Springer Nature<\/em> group constantly updates its rules and stipulates that generative AI use must be documented in the methods section of the manuscript. AI-generated images and videos are prohibited. Reviewers may not use generative AI software when evaluating scientific articles, as the manuscripts contain original research information. Publishing house <em>Elsevier<\/em> allows AI to be used to improve the language and readability of texts, \u201cbut not to substitute essential author tasks, such as producing scientific, pedagogical, or medical insights, making scientific conclusions, or providing clinical recommendations.\u201d<\/p>\n<p>In Brazil, the Scientific Electronic Library Online (SciELO) published a guide on the use of AI tools and resources in September 2023. \u201cIn line with international publishers, one of the key points is that AI may not be considered the author of a paper,\u201d explains SciELO coordinator Abel Packer. The manual requires that authors declare when they use these tools \u2014 concealing such use is a serious ethical offense. However, the document encourages applying the technology to the preparation, writing, review, and translation of articles. 
\u201cIn our view, five years from now, scientific communication will change completely, and the use of these tools will be omnipresent,\u201d says Packer, who believes that the tools will soon take on an auxiliary role in assessing and revising manuscripts submitted to journals.<\/p>\n<p class=\"bibliografia separador-bibliografia\">The story above was published with the title &#8220;<strong>Guidance on its way<\/strong>&#8221; in issue 342 of August 2024.<\/p>\n<p class=\"bibliografia\"><strong>Scientific articles<br \/>\n<\/strong>LIMONGI, R. <a href=\"https:\/\/doi.org\/10.24023\/FutureJournal\/2175-5825\/2024.v16i1.845\" target=\"_blank\" rel=\"noopener\">The use of artificial intelligence in scientific research with integrity and ethics<\/a>. <strong>Future Studies Research Journal: Trends and Strategies<\/strong>. Vol. 16, no. 1. Apr. 2024.<strong><br \/>\n<\/strong>GANJAVI, C. <em>et al<\/em>. <a href=\"https:\/\/www.bmj.com\/content\/384\/bmj-2023-077192\" target=\"_blank\" rel=\"noopener\">Publishers\u2019 and journals\u2019 instructions to authors on use of generative artificial intelligence in academic and scientific publishing: bibliometric analysis<\/a>. <strong>BMJ<\/strong>, Vol. 384. Jan. 
31, 2024.<\/p>\n","protected":false},"excerpt":{"rendered":"With both students and researchers unsure, institutions are debating the ethical limits of AI tools for writing and scientific research","protected":false},"author":684,"featured_media":547489,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"_acf_changed":false,"_exactmetrics_skip_tracking":false,"_exactmetrics_sitenote_active":false,"_exactmetrics_sitenote_note":"","_exactmetrics_sitenote_category":0,"footnotes":""},"categories":[166],"tags":[220,219,2413],"coauthors":[2721],"class_list":["post-543732","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-policies-st-en","tag-communication","tag-computation","tag-technology","position_at_home-sumario"],"acf":[],"_links":{"self":[{"href":"https:\/\/revistapesquisa.fapesp.br\/en\/wp-json\/wp\/v2\/posts\/543732","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/revistapesquisa.fapesp.br\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/revistapesquisa.fapesp.br\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/revistapesquisa.fapesp.br\/en\/wp-json\/wp\/v2\/users\/684"}],"replies":[{"embeddable":true,"href":"https:\/\/revistapesquisa.fapesp.br\/en\/wp-json\/wp\/v2\/comments?post=543732"}],"version-history":[{"count":6,"href":"https:\/\/revistapesquisa.fapesp.br\/en\/wp-json\/wp\/v2\/posts\/543732\/revisions"}],"predecessor-version":[{"id":547522,"href":"https:\/\/revistapesquisa.fapesp.br\/en\/wp-json\/wp\/v2\/posts\/543732\/revisions\/547522"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/revistapesquisa.fapesp.br\/en\/wp-json\/wp\/v2\/media\/547489"}],"wp:attachment":[{"href":"https:\/\/revistapesquisa.fapesp.br\/en\/wp-json\/wp\/v2\/media?parent=543732"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/revistapesquisa.fapesp.br\/en\/wp-json\/wp\/v2\/categories?post=54
3732"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/revistapesquisa.fapesp.br\/en\/wp-json\/wp\/v2\/tags?post=543732"},{"taxonomy":"author","embeddable":true,"href":"https:\/\/revistapesquisa.fapesp.br\/en\/wp-json\/wp\/v2\/coauthors?post=543732"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}