
COMPUTING

The challenges of regulating artificial intelligence

Brazil, Canada, and Europe are working on legislation to reduce the risks of improper use of AI programs and applications

Alexandre Affonso via Midjourney

In recent months, government representatives of the 27 European Union countries, Canada, and Brazil have been working intensively to draft guidelines for the safe use of programs and applications using artificial intelligence (AI). In June, the European Parliament approved the final version of a bill known as the AI Act. If approved by member countries, which could happen before the end of 2023, it may become the world’s first piece of AI legislation. In Brazil, at least four bills aimed at creating rules for the development, implementation, and use of AI systems are due to be discussed in the National Congress before year-end.

The task of setting rules to control the use of this type of program is complex: AI has been incorporated into science, the financial system, security, education, advertising, and entertainment, most of the time without users realizing it. According to specialists interviewed for this feature, regulation needs to strike a balance between reducing the risks of improper use, preventing discrimination against minority groups, and ensuring privacy and transparency for users, while also preserving space for innovation. It is not possible to foresee all the risks that the use of these technologies may bring.

“Remaining in a state of uncertainty around regulation may have adverse effects on citizens,” says lawyer Cristina Godoy, of the University of São Paulo’s Ribeirão Preto School of Law (FDRP-USP), author of an article on the challenges of AI regulation in the country, published in October 2022 in Revista USP. In late September, at a conference in Belo Horizonte (Minas Gerais State), she is set to present the initial results of research into the use of facial recognition (a type of AI) in the granting of bank loans.

In the study, conducted by the USP Artificial Intelligence Center (C4AI) and supported by IBM and FAPESP, plaintiffs in 90% of 2,300 lawsuits filed with the São Paulo State Court of Justice (TJ-SP) did not recognize loans approved via facial biometrics on bank apps. “People allege that no document was signed, and that they didn’t know they were contracting the service,” reports the researcher. The data are held by the Brazilian Artificial Intelligence Observatory, a portal developed in partnership with the Ponto BR Network Information Center (NIC.Br), to be launched this year.

The TJ-SP generally rules in favor of banks, considering that facial biometrics is a secure substitute for a client’s signature. Godoy disagrees: “This technology still carries a high rate of error.” She goes on to highlight another issue: that little is known about how these systems operate. “There is no clarity around what company is contracted to provide this service, how it was developed, or what criteria are applied to certify if it is that person or not. Without this information it is difficult for a citizen to challenge the banks.”

Godoy’s group also examined facial recognition systems used to identify fraud in obtaining student or senior-citizen discounts on public transport across 30 Brazilian cities covering more than one million inhabitants. In the majority (60%), the level of transparency was considered very low: the municipalities did not disclose how information on bus and train users was collected and processed, nor what parameters are used to detect fraud. The results were published in November 2022 in the proceedings of the 11th Brazilian Conference on Intelligent Systems (BRACIS), held in Campinas, inland São Paulo State.

Godoy calls for more transparency in AI programs. It is not enough, she believes, to inform users that an application employs these tools; there is also a need to explain how they work, how they process data, and how they make decisions. This information would help to prevent discrimination against vulnerable groups.

For example, researchers at the Federal University of Rio Grande do Norte (UFRN) analyzed data from the Security Observatories Network, which monitors public security data across eight Brazilian states. They found that 90% of the 151 people detained in the country in 2019 based on facial recognition cameras were Black, as detailed in a study published in July 2020 in the journal Novos Olhares.

“When trained using past and present databases, artificial intelligence programs may often reproduce or extend patterns of discrimination,” ponders Bruno Ricardo Bioni, director of the Data Privacy Brasil Research Association, and member of the Brazilian National Council for Protection of Personal Data and Privacy, a consultative arm of the National Data Protection Authority (ANPD).

Bioni was part of the commission of digital and civil law specialists convened by the Federal Senate in March 2022 to analyze projects on AI regulation. One of them, Bill 21/20, received much criticism for being highly generic. After nine months of seminars and public hearings, the team of legal specialists presented a 900-page report with concepts and suggestions for principles to be followed. Some twenty of those pages formed the basis for another proposed law, Bill 2338/23, tabled in May by Senator Rodrigo Pacheco (PSD-MG), President of the Senate. In July, Senator Jorge Kajuru (PSB-GO) asked the Senate for similar bills, such as 5691/19, 21/20, and 2338/23, to be dealt with jointly. “We expect that Parliament will be able to evaluate these bills jointly in the second half of the year, using 2338/23 as a basis, to approve regulation for the country,” says Bioni.

“It is not possible to conceive a one-size-fits-all regulatory proposal that applies uniformly across all sectors,” observes the lawyer. According to Bioni, the solution is to create regulation that classifies AI systems by risk level, the approach used in the European legislation, around which Bill 2338/23 was structured.

The European Union AI Act proposes that AI systems be transparent, traceable, secure, and nondiscriminatory, and that they respect the privacy of citizens, although the means of achieving these objectives are not yet clear. The programs will also need to be supervised by human specialists to prevent important decisions from being made entirely by a machine. Applications will be classified into one of four risk categories: unacceptable, and therefore prohibited; high, subject to assessment; limited, under more flexible rules; and minimal, with no additional legal obligations beyond existing legislation (see box). Programs for driverless cars, for example, fall into the high-risk category.

Unacceptable-risk applications include AI programs that classify people based on behavior or that carry out predictive policing to prevent crimes. This is the case of algorithms such as Compas (Correctional Offender Management Profiling for Alternative Sanctions), used in the USA to predict the risk of reoffending. These programs have a prejudicial bias, flagging more Black people than white people as crime suspects. Their transparency is low: the application is provided by a private company, and its code is not open source.

So-called generative AIs, which learn to produce new text by analyzing the patterns people use to connect words, are now the subject of a new section in the bill following the repercussions of ChatGPT, launched in November 2022. They will be required to adopt transparency measures, making clear that their content was generated by an intelligent computer system, and will have to be programmed to prevent the creation of harmful or illegal content, such as instructions for manufacturing a bomb. Limited-risk programs, such as those that create synthetic images and content, will also need to comply with transparency requirements.

In Brazil, Bill 2338/23 follows a similar logic, with two risk levels: excessive, whose applications will be prohibited; and high, whose applications will require evaluation and monitoring before and during use. The former covers algorithms that exploit social vulnerabilities or that classify people for access by public authorities to products and services such as social and other benefits. High-risk applications include programs that make decisions in areas such as education (filtering access to educational and professional institutions), employment (ranking candidates for job vacancies), healthcare (carrying out medical diagnoses), and social security (granting benefits), among others. To prevent the classification system from being set in stone, and to enable it to function in a dynamic technological environment, a future oversight authority, provided for in the bill, will be able to reassess the risk of any given application.

“Bill 2338/23 is more comprehensive, detailing the categorization of risks and drawing on legislative trends on the theme, particularly those in the European Union; its treatment of risk levels offers an opportune degree of detail,” says lawyer Antonio Carlos Morato, of the USP School of Law.

“The European bill, with its differentiated levels of impact, is an interesting model on which to base the regulatory discussion in Brazil, but it needs to be borne in mind that the reality of those countries is very different from ours,” observes computer scientist Virgílio Almeida of the Federal University of Minas Gerais (UFMG), coordinator of the Innovation Center on Artificial Intelligence for Health (CIIA-Saúde), one of the Engineering Research Centers (CPE) funded by FAPESP. “In Brazil we have significant social inequality, and we need to think about public policies that evaluate and incentivize automated technologies in ways that promote the development of less qualified workers rather than replacing them.”

In a February article published in the scientific journal IEEE Internet Computing, Almeida and his coauthors propose a governance model known as coregulation. The government would set public directives and policies, with companies responsible for creating and following their own internal governance mechanisms. “Artificial intelligence technologies change very quickly, and it is difficult to cover all of these transformations with just one law,” he adds.

Fabio Gagliardi Cozman, coordinator of USP’s Artificial Intelligence Center, warns of the risk around creating rules that inhibit entrepreneurship: “Very restrictive regulation may hamper local innovation, leading to a need for technologies to be imported,” he observes.

Also concerned about the impacts of regulation on Brazilian industry, political scientist Fernando Filgueiras of the Federal University of Goiás (UFG) states: “The legislation needs to be associated with mechanisms to incentivize research and industry.” Filgueiras believes that without investment in Brazilian research and industry, large international corporations may be better structured to deal with possible sanctions, while small and medium-sized Brazilian companies with fewer resources may be left behind. The “Brazilian artificial intelligence strategy,” published in August 2021 by the Brazilian Ministry of Science, Technology, and Innovation (MCTI), which addresses ethical issues to guide government action in the area, could, in his opinion, supplement the rules to be formulated. However, as he observed in February 2023 in the journal Discover Artificial Intelligence, the government document is generic and does not make clear how the government will act in this area or how it intends to support research at universities and corporations. The MCTI did not respond to Pesquisa FAPESP’s requests for a statement.

Cristina Godoy, of USP Ribeirão Preto, observes that the European Union proposal includes an assessment of the impact of AI on small and medium-sized companies if the regulatory measures are approved, something not covered in the Brazilian document. “With this information in hand, governments can calculate how much they need to invest to support innovation,” she states.

The AI Act provides for so-called sandboxes, or regulatory test environments, in which start-ups can put their creations to the test without being subject to sanctions or fines. In Brazil, Bill 2338/23, tabled in the Senate, provides for similar experimental regulatory environments, overseen by a competent authority yet to be defined. “These spaces will not function satisfactorily without a strategic outlook on the budget and priority areas to be supported,” warns Godoy. Knowing which AI areas are priorities for investment, she says, would make the process more efficient, helping companies organize their fundraising and, consequently, test their products in sandbox environments. She draws attention to this gap in the national strategy in her Revista USP article.

Another regulatory challenge will be the oversight of public and private institutions. The Europeans look set to create a specific agency for this purpose. In Brazil, the bill provides for the constitution of a regulatory authority which, in principle, should oversee all areas to which AI may be applied and will likely be nominated by the Executive Branch. Specialists interviewed for this report envisage moves within the Brazilian National Data Protection Authority (ANPD), recently converted into an autonomous government agency, to absorb this function.

Godoy considers this a risky move. AI applications are deployed across many different sectors of the economy, which would make oversight complicated if concentrated in a single regulator. “It will be difficult for them to bring together all the necessary expertise, ranging from health to education,” she says.

Dora Kaufman, a specialist in the ethical and social impacts of AI at the Pontifical Catholic University of São Paulo (PUC-SP), says that the constitution of a national regulatory agency may be infeasible, and suggests that sector agencies may take on this mission: the banking sector could be overseen by the Brazilian Central Bank (BACEN) and the Brazilian Health Regulatory Agency (ANVISA) could take care of the health area.

“There is no federal regulation in the US, nor does this look to be imminent,” she comments. “Authority and responsibility for AI regulation and governance are distributed among federal agencies.” The two documents that guide the area, the White House’s “Blueprint for an AI Bill of Rights” and the “AI Risk Management Framework” of the US National Institute of Standards and Technology (NIST), are voluntary guidelines without the force of law.

Computer scientist André Carlos Ponce de Leon Carvalho, of the Institute of Mathematical and Computer Sciences at the University of São Paulo (ICMC-USP) in São Carlos, raises another question: “Would national regulation deal with the issues or will countries need international agreements, such as the one for nuclear energy?”

Specialists warn that any regulation will need a maturation period for parliamentarians to better familiarize themselves with the subject, and for other sectors of society to start participating in the debate. “Premature regulation may restrict innovation and not protect society,” highlights Kaufman. “The process is as important as the final result.” Among examples she cites the Brazilian Civil Rights Framework for the Internet, approved in 2014 after open discussions that commenced in 2009, and the European AI regulation process, whose public consultation began in April 2021.

Levels of AI risk in Europe and Brazil

EUROPEAN UNION
Artificial Intelligence Act (AI Act)

Unacceptable (prohibited)
— Algorithms that award social scores, classify citizens based on behavior, or carry out predictive policing

High risk: To be assessed before and after commercialization
— Systems for managing critical infrastructure (such as transportation), education, border control, or legal assistance

Limited risk: Must apply minimal transparency measures
— Programs that produce deepfakes and chatbots

Minimal risk: No additional legal obligations beyond existing legislation

Brazil
Bill 2338/2023

Prohibited
— Social classification programs or those capable of manipulating the behavior of vulnerable population groups

High risk: Must be assessed and monitored before and during use
— Programs for automatic classification of students, job candidates, credit applications or social security benefits, medical diagnoses, crime risks and criminal conduct, and driverless vehicles

Sources: European Parliament and Bill 2338/2023, Brazilian Senate

Proposals under analysis in Brazil

Bill 2338/23 does not mention general-use generative AI (the legislators did their work prior to the release of ChatGPT), nor AI used to create deepfakes. For this reason, it will probably undergo alterations.

Applications that generate hyper-realistic synthetic content, such as video or audio deepfakes, are of increasing concern due to their capacity to generate disinformation through false content, with the potential to adversely affect democratic election processes.

A recent commercial by a vehicle brand included a deepfake image of Elis Regina, who died in 1982, singing with her daughter Maria Rita. In July, the Brazilian National Advertising Self-Regulation Council (CONAR) opened an enquiry into rights to use Elis’s image. The controversy has extended to the National Congress: at least two bills (3592/23, in the Senate, and 3614/23, in the Chamber of Deputies) propose guidelines for the use of images and audios of people who have passed away via AI systems.

Lawyer Antonio Carlos Morato, of the USP School of Law, who conducts research into copyright and artificial intelligence, does not see the need for specific laws for this type of use: “There is no doubt that unauthorized uses can be avoided, as we already have the Federal Constitution, the Civil Code, and Copyright Law. Bill 3614/23, for example, is only intended to detail what already exists in the current Civil Code text.”

For Morato, the authorization given by Elis Regina’s children for the commercial was valid, since personality rights (which include image and voice) are already protected by the Federal Constitution and Civil Code, and may be defended by relatives up to the fourth degree after a person’s death.

Projects
1. Artificial Intelligence Center (nº 19/07665-4); Grant Mechanism Engineering Research Centers (CPE); Principal Investigator Fabio Gagliardi Cozman (USP); Investment R$7,102,736.09.
2. Regulation and funding of artificial intelligence in Brazil (nº 22/12747-2); Grant Mechanism Fellowship in Brazil; Principal Investigator Cristina Godoy Bernardo de Oliveira (USP); Investment R$44,287.20.
3. Center for Innovation in Artificial Intelligence for Medicine (CIIA-Saúde) (nº 20/09866-4); Grant Mechanism Engineering Research Centers (CPE); Principal Investigator Virgílio Augusto Fernandes Almeida (UFMG); Investment R$1,683,839.04.

Scientific articles
ALMEIDA, V. et al. On the development of AI governance frameworks. IEEE Internet Computing. Vol. 27, no. 1, pp. 70–4. Jan. 2023.
BRANDÃO, R. et al. Artificial intelligence, algorithmic transparency and public policies: The case of facial recognition technologies in the public transportation system of large Brazilian municipalities. BRACIS 2022. Lecture Notes in Computer Science. Intelligent Systems. Vol. 13653. Nov. 2022.
FILGUEIRAS, F. & JUNQUILHO, T. A. The Brazilian (non)perspective on national strategy for artificial intelligence. Discover Artificial Intelligence. Vol. 3, no. 7. Feb. 2023.
GODOY, C. B. O. Desafios da regulação do digital e da inteligência artificial no Brasil (Challenges of digital and artificial intelligence regulation in Brazil). Revista USP. No. 135. Oct. 2022.
MAGNO, M. E. da S. P. & BEZERRA, J. S. Vigilância negra: O dispositivo de reconhecimento facial e a disciplinaridade dos corpos (Black surveillance: The facial recognition apparatus and the disciplining of bodies). Novos Olhares. Vol. 9, no. 2, pp. 45–52. July 2020.
VIEIRA, L. M. A problemática da inteligência artificial e dos vieses algorítmicos: Caso Compas (The problem of artificial intelligence and algorithmic biases: The Compas case). 2019 Brazilian Technology Symposium.
