Until April next year, engineer and computer scientist Virgílio Augusto Fernandes Almeida will be at the Institute for Advanced Studies of the University of São Paulo (IEA-USP) studying the advantages and disadvantages of interactions between people and algorithms that capture and filter online information, select news articles, recommend movies, determine the quality of medical treatment for patients, and generally influence and shape human actions and social organization (see Pesquisa FAPESP issue nº 266).
Almeida is professor emeritus at the Department of Computer Science of the Federal University of Minas Gerais (UFMG) and an associate professor at the Berkman Klein Center for Internet & Society at Harvard University, USA. He has been concerned about algorithms for years—most recently working with colleagues in political science.
Field of expertise
Computer Science
Institution
Federal University of Minas Gerais (UFMG)
Educational background
Bachelor’s degree in electrical engineering, UFMG (1973); master’s degree in computer science, Pontifical Catholic University of Rio de Janeiro (1980); PhD in computer science, Vanderbilt University (1987)
Scientific output
Author of 170 scientific articles and coauthor of six books published in English and translated into Portuguese, Korean, and Russian
As Secretary of Information Technology Policy at the Brazilian Ministry of Science, Technology, and Innovations (MCTI) between 2011 and 2016 and coordinator of the Brazilian Internet Steering Committee (CGI), he helped draft the rules of the digital world in the country and oversaw the creation of the Civil Rights Framework for the Internet in 2014.
He has been writing about these topics for years in the Pensar pullout of the Estado de Minas newspaper and, since 2019, in Valor Econômico together with economist Francisco Gaetani of the Getulio Vargas Foundation. He often uses excerpts from the works of his favorite authors, such as Argentine writer Jorge Luis Borges (1899–1986), German writer Thomas Mann (1875–1955), and fellow Minas Gerais natives João Guimarães Rosa (1908–1967) and Carlos Drummond de Andrade (1902–1987).
In a video call from his home in the mountains of Nova Lima, part of the Belo Horizonte Metropolitan Area, Almeida shared his concerns and his ideas for addressing some of the problems we currently face, such as content moderation and the roles of government, businesses, and internet users. He is married to Rejane Maria, an engineer, and has two children, Pedro, 40, and André, 38. He has one grandson and another on the way.
In May, you published an article with political scientists Fernando Filgueiras of the Federal University of Goiás and Ricardo Mendonça of UFMG on governance of the digital world. What is it like working with people in the humanities?
It’s been great. I’m learning a lot. Multidisciplinary work is not simple. There are obstacles regarding language, the standards used in other fields, and prior knowledge, but you have to find a way to move forward. Together we’re writing a book for Oxford University Press on the impact of algorithms on society’s institutions.
Can the social sciences contribute to the governance of algorithms?
Yes, very much so, because algorithms and the technologies they control have a social impact. How do people react to them? How do they change their behavior as a result? The digital world is a public environment where people often show unfamiliar and uncivilized sides of themselves, especially in comment sections. It is a world that can easily be deceptive. I’ve been writing about it in Estado de Minas for many years, always trying to combine literature and computer technology to help people see other sides of the situation. In one article, I referred to a book by Argentine writer Bioy Casares [1914–1999], in which the narrator falls in love with a beautiful girl. It’s an impossible love, though: the girl isn’t real; she was created by a machine.
Another article you wrote, with Danilo Doneda in 2016, also addressed governance by and of algorithms.
That’s right. I met Danilo, an expert in personal data protection, when I was working in Brasília, and we started to discuss governance of and by algorithms. But what does this mean? Digital platforms are governed by algorithms; there are no people making those choices. Algorithms govern these platforms by dictating what to highlight based on the data and personal tastes they continuously collect. This is known as algorithmic governance, and it has expanded to other cases. In the financial sector, it is an algorithm that says whether or not a person can receive a loan. In transport apps, it is an algorithm that determines the route and price of the journey. In a world of almost eight billion people and countless problems to overcome, algorithms are essential to controlling air traffic and energy distribution. They are essential, but they also make decisions about people’s lives in ways that are not always considered fair. Potential problems include discrimination, exclusion, and injustice. The other side is the governance of algorithms: understanding how they work and requiring some kind of transparency. In the USA, several courts use a program called COMPAS to decide whether or not a defendant is entitled to parole. Neither the government nor the judges know the criteria used by the algorithm. The software’s creators do not share this information, claiming under US law that it is a trade secret.
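To make the opacity Almeida describes concrete, here is a minimal Python sketch of a hypothetical "governance by algorithms" scenario: a black-box rule decides on a loan, and the applicant sees only the verdict. Every name, weight, and threshold below is invented for illustration and is not drawn from any real system.

```python
# A minimal, hypothetical sketch of "governance by algorithms": an opaque
# scoring rule decides whether a loan is granted, and the applicant sees
# only the verdict, never the criteria. All weights are invented.

def loan_decision(income: float, debt: float, years_at_job: float) -> bool:
    """Black-box credit rule; its internals are never disclosed."""
    score = 0.5 * income - 0.8 * debt + 1.2 * years_at_job
    return score > 50.0  # the cutoff is as undisclosed as the weights

# The person being scored experiences only the outcome:
print("approved" if loan_decision(income=90.0, debt=40.0, years_at_job=3.0) else "denied")
```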
What ways of governing algorithms have been proposed?
The governments of several countries and society at large have discussed this issue. One point on which there is already some consensus is that algorithms need to be fair, transparent, and explainable. But these criteria are really difficult to apply, because an algorithm is made up of complex code and data that constantly change. Most use machine learning and vary depending on what they are learning. One suggestion in Europe is that algorithms must be explainable to any person who feels wronged by a decision. When Danilo and I wrote the article, we wanted to apply the ideas of internet governance to algorithms, establishing, for example, that they should follow rules predetermined by multisectoral commissions in each country.
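As a rough illustration of what "explainable" could mean in the simplest case, here is a sketch that reports each feature's contribution to a linear score, the kind of account a person wronged by a decision might be given. Real machine-learning systems are far harder to explain; the weights and names here are invented.

```python
# A minimal sketch of explainability for a simple linear scorer (like the
# hypothetical loan rule sketched earlier): report each feature's
# contribution to the decision. This only illustrates the principle.

WEIGHTS = {"income": 0.5, "debt": -0.8, "years_at_job": 1.2}
THRESHOLD = 50.0

def explain(applicant: dict) -> None:
    contributions = {name: WEIGHTS[name] * value for name, value in applicant.items()}
    score = sum(contributions.values())
    verdict = "approved" if score > THRESHOLD else "denied"
    print(f"score {score:.1f} vs threshold {THRESHOLD}: {verdict}")
    for feature, c in sorted(contributions.items(), key=lambda kv: kv[1]):
        print(f"  {feature:>12}: {c:+.1f}")  # what pushed the decision, and by how much

explain({"income": 90.0, "debt": 40.0, "years_at_job": 3.0})
```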
Did it work?
No, because the problem remains. Two years ago, a master’s student called Manoel Ribeiro, supervised jointly by me and my colleague Wagner Meira Jr., carried out a study that had a worldwide impact. He researched how people’s opinions are radicalized in YouTube groups of differing political perspectives. The study showed that a person could join a group as a political centrist and become radicalized to the point that they promoted white supremacy [see Pesquisa FAPESP issue nº 287]. But we weren’t able to identify the role of the algorithm. All of this is measured from the outside. We can’t see what’s behind it because companies don’t share how the algorithms work. This is very serious.
We have to stop the inequalities of the physical world from being mirrored online, but that is not what we are seeing
What else has your group studied?
Gabriel Magno, who completed his PhD in 2019, asked: do social and moral values migrate from the physical world to the online world? To answer this question, he used artificial intelligence [AI] to analyze 1.2 billion tweets from 50 countries, comparing them against the World Values Survey [WVS], a sociological research database. In some cases, viewpoints in the physical and online worlds coincide, but in others they differ due to access and gender restrictions. In the East, women have much less freedom to express themselves. In Brazil, some values coincide and others don’t. Argentina is very interesting: views on abortion measured by sociological surveys differ from the mostly progressive positions that appear on the internet. Magno also participated in a study with a former student, Camila Araújo, who created a bot that searched Google and Bing in 42 countries for images of beautiful women and ugly women. She then used AI to estimate the age, race, and ethnicity of the results. We were shocked by what we found.
What did they show?
In Nigeria and Kenya, the beauty standard was a young, blonde woman. This is significant because young people compare themselves to this standard to position themselves in the world. To see why this occurred, Camila and Gabriel began investigating where the pictures representing beautiful and ugly women came from. They found that the images could be grouped by language: English, Spanish, Portuguese, Chinese. In the English group, the images were dominated by the richest countries, such as the USA, Canada, Australia, and England. Former colonies in Africa that still speak English receive the same images in search results, but the images do not represent their populations. To find results compatible with local demographics, the search parameters had to be local rather than global.
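The grouping step described here can be pictured with a small sketch on invented records, each pairing the country where a search ran, the interface language, and the country the returned image came from; grouping by language shows which countries' imagery dominates each language bloc. The data below are toy values, not the study's.

```python
# A minimal sketch of grouping search results by language, on invented data.

from collections import Counter, defaultdict

results = [  # (country of search, interface language, image source country) -- toy values
    ("Nigeria", "en", "USA"), ("Kenya", "en", "UK"), ("Nigeria", "en", "USA"),
    ("Kenya", "en", "Australia"), ("Brazil", "pt", "Brazil"),
    ("Angola", "pt", "Brazil"), ("Mexico", "es", "Spain"),
]

by_language = defaultdict(Counter)
for _country, language, image_source in results:
    by_language[language][image_source] += 1

for language, sources in by_language.items():
    print(language, sources.most_common())  # e.g. the 'en' bloc dominated by US/UK imagery
```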
In other words, the way we collect the data affects the research result.
Yes. A recent line of research in the USA and England is dealing with exactly this: data colonialism. It’s a worrying topic because large companies need data from all over the world to train facial recognition algorithms and recommend websites or news articles, but the poorest countries don’t have the technology. They only have the data, which they give away without even knowing it. The reference to colonialism is because these big companies are now extracting information from these countries, rather than natural resources, to expand their economic power.
What discussions is your group having at Harvard University?
The Berkman Klein Center for Internet & Society was created over 20 years ago. It is a multidisciplinary center that works closely with the law school, but also involves engineering, computing, and medicine. Themes touched on by most of the affiliated researchers and groups include economic and social inequality and discrimination based on race, ethnicity, and sexuality. The difference between physical and digital territory is another hot topic. In most cases, governance is established by a few companies located in the Northern Hemisphere, especially in the USA. The question is, do their rules apply to the entire digital world? Should the rules be the same everywhere?
What’s your opinion?
Of course they shouldn’t. Cultures and habits differ. The inequality between rich and poor countries is enormous. One of my concerns is that the digital world is being governed not only by national governments, but also by companies that can monitor much more than these countries’ secret service agencies and can use the information for totalitarian ends. Another issue is that we have to stop the inequalities of the physical world from being mirrored online, but that is not what we are seeing. Here’s one example: American researchers analyzed millions of records on hospital admissions across the country. The algorithm that logged and referred the patients also defined their treatment cost limits based on their health insurance. The researchers discovered that when two people, one black and one white, were admitted to hospital with health problems of the same level of severity, the algorithm assigned a lower treatment budget to the black person. As a result, doctors had less to spend on treating black patients. Most significantly, the data from this healthcare planning program influenced the behavior of hospitals across the health system. The study demonstrated that there was discrimination, and that it could be corrected.
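The audit logic behind such a finding can be sketched in a few lines: hold severity constant and compare the average budget assigned to each group. The numbers below are fabricated for illustration and are not the study's data.

```python
# A minimal sketch of a disparity audit: group records by severity and
# compare the average budget assigned to each group. Toy numbers only.

from statistics import mean

records = [  # (severity on a 1-5 scale, group, assigned budget) -- invented
    (3, "white", 9000), (3, "black", 7200), (3, "white", 8800),
    (3, "black", 7000), (5, "white", 15000), (5, "black", 12100),
]

for severity in sorted({r[0] for r in records}):
    same_severity = [r for r in records if r[0] == severity]
    groups = sorted({r[1] for r in same_severity})
    averages = {g: mean(b for _s, gg, b in same_severity if gg == g) for g in groups}
    print(f"severity {severity}:",
          ", ".join(f"{g} avg={averages[g]:,.0f}" for g in groups))
```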
How?
These systems belong to companies, which of course have business objectives. But computing and engineering can play a role in identifying these social flaws, such as discrimination, showing society how these platforms operate and refuting the claims made by the companies. In 2015, then-President Dilma Rousseff was invited to visit the USA. I was part of her entourage as the MCTI’s Secretary of Information Technology Policy. Former US Secretary of State Condoleezza Rice arranged a meeting with Facebook’s Mark Zuckerberg, Google’s Eric Schmidt, and Uber’s Dara Khosrowshahi, among others, all in one room with the president in the middle, answering their questions. I was at the back, watching. None of these mega-entrepreneurs asked whether Brazil offered any incentives to attract their businesses to the country; instead they asked questions about operations and regulations, about what can and cannot be done. Companies, at least most of them, want rules, which we call regulation, because rules create security and can minimize problems. One problem that is very difficult to resolve is content moderation.
In what way?
Content moderation is local and depends on the language, culture, and political and economic prestige of each region. On Facebook, most of the moderation features are focused on five English-speaking countries; for other languages and countries, the system is much less elaborate. Moderation is also difficult because content accepted by some groups could be considered offensive by others. Could technology solve the problem? Partly, because millions of videos are uploaded every minute and hundreds of millions of posts are published every day. Content moderation requires algorithms and the support of an army of people to deal with situations where the algorithms are unsure whether or not something should be accepted. Algorithms can identify images of child abuse, for example, but in political and religious topics there is a higher level of nuance. It is important for laws to hold companies accountable for unwanted content and for companies to improve their systems. It is a complex problem of this new global digital society.
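The division of labor Almeida describes, algorithms for clear-cut cases and people for uncertain ones, can be sketched as a simple triage pipeline. The classifier and thresholds below are invented; real systems use far more sophisticated models.

```python
# A minimal sketch of human-in-the-loop moderation: an automated classifier
# resolves clear-cut cases and routes uncertain ones to human reviewers.

def violation_probability(post: str) -> float:
    """Hypothetical model scoring how likely a post violates policy."""
    flagged_terms = {"abuse", "spam"}
    hits = sum(term in post.lower() for term in flagged_terms)
    return min(1.0, 0.45 * hits)

def moderate(post: str) -> str:
    p = violation_probability(post)
    if p >= 0.9:
        return "removed automatically"
    if p <= 0.1:
        return "accepted automatically"
    return "queued for human review"  # the nuanced middle ground

for post in ["hello world", "buy now, pure spam", "spam and abuse everywhere"]:
    print(f"{moderate(post)} <- {post!r}")
```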
Are there any rules in this new digital order?
Some countries are formulating them. The European Union has created legislation called the Digital Services Act, which has yet to enter into force. Germany has defined what is classified as illegal content online; the country already had similar offline regulations due to Nazism. One of Europe’s fundamental principles is that offline rights and responsibilities must also apply online. In Brazil, Bill 2.630/20 [which proposes the Brazilian Law on Freedom, Accountability, and Transparency on the Internet] establishes these limits, but it is stuck in Congress. What I have seen is that crises often get things moving. The Civil Rights Framework for the Internet was passed after Edward Snowden leaked security information from the USA in 2013, and Brazil’s Personal Data Protection Act was created after the Cambridge Analytica case [a British company that collected information from up to 87 million Facebook users and used it to influence voter opinion in several countries in 2014]. The rules also have to establish limits, because if they simply define certain content as illegal, technology companies fearing fines or penalties may go too far and end up straying into political or economic censorship.
To govern the digital environment we have to bring together everyone involved: governments, businesses, and civil society
You worked at companies for 10 years before becoming a professor at UFMG in 1989. What did you do?
In the fourth and fifth years of my engineering degree I did an internship at the UFMG computing center. When I graduated in 1973, I applied for a job in the systems department at Petrobras, in Rio. Two years later I received an offer to return to Belo Horizonte, where I spent eight years working for the systems planning department at CEMIG [Minas Gerais State Electricity Company]. I had to understand how the systems worked and think of ways to improve how they functioned. Everything was kept away from people, so much so that the computer was even housed in a protective dome. They offered me the chance to do a master’s degree, so I took it. My advisor at PUC [Pontifical Catholic University] in Rio was Daniel Menasce. We became friends and have written six books together. My years at CEMIG really helped me to focus and pursue results. After finishing my master’s degree, I grew interested in new issues and started thinking about doing a PhD in the USA. My father said: “I think it’s insane for you to quit a good job and move to the USA with two young children.” I went anyway, and after many years I saw that my father had been right. Returning to Brazil was difficult because I had no formal employment relationship, so I spent two years as a scholarship recipient waiting for a position to open at UFMG, since we wanted to continue living in Belo Horizonte.
Tell us about your work with the government.
At the end of 2010 I received a call from Jorge Kalil, a scientist from São Paulo, who invited me and some other members of the Brazilian Academy of Sciences to dinner at his house to talk about science and technology. Senator Aloizio Mercadante, who would later become the Brazilian Minister of Science, Technology, and Innovation (MCTI), was also there. He wanted me to lead one of the MCTI’s five departments as Secretary of Information Technology Policy. I had never worked in government. I was worried, because Brasília is another world, but a friend of mine from the philosophy department gave me some interesting advice: “In relationships in Brasília, always stay in the dining room, never go into the kitchen. Keep a formal distance.” It was very useful. I spent five years in Brasília in the end.
How was it?
The role of my department was to formulate and monitor what at the time was called information technology policy; today it might be called digital policy. As secretary, I was also the head of Brazil’s Internet Steering Committee (CGI), created in 1995. At the CGI, I learned about the problems of internet governance and realized that it would be necessary to establish rules so that society could monitor how this digital territory functions and participate in managing it. The internet was starting to grow rapidly, and businesses and governments were starting to use it, but there were still no rules. Then something happened that placed greater importance on the department and led to me playing a more active role than I had expected.
What was that?
In 2013, Snowden released explosive revelations about the US government spying around the world. They even showed that President Dilma and Brazilian companies, such as Petrobras, had been spied on. The President formally appointed a committee to address the issue: “Our country needs to respond. We need to think about the rules for the internet,” she recommended. The Snowden case left the world worried, not knowing what to do about growing digital espionage. There was also discomfort with the fact that ICANN [Internet Corporation for Assigned Names and Numbers], the organization responsible for establishing domain names and internet addresses, was based in California and was therefore subject to local legislation, even though its influence was global. Brazil’s response was to highlight the need for a global meeting. At the end of 2013, while I was at a meeting in Seoul, South Korea, I received a call from MCTI Minister Marco Antonio Raupp [1938–2021]. “Go to Bali in Indonesia. We are getting things moving on arranging an international meeting in Brazil to discuss this issue of espionage and the future of the internet,” he said. I was involved in several meetings in London with the aim of setting up a major international meeting, which became known as NETmundial. It wasn’t a government meeting. Internet governance can be difficult to define because it’s not just about governments. Submarine cables and the entire internet infrastructure are owned by private telecommunications companies. The development of digital content and services belongs to society. The servers run free software. This is a world that does not belong to governments, although governments increasingly claim to govern it. It’s a multisectoral space involving civil society, the private sector, and academia, since it all started from military-funded academic projects in the USA. To govern it, you have to bring all these participants together. That was precisely the Brazilian CGI’s mission. There are 21 members: nine from the government, four from the private sector, four from NGOs [nongovernmental organizations], three from the academic community, and an internet expert appointed by the ministry. No sector alone has a majority of the votes.
How do you reach decisions?
Everything has to be negotiated. Negotiations take time, but once a consensus is reached, the decision is better accepted and for longer than if it were unilateral. During the five years that I led the committee, I avoided holding votes, because they separate people and create groups. It’s better if the process takes a little longer and identifies a common denominator. It’s very difficult, but with patience, it works. I was just the coordinator, trying to find a consensus. To make NETmundial viable in April 2014, the President worked with congressional leaders to pass the Civil Rights Framework for the Internet. The big question was so-called net neutrality. Companies didn’t want it, but the surrounding politics led Congress to approve it, and at the opening of the NETmundial meeting on April 23, 2014, the President sanctioned the Brazilian Civil Rights Framework for the Internet.
What is net neutrality?
Communications on the internet use protocols called TCP/IP [Transmission Control Protocol/Internet Protocol]. Content is broken up into small packets and sent across the network; each packet has an origin and a destination. Net neutrality is the principle that the companies transmitting these packets cannot treat them differently based on origin or destination. They have to be neutral; they cannot interfere. At the time this was important because it established that a telecommunications company cannot block Skype data, for example, even if Skype competes with its voice service. It also meant these companies could not give special treatment to one particular service or another.
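The principle can be illustrated with a toy forwarder: a neutral one uses origin and destination only for routing, while a non-neutral one throttles by origin, which is exactly what the rule forbids. The packet format and service names below are invented.

```python
# A minimal sketch of the neutrality principle with an invented packet format.

from typing import NamedTuple

class Packet(NamedTuple):
    origin: str
    destination: str
    payload: bytes

def neutral_forward(packet: Packet) -> str:
    # Origin and destination determine where the packet goes, never its priority.
    return f"forwarded {packet.origin} -> {packet.destination}"

def non_neutral_forward(packet: Packet) -> str:
    # What neutrality rules forbid: discriminating by who sent the packet.
    if packet.origin == "competing-voip-service":
        return "throttled"  # e.g., degrading Skype to protect a voice product
    return f"forwarded {packet.origin} -> {packet.destination}"

p = Packet("competing-voip-service", "user-42", b"voice data")
print(neutral_forward(p))      # forwarded competing-voip-service -> user-42
print(non_neutral_forward(p))  # throttled
```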
I want to identify other forms of collaboration that preserve individuality without allowing an algorithm to do everything
Have companies accepted net neutrality?
It was a difficult negotiation, but the Civil Rights Framework for the Internet gave companies a guarantee, because it established a stable and secure environment. It states that any content can only be removed with a court order. If it weren’t for this legislation, the confusion of the 2018 election could have been even greater, because politicians could have attempted to remove any content from the internet that they didn’t like. Brazil was one of the pioneers in net neutrality and multisectoral management of the internet. It earned us enormous respect. NETmundial was attended by 1,100 people from over 100 countries, including state ministers and 100 international journalists. The debates were live and there were 30 international hubs. Groups discussed the topics in separate rooms, amending texts on a screen that was projected for everyone to see what was being worked on. At the formal meetings, there was a large stage with four microphones: one for government representatives, one for the private sector, one for civil society, and one for the technical and academic community. After a representative from one group spoke, the next speaker could not be from the same group—they had to wait until the other three had all had a chance to speak first.
What was the result of the conference?
Two documents: one set out 10 principles for internet governance, and the other a roadmap for the future. They were approved by acclamation; only three countries, India, Cuba, and Russia, did not approve them. The prestige that Brazil gained from the initiative was massive. In 2017, I was invited to join an international commission of 25 people to discuss cyberspace security standards. This commission met in several countries over the course of two years, but never in Brazil because of changes in the government. We lost relevance.
In April you were named Oscar Sala Chair of the Institute of Advanced Studies at USP. What are your plans?
The topic I chose is human-algorithm interaction, because we are influenced by algorithms all the time. When you’re watching a movie on Netflix, the algorithm suddenly suggests something else, and often you accept and change what you’re watching. These so-called recommendation algorithms try to direct our behavior. There is often a herd effect, when the algorithm leads people in an unwanted direction, ignoring what they really want. We need to better understand this shrinking of the role of human beings and growth of the role of algorithms. There’s an interesting term that is relevant here: complacency, when people get used to things and simply follow what Google says. But is Google’s recommendation really what you should be looking at? I want to understand this interaction from a broader perspective, not just in terms of computing but in relation to the human sciences too. Another thing I want to do is identify other forms of collaboration that preserve individuality without allowing an algorithm to do everything.
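The herd effect Almeida mentions can be simulated in a few lines: a recommender that always pushes the most-viewed title feeds back on itself, so an early leader runs away with the audience regardless of individual taste. All numbers below are invented.

```python
# A minimal simulation of the herd effect in a popularity-based recommender.

import random

views = {"title_a": 100, "title_b": 99, "title_c": 98}

def recommend() -> str:
    return max(views, key=views.get)  # always suggest the current leader

random.seed(0)
for _ in range(1_000):
    if random.random() < 0.7:                 # most users accept the suggestion
        views[recommend()] += 1
    else:                                     # a few choose for themselves
        views[random.choice(list(views))] += 1

print(views)  # title_a's small head start becomes a landslide
```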