
Demi Getschko

Demi Getschko: An Internet builder

Demi Getschko was the first Brazilian to be elected to the Internet Hall of Fame, an honor granted by the Internet Society (ISoc), a non-governmental organization consisting of representatives from all over the world whose objective is to promote the evolution of the Internet. Getschko’s contribution was to ensure that the world computer network was a success in Brazil during its early years. He was in charge of the FAPESP Data Processing Center (DPC) at the Foundation’s headquarters in the Lapa neighborhood, São Paulo, in 1991 when, as he himself says, “the first Internet packets pinged.” It was Brazil’s first contact with the novelty that would transform many aspects of the lives of people and institutions. Through direct agreements with the administrators of US academic networks, Demi Getschko and the FAPESP DPC staff obtained control of the .br domain suffix corresponding to Brazil in web addresses and emails.

Age:
61
Specialty:
Computer Networks
Education:
University of São Paulo (USP) Polytechnic School (undergraduate through PhD)
Institutions:
Dot BR Information and Coordination Center (NIC.br) and Pontifical Catholic University (PUC-SP)

With the implementation of the Internet and its rapid expansion, which took place first in academia, Getschko, while head of the FAPESP DPC, coordinated operations for the National Research Network (RNP) that linked the major universities in Brazil. He also helped implement and manage the Academic Network of São Paulo (ANSP), the São Paulo university network provider. As a participant in this process, he has been a member of the Brazilian Internet Steering Committee (CGI) since September 1995. In 2005, he was invited to set up and preside over the Dot BR Information and Coordination Center (NIC.br), an entity which acts as the executive arm of the CGI and coordinates network services in Brazil. In recent years, he has actively participated in drafting the landmark civil Internet law, approved this year by the Brazilian Congress. Before heading NIC.br, he was also a member of the board of the Internet Corporation for Assigned Names and Numbers (ICANN) and, after leaving FAPESP in 1996, he was chief technology officer of the news agency Agência Estado and the Internet provider IG. An electrical engineer who completed all of his studies through the PhD level at the University of São Paulo (USP) Polytechnic School, Getschko is now a professor at Pontifical Catholic University (PUC-SP).

What was it like to be elected to the Internet Hall of Fame?
The Internet Society (ISoc) began electing people to its Hall of Fame three years ago. ISoc was formed in 1992 by Robert Kahn, Vint Cerf and Lyman Chapin (US pioneers in Internet technology) when the Internet was opened to the community beyond academia. ISoc decided to create this type of recognition. There are three distinct categories: pioneers, innovators and global connectors. The first includes those who made profound contributions to Internet technology and developed the protocols of the TCP/IP (Transmission Control Protocol/Internet Protocol) family. These include, for example, Vint Cerf, Robert Kahn, Jon Postel, Steve Crocker and others. Innovators are those who have built tools to operate over the basic structure of the Internet. They include researchers like Tim Berners-Lee, who created the web, an extremely important application on the Internet. The third category is that of the global connectors, who became involved with the spread of the network and supported the Internet in various locations around the world. It was in this third category that I was remembered.

Are you the only member from Latin America?
I was the second to be elected from Latin America. Last year, Ida Holz, from the University of the Republic in Montevideo, Uruguay, was also elected in the Global Connectors category. She is well known in the field because she participated in the creation of several academic networks. I was the first in Brazil and the second in Latin America. But let’s recognize something very important: designating only one person is absolutely unfair, because the work is always collective. Since it cannot elect a team, it chooses one or two members. So, I wanted to make clear here that no one did anything alone. And I’m one of the members of the team that brought academic networks to Brazil, and there were people from FAPESP, the RNP, the National Scientific Computing Laboratory (LNCC), the Federal University of Rio de Janeiro (UFRJ), and many more who set up academic connections in Brazil in the late 1980s. For some reason they ended up nominating me, perhaps because I have been in the field more or less constantly.

Are you part of the Internet Society?
The Internet Society (ISoc) is a non-governmental organization based in the United States, with chapters throughout the world. I am part of the Brazilian chapter, which has about 300 members. The main Internet Society is maintained with funds from registration of .org domains, which is operated by the Public Interest Registry (PIR). So, everything registered under .org generates resources that are allocated to ISoc. Similarly, .br suffix domain registration generates resources allocated to CGI and NIC. One of ISoc’s main activities is to coordinate the meetings of the Internet Engineering Task Force (IETF), the entity that generates Internet standards. The IETF is coordinated by the Internet Architecture Board (IAB), which maintains the orthodoxy of the Internet in order to observe and preserve the original principles of the network.

What are these original principles of the Internet?
The Internet was designed to be an open, single network. We hope that it will not fragment. When tensions in China, Russia or elsewhere increase, there are threats of fragmentation. The Internet is a cooperative network and the root of its names is unique. When you type in a name that ends in .com, for example, it is resolved in a unique way: there aren’t two ways of naming a device on the network. Additionally, another basic principle is that it needs to always be neutral between the two endpoints: the sender and the receiver. If you’re in Australia and I’m in Brazil, no one in the middle of the network would have the right to meddle in the packets and their content, in the services and protocols used. The function of the “middle” of the network is to carry information (packets) from one point to another in the network. It’s a big “dispatcher” of packets. The network never questions the merit of what it is carrying; it just forwards it on. Of course, over time, things appear along the way, such as deliberate attacks on sites, which may slightly affect the end-to-end concept, the original idea of the Internet. Another source of tension is the fact that the Internet represents a break with a number of pre-existing models. One is the traditional standard-generating model. The Internet does not have a formal process involving governments and large telecommunications companies, such as what occurs in the International Telecommunication Union (ITU). Instead, it is a process open to persons and entities from any area, whether academic, technical or commercial, that want to participate. The volunteers meet three times a year, always representing themselves, not institutions, to discuss and generate standards that continue to sustain and foster the network. Another feature is that, since it is based on the open standards TCP/IP, anyone is free to generate applications on this foundation without any type of license or permission: what is known as permissionless innovation. 
No one asked if they could launch Twitter or Facebook. Do you have an idea? Implement it and put it on the network. If it is a success, great, you can become a millionaire; if not, you had better think of another idea. These are typical characteristics of the Internet that do not exist in telephony or telecommunications.
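The permissionless-innovation idea described above can be made concrete with a small, self-contained sketch (not taken from the interview, and using an invented message): any user-space program can open a TCP/IP socket, define its own little protocol, and deploy it without asking anyone for a license. The network in between just delivers the bytes unchanged, which is the end-to-end principle in miniature.

```python
# Sketch: an "application" built on TCP/IP with no license or permission,
# here a trivial echo service on the loopback interface. The transport
# never inspects or alters the payload; it only forwards the packets.
import socket
import threading

def echo_server(sock: socket.socket) -> None:
    """Accept one connection and send the received bytes back untouched."""
    conn, _ = sock.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(data)  # the service defines its own "protocol": echo

def run_demo(message: bytes) -> bytes:
    """Start the server, send a message through TCP/IP, return the reply."""
    server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    server.bind(("127.0.0.1", 0))      # let the OS pick a free port
    server.listen(1)
    port = server.getsockname()[1]
    t = threading.Thread(target=echo_server, args=(server,))
    t.start()
    with socket.create_connection(("127.0.0.1", port)) as client:
        client.sendall(message)
        reply = client.recv(1024)
    t.join()
    server.close()
    return reply                        # arrives exactly as it was sent
```

Nothing in the middle questioned the merit of the payload; a new service was "launched" simply by running it, which is exactly what the early web, Twitter or Facebook did at a much larger scale.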

There are proposals in the United States to allow telecommunications companies to charge more, due to the high growth of video traffic on the network. What do you think of this, and what is the situation like in Brazil?
It is important to manage traffic for the benefit of all. One way to do this is to deploy Traffic Exchange Points (TEPs), or Internet Exchange Points (IXPs). In Brazil, the most important exchange point is in São Paulo, which has already reached a peak flow of 500 gigabits per second, a significant number. In the ranking of countries that use exchange points the most, we are fourth or fifth. What we see today is a change in the traffic profile of the São Paulo exchange, and São Paulo is a good representative sample of Brazil. Before, traffic peaked at 11:00, fell slightly at lunchtime, rose again at 14:00, reached its maximum at 16:00 and then declined until dawn. This changed about six months ago. The peak at 11:30 remains, with a dip during lunchtime. Traffic increases at 14:00 and rises until 16:30, when it starts to fall, but then, around 18:30 or 19:00, it begins to rise again and reaches the daily peak at 22:00 or 23:00.

Why?
Because traffic is increasingly driven by entertainment applications. People are using more bandwidth at home than in the office because they watch movies at home, not at work. And Sunday, which was a very low-traffic day, now has more traffic than Monday or Tuesday. This means a change in the traffic profile towards entertainment, not just commerce, information, services, etc. A movie uses much more bandwidth than access to a bank account, for example. The discussion on this is complicated, and also includes the debate on neutrality. As an aside, at the end of April we held NETmundial in São Paulo, an event that generated an important final document. I was part of the team that negotiated and drew up this document, in a consensus-seeking process. Consensus is something that, in theory, is not unacceptable to any participant—it might displease each one a little, but equally. In this consensus document, the term “neutrality” does not appear, but the important “end-to-end” concept was preserved: no middleman can interfere with the data packets traveling over the network. Why doesn’t neutrality appear? Because this word carries a heavy semantic load today. What is meant by neutrality in the United States is not the same as in Europe or India. It is difficult to define, and someone will always say they do not accept someone else’s definition, even without knowing what it is. Let me give you an example. Neutrality is easily understood when dealing with telecommunications, but this is not the case for net neutrality, where there are numerous layers and contexts in which neutrality is something to be upheld. Look at what happens, for example, with a cable TV subscription. If a new channel that I do not subscribe to appears tomorrow, I could be unaware of its existence, even if it is a channel with content that interests me. For example, Twitter appeared on the Internet a short time ago.
Everyone could learn about this new service and adopt it or not. There are no service subscriptions on the Internet; you can access everything on it, unlike the world of cable television, which is a “walled garden.” This is not the model we would like for the Internet, so we strive for neutrality: the network must be open to any innovation or service, and such things should be available to all users. All of us should have access to the full Internet experience. No one can say that a person may only watch videos on YouTube, or only access e-mail online. A “walled garden” limits navigation to, at most, what already exists: if something new appears, users may not even be aware of it. This defeats neutrality. In short, we need to think of neutrality as something qualitative. We should not distinguish between different types of content or services. Quantitatively, if I want more bandwidth, I have to pay more. If I want 10 megabits per second, it will be more expensive than 1 megabit. However, no matter what your bandwidth is, 1 or 10 megabits, you should still have access to the full web, without blocking or “walls.”

And what about the Brazilian Internet legislation that was approved?
The realization that legislation was needed to govern the Internet in Brazil began with the discussion and approval of the CGI’s 10 commandments. The legislation was under discussion for a long time, with several public hearings and more than 2,000 individual contributions before it reached its final format. A draft law was prepared and discussed with a great deal of input and, basically, a search for consensus. Alessandro Molon (PT-RJ), the Congressman who presented the draft law, worked hard to ensure that the law would pass and, at every stage, fought to maintain the legislation’s three fundamental pillars, based on the CGI’s 10 commandments: neutrality, user privacy and proper accountability of the value chain.

What do you mean by proper accountability of the value chain?
When determining who is responsible for some abuse on the network, there is always the tendency to take the easiest or most visible route. For example, say that there is a problematic video on YouTube—this occurred, for example, with a beach video of the artist Daniella Cicarelli about seven years ago. Someone was offended by the video and filed a lawsuit to get it removed. I will not get into the merits of the video in question, whether it was good or bad, but it does not seem reasonable to take all of YouTube off-line because of one specific video, yet that was what a judge decided to do at the time. So Cicarelli’s video was no longer accessible, but a huge number of other videos were also unavailable, and they had nothing to do with the alleged abuse. Who is responsible in the case of this video? It does not seem to have been the video provider [YouTube], but rather the person who made the video. If someone is going to be held responsible, it should be the person who committed the abuse, and not the middleman. The messenger is not responsible for the message. If I receive a letter that offends me, I will not blame the mailman. You could even ask YouTube to take this particular video off the air because the courts considered it inappropriate and, if technically feasible, the provider has to remove it to obey the court order. But if the provider were held responsible automatically for everything it hosts, we could reach a situation in which, if there were a page that displeased someone and that person held the provider accountable, the provider, when notified of the complaint, would certainly remove whatever caused it, for fear of being sued, even if the page content was nothing unusual. This would likely create an environment of self-censorship. So we need to hold the true author accountable, to avoid the growth of the specter of self-censorship.

What is the role of the Internet Corporation for Assigned Names and Numbers (ICANN)? Is this body connected to the United States government?
ICANN is a nonprofit organization, based in California, whose board is made up of 16 people from all over the world, but it has nothing to do with Internet traffic. Members of the ICANN board come from different sectors, just like those of the CGI. Three Brazilians have been on the ICANN board: Ivan Moura Campos, Vanda Scartezini and me. I was elected to the board for five years by the ccNSO [Country Code Names Supporting Organization], an organization linked to the country-code domains. ICANN also has some weaknesses. One shortcoming, for example, is that it is not legally international, since it is governed by California law. If some judge there decides something strange based on local law, it could affect the Internet. What ICANN takes care of is quite limited, but important: the root of the system responsible for translating a domain name into an IP number. Every address ending in .br, for example, needs to be converted into a number. But someone needs to manage the list and explain how to get to .br so that, later, the entity responsible for .br, in this case NIC.br, completes the translation of, for example, usp.br or fapesp.br. Thus, the root of this “tree” for translating names into numbers is maintained by ICANN. In addition, ICANN has another critical task, which is to distribute IP numbers to regional bodies known as Regional Internet Registries (RIRs), which then distribute them to institutions and end users. Here in Brazil, since 1994, we have received IP numbers from LACNIC and distributed them throughout Brazil autonomously.
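The delegation "tree" described above, from the root, to the registry for .br, to the name server that knows a specific host, can be illustrated with a toy sketch. Everything here is invented for the example (the tables, server names and IP addresses are not real data); an actual resolver walks the same chain by querying real name servers, as specified in the DNS standards.

```python
# Toy model of DNS delegation: the root knows who answers for ".br",
# the ".br" registry knows who answers for "usp.br", and that name
# server knows the host's address. All entries below are invented.
ROOT = {"br": "br-registry"}

ZONES = {
    "br-registry": {                      # role played by NIC.br for .br
        "usp.br": "usp-nameserver",
        "fapesp.br": "fapesp-nameserver",
    },
    "usp-nameserver": {"www.usp.br": "192.0.2.10"},       # invented IP
    "fapesp-nameserver": {"www.fapesp.br": "192.0.2.20"}, # invented IP
}

def resolve(name: str) -> str:
    """Follow the delegation chain from the root down to an IP address."""
    tld = name.rsplit(".", 1)[-1]          # e.g. "br"
    registry = ROOT[tld]                   # root: who handles ".br"?
    domain = ".".join(name.split(".")[-2:])  # e.g. "usp.br"
    nameserver = ZONES[registry][domain]   # registry: who handles "usp.br"?
    return ZONES[nameserver][name]         # that server knows the host
```

The point of the sketch is the division of labor: ICANN maintains only the top of the tree, while each delegated registry (NIC.br for .br) answers for its own branch.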

And does all the traffic on the Internet pass through the United States?
Well, it depends more on geography and global telecommunications engineering than on ICANN, which, as I said, works with names and numbers. Traffic has to do with the location of submarine optical fiber and the large data switching centers. Brazilian fibers, for example, mostly connect to the United States, and from there to others in Europe, Asia and Africa. This is the result of telecommunications engineering, not of the Internet, which just uses these cables as a medium. Because of this, the United States ends up being a very important traffic center, topologically, and if it uses this to spy on the traffic, its work is made easier because most of the traffic passes through there.

And is this misconception common?
Yes, but the confusion can also be deliberate, because interests are involved. There are privileged points in the network where a monitor would catch almost everything that passes. It would be like placing cameras and sensors in the Praça da Sé subway station in downtown São Paulo—everyone passes through there. If we monitored that station fully, we would be monitoring a large portion of all the traffic passing through the subway system. Of course, illegal monitoring of telecommunications and the Internet is deplorable, and everyone should be against it, but this is not the fault of the Internet itself, which nonetheless takes the blame. What would be the fault of the Internet? A leak of e-mail that was stored in some system is a typical case related directly to the Internet. But the cases denounced by Snowden [Edward Snowden, a former analyst at the United States Central Intelligence Agency who revealed cases of espionage by the American government] were either taps on submarine cables and, therefore, telecommunications, or bugs on cellular telephones, which is also telecommunications. The Internet became part of this unintentionally, and is paying for a problem that was not its fault.

And Julian Assange, from Wikileaks [website that revealed secret United States documents]?
Those were also leaks, mainly of telegrams and from submarine cables, in other words, telecommunications. The information is collected from somewhere and spreads throughout the network. If there were a leak of email, for example, it would be an Internet problem. In China, some time ago, the government wanted to find out who the owners of some blogs were, and some companies, including American companies, collaborated; the government uncovered some activists. This is certainly a fault that can be attributed to the Internet, and clearly it is not good. So I am not saying that the Internet is never to blame, sometimes it is, but only in the proportion that concerns it.

I would also like you to talk a bit about the Brazilian Internet Steering Committee. Was it an example for other countries?
Yes. Brazil was very fortunate to create a light, multistakeholder entity like the CGI. We continue to receive compliments and citations in several places. When the president of ICANN visits a country, he always praises the Brazilian model and suggests they imitate it. The CGI was established in 1995, underwent some reformulations, and is currently in the configuration defined by decree in 2003.

What reformulations?
The composition of the CGI has changed slightly, both in terms of the number of members and of representation. In the current configuration, in place since 2003, we have 21 members: nine from the government, 11 from civil society, elected by their respective segments, and one member with recognized expertise in Internet affairs. The nine government members do not have a term of office; they remain until another representative is appointed by the corresponding minister. Sometimes the minister himself is the representative on the CGI. The 11 elected directly by their communities have three-year terms. There are three seats for academia, four for the third sector, and four for business, with the last four distributed as follows: one for business users, one for access and service providers, one for infrastructure providers and one for software- and hardware-related businesses. Note that the government does not have a majority in the CGI. The CGI coordinator, for historical reasons since its creation, has always been the representative nominated by the Ministry of Science, Technology and Innovation. In theory we could have a situation in which 12 votes oppose 9, but in terms of the Internet and consensus, this would not be good. There has never been a vote in which a slight majority won. Whenever possible, voting is replaced with consensus. We rarely vote, and when we do, the result is 20 to 1, or 19 to 2, for example. Another important aspect is that the CGI has no enforcement or regulatory power. It generates good standards, implements measures, holds courses in specific fields and takes action in favor of the Internet in Brazil.

Does TCP/IP continue to be strong, despite all the changes? Has your vision of this remained the same?
Yes. For a long time, TCP/IP, Ethernet and other standards have been the only practical option, and there has been no discussion. Before that, in the 1980s, there was a plethora of options, and there were long discussions before choosing the standard to be used in each case. Not to mention that the options were tied to manufacturers: for networks, IBM users used SNA and Token Ring, Digital users used DECnet, and so on. All of this consolidated around a single dominant standard, which is still TCP/IP. The discussion of the 1980s no longer occurs today; nobody talks about long-range networks without automatically referring to TCP/IP, and there are no local networks that do not use Ethernet. Of course, the people who research protocols will not let everything remain frozen for another 30 years. TCP/IP has shown itself to be magnificently flexible, scaling from kilobits to gigabits and terabits per second. It is still advancing and still has vitality, but there is no guarantee that it will continue like that forever. Standards wear out and are replaced or updated. Today there is great pressure from the field of network research to develop and test alternatives, which is always very healthy. If an alternative is good, it ends up replacing what exists. If not, nothing happens. Today, pragmatically, we do not have a viable commercial alternative to TCP/IP, but maybe five years from now things will be different. People are beginning to get restless and want to change everything.

You were the head of the DPC when FAPESP made the first connection to the Internet from Brazil. How did you come to work for the Foundation?
I entered the Polytechnic School in 1971 to study electrical engineering and, before the end of the year, became an intern at the Electronic Computer Center (CCE). At the Polytechnic School I majored in electronics with an emphasis on telecommunications, but I always worked in digital systems. In 1976, after I had graduated, Professor Geraldo Lino de Campos of the Polytechnic School was the director of the CCE, and FAPESP had installed a computer, experimentally, in a two-story house on Pirajussara Street, near the “Rei das Batidas” bar by the main entrance to USP. It was a Burroughs 1726, an excellent machine with a very interesting operating system. There, Geraldo developed the Sirius system, which would manage FAPESP grants and research. Since I was also from the CCE, he invited me to participate in the development of Sirius. I would work there at night. I began going to the house on Pirajussara Street three times a week, turning on the Burroughs and writing some of the programs that made up Sirius. Around that time I met Professor Oscar Sala, a physicist and member of the FAPESP Board. He was the one who fought to bring computers to the Foundation and who had struggled to get the B1726. At that time FAPESP was in a building on Avenida Paulista and managed all of its scholarships and grants on paper, manually.

How did the information arrive at Pirajussara Street?
On paper or on tape; it was not yet on-line. We were still testing the system and received data sets from the main office on paper. FAPESP had finished building the current building in the Lapa neighborhood and intended to set up a data center there. I still remember the day on which we moved the equipment from Pirajussara Street to the new headquarters. The B1726 was transported in a half-open truck. I went with it, praying that it would not rain… If it had rained, the B1726 could have been destroyed. Luckily, the day was sunny and everything turned out well: we arrived at Pio XI Street intact. With the computer up and running at the new headquarters, it ceased to be experimental, and FAPESP needed a permanent, stable team. Since I was associated with the CCE, where I worked full-time, that was the end of my participation in the Foundation’s computerization initiative. Vitor Mammana de Barros, an engineer who had left the CCE, took over the FAPESP data center. He worked as IT manager at FAPESP, and I still had some contact with him to adjust the programs we had developed at Pirajussara Street. In 1985, when the CCE underwent a major restructuring, generating general uncertainty, Professor Alberto Carvalho e Silva, who was president of the FAPESP Executive Board, called me in for a chat and said that Vitor would be returning to the CCE, which was being restructured. He asked me if I would like to take over the data center. I knew the machine and Sirius well, and I was very interested. Additionally, I had finished my master’s degree at the Polytechnic School in 1982, and I liked the idea of going to FAPESP because it would allow me to continue working on my doctorate, in addition to working on something I was already involved with and liked. So I decided to work for the Foundation and look after the small team of three or four analysts responsible for computerizing the internal administrative processes and grant awards for researchers.

So it was at that time that the academic community began to use email?
Yes. The physicists, for example, did their master’s and PhDs abroad and wanted to maintain contact with researchers in other countries. Outside Brazil, e-mail was already being used extensively, but we did not yet have it here. So we began to research how to bring it to Brazil. The demand came not only from USP researchers, but also from those at Unicamp and Unesp. Professor Sala decided that, if so many people needed it, it would be best if FAPESP assumed the role of providing the service, and that we should try to find a solution. I called Alberto Gomide, a brilliant software professional who had already worked at the CCE and was then at Unesp. There were also some others, such as Joseph Moussa, a mathematician, Vilson Sarto, an engineer, and others I cannot remember. Sala had excellent contacts at Fermilab, a high-energy physics laboratory in Batavia, near Chicago. They agreed to connect with us, because, after all, we needed to connect with someone. In 1987, at a meeting at the Polytechnic School on academic networks, we discovered that there were also other groups in Brazil trying to establish connections with international academic networks. The people at that meeting included Michael Stanton, from PUC-RJ, Tadao Takahashi, from the CNPq, and the person who would lead the future RNP, Paulo Aguiar, from UFRJ. We already had some experience with networks because we had set up the first phase of a USP network, a network of Burroughs 6700 computer terminals at the university. At that meeting in 1987, we saw that both FAPESP and the LNCC were trying to establish an international connection, and both had chosen to connect to a very simple network of which researchers were very fond at the time: the Bitnet. There was also a proposal to create a national network, which would become the future RNP, but we did not yet know what standards would be used.

Was that before PCs?
PCs were beginning to spread, but they were not connected to a broad network, just local networks. We had some at FAPESP. But, returning to the Bitnet, the LNCC connected to it in September 1988, one month before the Foundation did. The LNCC’s connection was with the University of Maryland, and ours was with Fermilab, but we always helped each other out. When we connected, we asked to connect five computers: USP, IPT, Unicamp, FAPESP and Unesp. Then the Bitnet managers in the United States told us that connecting five new nodes to this network, and all in Brazil, was more like connecting a new sub-network.
As a participant in this process, he has been a member of the Brazilian Internet Steering Committee (CGI) since September 1995. In 2005, he was invited to set up and preside over the Dot BR Information and Coordination Center (NIC.br), an entity which acts as the executive arm of the CGI and coordinates network services in Brazil. In recent years, he has actively participated in drafting the landmark civil Internet law, approved this year by the Brazilian Congress. Before heading NIC.br, he was also a member of the board of the Internet Corporation for Assigned Names and Numbers (ICANN) and, after leaving FAPESP in 1996, he was chief technology officer of the news agency Agência Estado and Internet provider IG. An electrical engineer who completed all of his studies through the PhD level at the University of São Paulo (USP) Polytechnic School, Getschko is now a professor at Pontifical Catholic University (PUC-SP). What was it like to be elected to the Internet Hall of Fame? The Internet Society (ISoc) began electing people to its Hall of Fame three years ago. ISoc was formed in 1992 by Robert Kahn, Vint Cerf and Lyman Chapin (US pioneers in Internet technology) when the Internet was opened to the community beyond academia. ISoc decided to create this type of recognition. There are three distinct categories: pioneers, innovators and global connectors. The first includes those who made profound contributions to Internet technology and developed the protocols of the TCP/IP (Transmission Control Protocol/Internet Protocol) family. These include, for example, Vint Cerf, Robert Kahn, Jon Postel, Steve Crocker and others. Innovators are those who have built tools to operate over the basic structure of the Internet. They include researchers like Tim Berners-Lee, who created the web, an extremely important application on the Internet. 
The third category is that of the global connectors, who became involved with the spread of the network and supported the Internet in various locations around the world. It was in this third category that I was remembered.

Are you the only member from Latin America?

I was the second to be elected from Latin America. Last year, Ida Holz, from the University of the Republic in Montevideo, Uruguay, was also elected in the Global Connectors category. She is well known in the field because she participated in the creation of several academic networks. I was the first in Brazil and the second in Latin America. But let's recognize something very important: designating only one person is absolutely unfair, because the work is always collective. Since ISoc cannot elect a team, it chooses one or two of its members. So, I want to make clear here that no one did anything alone. I am one of the members of the team that brought academic networks to Brazil, and there were people from FAPESP, the RNP, the National Scientific Computing Laboratory (LNCC), the Federal University of Rio de Janeiro (UFRJ), and many more who set up academic connections in Brazil in the late 1980s. For some reason they ended up nominating me, perhaps because I have been in the field more or less constantly.

Are you part of the Internet Society?

The Internet Society (ISoc) is a non-governmental organization based in the United States, with chapters throughout the world. I am part of the Brazilian chapter, which has about 300 members. The main Internet Society is maintained with funds from the registration of .org domains, which is operated by the Public Interest Registry (PIR). So, everything registered under .org generates resources that are allocated to ISoc. Similarly, .br domain registration generates resources allocated to the CGI and NIC.br. One of ISoc's main activities is to coordinate the meetings of the Internet Engineering Task Force (IETF), the entity that generates Internet standards.
The IETF is coordinated by the Internet Architecture Board (IAB), which maintains the orthodoxy of the Internet, observing and preserving the original principles of the network.

What are these original principles of the Internet?

The Internet was designed to be an open, single network. We hope that it will not fragment. When tensions in China, Russia or elsewhere increase, there are threats of fragmentation. The Internet is a cooperative network and the root of its names is unique. When you type in a name that ends in .com, for example, it is resolved in a unique way: there aren't two ways of naming a device on the network. Another basic principle is that the network must always be neutral between the two endpoints: the sender and the receiver. If you're in Australia and I'm in Brazil, no one in the middle of the network has the right to meddle in the packets and their content, or in the services and protocols used. The function of the "middle" of the network is to carry information (packets) from one point to another: it is a big "dispatcher" of packets. The network never questions the merit of what it is carrying; it just forwards it on. Of course, over time, things appear along the way, such as deliberate attacks on sites, which may slightly affect the end-to-end concept, the original idea of the Internet. Another source of tension is the fact that the Internet represents a break with a number of pre-existing models. One is the traditional standard-generating model. The Internet does not have a formal process involving governments and large telecommunications companies, such as what occurs in the International Telecommunication Union (ITU). Instead, it is a process open to persons and entities from any area, whether academic, technical or commercial, that want to participate.
The volunteers meet three times a year, always representing themselves, not institutions, to discuss and generate the standards that continue to sustain and foster the network. Another feature is that, since the Internet is based on the open TCP/IP standards, anyone is free to build applications on this foundation without any type of license or permission: what is known as permissionless innovation. No one asked if they could launch Twitter or Facebook. Do you have an idea? Implement it and put it on the network. If it is a success, great, you can become a millionaire; if not, you had better think of another idea. These are typical characteristics of the Internet that do not exist in telephony or telecommunications.

There are proposals in the United States to increase the telecommunication companies' share of the revenue, due to the high growth of video traffic on the network. What do you think of this, and what is the situation like in Brazil?

It is important to manage traffic for the benefit of all. One way to do this is to deploy Traffic Exchange Points (TEPs), or Internet Exchange Points (IXPs). In Brazil, the most important exchange point is in São Paulo, which has already reached a peak flow of 500 gigabits per second, a significant number. In the ranking of countries that most use exchange points to exchange traffic, we are fourth or fifth. What we see today is a change in the São Paulo exchange point's traffic profile, and São Paulo is a good representative sample for Brazil. Before, we had a peak at 11:00; traffic fell slightly at lunchtime, rose again at 14:00, reached its maximum at 16:00 and then began to fall until dawn. This changed about six months ago. The morning peak remains, with a dip during lunchtime; traffic increases at 14:00 and rises until 16:30, when it starts to fall, but then around 18:30 or 19:00 it begins to rise again and reaches the daily peak at 22:00 or 23:00. Why? Because traffic is increasingly driven by entertainment applications.
People are using more bandwidth at home than in the office because they watch movies at home, not at work. And Sunday, which was a very low traffic day, now has more traffic than Monday or Tuesday. This means a change in the traffic profile towards entertainment, not just commerce, information, services, etc. A movie uses much more bandwidth than access to a bank account, for example. The discussion on this is complicated, and also includes the debate on neutrality. As an aside, at the end of April we held NETmundial in São Paulo, an event that generated an important final document. I was part of the team that negotiated and drew up this document, in a consensus-seeking process. A consensus is something that, in theory, is not unacceptable to any participant: it might displease each one a little, but equally. In this consensus document, the term "neutrality" does not appear, but the important "end-to-end" concept was preserved: no middleman may interfere with the data packets traveling over the network.

Why doesn't neutrality appear?

Because this word is very semantically loaded today. What is meant by neutrality in the United States is not the same as in Europe or India. It is difficult to define, and someone will always say that he does not accept another's definition, even without knowing what it is. Let me give you an example. Neutrality is easily understood when dealing with telecommunications, but this is not the case when it comes to net neutrality, where there are numerous layers and contexts in which neutrality is something to be upheld. Look what happens, for example, with a cable TV subscription. If a new channel that I do not subscribe to appears tomorrow, I could be unaware of its existence, even if it is a channel with content that interests me. Twitter, by contrast, appeared on the Internet a short time ago, and everyone could learn about this new service and adopt it or not.
There are no service subscriptions on the Internet: you can access everything on it, unlike the world of cable television, which is a "walled garden." That is not the model we would like for the Internet, so we strive for neutrality: the network must be open to any innovation or service, and such things should be available to all users. All of us should have access to the full Internet experience. No one can say that a person may only watch videos on YouTube, or only access e-mail online. A "walled garden" limits navigation to what already exists: if something new appears, users may not even be aware of it. This defeats neutrality. In short, we need to think of neutrality as something qualitative: we shouldn't distinguish between different types of content or services. Quantitatively, if I want more bandwidth, I have to pay more. If I want 10 megabits per second, it will be more expensive than 1 megabit. However, whatever your bandwidth, 1 or 10 megabits, you should still have access to the full web, without blocking or "walls."

And what about the Brazilian Internet legislation that was approved?

The concept and the realization of the need for legislation to govern the Internet in Brazil began with the discussion and approval of the CGI's 10 commandments. The legislation was under discussion for a long time, with several public hearings and more than 2,000 individual contributions before reaching its final format. A draft law was prepared and discussed with a lot of input and, basically, a search for consensus. Alessandro Molon (PT-RJ), the Congressman who presented the draft law, worked hard to ensure that the law would pass and, at every stage, fought to maintain the legislation's three fundamental pillars, based on the CGI's 10 commandments: neutrality, user privacy and proper accountability of the value chain.

What do you mean by proper accountability of the value chain?
When determining who is responsible for some abuse on the network, there is always the tendency to take the easiest or most visible route. For example, say that there is a problematic video on YouTube. This occurred about seven years ago with a beach video of an artist, Daniella Cicarelli. Someone was offended by the video and filed a lawsuit to get it removed. I will not get into the merits of the video in question, whether it was good or bad, but it does not seem reasonable to take all of YouTube off-line because of one specific video; yet that was what a judge decided to do at the time. So, Cicarelli's video was no longer accessible, but a huge number of other videos were also unavailable, and they had nothing to do with the alleged abuse. Who is responsible in the case of this video by Cicarelli? It does not seem to have been the video provider [YouTube], but rather the person who made the video. If someone is going to be held responsible, it should be the person who committed the abuse, and not the middleman. The messenger is not responsible for the message. If I receive a letter that offends me, I will not blame the mailman. You could even ask YouTube to take this particular video off the air, because the courts considered it inappropriate and, if technically feasible, the provider has to remove it to obey the court order. But if the provider were held automatically responsible for everything it hosts, we could reach a situation in which, if there were a page that displeased someone, and that person held the provider accountable, the provider, when notified of the complaint, would certainly remove whatever caused it, for fear of being sued, even if the page content was nothing unusual. This would likely create an environment of self-censorship. So we need to hold the true author accountable, to avoid the growth of the specter of self-censorship.
What is the role of the Internet Corporation for Assigned Names and Numbers (ICANN)? Is this body connected to the United States government?

ICANN is a nonprofit organization, based in California, whose board is made up of 16 people from all over the world, but it has nothing to do with Internet traffic. Members of the ICANN board come from different sectors, as is the case with the CGI. Three Brazilians have been on the ICANN board: Ivan Moura Campos, Vanda Scartezini and me. I was elected to the board for five years by the ccNSO [Country Code Names Supporting Organization], an organization linked to the country-code domains. ICANN also has some weaknesses: for example, it is not legally international, as it is governed by California law. If some judge there decides something strange based on local law, it could affect the Internet. ICANN takes care of something quite limited, but important: the root "phonebook" responsible for translating a domain name into an IP number. Every address ending in .br, for example, needs to be converted into a number. But someone needs to manage the list and explain how to get to .br so that, later, the entity responsible for .br, in this case NIC.br, completes the translation of, for example, usp.br or fapesp.br. Thus, the root of this "tree" for translating names into numbers is cared for by ICANN. In addition, ICANN has another critical task, which is to distribute IP numbers to regional bodies known as Regional Internet Registries (RIRs), which then distribute them to institutions and end users. Here in Brazil, since 1994, we receive IP numbers from LACNIC and distribute them throughout Brazil, autonomously.

And does all the traffic on the Internet pass through the United States?

Well, that depends more on geography and global telecommunications engineering than on ICANN, which, as I said, works with names and numbers.
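The delegation chain Getschko describes for names (the root knows only who is responsible for .br; NIC.br completes the translation) can be sketched in miniature. This is a hypothetical illustration in Python, with made-up zone data and documentation-range addresses, not real DNS records or APIs:

```python
# Toy sketch of hierarchical name resolution: the root zone only knows
# which operator each top-level domain is delegated to, and the TLD
# operator (NIC.br for .br) finishes translating name -> IP number.
# All zone contents and addresses below are illustrative, not real data.

ROOT = {"br": "delegated to NIC.br"}   # root zone: TLD -> delegated operator
BR_ZONE = {                            # zone held by the .br operator
    "usp.br": "192.0.2.10",            # hypothetical addresses (TEST-NET)
    "fapesp.br": "192.0.2.20",
}

def resolve(name: str) -> str:
    """Follow the delegation chain: root -> TLD operator -> answer."""
    tld = name.rsplit(".", 1)[-1]
    if tld not in ROOT:
        raise LookupError(f"root has no delegation for .{tld}")
    # In real DNS we would now query the delegated servers; here the
    # .br zone is just a local dict standing in for NIC.br's registry.
    if name not in BR_ZONE:
        raise LookupError(f"{name} not registered under .{tld}")
    return BR_ZONE[name]

print(resolve("usp.br"))   # -> 192.0.2.10
```

The point of the sketch is the two-step lookup: ICANN's root only answers "who handles .br?", never "what is usp.br's address?".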
Traffic has to do with the location of the submarine optical fibers and the large data switching centers. Brazilian fibers, for example, are mostly connected to the United States, and from there connect to others in Europe, Asia and Africa. This is the result of telecommunications engineering, not of the Internet, which just uses these cables as a medium. Because of this, the United States ends up being a very important traffic center, topologically, and if it uses this to spy on the traffic, its work is facilitated because most of the traffic passes through there.

And is this misconception common?

Yes, but the confusion can also be on purpose, because interests are involved. There are privileged points in the network where a monitor would catch almost everything that passes. It would be like placing cameras and sensors in the Praça da Sé subway station in downtown São Paulo: everyone passes through there. If we were to monitor that station fully, we would be monitoring a large portion of all the traffic passing through the subway system. Of course, illegal monitoring of telecommunications and the Internet is deplorable, and everyone should be against it, but this is not the fault of the Internet itself, which takes the blame.

What would be the fault of the Internet?

Leaking e-mail that was stored in some system would be a typical case related directly to the Internet. But the cases denounced by Snowden [Edward Snowden, a former analyst at the United States Central Intelligence Agency who revealed cases of espionage by the American government] were either leaks from submarine cables and, therefore, telecommunications, or were bugs on cellular telephones, which is also telecommunications. The Internet became part of this unintentionally, and is paying for a problem that was not its fault.

And Julian Assange, from Wikileaks [the website that revealed secret United States documents]?
Those were also leaks, mainly of telegrams and submarine-cable traffic, in other words, telecommunications. The information is collected from somewhere and spreads throughout the network. If there were a leak of email, for example, it would be an Internet problem. In China, some time ago, the government wanted to find out who the owners of some blogs were, and some companies, including American companies, collaborated, and the government uncovered some activists. This is certainly a fault that can be attributed to the Internet, and clearly is not good. So, I'm not saying that the Internet is not to blame; sometimes it is, but only in the proportion that concerns it.

I would also like you to talk a bit about the Brazilian Internet Steering Committee. Was it an example for other countries?

Yes. Brazil was very fortunate to create a light, multi-sector entity like the CGI. We continue to receive compliments and citations in several places. When the president of ICANN visits a country, he always praises the Brazilian model and suggests they imitate it. The CGI was established in 1995, underwent some reformulations, and is currently in the configuration defined by decree in 2003.

What reformulations?

The composition of the CGI has changed slightly, both in terms of the number of members and representation. In our current configuration, since 2003, we have 21 members: nine from the government, 11 from civil society, elected by the respective segments, and one member with recognized expertise in Internet affairs. The nine government members do not have a term of office because they remain until another representative is appointed by the corresponding minister. Sometimes the minister himself is the representative on the CGI. The 11 elected directly by their communities have three-year terms.
There are three chairs for academia, four for the third sector, and four for business, with the last four distributed thus: one for business users, one for access and service providers, one for infrastructure providers and one for software- and hardware-related businesses. Note that the government does not have a majority in the CGI. The CGI coordinator, for historical reasons since its creation, has always been the representative nominated by the Ministry of Science, Technology and Innovation. In theory we could have a situation in which 12 oppose 9 in a vote, but in terms of the Internet and consensus, this would not be good. There has never been a vote in which a slight majority won. Whenever possible, voting is replaced with consensus. We rarely vote, and when we do, the result is 20 to 1, or 19 to 2, for example. Another important aspect is that the CGI has no enforcement or regulatory power. It generates good standards, implements measures, holds courses in specific fields and takes actions in favor of the Internet in Brazil.

Does TCP/IP continue to be strong, despite all the changes? Has your vision of this remained the same?

Yes. For a long time now, TCP/IP, Ethernet and other such standards have been the only practical option, and there has been no discussion. Before that, in the 1980s, there was a plethora of options and there were long discussions before choosing the standard to be used in each case. Not to mention that the options were tied to manufacturers: for networks, for example, IBM users used SNA and Token Ring, Digital users used DECnet, etc. All this consolidated around a single dominant standard, which is still TCP/IP. The discussion of the 1980s no longer occurs today; nobody talks about wide-area networks without automatically referring to TCP/IP, and there are no local networks that are not using Ethernet. Of course, the people who research protocols will not let everything remain frozen for another 30 years.
TCP/IP has shown itself to be magnificently flexible, scaling from kilobits to gigabits and terabits per second. It is still advancing and still has vitality, but there is no guarantee that it will continue like that forever. Standards wear out and are replaced by other standards or updated. Today there is great pressure from the field of network research to develop and test alternatives, which is always very healthy. If an alternative is good, it ends up replacing what exists; if not, nothing happens. Today, pragmatically, we don't have a viable commercial alternative to TCP/IP, but maybe five years from now things will be different. People are beginning to get restless and want to change everything.

You were the head of the DPC when FAPESP made the first connection to the Internet from Brazil. How did you come to work for the Foundation?

I entered the Polytechnic School in 1971 to study electrical engineering and, before the end of the year, I became an intern at the Electronic Computer Center (CCE). At the Polytechnic School, I majored in electronics with an emphasis on telecommunications, but I always worked in digital systems. In 1976, after I had graduated, when Professor Geraldo Lino de Campos of the Polytechnic School was the director of the CCE, FAPESP installed a computer, experimentally, in a two-story house on Pirajussara Street, near the "Rei das Batidas" bar by the main entrance to USP. It was a Burroughs 1726, an excellent machine with a very interesting operating system. There, Geraldo developed the Sirius system, which would manage FAPESP grants and research. Since I was also from the CCE, he asked me to participate in the development of Sirius, working there at night. I began going to the house on Pirajussara Street three times a week, turning on the Burroughs and writing some of the programs that made up Sirius. At around that time I met Professor Oscar Sala, a physicist and member of the FAPESP Board.
He was the one who struggled to bring computers to the Foundation and who had fought to get the B1726. At that time FAPESP was in a building on Avenida Paulista and managed all of its scholarships and grants on paper, manually.

How did the information arrive at Pirajussara Street?

On paper or on tape; it was not on-line. We were still testing the system and received data sets from the main office on paper. FAPESP had finished building the current building in the Lapa neighborhood and intended to set up a data center there. I still remember the day we moved the equipment from Pirajussara Street to the new headquarters. The B1726 was transported in a half-open truck. I went with it and prayed for it not to rain… If it had rained, the B1726 could have been destroyed. Luckily, the day was sunny and everything turned out well: we got to Pio XI Street intact. With the computer up and running at the new headquarters, it ceased to be experimental and a definitive, stable team was needed at FAPESP. Since I was attached to the CCE, where I worked full-time, that was the end of my participation in the Foundation's computerization initiative. Vitor Mammana de Barros, an engineer who had left the CCE, took over the FAPESP data center. He worked as IT manager at FAPESP, and I still had some contact with him to adjust the programs that we had developed at Pirajussara Street. In 1985, when the CCE underwent a major restructuring, generating general uncertainty, Professor Alberto Carvalho e Silva, who was president of the FAPESP Executive Board, called me in for a chat and said that Vitor would be returning to the restructured CCE. He asked me if I would like to take over the data center. I knew the machine and Sirius well, and was very interested.
Additionally, I had finished my master's degree at the Polytechnic School in 1982, and I liked the idea of going to FAPESP because it would allow me to continue working on my doctorate, in addition to working on something with which I was already involved and which I liked. So, I decided to work for the Foundation and look after the small team of three or four analysts responsible for computerizing the internal administrative processes and grant awards for researchers.

So it was at that time that the academic community began to use email?

Yes. The physicists, for example, did their master's and PhDs abroad, and wanted to maintain contact with researchers in other countries. Outside Brazil, e-mail was already being used extensively, but we didn't yet have it in Brazil. So we began to research how to bring it here. The demand came not only from USP researchers, but also from those at Unicamp and Unesp. Thus, Professor Sala decided that, if so many people needed it, it would be best if FAPESP assumed the role of providing the service, and that we should try to find a solution. I called Alberto Gomide, a brilliant software professional who had already worked at the CCE and was then at Unesp. There were also some others, such as Joseph Moussa, a mathematician, Vilson Sarto, an engineer, and others I cannot remember. Sala had excellent contacts at Fermilab, a high-energy physics laboratory in Batavia, near Chicago. They agreed to connect with us; after all, we needed to connect with someone. In 1987, at a meeting at the Polytechnic School on academic networks, we discovered that there were also other groups in Brazil trying to establish connections with international academic networks. The people at that meeting included Michael Stanton, from PUC-RJ, Tadao Takahashi, from the CNPq, and the person who would lead the future RNP, Paulo Aguiar, from UFRJ.
We already had some experience with networks because we had set up the first phase of a USP network, a network of Burroughs 6700 computer terminals at the university. At that meeting in 1987, we saw that both FAPESP and the LNCC were trying to establish an international connection, and both had chosen to connect to a very simple network that researchers were very fond of at the time: the Bitnet. There was also a proposal to create a national network, which would become the future RNP, but we didn't yet know what standards it would use.

Was that before PCs?

PCs were beginning to spread, but they were not connected to a broad network, just local networks. We had some at FAPESP. But, returning to the Bitnet, the LNCC connected to it in September 1988, one month before the Foundation did. The LNCC's connection was with the University of Maryland, and ours was with Fermilab, but we always helped each other out. When we connected, we asked to connect five computers: USP, IPT, Unicamp, FAPESP and Unesp. Then the Bitnet managers in the United States told us that connecting five new nodes, all in Brazil, was more like connecting a new sub-network [see more in Pesquisa FAPESP No. 180]. So instead of asking for a connection of the five machines to the Bitnet, it would be better to create a regional sub-network, like others already connected to the network. As a name for this subnet, Gomide suggested São Paulo Academic Network, Span, but that name already existed: it belonged to NASA, the Space Physics Analysis Network, and we didn't know it. We had to change ours, so we reversed the order of the letters: Academic Network at São Paulo, ANSP.

Was it the first in Latin America?

Yes. At the time of the Bitnet, I cannot think of any others. So much so that all of the Bitnet topology in Brazil was defined by FAPESP. Routing on the Bitnet consisted of just one table that described which computers were connected to which machines.
This table was updated once a month, to include new machines or alter connections: not very dynamic routing. To standardize the names a bit, we suggested that br be used as a prefix. We had brfapesp, brusp, bruc at Unicamp, bript, etc. The names had only one level, with no "last name," no dot this or dot that. The folks in Rio and other places also began to use the br prefix: brufmg, brufrgs, brufpe, and the network began to spread that way. An email took hours to arrive, sometimes a day, depending on the size of the dispatch queue. But it was great: compared to normal mail, there was no comparison—it was much better.

How did you register .br for Brazil?

The academic network grew, with many protocols and machines. In addition to the Bitnet, we had the HEPNet [High Energy Physics Network], machines connected to UUCP, Fidonet, the Embratel Renpac (X.25), etc. And it was difficult to provide adequate names for the machines. We went after the "last name" .br and requested that it be registered, which was done on April 18, 1989 by Jon Postel, the IANA administrator at the University of Southern California, where the root of the Internet was managed. There was no formal interaction beyond the people involved in academic networks. Nor was there any intervention of any kind from the American government, the Brazilian government, or the Foreign Ministry. It was something within the community of those operating academic networks, as was usual on the Internet. Postel thought we had enough maturity to be the focal point for the .br suffix and decided to serve the local community, and therefore delegated the .br suffix to the team that operated the FAPESP network.

And the Internet in Brazil?

Well, we were granted the .br suffix on April 18, 1989, but near the end of that year it became clear that the Bitnet had begun to wither, and that the Internet, which had far more resources, would end up prevailing and absorbing the Bitnet and, probably, the other alternatives as well.
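The monthly-updated Bitnet table Getschko describes was essentially a static map of which machines linked to which, and a message hopped from node to node along it. A toy sketch, using the "br"-prefixed node names from the text but with hypothetical links, of how a store-and-forward relay path could be read off such a table:

```python
# Sketch of routing over a static table like Bitnet's: one shared table
# saying which nodes link to which, updated about once a month.
# Node names follow the "br" prefix convention from the text; the link
# layout itself is hypothetical, for illustration only.
from collections import deque

LINKS = {
    "fermilab": ["brfapesp"],
    "brfapesp": ["fermilab", "brusp", "bruc", "bript"],
    "brusp": ["brfapesp"],
    "bruc": ["brfapesp"],
    "bript": ["brfapesp"],
}

def relay_path(src, dst):
    """Breadth-first search for the chain of hosts a message hops through."""
    queue = deque([[src]])
    seen = {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in LINKS.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no route in the table

print(relay_path("brusp", "fermilab"))  # -> ['brusp', 'brfapesp', 'fermilab']
```

With every path funneling through one hub and each hop holding mail in a dispatch queue, the hours-long delivery times described above follow naturally.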
The Bitnet was good for email and mailing lists, but it was very limited for interaction and remote access to computers, besides the fact that it was growing much more slowly. At that time we asked the folks at Fermilab to take us with them when they migrated to the Internet.

So you had seen the movement and knew about the Internet.

In 1989, a backbone was being established for the United States Department of Energy, to which Fermilab was connected, and this backbone, like the NSFNET [of the National Science Foundation], would use TCP/IP and become part of the Internet. Fermilab intended to migrate as soon as possible to this newly created backbone, called ESnet [Energy Sciences Network], and that happened in 1990. As we were connected to them, we worked to implement TCP/IP on the FAPESP machine and, in January 1991, we exchanged the first TCP/IP packets, using a software package that implemented TCP/IP on DEC machines. I can't remember the precise date, but it was in January, when FAPESP shuts down for vacation. Joseph Moussa [a FAPESP DPC employee] was there when we received a tape with the program that would implement TCP/IP. Joseph installed the program, it worked, and the first Internet packets began to arrive at FAPESP.

What was it like at the beginning?

The connection that sustained the Brazilian academic network via the RNP belonged to FAPESP: initially a poor 64 kbps line that was later increased to 128 kbps, then 256 kbps, and finally to 2 Mbps. I was the coordinator of RNP operations, which were based at FAPESP, and due to organizational issues, we asked Embratel directly for the lines that the RNP would use for its backbone. The RNP would pay for the domestic lines and the equipment located at the points in the various states, and the Foundation would pay for the international connection. The first RNP backbone was completely designed in a meeting at FAPESP with Michael Stanton, Alexandre Grojsgold of the LNCC, and Alberto Gomide, of FAPESP.
Then we discussed the structure of names to use under .br. Universities, due to their historical participation in the process, could sit directly under the .br suffix, resulting in usp.br, unicamp.br, ufmg.br, etc. We created gov.br for the government and, below it, the abbreviations for each state, such as sp.gov.br. The .com.br suffix was reserved for future business use, .org.br for non-profit organizations, and .net.br for machines connected to the network infrastructure.

Was it a mirror image of the system then used in the United States?

Yes, and I think that it was a good idea. Because .com, .net and .org already existed in the United States, we thought it would be a good idea to keep these three-letter acronyms under the .br suffix. The British use two-letter suffixes: for example, ac.uk for academic, and co.uk for businesses. The .com.br suffix took advantage of the international expansion of .com, in terms of dissemination. After all, if it is a Brazilian company, instead of using .com it uses .com.br. At the time, there were almost no businesses connected, but it was smart to predict that there would be in the future. While at that time almost everything was academic, the network spread very fast and everything changed over the course of a few years. In December 1994, Embratel was finally convinced to give Internet access to individuals in Brazil. TCP/IP was still not an official standard, and it was somewhat underground. But the world was already changing and rapidly adopting TCP/IP, because the family of protocols proposed by the ITU was much more expensive and complicated, and focused on billing, unlike the Internet world. Embratel, convinced by the RNP, set up the first Internet access point for Brazilian users in Rio. But its approach was very centralizing: it set up a 0800 number that people could call, and everyone would have an e-mail account at @embratel.net.br. In other words, Embratel would be the only Internet access point for Brazilians.
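The .br naming categories laid out earlier (com.br, org.br, net.br, gov.br and state zones like sp.gov.br, with universities registered directly under .br) form a suffix hierarchy. A minimal sketch of how a hostname splits against it, assuming this fixed suffix list; the hostnames are hypothetical examples:

```python
# Sketch: find which part of a name is the "registered" label under the
# .br category suffixes described in the text. The suffix list below is
# the one from the interview, not a complete registry.

CATEGORY_SUFFIXES = {"com.br", "org.br", "net.br", "gov.br", "sp.gov.br"}

def registrable_part(name: str) -> str:
    """Return the label plus its category suffix, or the whole name for
    historical direct registrations like usp.br."""
    labels = name.split(".")
    # Small cut = longer suffix, so sp.gov.br is tried before gov.br.
    for cut in range(1, len(labels)):
        suffix = ".".join(labels[cut:])
        if suffix in CATEGORY_SUFFIXES:
            return ".".join(labels[cut - 1:])
    return name  # e.g. usp.br, registered directly under .br

print(registrable_part("www.empresa.com.br"))  # -> empresa.com.br
```

The same longest-suffix-first rule is what distinguishes a company registered as empresa.com.br from a university holding usp.br directly under the country code.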
There was an immediate reaction, because those on the academic network thought it was wrong for Embratel to be the "Brazilian Internet" and that this would really limit expansion. So RNP contacted Sergio Motta, the minister of telecommunications at the time. Tadao Takahashi, Ivan Moura Campos and Carlos Afonso, at IBASE, convinced the Minister that a different way forward would be better: set up a hierarchical scheme that would make the Brazilian Internet richer. At the start of 1995, Minister Sergio Motta issued a ministerial decree forbidding Embratel to provide Internet access directly. Embratel would give access to regional telephone companies, which in turn would give access to providers, which would provide Internet access to the end user. Content providers like Folha, Abril, Estadão, JB, etc, [news and print publishers] appeared. Thus, material appeared in Portuguese quickly. It was said that Brazilians would not be interested in the Internet because its content was all in English, but that was easily disproved. The Steering Committee understood that it needed to consolidate the existing structure and delegated registration of names and numbers to the FAPESP team. Then, the CGI also decided we were going to start charging to register domain names, as had just begun in the United States, so that the activity would be self-sustaining. Until then, the Foundation was paying three or four employees in relation to this work, in addition to paying for the international lines. The decision was to charge the equivalent of what was charged in the United States: R$50 for registration and another R$50 per year. In order to keep these funds separate, a procedure was established within FAPESP: the Brazilian Internet Steering Committee research project. The CGI then had funds to use for activities to support the Internet in Brazil. 
I remember the CGI meeting at FAPESP, in 2000, when Professor Landi [Francisco Landi, former FAPESP director] commented that the Steering Committee project had already lasted five years, and that was the maximum duration of FAPESP projects, so it could no longer continue to manage Brazilian Internet registration. The CGI agreed it was time to seek a solution on its own and switch. The Brazilian registry migrated in 2001 to a building alongside the Pinheiros River, in São Paulo, and a data processing center was built there. And that’s how NIC was created? The CGI DPC was already set up when, in 2002, Ivan Moura Campos, who was the CGI coordinator, concluded that we needed an entity to replace FAPESP, both in terms of potential liability for the actions of the Brazilian registry and to collect and manage fees. Until then, all the payment slips were sent out with the FAPESP Corporate Tax ID, so it also became involved in lawsuits regarding conflicts in the registration of domain names under the .br suffix. That was an additional hassle for the Foundation. It was decided in 2002 that the CGI would create a non-profit NGO, NIC.br. You were born in Italy and became a Brazilian citizen? I was born in the city of Trieste and my family came to Brazil in 1954, when I was one year old. I am a naturalized Brazilian, but I had no previous nationality. I was not Italian, and I became Brazilian. How? I was stateless until I was naturalized in 1976. My father was Greek, my mother was Bulgarian, and my brother was born in Brazil. I was born in 1953 in Trieste, which was still an allied occupation zone after World War II because the Americans left some key cities only later that year. My parents became naturalized Brazilians well before me. When I was finishing up my university studies, I accelerated the naturalization process because obtaining a passport as a stateless person is hell. Did you all come to São Paulo? We came in 1954 and always lived here. 
So instead of asking for a connection of the five machines to the Bitnet, it would be better to create a regional sub-network, like the others that were connected to the network. As a name for this subnet, Gomide suggested São Paulo Academic Network, Span, but that name already existed: it belonged to NASA's Space Physics Analysis Network, and we didn't know. We had to change it, so we reversed the order of the letters: Academic Network at São Paulo, ANSP.

Was it the first in Latin America?
Yes. At the time of the Bitnet, I cannot think of any others. So much so that all of the Bitnet topology in Brazil was defined by FAPESP. Routing on the Bitnet consisted of just one table that described which computers were connected to which machines. This table was updated once a month to include new machines or alter connections; not very dynamic routing. To standardize the names a bit, we suggested that br be used as a prefix. We had brfapesp, brusp, bruc at Unicamp, bript, etc. The names had only one level, with no “last name,” no dot this or dot that. The folks in Rio and other places also began to use the br prefix: brufmg, brufrgs, brufpe, and the network began to spread that way. An email took hours to arrive, sometimes a day, depending on the size of the dispatch queue. But it was great, because compared to normal mail it was much better.
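The monthly table he describes can be pictured as a static adjacency list: each message was stored and forwarded hop by hop along the only path the table allowed. A minimal sketch, with hypothetical node names on the Brazilian side and invented US-side neighbors (the real table was a flat file distributed across the network):

```python
from collections import deque

# Hypothetical fragment of a Bitnet-style routing table: each node lists
# the machines it is directly linked to. The real table was updated
# roughly once a month; the US-side names here are illustrative.
LINKS = {
    "brfapesp": ["brusp", "bruc", "uicvm"],   # FAPESP as the Brazilian hub
    "brusp": ["brfapesp"],
    "bruc": ["brfapesp"],
    "uicvm": ["brfapesp", "cunyvm"],          # assumed US-side neighbors
    "cunyvm": ["uicvm"],
}

def mail_path(src, dst):
    """Find the store-and-forward hop sequence a message would follow."""
    queue = deque([[src]])
    seen = {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in LINKS.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no route in the table

print(mail_path("brusp", "cunyvm"))
# ['brusp', 'brfapesp', 'uicvm', 'cunyvm']
```

With every Brazilian node hanging off brfapesp, any change to the topology meant waiting for the next monthly table, which is why he calls it "not very dynamic routing."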

How did you register .br for Brazil?
The academic network grew, with many protocols and machines. In addition to the Bitnet, we had the HEPNet [High Energy Physics Network], machines connected to UUCP, Fidonet, the Embratel Renpac (X.25), etc. And it was difficult to provide adequate names for the machines. We went after the “last name” .br and requested that it be registered, which was done on April 18, 1989 by Jon Postel, the IANA administrator at the University of Southern California, where the root of the Internet was managed. There was no formal interaction beyond the people involved in academic networks, and no intervention of any kind from the American government, the Brazilian government, or the Foreign Ministry. It was something done within the community of those operating academic networks, as was usual on the Internet. Postel judged that we had enough maturity to be the point of contact for the .br suffix and that this would serve the local community, so he delegated .br to the team that operated the FAPESP network.

And the Internet in Brazil?
Well, we were granted the .br suffix on April 18, 1989, but near the end of that year it became clear that the Bitnet had begun to wither, and the Internet, which had far more resources, would end up prevailing and absorbing the Bitnet and, probably, the other alternatives as well. The Bitnet was good for email and mailing lists, but it was very limited for interaction and remote access to computers, and it was growing much more slowly. At that point we asked the folks at Fermilab to take us with them when they migrated to the Internet.

So you had seen the movement and knew about the Internet.
In 1989, a backbone was being established for the United States Department of Energy, to which Fermilab was connected, and this backbone, like the NSFNET [of the National Science Foundation], would use TCP/IP and become part of the Internet. Fermilab intended to migrate as soon as possible to this newly created backbone, called ESNet [Energy Sciences Network], and that happened in 1990. As we were connected to them, we worked to get TCP/IP running on the FAPESP machine and, in January 1991, we exchanged the first TCP/IP packets, using a software package that implemented TCP/IP on DEC machines. I can’t remember the precise date, but it was in January, when FAPESP shut down for vacation. Joseph Moussa [a FAPESP DPC employee] was there when we received a tape with the program that would implement TCP/IP. Joseph installed the program, it worked, and the first Internet packets began to arrive at FAPESP.
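What "exchanging the first TCP/IP packets" means in practice can be sketched with today's socket API: one endpoint listens, the other connects, and a few bytes travel each way. This is only an illustration run over localhost; in 1991 the two endpoints were FAPESP's DEC machine and Fermilab, on a 64 kbps line.

```python
import socket
import threading

def serve(listener):
    """Accept one connection and echo the data back with an ack prefix."""
    conn, _ = listener.accept()
    data = conn.recv(64)
    conn.sendall(b"ack: " + data)
    conn.close()

# Listening endpoint (stand-in for the remote side of the link).
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
listener.listen(1)
port = listener.getsockname()[1]

t = threading.Thread(target=serve, args=(listener,))
t.start()

# Connecting endpoint: send a packet, wait for the reply.
client = socket.create_connection(("127.0.0.1", port))
client.sendall(b"ping")
reply = client.recv(64)
client.close()
t.join()

print(reply)  # b'ack: ping'
```

The round trip above is the same basic handshake-then-exchange that confirmed, in January 1991, that the DEC machine's new TCP/IP stack was working.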

What was it like at the beginning?
The connection that sustained the Brazilian academic network via RNP belonged to FAPESP: initially a poor 64 kbps line that was later increased to 128 kbps, then 256 kbps, and finally to 2 Mbps. I was the coordinator of RNP operations, which were based at FAPESP, and due to organizational issues we asked Embratel directly for the lines that RNP would use for its backbone. RNP would pay for the domestic lines and the equipment allocated at the points in the various states, and the Foundation would pay for the international connection. The first RNP backbone was completely designed in a meeting at FAPESP with Michael Stanton, Alexandre Grojsgold of the LNCC, and Alberto Gomide of FAPESP. Then we discussed the structure of names to use within the .br system. Universities, due to their historical participation in the process, could be directly under the .br suffix, resulting in usp.br, unicamp.br, ufmg.br, etc. We created gov.br for the government and, below it, the abbreviations for each state, such as sp.gov.br. The .com.br suffix was reserved for future business use, .org.br for non-profit organizations, and .net.br for machines connected to the network infrastructure.
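The naming scheme he describes can be expressed as a small lookup over a domain's second-level label. The category table below mirrors the suffixes mentioned in the interview; the helper function and sample names are illustrative, not part of any real registry interface.

```python
# Categories created under .br, as described in the interview.
CATEGORY_SUFFIXES = {
    "gov.br": "government",
    "com.br": "business",
    "org.br": "non-profit organization",
    "net.br": "network infrastructure",
}

def classify(domain):
    """Classify a name according to the early .br hierarchy (a sketch)."""
    labels = domain.lower().split(".")
    if labels[-1] != "br":
        return "not under .br"
    suffix = ".".join(labels[-2:])
    if suffix in CATEGORY_SUFFIXES:
        return CATEGORY_SUFFIXES[suffix]
    # Universities and other pioneers registered directly under .br.
    return "directly under .br (e.g. a university)"

print(classify("usp.br"))          # directly under .br (e.g. a university)
print(classify("sp.gov.br"))       # government
print(classify("example.com.br"))  # business
```

Note that sp.gov.br classifies as government because the scheme nests the state abbreviations below gov.br, exactly as described above.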

Was it a mirror image of the system then used in the United States?
Yes. And I think that it was a good idea. Because .com, .net, and .org already existed in the United States, we thought it would be good to keep these three-letter acronyms under the .br suffix. The British use two-letter suffixes: for example, ac.uk for academic institutions and co.uk for businesses. The .com.br suffix took advantage of the international spread of .com: if it is a Brazilian company, instead of using .com it uses .com.br. At the time, there were almost no businesses connected, but it was smart to predict that there would be in the future. While at that time almost everything was academic, the network spread very fast and everything changed over the course of a few years. In December 1994, Embratel was finally convinced to give Internet access to individuals in Brazil. TCP/IP was still not an official standard, and was somewhat underground. But the world was already changing and rapidly adopting TCP/IP, because the family of protocols proposed by the ITU was much more expensive and complicated, and focused on billing, unlike the Internet world. Embratel, convinced by RNP, set up the first Internet access point for Brazilian users in Rio. But its approach was very centralizing: it set up a 0800 number that people could call, and everyone would have an e-mail account at @embratel.net.br. In other words, Embratel would be the only Internet access point for Brazilians. There was an immediate reaction, because those on the academic network thought it was wrong for Embratel to be the “Brazilian Internet” and that this would really limit expansion. So RNP contacted Sergio Motta, the minister of telecommunications at the time. Tadao Takahashi, Ivan Moura Campos, and Carlos Afonso, at IBASE, convinced the minister that a different way forward would be better: set up a hierarchical scheme that would make the Brazilian Internet richer.
At the start of 1995, Minister Sergio Motta issued a ministerial decree forbidding Embratel to provide Internet access directly. Embratel would give access to regional telephone companies, which in turn would give access to providers, which would provide Internet access to the end user. Content providers like Folha, Abril, Estadão, JB, etc. [news and print publishers] appeared, so material in Portuguese appeared quickly. It had been said that Brazilians would not be interested in the Internet because its content was all in English, but that was easily disproved. The Steering Committee understood that it needed to consolidate the existing structure and delegated registration of names and numbers to the FAPESP team. Then the CGI decided to start charging to register domain names, as had just begun in the United States, so that the activity would be self-sustaining. Until then, the Foundation had been paying three or four employees for this work, in addition to paying for the international lines. The decision was to charge the equivalent of what was charged in the United States: R$50 for registration and another R$50 per year. In order to keep these funds separate, a procedure was established within FAPESP: the Brazilian Internet Steering Committee research project. The CGI then had funds to use for activities to support the Internet in Brazil. I remember the CGI meeting at FAPESP, in 2000, when Professor Landi [Francisco Landi, former FAPESP director] commented that the Steering Committee project had already lasted five years, the maximum duration of a FAPESP project, so the Foundation could no longer continue to manage Brazilian Internet registration. The CGI agreed it was time to seek a solution of its own and make the switch. The Brazilian registry migrated in 2001 to a building alongside the Pinheiros River, in São Paulo, and a data processing center was built there.

And that’s how NIC was created?
The CGI DPC was already set up when, in 2002, Ivan Moura Campos, who was the CGI coordinator, concluded that we needed an entity to replace FAPESP, both in terms of potential liability for the actions of the Brazilian registry and to collect and manage fees. Until then, all the payment slips were sent out with the FAPESP Corporate Tax ID, so it also became involved in lawsuits regarding conflicts in the registration of domain names under the .br suffix. That was an additional hassle for the Foundation. It was decided in 2002 that the CGI would create a non-profit NGO, NIC.br.

You were born in Italy and became a Brazilian citizen?
I was born in the city of Trieste and my family came to Brazil in 1954, when I was one year old. I am a naturalized Brazilian, but I had no previous nationality. I was not Italian, and I became Brazilian.

How?
I was stateless until I was naturalized in 1976. My father was Greek, my mother was Bulgarian, and my brother was born in Brazil. I was born in 1953 in Trieste, which was still an Allied occupation zone after World War II, because the Americans left some key cities only later that year. My parents became naturalized Brazilians well before me. When I was finishing up my university studies, I accelerated the naturalization process, because obtaining a passport as a stateless person is hell.

Did you all come to São Paulo?
We came in 1954 and always lived here. My family’s religious background is Greek Orthodox and I studied in a school run by Catholic nuns in the Tatuapé neighborhood, then in a school run by Spanish priests for junior high and high school, and then I attended the Polytechnic School.

And what has your academic life as a professor been like?
I completed my master’s and PhD [at Poli-USP] and then became a professor at the Polytechnic School, where I taught classes on computer networks for a few years. I was at FAPESP when the university position opened up. So I became a professor and taught some classes, but since I had no time to do research at the university, I thought it would be best to leave. Academically, I have been at PUC-SP since the computer science program was established. I taught undergraduate classes on computer architecture and networks. I also teach in the graduate program in Intelligence Technology and Digital Design, a very interesting interdisciplinary course that just produced its first PhD.
