Algorithms are everywhere. When share prices rise and fall, algorithms are typically involved. According to data released in 2016 by the Institute for Applied Economic Research (IPEA), investment robots programmed to react instantly to predefined scenarios account for more than 40% of stock market transactions in Brazil; in the United States, the figure is 70%. The success of a simple Google search depends on these computational procedures, which can filter billions of web pages in mere seconds: the importance of a website, as defined by an algorithm, is based on the quantity and quality of the other pages that link to it. At the frontier of automotive engineering research, autonomous cars use sets of algorithms to process information captured by cameras and sensors and make split-second decisions at the wheel without human intervention.
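The link-based ranking idea behind web search (a page matters more when important pages link to it) is the principle of Google's original PageRank algorithm. The sketch below runs it on an invented four-page "web"; the damping factor of 0.85 is the value used in the original formulation.

```python
# Minimal PageRank sketch: a page's score depends on how many pages
# link to it and on how important those linking pages are themselves.
# The four-page web below is a made-up toy example.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}  # start with equal scores
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            for target in outgoing:
                # each page shares its current score among the pages it links to
                new_rank[target] += damping * rank[page] / len(outgoing)
        rank = new_rank
    return rank

toy_web = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],  # D also links to C, raising C's importance
}
scores = pagerank(toy_web)
# C is linked to by three pages, so it ends up with the highest score
print(max(scores, key=scores.get))  # → C
```

The real ranking combines this signal with many others, but the core idea (repeatedly letting pages pass importance along their links) fits in a dozen lines.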
Although they play a role in even the most mundane tasks, such as avoiding traffic with a mobile application, algorithms often seem intangible to the general public, who feel their effects but do not know what they are or how they work. An algorithm is nothing more than a sequence of steps used to automatically solve a problem or accomplish a task, whether it takes a dozen or a million lines of programming code. “It is the nucleus of any computational process,” says computer scientist Roberto Marcondes Cesar Junior, a researcher at the Institute of Mathematics and Statistics of the University of São Paulo (IME-USP).
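A classic illustration of this definition is Euclid's algorithm for the greatest common divisor, one of the oldest algorithms known, whose entire sequence of steps fits in a few lines:

```python
def gcd(a, b):
    """Euclid's algorithm: repeatedly replace the pair (a, b) with
    (b, remainder of a divided by b) until the remainder is zero."""
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(48, 18))  # → 6
```

The same definition covers Google's search ranking and this five-line routine equally well: both are precise sequences of steps that a machine can follow without intervention.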
Consider the sequence of steps performed by the Facebook algorithm, for example. The choice of what to display in a user’s news feed is based primarily on the set of posts produced by or circulating among the user’s friends. The algorithm analyzes this information and discards posts flagged as violent or inappropriate, posts that appear to be spam, and posts whose wording is identified as “clickbait”—a form of exaggeration used to entice users into clicking a link. Finally, the algorithm assigns each remaining post a score based on the user’s activity history, estimating how likely the user is to enjoy or share it. The algorithm was recently modified to reduce the reach of posts made by news outlets.
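The filter-then-score pipeline described above can be sketched in a few lines. The post fields, scoring signals, and weights below are invented for illustration; Facebook's actual signals are not public.

```python
# Hedged sketch of a filter-then-score news feed pipeline.
# All field names and weights are hypothetical.

def rank_feed(posts, user_history):
    # Step 1: discard posts flagged as violent/inappropriate, spam, or clickbait
    candidates = [p for p in posts
                  if not (p["flagged"] or p["spam"] or p["clickbait"])]

    # Step 2: score each surviving post against the user's activity history
    def score(post):
        s = 0.0
        if post["author"] in user_history["liked_authors"]:
            s += 2.0  # user often interacts with this author
        if post["topic"] in user_history["topics"]:
            s += 1.0  # topic matches the user's interests
        return s

    # Step 3: show the highest-scoring posts first
    return sorted(candidates, key=score, reverse=True)

posts = [
    {"author": "ana", "topic": "music", "flagged": False, "spam": False, "clickbait": False},
    {"author": "bot", "topic": "ads",   "flagged": False, "spam": True,  "clickbait": False},
    {"author": "joe", "topic": "sport", "flagged": False, "spam": False, "clickbait": False},
]
history = {"liked_authors": {"ana"}, "topics": {"sport", "music"}}
feed = rank_feed(posts, history)
print([p["author"] for p in feed])  # → ['ana', 'joe']
```

The spam post is removed in the filtering step, and the remaining posts are ordered by their estimated appeal to this particular user.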
The development of an algorithm involves three steps (see the infographic). The first is to accurately identify the problem and determine whether a practical solution exists. In this phase, computer programmers work with professionals who understand the task to be performed: doctors, in the case of an algorithm that analyzes imaging exams; sociologists, if the objective is to identify patterns of violence in regions of a city; or psychologists and demographers, in the development of a dating application. “The challenge is to show that a practical solution to the problem exists, that it is not a problem of exponential complexity, for which the time needed to produce a response can increase exponentially, making it impractical,” explains computer scientist Jayme Szwarcfiter, a researcher at the Federal University of Rio de Janeiro (UFRJ).
The second phase is also free of mathematical operations: it consists of describing the sequence of steps in plain language, so that anyone can understand it. In the third phase, this description is translated into a programming language. Only then can the computer understand the commands—which can range from simple mathematical operations to complex algorithms within algorithms—all in a logical and precise sequence. During this stage, programmers write the algorithms; on complex projects, large teams of programmers work together and share tasks.
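A toy example of phases two and three: the plain-language description of the steps appears as comments, and the translation into a programming language follows, step by step.

```python
# Task: find the largest value in a list of numbers.
#
# Phase two -- plain language:
#   1. Assume the first number is the largest seen so far.
#   2. Look at each remaining number in turn.
#   3. If it is bigger than the current largest, remember it instead.
#   4. When every number has been examined, report the largest.
#
# Phase three -- the same steps in a programming language:

def largest(numbers):
    biggest = numbers[0]       # step 1
    for n in numbers[1:]:      # step 2
        if n > biggest:        # step 3
            biggest = n
    return biggest             # step 4

print(largest([7, 3, 19, 4]))  # → 19
```

The translation is mechanical once the plain-language description is precise, which is why getting phase two right matters so much.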
Robots are responsible for 40% of the decisions that are made on the Brazilian stock market
At their origin, algorithms are logical systems that are as old as mathematics. “The expression comes from a Latinization of the name of Persian mathematician and astronomer Mohamed al-Khwarizmi, who produced famous works on algebra in the ninth century,” explains computer scientist Cristina Gomes Fernandes, who is a professor at IME-USP. They gained new impetus in the second half of the last century alongside the development of the computer, with which it was possible to create work routines for the machines. There are two reasons why algorithms are now so widely used in the real world and why they have become the basis of most complex software development: First, the increased processing power of computers has increased the speed at which complex tasks can be executed. Second, the advent of big data has made it cheaper to collect and store huge amounts of information, thereby enabling algorithms to identify patterns that are imperceptible to the human eye in a wide range of scenarios. Advanced manufacturing, which is known as Industry 4.0, promises to increase productivity by using artificial intelligence algorithms to monitor industrial plants in real time and make decisions on stock control, logistics, and maintenance.
One effect of the growing use of algorithms in computing was a boost to artificial intelligence, which is a field that was established in the 1950s and aims at developing mechanisms that are capable of simulating human reasoning. Through increasingly fast computations and the collection of data for statistical comparisons, computers can now modify their operations based on accumulated experience, thereby improving their performance in a process that mimics learning.
Computers have proven capable of beating humans in many board games, a demonstration of how far the field has evolved. In 1997, IBM’s Deep Blue supercomputer became the first machine to beat the reigning world chess champion, Russia’s Garry Kasparov. Capable of simulating approximately 200 million chess positions per second, the machine anticipated its opponent’s decisions several moves ahead. This strategy was unworkable for Go, a Chinese board game, because there are far too many possible moves to anticipate at any given time—the number of possibilities exceeds the number of atoms in the universe. In March 2016, Go finally fell: the AlphaGo program, created by DeepMind, a subsidiary of Google, beat world champion Lee Sedol, of South Korea.
Instead of considering millions of possibilities, the program’s algorithm used a more restricted strategy: by statistically analyzing data from previous matches between the game’s best players, it identified the most common and efficient moves, working with a much smaller set of variables, and was soon able to beat human players. There was more to come, however. Last year, DeepMind developed a new program, AlphaGo Zero, which outperformed the original AlphaGo. The new version did not learn from human games, but from playing against earlier versions of the program.
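The statistical idea described here, favoring moves that appeared most often in winning games, can be caricatured in a few lines. This is of course a drastic simplification: the game records below are invented, and AlphaGo itself used deep neural networks rather than simple win-rate tables.

```python
from collections import defaultdict

# Invented records of past games: (move played in the opening, did it win?)
past_games = [
    ("corner", True),
    ("corner", True),
    ("center", False),
    ("corner", False),
    ("center", True),
]

wins = defaultdict(int)
games = defaultdict(int)
for move, won in past_games:
    games[move] += 1
    wins[move] += won  # True counts as 1, False as 0

def best_move():
    # prefer the move with the highest win rate in the recorded games
    return max(games, key=lambda m: wins[m] / games[m])

print(best_move())  # → corner
```

Learning from the statistics of past play, rather than enumerating every possible future, is what made the astronomically large search space of Go tractable.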
There are a growing number of practical applications for this type of technology. Artificial intelligence algorithms that were developed by computer scientist Anderson de Rezende Rocha, who is a professor at the Institute of Computing of the University of Campinas (UNICAMP), have been used to facilitate police investigations. Rocha specializes in computer forensics and creates artificial intelligence tools for detecting subtle details in digital documents that are often imperceptible to the naked eye. “The technology can help the experts confirm that a particular photograph or video related to a crime is genuine, for example,” says Rocha.
One scenario in which the algorithms are being used is to automate investigations into images of child abuse. Police regularly seize large volumes of photographs and videos from the computers of suspects. If there are files that are related to child abuse, the algorithm helps find them. “We exposed the robot to hours of pornographic videos from the internet to teach it what pornography is,” says Rocha. Then, to identify the presence of children, the algorithm needed to “watch” the videos of child abuse that were seized by the police. “This stage was carried out by police officers. Nobody at UNICAMP had access to this material,” he adds. Rocha says that these types of files were previously analyzed manually in most cases. “Automating the process makes it more efficient, giving the police more time and allowing them to examine more data.”
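The training process Rocha describes is, in essence, supervised classification: the algorithm sees labeled examples and learns a decision rule for new material. Below is a minimal sketch using a nearest-centroid rule on invented numeric feature vectors; real forensic systems extract thousands of visual features and use far more sophisticated models.

```python
# Nearest-centroid classifier sketch: average the feature vectors of
# each labeled class during training, then assign a new item to the
# class whose average it is closest to. All numbers are invented.

def centroid(vectors):
    """Average of a list of equal-length feature vectors."""
    return [sum(xs) / len(xs) for xs in zip(*vectors)]

def distance2(a, b):
    """Squared Euclidean distance between two vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def train(labeled):
    """labeled: dict mapping class name -> list of feature vectors."""
    return {label: centroid(vs) for label, vs in labeled.items()}

def classify(model, vector):
    return min(model, key=lambda label: distance2(model[label], vector))

# Hypothetical training data: two classes of 2-D feature vectors
training = {
    "flagged":  [[0.9, 0.8], [0.8, 0.9], [0.95, 0.85]],
    "ordinary": [[0.1, 0.2], [0.2, 0.1], [0.15, 0.25]],
}
model = train(training)
print(classify(model, [0.85, 0.9]))  # → flagged
print(classify(model, [0.2, 0.2]))   # → ordinary
```

Whatever the model, the division of labor is the same as in the UNICAMP project: labeled examples teach the rule, and the trained rule then screens large volumes of new material automatically.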
Programmers should be aware of the implications of their work, says Nick Seaver, from Tufts University
Many computer scientists use mathematical properties, theorems, and logic when working on algorithms, regardless of the immediate purpose of the application. In many scenarios, the only known algorithms are highly inefficient and do not perform well with large data volumes: for example, factoring a number into its constituent primes (which is central to cryptography) or routing a welding robot through several weld points. There is little hope that efficient algorithms will be found for these applications, which relate to the unsolved “P versus NP” problem, considered one of the greatest challenges in both computer science and mathematics.
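Integer factorization illustrates the point. The textbook trial-division algorithm below is correct, but its running time grows rapidly with the size of the number; no efficient classical algorithm for factoring large integers is publicly known, which is precisely what keeps factorization-based cryptography secure.

```python
def prime_factors(n):
    """Factor n by trial division: try each candidate divisor in turn.
    Correct, but hopeless for the hundreds-of-digits numbers used in
    cryptography -- the loop would run longer than the age of the universe."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)  # whatever remains is itself prime
    return factors

print(prime_factors(84))  # → [2, 2, 3, 7]
```

Checking an answer is easy (just multiply the factors back together); finding it is what appears to be hard, and that asymmetry is the heart of the P versus NP question.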
Although there is more programming involved than basic science in the development of many of the algorithms that are used in everyday life, advances in knowledge are essential if new applications are to be explored in the future. Marcondes Cesar, who is from USP, is working on computer vision, which is a type of artificial intelligence that extracts information from images to simulate human vision. The technique is being explored in various industries, particularly in medical diagnoses. “Computer vision can detect anomalies more accurately and evaluate subtle details in magnetic resonance imaging, for example.”
The objective of the project, which is being carried out in partnership with the USP School of Medicine and the Children’s Institute of the university’s teaching hospital, is to develop a mathematical model that can provide a more accurate analysis of the liver and brain in newborns. The models used to interpret magnetic resonance images are typically based on white adult males and have been developed in other countries, which can lead to inaccurate diagnoses in newborn babies in Brazil. However, the project’s success depends on several theoretical problems being solved first. “We do not yet know if we will be able to write an efficient algorithm. We are still studying properties based on graph theory,” he says, referring to the branch of mathematics in which the relations between objects of a specified set are studied by associating them with one another via structures called graphs.
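In graph theory, the "objects of a specified set" become vertices and the relations between them become edges. A minimal sketch, unrelated to the medical project itself: a graph stored as an adjacency list, with a breadth-first search asking whether two objects are connected by a chain of relations.

```python
from collections import deque

# A graph as an adjacency list: each object maps to the objects it relates to.
graph = {
    "A": ["B"],
    "B": ["A", "C"],
    "C": ["B"],
    "D": [],          # D is related to nothing
}

def connected(graph, start, goal):
    """Breadth-first search: is there a chain of relations from start to goal?"""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            return True
        for neighbor in graph[node]:
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(neighbor)
    return False

print(connected(graph, "A", "C"))  # → True
print(connected(graph, "A", "D"))  # → False
```

Questions like this one have fast algorithms; the difficulty Marcondes Cesar refers to is that many other graph properties have no known efficient algorithm at all.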
The impact of algorithms has also been analyzed in other fields of knowledge. “Algorithms are already playing a moderating role. Google, Facebook, and Amazon have an extraordinary amount of power over what we are exposed to in culture today,” said Ted Striphas, a professor of the history of culture and technology at the University of Colorado, USA, and author of the book Algorithmic Culture (2015), which examines the influence of these online giants. American anthropologist Nick Seaver, a researcher at Tufts University, USA, is currently conducting ethnographic research and interviews with the creators of music recommendation algorithms for streaming services. His interest is in how these systems are designed to attract users and draw their attention, and he is studying the interface between areas such as machine learning and online advertising. “The mechanisms that control attention and its technical mediations have become a subject of great interest. The formation of interest and opinion bubbles, as well as fake news and political distractions, can be attributed to technologies designed to manipulate user attention,” he explains.
Recommendation systems based on algorithms have become key players in the online entertainment industry. In an article published in the journal ACM Transactions on Management Information Systems in 2015, Mexican electronic engineer Carlos Gomez-Uribe described how the algorithms used by streaming service Netflix rank television series and movies according to the individual profile of each user. The objective is to encourage customers to select a TV show to watch within 90 seconds of logging on—any longer than that and they tend to get frustrated and lose interest. The success of this ranking system gave Gomez-Uribe’s career a boost, and in 2017 he became head of algorithms and internet technology products at Facebook.
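Profile-based ranking of this kind can be sketched as scoring each title against a user's viewing history. The titles, genres, and counts below are invented, and Netflix's actual rankers combine many more signals than genre overlap.

```python
# Hedged sketch of profile-based ranking: score each catalog title by
# how much its genres overlap with what the user has watched before.
# All data are invented for illustration.

user_profile = {"drama": 5, "sci-fi": 3, "comedy": 1}  # genre -> times watched

catalog = [
    {"title": "Deep Space",  "genres": ["sci-fi", "drama"]},
    {"title": "Laugh Track", "genres": ["comedy"]},
    {"title": "Court Room",  "genres": ["drama"]},
]

def score(item, profile):
    # a title earns points for every genre the user has a history with
    return sum(profile.get(g, 0) for g in item["genres"])

ranked = sorted(catalog, key=lambda item: score(item, user_profile), reverse=True)
print([item["title"] for item in ranked])  # → ['Deep Space', 'Court Room', 'Laugh Track']
```

Ordering the catalog this way puts the likeliest matches at the top of the screen, which is what helps a customer find something to watch inside that 90-second window.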