Deepfake

Algorithm detects images and videos altered by artificial intelligence, the new technological approach to spreading disinformation

In September, a doctored version of a clip from Jornal Nacional, the biggest news program on Brazil's Globo television network, was shared widely on social media. The video showed anchors William Bonner and Renata Vasconcellos announcing the results of a poll on voter intentions for the upcoming presidential election, but the data on the preferred candidate was reversed, both in the graphics and in the presenters' words. The next day, the show itself issued a warning that the video was a deepfake, a fabrication in which artificial intelligence (AI) is used to make highly convincing alterations, and that it was being used to misinform the population. The technology can be used to digitally imitate a person's face or simulate their voice, making them appear to do things they did not do or say things they did not say.

In August, another similarly altered video from the show, once again inverting the results of a presidential election poll, was posted on TikTok, where it was viewed 2.5 million times, according to the Comprova Project, a fact-checking group of journalists from 43 media outlets in Brazil.

"It could be deepfake technology that was used in these videos, but a more detailed analysis is needed. For us, what is important is knowing that they are fake," says computer scientist Anderson Rocha, director of the Computing Institute at the University of Campinas (UNICAMP), where he heads the Artificial Intelligence Laboratory (Recod.ai).
Rocha has been studying ways to detect malicious manipulation of photos and videos, known as synthetic media, including deepfakes.

In March, shortly after Russia began its war against Ukraine, Ukrainian President Volodymyr Zelensky was the victim of a deepfake. A video circulated on social media in which he appeared to urge Ukrainians to lay down their weapons and return to their homes, suggesting the country was surrendering. Facebook and YouTube removed the video as soon as it became apparent that it was fake. In the video, the president's face appeared on a near-motionless body wearing a green T-shirt.

When the original is readily available for comparison, as with the Jornal Nacional examples, it is fairly simple to verify that a video has been doctored. But this is not always the case. Synthetic media is stripping the phrase "seeing is believing" of its meaning, but AI itself can also be an ally in detection.

"Usually, synthetic videos are made in two stages: first, a deepfake platform is used to swap faces or synchronize mouth movements, and then the result is refined using editing software," explains Rocha. Those who know what to look for can usually detect flaws left by the program used to produce the fake video, such as inconsistent lighting or differences in contrast between the original video and the newly added face.

It is like cutting a face out of one photo and sticking it onto another: the way the light falls and the way the camera captured the two images are different.
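The photo-compositing analogy has a simple computational counterpart: a region pasted in from another image usually carries different sensor noise than its surroundings, so if you suppress the smooth content and measure how much noise each block of the image holds, the foreign patch tends to stand out. The sketch below is purely illustrative (plain Python with hypothetical function names), not the researchers' actual tooling:

```python
import statistics

def highpass_residual(img):
    """Residual after subtracting each pixel's 4-neighbour mean.

    `img` is a 2-D list of grayscale values. Smooth image content
    largely cancels out, leaving mostly noise and splicing artifacts.
    """
    h, w = len(img), len(img[0])
    res = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            neigh = (img[y-1][x] + img[y+1][x] + img[y][x-1] + img[y][x+1]) / 4
            res[y][x] = img[y][x] - neigh
    return res

def flag_inconsistent_blocks(img, block=8, z_thresh=2.5):
    """Return (row, col) origins of blocks whose residual variance
    deviates strongly from the image-wide pattern."""
    res = highpass_residual(img)
    h, w = len(img), len(img[0])
    variances = {}
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            vals = [res[y][x] for y in range(by, by + block)
                              for x in range(bx, bx + block)]
            variances[(by, bx)] = statistics.pvariance(vals)
    mean = statistics.mean(variances.values())
    sd = statistics.pstdev(variances.values()) or 1e-9
    return [pos for pos, v in variances.items() if abs(v - mean) / sd > z_thresh]
```

Real forensic detectors learn far subtler statistics than block variance, but the underlying principle, hunting for patches whose noise is locally inconsistent with the rest of the image, is the same.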
These small disparities serve as clues identifiable by computer forensics techniques, an area of research that has grown in recent years and in which Rocha works.

Together with colleagues from the University of Hong Kong, he developed an algorithm that helps detect whether the faces in a video have been manipulated and, if so, which regions have been changed. The program can determine, for example, whether the whole face has been doctored or just the mouth, eyes, or hair. The team tested the software on 112,000 faces, half real and half manipulated by four deepfake programs. "It was correct 88% of the time for low-resolution videos and 95% of the time for videos with a higher resolution," explains Rocha. The algorithm can also indicate whether an image was created from scratch rather than edited from an existing photograph. The results were published in the journal IEEE Transactions on Information Forensics and Security in April 2022.

According to the computer scientist, other software has been developed to detect evidence that a video is a deepfake, but it mostly works by identifying clues left by well-known manipulation programs, which fall into two categories: those used to swap faces and those used to edit facial expressions. One platform, for example, is known to leave certain imperfections when synchronizing mouths, so detection algorithms are programmed to look for that specific error. "There is a problem with this: if we do not know what deepfake software was used, it becomes much more difficult to identify these traits. And new applications are constantly being developed," points out Rocha.

He and his colleagues thus trained their algorithm to detect clues without assuming any knowledge of the deepfake generator used.
"We worked from the idea that regardless of the program, some noise will be left behind, something that is not consistent with the rest of the image." The software operates on two fronts: it looks for noise signatures, such as subtle changes around the edge of the face, and it determines a semantic signature, which can be a flaw in color, texture, or shape.

"The algorithm automates the procedure a human expert would carry out, looking for inconsistencies such as discrepancies in contrast," he says. "The next step is to test it with fake videos generated by a larger number of programs, to confirm its potential."

This type of algorithm can serve various purposes in combating the malicious use of deepfakes. Rocha is part of Semantic Forensics, an international program created by the US Department of Defense, alongside researchers from the University of Siena and the Polytechnic University of Milan in Italy and the University of Notre Dame in the USA. The objective is to develop tools that automatically detect video and image manipulation. "We have already seen cases of doctored videos of military exercises in other countries that have multiplied the number of missiles to show greater military power," he says.

These algorithms can also help identify political deepfakes, like the case of the Ukrainian president, or pornographic ones. It was use of the technology in this area that earned it notoriety at the end of 2017, when internet users began putting the faces of Hollywood celebrities onto the bodies of actors in pornographic movies. According to a September 2019 survey by Dutch cybersecurity company DeepTrace Labs, 96% of deepfake videos online were nonconsensual pornography. Most victims were women, primarily actresses, but there were also reports of cases involving people who were not famous.
In July of this year, Brazilian pop star Anitta was also the victim of a pornographic deepfake. The original video had already been used to produce a deepfake with the face of actress Angelina Jolie.

According to Cristina Tardáguila, programs director at the International Center for Journalists (ICFJ) and founder of fact-checking specialists Agência Lupa, Brazil has already had to expose the truth behind several deepfakes. Programs that automatically detect synthetic media can thus be valuable aids for journalists and fact-checkers working against the clock. "When it comes to misinformation, you have to respond quickly. It is important to invest in AI and tools that can help detect and identify this type of fake content as quickly as possible. That way we can shorten the time between false content being shared and a check being made," she explains.

[Image: In one fake video, Ukrainian President Volodymyr Zelensky appeared to urge his compatriots to lay down their weapons. Credit: Reproduction]

"Deepfakes are the pinnacle of fake news. They can deceive people more easily because viewers believe they are watching something that really happened. The audio can also be generated synthetically," says journalist Magaly Prado, who is doing a postdoctoral fellowship at the Institute for Advanced Studies of the University of São Paulo (IEA-USP) and wrote the book Fake news e inteligência artificial: O poder dos algoritmos na guerra da desinformação (Fake news and artificial intelligence: The power of algorithms in the disinformation war), released by Edições 70 in July.

She emphasizes that although they are less well remembered and less common, deepfake audio files can spread easily on platforms such as WhatsApp, which is widely used by Brazilians. They are made using a method similar to the videos: with accessible software that keeps getting better, it is possible to simulate a person's voice. The easiest victims are public figures, whose voices are readily available online. The technique can also be used for financial scams. "In one case, an employee of a technology company received a voice message from a top executive asking him to transfer some money. He was suspicious, and the message was analyzed by a security company, which verified that it had been constructed using artificial intelligence," Prado says.

Bruno Sartori, director of FaceFactory, explains that producing well-made deepfakes, whether audio or video, is not simple, at least not yet. His company creates synthetic media for commercial use and provides content for comedy shows on the television channels Globo and SBT.

In 2021, he worked on a commercial for Samsung in which the presenter, Maísa, interacted with herself as a child, the latter created using deepfake technology. The virtual girl dances, plays, and throws a laptop in the air. On another occasion, he had to put an actor's face on the body of a stunt double.
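Audio manipulation of the kind Prado describes also has a simple forensic counterpart: when pieces of a recording are cut and spliced together, the waveform often jumps discontinuously at the cut, because the samples on either side no longer continue the same oscillation. A minimal, illustrative sketch of that idea (hypothetical function name, not a production detector):

```python
def splice_candidates(samples, window=64, k=8.0):
    """Flag sample indices where the waveform jumps far more sharply
    than it does in the surrounding window: a crude splice detector.

    `samples` is a list of floats; `k` is how many times larger than
    the local median step a jump must be to count as suspicious.
    """
    diffs = [abs(b - a) for a, b in zip(samples, samples[1:])]
    flagged = []
    for i, d in enumerate(diffs):
        lo, hi = max(0, i - window), min(len(diffs), i + window)
        neighborhood = sorted(diffs[lo:hi])
        median = neighborhood[len(neighborhood) // 2]  # robust to the spike itself
        if d > k * max(median, 1e-9):
            flagged.append(i + 1)  # index where the new segment starts
    return flagged
```

A well-crossfaded edit would not trip such a naive check, which is why real audio forensics relies on subtler cues (spectral and phase statistics), but the intuition of looking for local discontinuities is the same.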
"To train the AI well, you need a big database of images and audio of the person you want to imitate. Good programs that offer high-quality processing also need advanced settings, otherwise there can be visible flaws on the face or, with audio files, a robotic-sounding voice," he explains.

Sartori does not believe the manipulated Jornal Nacional videos with the altered poll data were produced using AI. "In my analysis, the creators used traditional editing techniques, cutting and reversing the order of the audio. This is known as a shallowfake. But if it is done well, it has just as much potential to deceive people," he stresses. He points out that these programs will probably become lighter, smarter, and more accessible over the coming years.

There are some ways people can protect themselves from misinformation created with the aid of this technology. One is to pay attention to the fair-use and privacy terms of the many free apps used in everyday life, from those that ask for access to a user's photos to add fun effects to those that can store recordings of a user's voice. According to UNICAMP's Rocha, many apps store large amounts of data that can be shared for other purposes, such as training deepfake software.

Another important point is media awareness. "While software can help us highlight fake media, the first step is to be suspicious of everything we receive on social networks.
And check the sources of this information, research them," he concludes.

Project
Déjà vu: Coherence of the time, space, and characteristics of heterogeneous data for integrity analysis and interpretation (nº 17/12646-3); Grant Mechanism: Thematic Project; Principal Investigator: Anderson Rocha; Investment: R$1,912,168.25.

Scientific article
KONG, C. et al. Detect and locate: Exposing face manipulation by semantic- and noise-level telltales. IEEE Transactions on Information Forensics and Security. Vol. 17. Apr. 2022.

Book
PRADO, M. Fake news e inteligência artificial: O poder dos algoritmos na guerra da desinformação.
São Paulo: Edições 70, 2022.