<h1>AI systems could monitor animals crossing highways</h1>
<p>Computer systems that use artificial intelligence (AI) to detect moving objects can be adapted and trained to identify animals crossing Brazilian roads. These adapted AI systems could be installed in roadside devices to issue near-immediate alerts when animals are spotted on the highway, in addition to automatically classifying which species are most frequently hit by vehicles.</p>
<p>These are the main findings of a study led by scientists from the São Carlos Institute of Mathematics and Computer Science at the University of São Paulo (ICMC-USP), who analyzed the performance of 14 systems based on the You Only Look Once (YOLO) algorithm, which identifies and delimits the location of specific objects (in this case, animals) in an image or video. The study was published in the journal <em>Scientific Reports</em> in January.</p>
<p>None of the systems performed perfectly when tasked with analyzing images of the five wild animal species they had been trained to recognize: the tapir, jaguarundi (a wild cat), maned wolf, cougar, and giant anteater. Some, however, such as Scaled-YOLOv4, achieved accuracy of greater than 85% in most situations. “Comparative studies are essential to determining the response time needed for these systems to work efficiently on the roads, a scenario that involves high-speed vehicles, and to evaluate the feasibility of their implementation,” says computer scientist Rodolfo Meneguette, head of the research group.</p>
<p>The tests were carried out on tiny, simple Raspberry Pi 4 computers, which weigh around 50 grams. Because they are so small and inexpensive, this type of device could theoretically be installed on existing roadside structures with a Wi-Fi internet connection. The microcomputer would analyze and classify the images locally, transmitting only its verdict (whether or not there is an animal on the road) to a cloud-based system. The cloud system would then trigger a warning, almost immediately, to drivers traveling on the same stretch of road.</p>
<p>According to estimates by the Brazilian Center for Road Ecology (CBEE), linked to the Federal University of Lavras (UFLA) in Minas Gerais, approximately five million large animals are killed on Brazilian roads every year, including capybaras, jaguars, monkeys, and maned wolves.</p>
<p>To train the YOLO models to recognize the five species, the researchers created a database called BRA-Dataset, containing 1,458 images. The images were collected from free online sources found via the Google Images search engine.</p>
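The roadside design described above keeps image analysis on the device and uploads only a compact verdict. The following is a minimal sketch of that verdict step, under stated assumptions: the function name, species list, and 0.5 confidence cutoff are illustrative choices, not details from the study, and in a real deployment the (species, confidence) pairs would come from a trained YOLO model running on camera frames.

```python
# Sketch of the local-verdict step a roadside microcomputer might run.
# Assumptions (not from the study): the YOLO model has already produced
# (class_name, confidence) pairs for one frame; the 0.5 threshold and the
# species names below are illustrative, not the authors' settings.

SPECIES = {"tapir", "jaguarundi", "maned_wolf", "cougar", "giant_anteater"}
CONFIDENCE_THRESHOLD = 0.5  # hypothetical cutoff for counting a detection

def road_verdict(detections):
    """Reduce raw detections to the small payload sent to the cloud,
    rather than the full image: whether an animal is on the road, and
    which trained species were seen above the confidence threshold."""
    seen = sorted({name for name, conf in detections
                   if name in SPECIES and conf >= CONFIDENCE_THRESHOLD})
    return {"animal_on_road": bool(seen), "species": seen}

# A detection such as ("tapir", 0.90) mirrors the "anta 0.90" labels the
# models draw on images: 90% certainty that the box contains that class.
print(road_verdict([("tapir", 0.90), ("cougar", 0.30), ("dog", 0.95)]))
# {'animal_on_road': True, 'species': ['tapir']}
```

Transmitting only this verdict, instead of video, is what makes a low-bandwidth Wi-Fi link to the cloud plausible for the alert system.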
<p>To test how quickly the systems could recognize the animals, the team used videos they recorded themselves at the São Carlos Ecological Park, in addition to free footage found online.</p>
<div id="attachment_516208" style="max-width: 810px" class="wp-caption alignright"><img loading="lazy" decoding="async" class="wp-image-516208 size-full" src="https://revistapesquisa.fapesp.br/wp-content/uploads/2024/05/RPF-ia-animais-travessia-2024-03-800.jpg" alt="Screenshots from three systems identifying and classifying wild animals in images" width="800" height="1569" /><p class="wp-caption-text">Screenshots provided by three systems designed to identify and classify wild animals in images<span class="media-credits">BRA-Dataset</span></p></div>
<p>The YOLO architecture combines image processing and AI in convolutional neural networks, which are widely used in the field of computer vision. “This approach allows the machine, when receiving new images or videos, to compare the learned characteristics against predefined classes,” explains computer scientist Gabriel Ferrante, lead author of the article, who defended his master’s thesis on the topic at ICMC-USP in 2023, supervised by Meneguette.</p>
<p>The neural network divides a still image or video frame into smaller parts: sets of pixels that are transformed into numerical data. Through mathematical and probabilistic calculations, this data is used to classify what type of object appears in the image, and where, to a given degree of certainty.</p>
<p>The images accompanying this article show the kinds of results produced by the YOLO systems when looking for animals on roads. The systems drew boxes around the recognized animals and assigned each to one of the five classes they had learned. The name of the recognized animal appears in the image, followed by a number between 0 and 1. The label “anta 0.90,” for example, means the system is 90% certain that the object identified in the image belongs to that class (<em>anta</em> is Portuguese for tapir).</p>
<p>“We tested different models based on the YOLO architecture to see whether one could be ideal for specific contexts,” explains computer scientist Luís Nakamura of the Federal Institute of São Paulo (IFSP), Catanduva campus, a coauthor of the article. Despite the training, the systems struggled to distinguish animals in more challenging scenarios, such as when they were hidden by other objects, camouflaged against the landscape, or very far from the camera.</p>
<p>“To understand the pixel patterns in an image, convolutional neural networks scan parts of it in sequence,” explains Ferrante. “If the environment interferes with the recognition of important visual characteristics, such as edges, textures, and colors, the software has difficulty classifying and delimiting the area of a possible object.”</p>
<p>Systems designed to analyze images taken in daylight are not suitable for surveillance at night or in low-visibility conditions. In those cases, infrared cameras capable of “seeing” in the dark could serve as an alternative, although this approach was not tested in the study.</p>
<p>Data scientist Alexandre de Siqueira, who was not involved in the research, believes future studies could expand the number of animal species in the database used to train the systems. “If this technology were installed in static cameras, it could even be used to observe species migrating between different regions of the country, for example,” says Siqueira, who worked at the Berkeley Institute for Data Science (BIDS) at the University of California between 2019 and 2022.
“It is also important to test networks with models other than YOLO to assess which is the fastest or cheapest, depending on the purpose of the application.”</p>
<p class="bibliografia separador-bibliografia"><strong>Projects<br />
1.</strong> Services for an intelligent transportation system (<a href="https://bv.fapesp.br/pt/auxilios/107733/servicos-para-um-sistema-de-transporte-inteligente/?q=20/07162-0" target="_blank" rel="noopener">nº 20/07162-0</a>); <strong>Grant Mechanism</strong> Regular Research Grant; <strong>Principal Investigator</strong> Rodolfo Meneguette (USP); <strong>Investment</strong> R$146,438.83.<br />
<strong>2.</strong> Dynamic resource management for intelligent transportation system applications (<a href="https://bv.fapesp.br/pt/auxilios/110015/gerenciamento-de-recursos-dinamicos-para-aplicativos-de-sistema-de-transporte-inteligente/" target="_blank" rel="noopener">nº 22/00660-0</a>); <strong>Grant Mechanism</strong> Regular Research Grant; Sprint Program; <strong>Agreement</strong> University of Manchester; <strong>Principal Investigator</strong> Rodolfo Meneguette (USP); <strong>Investment</strong> R$64,930.00.</p>
<p class="bibliografia"><strong>Scientific article<br />
</strong>FERRANTE, G. S. <em>et al.</em> <a href="https://www.nature.com/articles/s41598-024-52054-y" target="_blank" rel="noopener">Evaluating YOLO architectures for detecting road killed endangered Brazilian animals</a>. <strong>Scientific Reports</strong>. Jan. 16, 2024.</p>