American political scientist Allaine Cerwonka, director of international work and partnerships at the Alan Turing Institute, the UK’s national institute for artificial intelligence (AI) and data science, visited Brazil in January to discuss potential collaborations. The Turing Institute is a nonprofit organization, funded by the British government and private institutions, that leads and carries out research in partnership with universities, government agencies, and businesses in areas such as security, the environment, the economy, and climate change. It is one of the coordinating institutions of the AI Standards Hub, which studies and promotes rules and standards for AI. It also operates at the intersection of technology, social sciences, and public policy, working with authorities in the justice system on tools such as the Online Harms Observatory, a system created to monitor and tackle online harm in the UK, including on social media.
Cerwonka has a PhD in political science from the School of Social Sciences at the University of California, Irvine, and previously served as dean of the School of Social Sciences at the University of East London and founding director of the Science Studies Program at Central European University. She gave the following interview via email and video call.
What was the reason behind your visit to Brazil?
International collaborations and knowledge exchange are fundamental to the UK and our institute. Brazil is particularly interesting to us because its size and population make it an important country for any engagement with Latin America. It is at the center of a number of important international discussions on the potential use of AI to address urgent global challenges—at the G20 summit held in Rio de Janeiro in November 2024, for example. And in November this year, the country will host COP30 in Belém, Pará. These meetings reflect Brazil’s leadership in issues such as renewable energy and the impact of climate change on the natural world, human security, and health. At the Turing Institute, we are developing AI and new technologies to help address a number of these shared challenges, and we see Brazil as a potential partner in that ambition. The Brazilian government, the private sector, and civil society are engaged in important discussions on how to approach regulation and the responsible development of AI—areas of shared concern for our two countries.
Which institutions did you talk to?
We visited institutions and government agencies in São Paulo, Brasília, and Rio de Janeiro. We were impressed by the work undertaken by the MCTI [Ministry of Science, Technology, and Innovation], such as the Brazilian artificial intelligence plan, which addresses fundamental issues like data privacy, the ethical and responsible development of AI, and how to regulate AI to protect citizens without stifling innovation. We discussed the two countries’ efforts to improve the delivery of public services through digital government. We also had fruitful discussions with the LNCC [National Laboratory for Scientific Computing] about potential areas of collaboration and researcher exchange. We invited the Brazilian government to send delegates to the Global Summit 2025 – AI Standards Hub, a meeting that will take place in London in March this year. And we had productive discussions with leaders and researchers connected to FAPESP.
What did you discuss?
Going forward, Turing will explore with FAPESP how to connect the new AI Research Centers FAPESP has funded with centers of excellence in the UK within Turing’s network of universities. There is also scope to share the institute’s experiences with an innovation hub that FAPESP is setting up together with the government of São Paulo.
What expertise in Brazilian science caught your attention the most?
I was excited about the work in sustainability and the environment. At the Turing Institute, some of our work has to do with the cybersecurity of wind turbines. There was a cyberattack in Germany last year that disabled thousands of turbines in one fell swoop. We cannot rely on these renewable energy sources without considering security issues as well. Brazil is an important partner in this work due to its strength in renewable energy. Turing will continue talking with the Port of Açu, in Rio de Janeiro, about autonomous ships and offshore wind turbines. In the field of AI for the environment, we were excited by the work of INPO [National Institute for Oceanic Research, a private research institution based in Rio de Janeiro] on creating a digital twin [a virtual model of a physical system or object used for simulations] of the South Atlantic.
Potential AI risks must be managed via continuous reflection throughout the research process
Turing has led a program to implement AI and data science in priority areas across the UK. What were the most significant results?
The program, funded by UK Research & Innovation [UKRI, the UK’s leading funding agency], was responsible for around 100 projects addressing the most important areas for the public and the economy. We produced strong work on digital twins, for example, working with Rolls-Royce to increase efficiency in the aerospace industry. An example of our work in the health sector is the SPARRA project [Scottish Patients at Risk of Readmission and Admission], which uses AI modeling to predict the likelihood of cardiac patients returning to hospital based on data from the Scottish health system. For each area we worked in with the government, we developed white papers used to brief members of parliament and relevant sectors of the civil service. We have around 75 in-house researchers who work mainly on issues such as security and defense, where much of the work is classified. For less sensitive issues, we are able to engage the best researchers from universities.
What are the biggest challenges today in developing ethical and responsible AI applications?
That is a very important question. Unfortunately, there is no fixed set of rules for producing ethical, responsible AI that professionals simply need to memorize in order to avoid doing harm with their research. At Turing, we believe that potential risks in AI research and output must be managed through continuous reflection throughout the research process. We have produced a handbook titled The Turing Way that seeks to address these challenges and help apply the goals and practices of reproducible research to the relatively young field of machine learning. We have also created a protocol and training course for ethical, responsible AI called The Turing Commons. The course was developed to help train a new generation of AI researchers in identifying and building ethical frameworks for AI applications.
How can we balance AI regulation to ensure rights are protected without compromising innovation?
The UK has adopted a “pro-innovation” approach to AI regulation, described and justified in a government white paper, which seeks a middle ground between the European Union’s AI Act and the approach currently taken by the USA. Recognizing that AI technologies are developing at a rapid pace, the UK government has tasked its existing regulatory bodies with developing standards and regulations within their own sectors. The Turing Institute has been part of this process in collaboration with the National Physical Laboratory and the British Standards Institution, contributing to the development of the AI Standards Hub, which is designed to ensure that different parts of the ecosystem are represented in the development of AI standards. The Hub has also created an online platform to identify all relevant regulations and standards for a given technology. This work is crucial for giving industry the clarity it needs to innovate with confidence. The UK government is keen to deploy AI and other emerging technologies to increase the efficiency and standards of public services. Thus far, the UK has digitized a great deal of its services at both the national and local levels. Naturally, the standard of accountability for government use of AI must be higher than in other sectors.
