Possible functions of an international artificial intelligence safety institute. An institutional analysis of the IAEA and the IPCC in the context of recent trends in artificial intelligence governance

Authors

  • Arcangelo Leone De Castris

DOI:

https://doi.org/10.32091/RIID0210

Keywords:

Artificial intelligence, AI governance, AI safety, AI ethics

Abstract

The main actors involved in the governance of artificial intelligence (AI) around the world agree that, while this technology promises to deliver enormous economic and social benefits, processes must be put in place to regulate its development and use so as to mitigate the risks it entails. Several international institutions – including the OECD, the G7, the G20, UNESCO and the Council of Europe – have begun to develop governance frameworks for the ethical and responsible development of AI. Despite important progress in this direction, there are as yet no institutionalised processes at the international level for identifying, measuring and controlling the potentially harmful capabilities of AI. With the aim of contributing to the academic debate on the topic, this article reflects on the case for creating an international AI safety institute. Drawing on an analysis of the international institutions responsible for safety in policy areas adjacent to AI, as well as of the national AI safety institutes recently established by the United Kingdom and the United States, the article proposes a list of functions that an international AI safety institute could perform. In particular, it suggests that the possible functions of such an institute fall into three categories: (a) research and cooperation, (b) auditing and compliance verification of AI models, and (c) support for the definition of AI governance frameworks.

Author biography

  • Arcangelo Leone De Castris

Research Associate at The Alan Turing Institute, London

References

Y. Afina, P. Lewis (2023), The nuclear governance model won’t work for AI, Chatham House – International Affairs Think Tank, 2023

S. Agrawala (1998-a), Context and Early Origins of the Intergovernmental Panel on Climate Change, in “Climatic Change”, vol. 39, 1998

S. Agrawala (1998-b), Structural and Process History of the Intergovernmental Panel on Climate Change, in “Climatic Change”, vol. 39, 1998

S. Altman, G. Brockman, I. Sutskever (2023), Governance of superintelligence, OpenAI, 2023

Y. Bengio, S. Mindermann, D. Privitera et al. (2024), International Scientific Report on the Safety of Advanced AI: Interim Report, DSIT 2024/009, 2024

B. Bolin (2007), A History of the Science and Politics of Climate Change: The Role of the Intergovernmental Panel on Climate Change, Cambridge University Press, 2007

R. Bommasani, D.A. Hudson, E. Adeli et al. (2022), On the Opportunities and Risks of Foundation Models, arXiv, 2022

A. Booth (2016), EVIDENT Guidance for Reviewing the Evidence: a compendium of methodological literature and websites, Working paper, 2016

R. Brown, J. Kaplow (2014), Talking Peace, Making Weapons: IAEA Technical Cooperation and Nuclear Proliferation, in “Journal of Conflict Resolution”, vol. 58, 2014, n. 3

CMA (2024), CMA AI Strategic Update, UK Competition and Markets Authority, 2024

Council of Europe (2024), Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law, Council of Europe Treaty Series - No. 225, 2024

Council on Foreign Relations (2024), Timeline: North Korean Nuclear Negotiations, Council on Foreign Relations, 2024

K. De Pryck, M. Hulme (eds.) (2022), A Critical Assessment of the Intergovernmental Panel on Climate Change, Cambridge University Press, 2022

DSIT (2024-a), Seoul Ministerial Statement for Advancing AI Safety, Innovation, and Inclusivity: AI Seoul Summit 2024, UK Department for Science, Innovation, and Technology, 2024

DSIT (2024-b), Seoul Intent Toward International Cooperation on AI Safety Science, AI Seoul Summit 2024 (Annex), UK Department for Science, Innovation, and Technology, 2024

DSIT (2023), Introducing the AI Safety Institute, UK Department for Science, Innovation, and Technology, 2023

D. Fischer (1997), History of the International Atomic Energy Agency: The First Forty Years, International Atomic Energy Agency, 1997

GPAI (2023), State-of-the-art Foundation AI Models Should be Accompanied by Detection Mechanisms as a Condition of Public Release, Report, Global Partnership on AI, 2023

A. Guterres (2023), Secretary-General Urges Security Council to Ensure Transparency, Accountability, Oversight, in First Debate on Artificial Intelligence, United Nations, 2023

I. Habli (2023), On the Meaning of AI Safety, University of York, 2023

D. Henderson (2007), Unwarranted Trust: A Critique of the IPCC Process, in “Energy & Environment”, vol. 18, 2007, n. 7-8

J. Hendrix (2023), Exploring Global Governance of Artificial Intelligence, Tech Policy Press, 2023

L. Ho, J. Barnhart, R. Trager et al. (2023), International Institutions for Advanced AI, arXiv, 2023

IAEA (1956), The Statute of the IAEA, International Atomic Energy Agency, 1956

ICO (2023), Guidance on AI and data protection, UK Information Commissioner’s Office, 2023

A. Jobin, M. Ienca, E. Vayena (2019), The global landscape of AI ethics guidelines, in “Nature Machine Intelligence”, vol. 1, 2019

C.F. Kerry, J.P. Meltzer, A. Renda et al. (2021), Strengthening International Cooperation on AI, Brookings, 2021

D. Kimball, S. Bugos (2022), Timeline of the Nuclear Nonproliferation Treaty (NPT), Arms Control Association, 2022

J. Krige, J. Sarkar (2018), US technological collaboration for nonproliferation: Key evidence from the Cold War, in “The Nonproliferation Review”, vol. 25, 2018, n. 3-4

D. Leslie (2019), Understanding artificial intelligence ethics and safety, The Alan Turing Institute, 2019

D. Leslie, C. Burr, M. Aitken et al. (2021), AI, human rights, democracy and the rule of law: A primer prepared for the Council of Europe, The Alan Turing Institute, 2021

L. Liu (2023), Letter: Setting rules for AI must avoid regulatory capture by Big Tech, in “Financial Times”, 27 October 2023

M. Mäntymäki, M. Minkkinen, T. Birkstedt, M. Viljanen (2022), Defining organizational AI governance, in “AI and Ethics”, vol. 2, 2022

G. Marcus, A. Reuel (2023), The world needs an international agency for artificial intelligence, say two AI experts, in “The Economist”, 2023

D. Milmo (2023), AI risk must be treated as seriously as climate crisis, says Google DeepMind chief, in “The Guardian”, 24 October 2023

A. Narayanan, S. Kapoor (2024), AI Safety is not a model property, AI Snake Oil, 2024

NIST (2023-a), U.S. Artificial Intelligence Safety Institute, 2023

NIST (2023-b), Artificial Intelligence Risk Management Framework (AI RMF), 2023

OECD (2023), G7 Hiroshima Process on Generative Artificial Intelligence (AI): Towards a G7 Common Understanding on Generative AI, OECD Publishing, 2023

OECD (2019), OECD AI Principles Overview, Organisation for Economic Co-operation and Development, 2019

Prime Minister’s Office (2023), UK to host first global summit on Artificial Intelligence, 2023

H. Roberts, E. Hine, M. Taddeo, L. Floridi (2023), Global AI governance: Barriers and pathways forward, in “International Affairs”, vol. 100, 2023, n. 3

H. Roberts, M. Ziosi, C. Osborne (2023), A Comparative Framework for AI Regulatory Policy, Ceimia, 2023

E. Roehrlich (2022), Inspectors for Peace, Johns Hopkins University Press, 2022

Q. Schiermeier (2010), IPCC flooded by criticism, in “Nature”, vol. 463, 2010

S. Steinmo, K. Thelen, F. Longstreth (eds.) (1992), Structuring Politics: Historical Institutionalism in Comparative Analysis, Cambridge University Press, 1992

I.J. Stewart (2023), Why the IAEA model may not be best for regulating artificial intelligence, in “Bulletin of the Atomic Scientists”, 2023

M. Suleyman, M.-F. Cuéllar, I. Bremmer et al. (2023), Proposal for an International Panel on Artificial Intelligence (AI) Safety (IPAIS): Summary, Carnegie Endowment for International Peace, 27 October 2023

M. Suleyman, E. Schmidt (2023), Mustafa Suleyman and Eric Schmidt: We need an AI equivalent of the IPCC, in “Financial Times”, 2023

The American Presidency Project (2023), FACT SHEET: President Biden Issues Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence, 2023

C. Thomas, H. Roberts, J. Mökander et al. (2024), The case for a broader approach to AI assurance: addressing ‘hidden’ harms in the development of artificial intelligence, in “AI & SOCIETY”, 16 May 2024

UN - AI Advisory Body (2023), Final Report - Governing AI for Humanity, United Nations, 2023

UN - Office for Disarmament Affairs (1970), Treaty on the Non-Proliferation of Nuclear Weapons (NPT), United Nations, 1970

M. Vardy, M. Oppenheimer, N.K. Dubash et al. (2017), The Intergovernmental Panel on Climate Change: Challenges and Opportunities, in “Annual Review of Environment and Resources”, vol. 42, 2017

M. Veale, K. Matus, R. Gorwa (2023), AI and Global Governance: Modalities, Rationales, Tensions, in “Annual Review of Law and Social Science”, vol. 19, 2023

L. Weiss (2017), Safeguards and the NPT: Where our current problems began, in “Bulletin of the Atomic Scientists”, vol. 73, 2017

Published

2025-02-26

Issue

Vol. 7 No. 1 (2025)

Section

Studies and research

How to cite

[1]
De Castris, A.L. 2025. Possibili funzioni di un istituto internazionale per la sicurezza dell’intelligenza artificiale. Un’analisi istituzionale dell’AIEA e dell’IPCC nel contesto delle recenti tendenze sulla governance dell’intelligenza artificiale. Rivista italiana di informatica e diritto. 7, 1 (Feb. 2025), 12. DOI: https://doi.org/10.32091/RIID0210.