United Nations Efforts for AI Governance

At a time when technological development is accelerating at an unprecedented pace, the international system faces a unique challenge: how can we govern a technology that disregards geographical borders, whose internal mechanisms are not fully understood, yet has the power to reshape the entire international system? Artificial intelligence (AI), with its numerous applications and enormous potential, has become a global governance dilemma that exceeds the capacities of traditional regulatory frameworks.
In response, the United Nations took action on March 21, 2024, by adopting the first global resolution on AI, calling for safe and reliable systems that respect human rights. This was followed in September 2024 by the release of the report “Governing AI for Humanity” by the High-Level Advisory Body on AI, and then, on August 26, 2025, the creation of two new global governance mechanisms: the Independent International Scientific Committee on AI, and the Global Dialogue on AI Governance.
These initiatives reflect a growing recognition that runaway technology requires an organized and comprehensive response; however, they raise a central question: does the United Nations have the practical tools to rein in this technology, or will its efforts remain limited in the face of market forces and intense geopolitical competition between the United States, China, and the European Union?
Regulatory Gap
The gap between rapid technological development and slow regulatory response is one of the most prominent challenges facing the international community in the field of AI. Despite the hundreds of frameworks and guidelines adopted by governments, corporations, and international organizations, these arrangements suffer from fundamental shortcomings in representation, coordination, and implementation.
According to the UN High-Level Advisory Body report, 118 countries, mostly in the Global South, do not participate in any of the major international AI governance initiatives, while only seven countries (Canada, France, Germany, Italy, Japan, the United Kingdom, and the United States) participate in all of them. This widespread exclusion undermines the legitimacy and effectiveness of any global governance framework and risks dividing the world into disparate, incompatible governance systems.
The problem is compounded by the lack of coordination among existing initiatives, both internationally and even within the UN system itself. While agencies such as the International Telecommunication Union, UNESCO, and the Office of the High Commissioner for Human Rights address different aspects of AI, there is no comprehensive framework coordinating these efforts.
The challenge grows more complex considering that most current commitments are voluntary, with no clear mechanisms for accountability or enforcement. This means that ethical discourse around AI, although important, often remains far removed from practical implementation, leaving a dangerous gap between declared ambitions and real-world application.
Divergent Models
Regionally, ambitious attempts have emerged to fill the governance void, most notably the European Union’s AI Act, which came into force on August 1, 2024. This law is the world’s first comprehensive regulatory framework for AI, adopting a risk-based approach: it bans AI systems posing “unacceptable” risk (such as social scoring systems) and imposes strict requirements on “high-risk” systems (those used in employment, education, and criminal justice). However, this pioneering approach faces enormous implementation challenges, including reconciling differing national laws within the EU, ensuring that companies comply on schedule, and keeping pace with rapid technological evolution that may render some provisions outdated before they are fully applied.
In contrast, China adopts a model emphasizing national security and social stability. On September 9, 2024, according to an analysis by the law firm DLA Piper, China issued its AI Safety Governance Framework through the National Information Security Standardization Technical Committee (TC260), and on July 26, 2025, it launched its Global AI Governance Action Plan. This approach focuses on monitoring the content produced by generative AI systems, ensuring compliance with “core socialist values,” and addressing misuse and loss-of-control risks, particularly in open-source systems.
The United States takes a more fragmented approach, relying on sector-specific legislation and voluntary industry initiatives, emphasizing competitiveness and innovation. These divergent governance models reflect not only technical differences but fundamental political and value-based contradictions, complicating the possibility of a unified global framework.
Ambitious UN Initiatives
In light of these varied governance models, the UN, in its September 2024 report, proposed a comprehensive AI governance vision based on seven key recommendations aimed at achieving “shared understanding, common ground, and shared benefits.”
- Establish an Independent International Scientific Committee on AI, modeled on the Intergovernmental Panel on Climate Change, to provide impartial scientific assessments of AI capabilities, opportunities, and risks. This committee would issue an annual comprehensive report, quarterly research summaries on priority areas, and emergency reports for emerging risks, addressing the massive information asymmetry between advanced AI laboratories and the rest of the world.
- Launch a Global Dialogue on AI Governance, held twice a year alongside the UN General Assembly and involving all countries and stakeholders. This recommendation was realized in September 2025, when the General Assembly launched the Global Dialogue on AI Governance as one of its two new mechanisms, alongside the Independent International Scientific Committee on AI. The dialogue aims to share best practices, enhance alignment across regulatory frameworks, and address cross-border challenges that exceed the capacities of national agencies.
- Establish an AI Standards Exchange Platform to address the current explosion of technical standards (from 6 in 2018 to 117 by mid-2024 across ISO, ITU, and IEEE), ensuring a common language for core terms like “fairness,” “transparency,” and “safety.”
- Adopt three shared-benefits proposals (recommendations 4–6):
- A Capacity Development Network connecting regional and national centers of excellence to provide training, expertise, and computing resources for researchers and social entrepreneurs, especially in resource-limited countries.
- A Global AI Fund providing financial and in-kind support (computing resources, models, training data) to countries unable to access these capabilities independently, aiming to reduce the digital divide.
- A Global AI Training Data Framework to address availability, interoperability, and fair use, with focus on privacy protection, cultural and linguistic diversity, and preventing further economic concentration.
- Establish a light, flexible AI Office within the UN Secretariat to serve as a “bridge” linking all the proposed initiatives. Reporting directly to the Secretary-General, it would coordinate AI efforts across the UN system, engage stakeholders, and advise on emerging issues, leveraging existing capacities across UN agencies rather than creating a heavy bureaucracy.
At the UN General Assembly in September 2025, Nobel Peace Prize laureate Maria Ressa highlighted the AI Red Lines campaign, which calls on governments to unite “to prevent globally unacceptable AI risks.” More than 200 prominent politicians and scholars, including ten Nobel laureates, signed the appeal.
The Security Council also held an open debate on AI, Peace, and International Security, noting that AI carries both benefits and dangers, is no longer science fiction, and requires immediate international regulatory measures, especially regarding autonomous and nuclear weapons.
Thorny Challenges
Despite these ambitions, UN efforts face deep structural challenges:
- Speed: AI develops far faster than international institutions can respond; by the time a regulatory framework is proposed and enacted, the technology may have already advanced significantly.
- Sovereignty: Major powers, particularly the US and China, view AI as a strategic asset and are reluctant to accept international limits that could weaken their competitive advantage.
- Unequal Power: Advanced AI requires massive computing resources, large datasets, and scarce expertise, concentrated in a few countries and companies. This risks “algorithmic colonialism,” making technologically weaker countries dependent on producers. UN proposals like the Global Fund and Capacity Network attempt to address this, but the challenge is immense, and available resources may be insufficient for meaningful impact.
- Ethical and Legal Issues: A key controversy is AI in lethal autonomous weapons (LAWS). In November 2024, 161 countries voted in favor of a General Assembly resolution on LAWS, expressing concern over emerging military AI technologies. The Secretary-General called for a comprehensive regulatory framework by 2026, including a full ban, but major military powers hesitate to accept binding restrictions, making an effective treaty extremely difficult. In December 2024, the Security Council debated AI in conflicts, but deep geopolitical divisions blocked concrete agreements.
Future Pathways
Several potential paths for international AI governance emerge:
- Multipolar Governance: Independent governance efforts by the EU, China, the US, etc., leading to an interconnected network of frameworks with varying alignment and integration. This is less ambitious than global governance but may be most realistic, aligning with the UN report’s “networked and flexible” approach.
- Governance Fragmentation: Failure of international coordination splits the world into competing technological blocs, each imposing its own standards and values. This scenario resembles a “technological cold war,” risking non-alignment, duplicated efforts, and a “race to the bottom” in safety and ethics.
- International AI Agency: Similar to the IAEA, a more ambitious scenario would create a global agency with strong authority over standards, monitoring, verification, enforcement, and emergency response. This requires huge political consensus, resources, and time, making near-term implementation unlikely.
- Private Sector Dominance: Major tech companies determine AI’s future, with limited government oversight and weak international coordination. While most companies commit to responsible AI development, competitive pressures may override these commitments when business interests conflict.
Conclusion
UN efforts to set boundaries for AI represent a necessary step toward addressing one of the most complex governance challenges of our time. Proposals from the High-Level Advisory Body—from the International Scientific Committee to the Global AI Fund—reflect a deep understanding of the multidimensional challenge. However, success in taming runaway AI depends on factors beyond institutional design: genuine political will from major powers, sustained commitment to multilateralism, sufficient resources to close the digital divide, and effective accountability mechanisms.
Given rising geopolitical tensions, massive economic interests, and rapid technological advancement, the open question remains: can the UN turn its ambitions into reality, or will runaway AI continue to surpass traditional governance capacities? One certainty is that the answer will fundamentally shape the international system in the coming decades, determining whether AI becomes a tool for shared progress or a source of worsening global divides.