AI Governance Institute



Global AI Governance Frameworks

Global AI governance is built on non-binding frameworks, international standards, and multi-stakeholder agreements rather than enforceable treaties. The OECD AI Principles (2019, updated 2024) form the most widely adopted international baseline, incorporated into national AI strategies across more than 40 countries. The UNESCO Recommendation on the Ethics of AI (2021) adds a normative foundation covering human rights, sustainability, and cultural diversity. The G7 Hiroshima AI Code of Conduct and Bletchley Declaration on AI Safety reflect commitments by leading economies to frontier model governance and safety evaluation.

These frameworks carry no direct legal force, but they shape national legislation, inform corporate AI ethics commitments, and provide the normative vocabulary that regulators draw on when they do legislate. The EU AI Act's risk tiers and the NIST AI RMF's governance functions both trace their intellectual lineage to the OECD Principles. For organizations operating across multiple jurisdictions, fluency in these international frameworks supports a governance posture that translates across markets.

The Financial Stability Board and FATF have published sector-specific global guidance on AI in financial services and anti-money laundering, respectively, creating softer compliance expectations for systemically important institutions. The UN's emerging work on AI governance — including the Global Digital Compact — signals growing international attention to AI's impact on development, rights, and geopolitical stability.

Key themes

  • OECD AI Principles as the global normative baseline
  • G7 Hiroshima Code of Conduct on frontier model governance
  • UNESCO Recommendation on AI Ethics — human rights and sustainability
  • Financial Stability Board and FATF sector-specific guidance

Regulatory frameworks and guidance (9)

Guideline

Bletchley Declaration on AI Safety

A landmark multilateral political declaration signed at the inaugural AI Safety Summit (Bletchley Park, November 2023), in which 28 governments recognized the risks posed by frontier AI and committed to collaborative international action on AI safety research, evaluation, and information sharing.

Guideline

FATF AI Anti-Money Laundering Guidance

FATF guidance on the use of AI and machine learning in anti-money laundering, counter-terrorism financing, and proliferation financing compliance, setting expectations for responsible AI adoption in transaction monitoring, customer due diligence, and suspicious activity detection.

Guideline

Financial Stability Board AI in Finance

The FSB's thematic review and guidance on the use of artificial intelligence and machine learning in financial services, addressing financial stability risks, governance expectations, and supervisory considerations for financial institutions deploying AI/ML.

Guideline

G7 Hiroshima AI Code of Conduct

A voluntary international code of conduct issued under the G7 Hiroshima AI Process, setting out eleven guiding principles and corresponding actions for developers and operators of advanced AI systems, with a particular focus on frontier and general-purpose AI models. The code operates as a soft-law commitment mechanism intended to shape responsible AI development globally.

Framework

OWASP Top 10 for Large Language Model Applications

The OWASP Top 10 for Large Language Model Applications identifies the ten most critical security risks in LLM-powered systems, including prompt injection, insecure output handling, training data poisoning, model denial of service, and supply chain vulnerabilities. It is the most widely referenced security framework for AI applications and is used by development and security teams globally to prioritize controls.
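Two of the listed risks lend themselves to a brief illustration: insecure output handling (treating model output as trusted markup) and prompt injection. The sketch below is a minimal, illustrative example only; the function names and regex heuristics are the author's own assumptions, not patterns prescribed by OWASP, and real mitigations require far more than keyword matching.

```python
import html
import re

# Naive phrases sometimes associated with prompt-injection attempts.
# Illustrative only: a production system would use layered defenses,
# not a regex list.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?(previous|prior) instructions", re.IGNORECASE),
    re.compile(r"reveal (the )?system prompt", re.IGNORECASE),
]

def flag_possible_injection(user_input: str) -> bool:
    """Return True if the input matches a known injection heuristic."""
    return any(p.search(user_input) for p in INJECTION_PATTERNS)

def render_llm_output(raw_output: str) -> str:
    """Escape model output before embedding it in an HTML page, so the
    model cannot smuggle markup or script into the UI (a basic
    insecure-output-handling mitigation)."""
    return html.escape(raw_output)

print(flag_possible_injection("Please ignore previous instructions"))
print(render_llm_output("<script>alert(1)</script>"))
```

The point of the sketch is the control boundary: input heuristics run before the model, and output escaping runs after it, so neither side of the LLM call is implicitly trusted.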

Framework

The Role of Investors in AI Governance

This publication from the Oxford Martin AI Governance Initiative examines the responsibilities of investors in shaping AI governance outcomes through financing and oversight mechanisms. It addresses how capital allocation decisions by institutional investors, venture capital, and private equity can influence AI safety and accountability practices. The framework is intended to guide investors seeking to integrate AI risk considerations into due diligence, stewardship, and portfolio management.

Framework

Singapore Consensus on Global AI Safety Research Priorities

The Singapore Consensus is an international consensus document establishing a coordinated agenda for AI safety research priorities across jurisdictions and institutions. It emerged from a multilateral summit convened by the Singapore Government and reflects agreement among participating governments and organizations on where collaborative safety research efforts should be directed. The document does not impose binding legal obligations but provides a shared reference framework for national AI safety programs and research funding bodies.

Framework

Global Dialogue on AI Governance (UN General Assembly Resolution A/RES/79/325)

The UN General Assembly established the Global Dialogue on AI Governance through Resolution A/RES/79/325 to create an inclusive intergovernmental platform for deliberating on artificial intelligence challenges and priorities. It is open to UN member states and a broad range of stakeholders, including civil society and private sector participants. The submission portal accepts contributions through April 30, 2026, with collected input intended to shape emerging global AI governance priorities.

Guideline

Verifiable Semiconductor Manufacturing: Governance and Verification Systems for AI Supply Chain Oversight

This guideline from the Oxford Martin AI Governance Initiative addresses governance and verification mechanisms applied to semiconductor manufacturing in the context of AI supply chains. It is relevant to organizations involved in the design, production, procurement, or deployment of AI hardware at scale. The publication outlines approaches for establishing verifiable assurances about semiconductor provenance, integrity, and production standards.