
Global AI Regulation & Framework Directory

39 entries
Framework · APAC/Australia

Australia AI Ethics Framework

A voluntary national framework establishing eight core ethical principles to guide the responsible design, development, and deployment of AI systems by Australian organisations.

Australian Government Department of Industry, Science and Resources (DISR)

Guideline · Global

Bletchley Declaration on AI Safety

A landmark multilateral political declaration signed at the inaugural AI Safety Summit, in which 28 countries and the European Union agreed on the existence of frontier AI risks and committed to collaborative international action on AI safety, evaluation, and information sharing.

Twenty-eight signatory countries, including the United States, United Kingdom, China, Australia, Canada, France, Germany, India, Japan, Republic of Korea, and Saudi Arabia, together with the European Union; convened at the UK AI Safety Summit, Bletchley Park

Regulation · US

California AI Transparency Act

California's AI Transparency Act (SB 942) requires developers of generative AI systems that reach defined usage thresholds to provide AI detection tools and disclosure mechanisms so users and consumers can identify AI-generated content. It establishes baseline transparency obligations for covered AI providers operating in or targeting California.

California State Legislature; administered by the California Attorney General

Regulation · APAC/China

China Algorithm Recommendation Regulations

China's first dedicated regulation governing algorithmic recommendation systems, imposing transparency, user-control, and content-moderation obligations on providers of internet-based recommendation services operating in or targeting users in China.

Cyberspace Administration of China (CAC), Ministry of Industry and Information Technology (MIIT), Ministry of Public Security (MPS), State Administration for Market Regulation (SAMR)

Regulation · APAC/China

China Deep Synthesis Regulations

China's Provisions on the Administration of Deep Synthesis Internet Information Services regulate the development, distribution, and use of deep synthesis technologies, including AI-generated or AI-manipulated text, images, audio, video, and virtual humans, by service providers and users operating in or targeting China. They impose labelling, registration, security-assessment, and content-moderation obligations.

Cyberspace Administration of China (CAC), Ministry of Industry and Information Technology (MIIT), Ministry of Public Security (MPS)

Regulation · APAC/China

Interim Measures for the Management of Generative Artificial Intelligence Services

China's Interim Measures for the Management of Generative AI Services impose mandatory obligations on providers of generative AI products and services to the Chinese public, covering training data governance, content safety, algorithmic transparency, and registration requirements.

Cyberspace Administration of China (CAC), in conjunction with the National Development and Reform Commission (NDRC), Ministry of Education, Ministry of Science and Technology (MOST), Ministry of Industry and Information Technology (MIIT), Ministry of Public Security, and National Radio and Television Administration (NRTA)

Regulation · US

Colorado AI Act SB205

Colorado's SB 205 is the first US state statute imposing affirmative obligations on developers and deployers of high-risk AI systems, requiring algorithmic impact assessments, transparency notices, and consumer rights for consequential decisions.

Colorado General Assembly; signed by Governor Jared Polis

Regulation · EU

EU AI Liability Directive

A proposed EU directive that would have adapted civil liability rules to apply to AI systems, enabling individuals harmed by AI to obtain compensation by lowering evidentiary burdens through disclosure and causation presumptions. The proposal was formally withdrawn in early 2025 following failure to reach political agreement.

European Parliament and Council of the European Union

Framework · EU

EU AI Office Framework

The institutional and regulatory framework establishing the European AI Office as the central EU body responsible for supervising general-purpose AI models, enforcing the EU AI Act at the supranational level, and coordinating AI governance across EU member states.

European AI Office, European Commission

Regulation · EU

EU Cyber Resilience Act

The EU Cyber Resilience Act establishes mandatory cybersecurity requirements for products with digital elements placed on the EU market, including hardware and software incorporating AI components, covering the entire product lifecycle from design through end-of-life.

European Parliament and Council of the European Union

Regulation · EU

EU Data Act

The EU Data Act establishes harmonised rules on access to and use of data generated by connected products and related services across the EU, addressing both personal and non-personal data. It creates new obligations for data holders to share data with users and third parties, and sets conditions for public sector bodies to access privately held data in exceptional circumstances.

European Parliament and Council of the European Union

Regulation · EU

EU Data Governance Act

The EU Data Governance Act establishes a regulatory framework for data intermediaries, data altruism organisations, and the re-use of public sector data protected by third-party rights. It creates new governance structures to facilitate trusted data sharing across sectors and member states, supporting the broader European Data Strategy.

European Parliament and Council of the European Union

Regulation · EU

EU Digital Operational Resilience Act

The EU Digital Operational Resilience Act (DORA), Regulation (EU) 2022/2554, establishes a comprehensive ICT risk management, incident reporting, operational resilience testing, and third-party risk oversight framework for EU financial entities, with direct implications for AI systems deployed in financial services and the technology providers that supply them.

European Parliament and Council of the European Union; supervised jointly by the European Banking Authority (EBA), European Insurance and Occupational Pensions Authority (EIOPA), and European Securities and Markets Authority (ESMA), collectively the European Supervisory Authorities (ESAs)

Regulation · EU

EU Digital Services Act – AI and Algorithmic Accountability Provisions

The Digital Services Act imposes transparency, accountability, and risk-management obligations on online intermediaries with respect to algorithmic recommender systems, targeted advertising, and systemic risks. Obligations scale with platform size, with the most stringent requirements applying to VLOPs and VLOSEs.

European Parliament and Council of the European Union; enforced by Digital Services Coordinators (DSCs) in each Member State and by the European Commission for Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs)

Framework · US

Executive Order 14110 on Safe, Secure, and Trustworthy Artificial Intelligence

Landmark U.S. presidential directive that established comprehensive federal requirements for the safe development and deployment of AI, including mandatory safety reporting for frontier models, standards development mandates to NIST, and cross-agency coordination obligations. The order was revoked in January 2025.

Executive Office of the President of the United States

Guideline · Global

FATF AI Anti-Money Laundering Guidance

FATF guidance on the use of AI and machine learning in anti-money laundering, counter-terrorism financing, and proliferation financing compliance, setting expectations for responsible AI adoption in transaction monitoring, customer due diligence, and suspicious activity detection.

Financial Action Task Force (FATF)

Guideline · US

FDA AI/ML Software as Medical Device Guidance

FDA's action plan and associated guidance documents establish a regulatory framework for AI/ML-based Software as a Medical Device (SaMD), introducing a total product lifecycle (TPLC) approach, predetermined change control plans, and transparency and monitoring requirements for adaptive AI/ML algorithms used in clinical settings.

U.S. Food and Drug Administration (FDA), Center for Devices and Radiological Health (CDRH)

Guideline · Global

Financial Stability Board AI in Finance

The FSB's thematic review and guidance on the use of artificial intelligence and machine learning in financial services, addressing financial stability risks, governance expectations, and supervisory considerations for financial institutions deploying AI/ML.

Financial Stability Board (FSB)

Guideline · US

FTC AI Enforcement Policy

The FTC's AI enforcement posture, articulated through policy statements, guidance documents, and enforcement actions, applies existing consumer protection and competition statutes to AI-related harms including deceptive AI claims, discriminatory automated decisions, and unfair data practices underpinning AI systems.

Federal Trade Commission (FTC), United States

Guideline · Global

G7 Hiroshima AI Code of Conduct

A voluntary international code of conduct issued under the G7 Hiroshima AI Process, setting out eleven guiding principles and corresponding actions for developers and operators of advanced AI systems, with a particular focus on frontier and general-purpose AI models. The code operates as a soft-law commitment mechanism intended to shape responsible AI development globally.

G7 Hiroshima AI Process (endorsed by G7 Leaders; administered through the G7 Digital and Technology Track)

Regulation · US

Illinois Biometric Information Privacy Act – AI Provisions

The Illinois Biometric Information Privacy Act (BIPA), 740 ILCS 14, restricts the collection, storage, use, and disclosure of biometric identifiers and information, with direct implications for AI systems that process facial geometry, voiceprints, iris scans, and similar biometric data. BIPA is among the most litigated biometric privacy statutes in the United States.

Illinois General Assembly; administered and enforced through private right of action and the Illinois Attorney General

Framework · APAC/Singapore

IMDA Model AI Governance Framework

A voluntary, principles-based framework issued by Singapore's IMDA and PDPC providing organisations with detailed guidance on deploying AI responsibly, covering human oversight, decision-making accountability, and operational transparency.

Infocomm Media Development Authority (IMDA) and Personal Data Protection Commission (PDPC), Singapore

Guideline · APAC/India

India AI Governance Framework

MeitY's advisory framework establishes responsible AI principles and interim compliance expectations for platforms deploying AI systems in India, with particular focus on preventing harm, ensuring traceability, and requiring government approval before deploying undertested AI models.

Ministry of Electronics and Information Technology (MeitY), Government of India

Framework · ISO/OECD/UN

ISO/IEC 42001:2023 – Information Technology – Artificial Intelligence – Management System

The first certifiable international AI management system standard, providing a structured framework for establishing, implementing, maintaining, and continually improving an organization's AI management system (AIMS), with requirements applicable to any organization that develops, provides, or uses AI-based products and services.

International Organization for Standardization (ISO) and International Electrotechnical Commission (IEC), Joint Technical Committee ISO/IEC JTC 1, Subcommittee SC 42

Framework · ISO/OECD/UN

ISO/IEC 23894 AI Risk Management

An international standard providing guidance on how organisations can manage AI-specific risks by integrating AI risk management into enterprise-wide risk management frameworks. ISO/IEC 23894 adapts the ISO 31000 risk management vocabulary and process model to the distinctive characteristics of AI systems, including emergent behaviour, data dependency, opacity, and sociotechnical complexity.

International Organization for Standardization (ISO) and International Electrotechnical Commission (IEC), Joint Technical Committee 1, Subcommittee 42 (JTC 1/SC 42)

Framework · ISO/OECD/UN

ISO/IEC 24028 AI Trustworthiness

ISO/IEC 24028:2020 provides a comprehensive overview of trustworthiness concepts, characteristics, and threats in AI systems, offering a structured technical and organizational framework for assessing and improving the trustworthiness of AI across the system lifecycle.

International Organization for Standardization (ISO) and International Electrotechnical Commission (IEC), Joint Technical Committee JTC 1, Subcommittee SC 42 (Artificial Intelligence)

Framework · ISO/OECD/UN

ISO/IEC 24029 Robustness of Neural Networks

ISO/IEC 24029 is a multi-part international standard providing formal methods and practical assessment approaches for evaluating the robustness of artificial neural networks (ANNs), addressing susceptibility to adversarial inputs, distributional shift, and other failure modes critical to trustworthy AI deployment.

International Organization for Standardization (ISO) and International Electrotechnical Commission (IEC), Joint Technical Committee JTC 1, Subcommittee SC 42 (Artificial Intelligence)

Guideline · APAC/Japan

Japan AI Guidelines for Business

METI guidelines providing Japanese businesses with practical AI governance standards aligned with the Hiroshima AI Process and international frameworks, addressing risk management, transparency, accountability, and intellectual property considerations across the AI lifecycle.

Ministry of Economy, Trade and Industry (METI), Japan

Regulation · APAC/Korea

Korea AI Basic Act

South Korea's foundational AI governance statute establishing risk-based obligations for AI developers and deployers, with heightened requirements for high-impact AI systems and a national AI safety infrastructure.

National Assembly of the Republic of Korea; Ministry of Science and ICT (MSIT)

Guideline · APAC/Singapore

Principles to Promote Fairness, Ethics, Accountability and Transparency (FEAT) in the Use of Artificial Intelligence and Data Analytics in Singapore's Financial Sector

The MAS FEAT Principles establish a voluntary but authoritative framework for financial institutions in Singapore to govern the responsible use of AI and data analytics, structured around four principles: Fairness, Ethics, Accountability, and Transparency.

Monetary Authority of Singapore (MAS)

Framework · US

NIST AI 600-1 Generative AI Profile

A companion resource to the NIST AI RMF 1.0 that provides structured guidance for managing the unique risks presented by generative AI systems, including large language models and multimodal foundation models.

National Institute of Standards and Technology (NIST), U.S. Department of Commerce

Framework · US

NIST Artificial Intelligence Risk Management Framework Playbook

Voluntary, use-case-agnostic operational companion to the NIST AI Risk Management Framework (AI RMF 1.0) that provides structured, actionable guidance, suggested actions, and example outputs for implementing the four core AI RMF functions (GOVERN, MAP, MEASURE, and MANAGE) across the AI system lifecycle.

National Institute of Standards and Technology (NIST), U.S. Department of Commerce

Regulation · US

New York City Local Law 144 of 2021 – Automated Employment Decision Tools

Requires employers and employment agencies using automated employment decision tools (AEDTs) in New York City hiring or promotion decisions to conduct annual bias audits, publish audit results, and notify candidates prior to use.

New York City Council; administered by the New York City Department of Consumer and Worker Protection (DCWP)

Framework · ISO/OECD/UN

OECD AI Principles

The OECD AI Principles are the first intergovernmental standard on AI, establishing five value-based principles and five recommendations for governments to promote trustworthy AI that respects human rights and democratic values.

Organisation for Economic Co-operation and Development (OECD)

Guideline · US

SEC AI Governance Guidance

The SEC has issued rules, guidance, and proposed rulemaking addressing AI governance obligations for registered investment advisers, broker-dealers, and public companies, focusing on conflicts of interest in predictive data analytics, AI-related disclosures in securities filings, and examination priorities targeting algorithmic systems.

U.S. Securities and Exchange Commission (SEC)

Regulation · APAC/Singapore

Singapore Personal Data Protection Act – AI and Automated Decision-Making Amendments

The Personal Data Protection (Amendment) Act 2020 and accompanying PDPC advisory guidelines address the use of personal data in AI and automated decision-making, introducing mandatory data breach notification, expanded accountability obligations, and guidance on responsible AI deployment under Singapore's Model AI Governance Framework.

Parliament of Singapore; administered by the Personal Data Protection Commission (PDPC) under the Ministry of Digital Development and Information (MDDI)

Pending · UK

UK AI Regulation Framework

The UK AI Regulation Framework is a principles-based, sector-led approach to AI governance that delegates primary regulatory responsibility to existing sector regulators rather than establishing a unified AI-specific regulator. It is currently transitioning toward a more structured legislative footing following the Labour government's AI Opportunities Action Plan published in January 2025.

UK Department for Science, Innovation and Technology (DSIT); AI Safety Institute (AISI); sector regulators including FCA, ICO, CMA, Ofcom, MHRA

Guideline · UK

UK ICO Guidance on Artificial Intelligence and Data Protection

The UK ICO's guidance on AI and data protection establishes how the UK GDPR and Data Protection Act 2018 apply to the design, development, and deployment of AI systems that process personal data.

Information Commissioner's Office (ICO)

Guideline · ISO/OECD/UN

UNESCO Recommendation on the Ethics of Artificial Intelligence

The UNESCO Recommendation on the Ethics of Artificial Intelligence is the first global normative instrument on AI ethics adopted by a UN body, providing a comprehensive ethical framework and specific policy recommendations across eleven thematic areas for all 194 UNESCO Member States.

United Nations Educational, Scientific and Cultural Organization (UNESCO)