Not sure where to start? Answer 3 questions and get a tailored compliance action plan.
What applies to me? →Australia AI Ethics Framework
A voluntary national framework establishing eight core ethical principles to guide the responsible design, development, and deployment of AI systems by Australian organisations.
Executive Order 14318: Accelerating Federal Permitting of Data Center Infrastructure
Executive Order 14318, signed by President Trump, directs federal agencies to streamline and accelerate permitting processes for data center infrastructure in support of domestic AI development. It applies to federal agencies with authority over land use, environmental review, and energy approvals relevant to data center siting and construction. The order aims to reduce regulatory delays that the administration identified as barriers to AI infrastructure growth in the United States.
Executive Order 14319: Preventing Woke AI in the Federal Government
Executive Order 14319, signed by President Trump, directs federal agencies to avoid AI systems that incorporate ideological bias or viewpoint-discriminatory outputs in government operations. It applies to federal agencies procuring, deploying, or contracting AI tools for government use. The order aligns with the administration's broader deregulatory posture toward AI and rescinds or modifies prior guidance perceived as imposing politically or ideologically motivated constraints on AI development.
EU AI Office Framework
The institutional and regulatory framework establishing the European AI Office as the central EU body responsible for supervising general-purpose AI models, enforcing the EU AI Act at the supranational level, and coordinating AI governance across EU member states.
EU Code of Practice on Marking and Labelling of AI-Generated Content
The European Commission published a first draft of a voluntary code of practice establishing common standards for marking and labelling AI-generated content across the EU. It targets organisations that produce, distribute, or deploy AI systems capable of generating text, images, audio, or video. The code aims to improve transparency and consumer awareness by requiring identifiable disclosure when content is AI-generated.
Executive Order 14110 on Safe, Secure, and Trustworthy Artificial Intelligence
Landmark U.S. presidential directive establishing comprehensive federal requirements for the safe development and deployment of AI, including mandatory safety reporting for frontier models, standards development mandates to NIST, and cross-agency coordination obligations.
IMDA Model AI Governance Framework
A voluntary, principles-based framework issued by Singapore's IMDA and PDPC providing organisations with detailed guidance on deploying AI responsibly, covering human oversight, decision-making accountability, and operational transparency.
ISO/IEC 42001:2023 – Information Technology – Artificial Intelligence – Management System
The first internationally certified AI management system standard, providing a structured framework for establishing, implementing, maintaining, and continually improving an organization's AI management system (AIMS), with certifiable requirements applicable to any organization that develops, provides, or uses AI-based products and services.
ISO/IEC 23894 AI Risk Management
An international standard providing guidance on how organisations can manage AI-specific risks by integrating AI risk management into enterprise-wide risk management frameworks. ISO/IEC 23894 adapts the ISO 31000 risk management vocabulary and process model to the distinctive characteristics of AI systems, including emergent behaviour, data dependency, opacity, and sociotechnical complexity.
ISO/IEC 24028 AI Trustworthiness
ISO/IEC 24028:2020 provides a comprehensive overview of trustworthiness concepts, characteristics, and threats in AI systems, offering a structured technical and organizational framework for assessing and improving the trustworthiness of AI across the system lifecycle.
ISO/IEC 24029 Robustness of Neural Networks
ISO/IEC 24029 is a multi-part international standard providing formal methods and practical assessment approaches for evaluating the robustness of artificial neural networks (ANNs), addressing susceptibility to adversarial inputs, distributional shift, and other failure modes critical to trustworthy AI deployment.
NIST AI 600-1 Generative AI Profile
A companion resource to the NIST AI RMF 1.0 that provides structured guidance for managing the unique risks presented by generative AI systems, including large language models and multimodal foundation models.
NIST Artificial Intelligence Risk Management Framework Playbook
Voluntary, use-case-agnostic operational companion to the NIST AI Risk Management Framework (AI RMF 1.0) that provides structured, actionable guidance, suggested actions, and example outputs for implementing the four core AI RMF functions-GOVERN, MAP, MEASURE, and MANAGE-across the AI system lifecycle.
OECD AI Principles
The OECD AI Principles are the first intergovernmental standard on AI, establishing five value-based principles and five recommendations for governments to promote trustworthy AI that respects human rights and democratic values.
Governing with Artificial Intelligence
This OECD report examines how governments are deploying artificial intelligence in public sector operations, drawing on analysis of real-world use cases across member and partner countries. It finds that 57% of documented government AI applications focus on streamlining public services and 45% on improving administrative decision-making. The report also identifies governance risks including biased training data, insufficient transparency, and institutional overreliance on automated outputs.
OWASP Top 10 for Large Language Model Applications
The OWASP Top 10 for Large Language Model Applications identifies the ten most critical security risks in LLM-powered systems, including prompt injection, insecure output handling, training data poisoning, model denial of service, and supply chain vulnerabilities. It is the most widely referenced security framework for AI applications and is used by development and security teams globally to prioritize controls.
The Role of Investors in AI Governance
This publication from the Oxford Martin AI Governance Initiative examines the responsibilities of investors in shaping AI governance outcomes through financing and oversight mechanisms. It addresses how capital allocation decisions by institutional investors, venture capital, and private equity can influence AI safety and accountability practices. The framework is intended to guide investors seeking to integrate AI risk considerations into due diligence, stewardship, and portfolio management.
Singapore Consensus on Global AI Safety Research Priorities
The Singapore Consensus is an international consensus document establishing a coordinated agenda for AI safety research priorities across jurisdictions and institutions. It emerged from a multilateral summit convened by the Singapore Government and reflects agreement among participating governments and organizations on where collaborative safety research efforts should be directed. The document does not impose binding legal obligations but provides a shared reference framework for national AI safety programs and research funding bodies.
Singapore Global AI Assurance Sandbox
The Singapore Global AI Assurance Sandbox provides a controlled regulatory environment for organizations to test and validate AI systems against safety, reliability, and accountability standards before or during deployment. It is open to enterprises and AI developers seeking structured assurance pathways under Singapore regulatory oversight. Participants operate under defined sandbox conditions that temporarily relax certain compliance obligations in exchange for structured testing commitments and reporting.
UK AI Opportunities Action Plan
The UK AI Opportunities Action Plan is a government-issued strategic framework published in January 2025 that sets out the Labour government's agenda for accelerating AI adoption and infrastructure investment across the United Kingdom. It applies to public sector bodies, AI developers, and enterprises operating or investing in AI in the UK. Key commitments include the creation of dedicated AI Growth Zones, expansion of compute infrastructure, and the establishment of a National Data Library to facilitate access to public data for AI development.
Global Dialogue on AI Governance (UN General Assembly Resolution A/RES/79/325)
The UN General Assembly established the Global Dialogue on AI Governance through Resolution A/RES/79/325 to create an inclusive intergovernmental platform for deliberating on artificial intelligence challenges and priorities. It is open to UN member states and a broad range of stakeholders, including civil society and private sector participants. The submission portal accepts contributions through April 30, 2026, with collected input intended to shape emerging global AI governance priorities.
Executive Order 14179: Removing Barriers to American Leadership in Artificial Intelligence
Executive Order 14179 directs federal agencies to develop a new national AI Action Plan prioritizing US dominance in AI development and deployment. It applies to federal agencies and shapes the regulatory environment for private-sector AI developers and deployers operating in the United States. The order explicitly revokes prior AI-related executive orders focused on safety guardrails, reorienting federal AI policy toward deregulation and competitiveness.
Ensuring a National Policy Framework for Artificial Intelligence
This Executive Order, issued on December 11, 2025, directs federal agencies to establish a unified national AI policy and authorizes preemption of state AI laws that conflict with that policy. It applies to federal agencies, the Federal Trade Commission, and by extension any entity subject to state AI regulations that may be challenged under the order. Key requirements include a Commerce Department evaluation of state AI laws within 90 days, an FTC policy statement on unfair and deceptive AI practices within 90 days, and the creation of an AI Litigation Task Force.
Treasury Department AI Risk Management Framework for Financial Services
The US Treasury Department published this framework in February 2026 to translate NIST AI Risk Management Framework principles into 230 discrete operational control objectives tailored to the financial services sector. It applies to financial institutions subject to Treasury oversight, including banks, asset managers, insurance companies, and payment processors that develop or deploy AI systems. Covered entities are expected to implement controls spanning model lifecycle governance, identity resolution, data governance, and alignment with existing SOC 2 and NIST Cybersecurity Framework requirements.
