International AI Standards: ISO, OECD, and UN
International standards bodies — ISO, IEC, OECD, and UN agencies — are the primary source of AI governance frameworks that apply across national borders without binding legal force. ISO/IEC 42001:2023 is the first international standard for AI management systems, providing an auditable framework organizations can certify against. ISO/IEC 23894 covers AI risk management, and ISO/IEC 24028 addresses AI trustworthiness. Together, these form a coherent suite for organizations that need a governance posture independent of any single jurisdiction's law.
The OECD AI Principles, originally adopted in 2019 and updated in 2024, are the most influential normative framework in global AI policy. They are integrated into the national AI strategies of more than 40 countries and have directly informed both the EU AI Act and the NIST AI RMF. The OECD's AI Incidents Monitor and AI Policy Observatory provide ongoing data that regulators and standards bodies use to track AI risk in practice.
ISO 42001 certification is increasingly valuable for organizations managing compliance across multiple jurisdictions simultaneously. The standard maps well to both the EU AI Act's governance requirements and the NIST AI RMF's Govern function, allowing a single certification effort to support compliance documentation across markets. For multinationals, or for organizations in jurisdictions where AI-specific regulation is still emerging, ISO 42001 alignment provides a defensible baseline that regulators recognize.
Key themes
- ISO/IEC 42001:2023 — the primary certifiable AI management standard
- ISO/IEC 23894 for AI risk management
- OECD AI Principles as a cross-border normative baseline
- Framework interoperability with the EU AI Act and NIST AI RMF
Regulatory frameworks and guidance
ISO/IEC 42001:2023 – Information Technology – Artificial Intelligence – Management System
The first internationally certifiable AI management system standard, providing a structured framework for establishing, implementing, maintaining, and continually improving an organization's AI management system (AIMS), with certifiable requirements applicable to any organization that develops, provides, or uses AI-based products and services.
ISO/IEC 23894 AI Risk Management
An international standard providing guidance on how organizations can manage AI-specific risks by integrating AI risk management into enterprise-wide risk management frameworks. ISO/IEC 23894 adapts the ISO 31000 risk management vocabulary and process model to the distinctive characteristics of AI systems, including emergent behavior, data dependency, opacity, and sociotechnical complexity.
ISO/IEC 24028 AI Trustworthiness
ISO/IEC 24028:2020 provides a comprehensive overview of trustworthiness concepts, characteristics, and threats in AI systems, offering a structured technical and organizational framework for assessing and improving the trustworthiness of AI across the system lifecycle.
ISO/IEC 24029 Robustness of Neural Networks
ISO/IEC 24029 is a multi-part international standard providing formal methods and practical assessment approaches for evaluating the robustness of artificial neural networks (ANNs), addressing susceptibility to adversarial inputs, distributional shift, and other failure modes critical to trustworthy AI deployment.
OECD AI Principles
The OECD AI Principles are the first intergovernmental standard on AI, establishing five values-based principles and five recommendations for policy makers to promote trustworthy AI that respects human rights and democratic values.
Governing with Artificial Intelligence
This OECD report examines how governments are deploying artificial intelligence in public sector operations, drawing on analysis of real-world use cases across member and partner countries. It finds that 57% of documented government AI applications focus on streamlining public services and 45% on improving administrative decision-making. The report also identifies governance risks including biased training data, insufficient transparency, and institutional overreliance on automated outputs.
UNESCO Recommendation on the Ethics of Artificial Intelligence
The UNESCO Recommendation on the Ethics of Artificial Intelligence is the first global normative instrument on AI ethics adopted by a UN body, endorsed by all UNESCO Member States in November 2021. It provides a comprehensive ethical framework and specific policy recommendations across eleven thematic areas of policy action.
