AI Governance Institute

Practical Governance for Enterprise AI

ISO/IEC 23894 AI Risk Management

Issued by

International Organization for Standardization (ISO) and International Electrotechnical Commission (IEC), Joint Technical Committee 1, Subcommittee 42 (JTC 1/SC 42)

Status: live · Effective 2023-02-01 · ISO/IEC 23894 · Verified April 2026
Official document →

An international standard providing guidance on how organisations can manage AI-specific risks by integrating AI risk management into enterprise-wide risk management frameworks. ISO/IEC 23894 adapts the ISO 31000 risk management vocabulary and process model to the distinctive characteristics of AI systems, including emergent behaviour, data dependency, opacity, and sociotechnical complexity.

Applies To

ISO/IEC 23894 is applicable to any organisation that develops, deploys, procures, or oversees AI systems, regardless of size, sector, or jurisdiction. In practice, the primary audiences are: AI system developers and technology vendors seeking a structured methodology for pre-deployment risk assessment; enterprise AI governance, risk, and compliance (GRC) functions responsible for AI oversight; regulated entities in financial services, healthcare, insurance, and critical infrastructure, where AI risk management is increasingly subject to supervisory scrutiny; and procurement and vendor management teams conducting AI due diligence on third-party suppliers. The standard is particularly relevant for organisations implementing ISO/IEC 42001 (AI management systems) or responding to regulatory frameworks that reference SC 42 standards, including the EU AI Act's harmonised standards process.

Overview

ISO/IEC 23894:2023, titled 'Information technology – Artificial intelligence – Guidance on risk management', was published in February 2023 by ISO/IEC JTC 1/SC 42, the subcommittee responsible for AI standardisation. It is the primary international standard addressing risk management as a practice specifically tailored to AI systems, and it sits within a broader SC 42 standards ecosystem that includes ISO/IEC 22989 (AI concepts and terminology), ISO/IEC 42001 (AI management systems), and ISO/IEC TR 24368 (ethical considerations). The standard is explicitly guidance-based rather than requirements-based: it does not use 'shall' language and is not a certifiable management system standard. Certification against AI-specific requirements is addressed by ISO/IEC 42001, which was published later in 2023 and cross-references 23894.

The framework adapts the well-established ISO 31000:2018 risk management principles, framework, and process to the AI context, acknowledging that AI systems introduce risk characteristics that differ materially from conventional software: datasets may be biased or unrepresentative; model behaviour may be opaque or difficult to explain; emergent capabilities may arise unexpectedly; and the sociotechnical contexts in which AI operates generate harms that extend beyond the technical artefact itself. ISO/IEC 23894 provides guidance on establishing AI risk context, identifying AI-specific risk sources, analysing and evaluating AI risks, and treating those risks through controls, monitoring, and review. It addresses risks across the AI system lifecycle, from design and data acquisition through deployment, operation, monitoring, and decommissioning. The standard incorporates references to responsible AI dimensions including fairness, transparency, robustness, privacy, safety, and security, and it cites the OECD AI Principles as a reference point.

Regulators and procurement authorities in multiple jurisdictions, including the EU (via the AI Act's harmonised standards process), the UK, Singapore, and Canada, have cited ISO/IEC 23894 or its SC 42 sibling standards as relevant technical references. Enterprises seeking to demonstrate structured AI risk management to regulators, auditors, or counterparties will find 23894 a practical baseline, particularly in conjunction with ISO/IEC 42001 for management system certification.

Key Requirements

  • Scope and context establishment: Organisations should define the internal and external context of AI risk management, including the AI system's intended purpose, operational environment, stakeholders, and applicable regulatory requirements.
  • Risk identification: Guidance is provided on identifying AI-specific risk sources including data quality and bias, model opacity, unintended emergent behaviour, adversarial attacks, misuse scenarios, and sociotechnical impacts on affected individuals and communities.
  • Risk analysis: Organisations should analyse the likelihood and consequence of identified AI risks, taking into account the specific properties of AI systems (e.g., distributional shift, concept drift, feedback loops).
  • Risk evaluation: Risks should be evaluated against organisational and stakeholder risk criteria to determine which risks require treatment and in what priority order (a minimal scoring sketch follows this list).
  • Risk treatment: A range of treatment options is described, including risk avoidance, mitigation (technical and procedural controls), risk transfer, and acceptance — with particular attention to human oversight mechanisms.
  • AI-specific risk sources: The standard provides a structured catalogue of AI risk sources organised around the AI system lifecycle, including design decisions, training data, model development, integration, deployment, and post-deployment monitoring.
  • Responsible AI integration: The framework is designed to operationalise responsible AI principles — including fairness, accountability, transparency, safety, and privacy — as inputs to risk identification and treatment rather than abstract aspirations.
  • Lifecycle coverage: Risk management activities should be applied iteratively and continuously across the full AI system lifecycle, not treated as a one-time pre-deployment exercise.
  • Documentation and communication: Organisations should document risk management decisions and communicate AI-related risks to relevant internal and external stakeholders in a form appropriate to the audience.
  • Integration with enterprise risk management: The standard explicitly recommends integrating AI risk management with existing organisational risk management frameworks (e.g., ISO 31000, COSO ERM) rather than treating it as a siloed function.
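
To make the analysis and evaluation guidance above concrete, the sketch below shows one way an AI risk record could be scored and checked against an organisational tolerance threshold. It is a minimal illustration only: the class and field names, the 1–5 likelihood and consequence scales, the multiplicative rating, and the tolerance value are assumptions for this example, not something ISO/IEC 23894 prescribes.

```python
# Minimal sketch of an AI risk record and its evaluation against a tolerance
# threshold. All names, scales, and thresholds below are illustrative
# assumptions; ISO/IEC 23894 does not prescribe a schema or scoring formula.
from dataclasses import dataclass
from enum import Enum


class LifecycleStage(Enum):
    DESIGN = "design"
    DATA_ACQUISITION = "data_acquisition"
    MODEL_DEVELOPMENT = "model_development"
    DEPLOYMENT = "deployment"
    OPERATION = "operation"
    DECOMMISSIONING = "decommissioning"


@dataclass
class AIRiskRecord:
    risk_id: str
    ai_system: str
    stage: LifecycleStage
    risk_source: str         # e.g. "training-data bias", "model opacity"
    likelihood: int          # assumed scale: 1 (rare) to 5 (almost certain)
    consequence: int         # assumed scale: 1 (negligible) to 5 (severe)
    treatment: str = "none"  # e.g. "mitigate", "avoid", "transfer", "accept"

    @property
    def rating(self) -> int:
        """Simple likelihood x consequence score; real criteria may be richer."""
        return self.likelihood * self.consequence


def requires_treatment(record: AIRiskRecord, tolerance: int = 9) -> bool:
    """Evaluate a risk against an organisational tolerance threshold."""
    return record.rating > tolerance


if __name__ == "__main__":
    risk = AIRiskRecord(
        risk_id="AIR-007",
        ai_system="credit-scoring-model",
        stage=LifecycleStage.DATA_ACQUISITION,
        risk_source="unrepresentative training data (distributional bias)",
        likelihood=4,
        consequence=4,
    )
    print(risk.rating, requires_treatment(risk))  # 16 True -> needs a treatment plan
```

The multiplicative rating is just one way to combine likelihood and consequence; many organisations use qualitative matrices or scenario-based criteria instead, and the standard deliberately leaves that choice to the implementer.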

What Your Organization Must Do

  • Appoint a named AI Risk Owner (typically the Chief Risk Officer or Head of AI Governance) to formally integrate ISO/IEC 23894 into the enterprise risk management framework, aligning with existing ISO 31000 or COSO ERM structures by Q2 of the current planning cycle.
  • Conduct an AI system inventory and context-setting exercise for all in-scope AI systems, documenting intended purpose, operational environment, affected stakeholders, and applicable regulatory requirements as the foundational input to risk identification under the standard.
  • Build and maintain an AI-specific risk register using the standard's lifecycle-organised catalogue of risk sources, covering data quality and bias, model opacity, emergent behaviour, adversarial threats, and sociotechnical impacts, and review this register at least annually and upon any material change to an AI system.
  • Define quantitative and qualitative risk evaluation criteria and tolerance thresholds at the organisational level before assessing individual AI systems, ensuring evaluations produce consistent, auditable prioritisation decisions that can be reported to the board or risk committee (a register-review sketch follows this list).
  • Implement documented risk treatment plans for each AI system rated above the accepted threshold, specifying technical and procedural controls, human oversight mechanisms, responsible owners, and target completion dates, and retain evidence to support regulatory inquiries or third-party audits.
  • Align ISO/IEC 23894 implementation with ISO/IEC 42001 certification efforts where pursued, using 23894 risk outputs as direct inputs to the 42001 management system, and reference both standards explicitly in vendor due diligence questionnaires and procurement contracts involving third-party AI suppliers.
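
As an illustration of the register-review workflow described above, here is a minimal sketch that turns a flat risk register into a prioritised treatment queue for risk-committee reporting. The register fields, example entries, tolerance value, and review interval are all hypothetical assumptions; ISO/IEC 23894 leaves those choices to the organisation.

```python
# Sketch of turning a lifecycle risk register into a prioritised treatment queue.
# Field names, example risks, the tolerance value, and the review interval are
# assumptions for illustration only.
from datetime import date, timedelta

# Organisation-level evaluation criteria, defined once before individual systems are assessed.
ORG_TOLERANCE = 9           # ratings above this value require a documented treatment plan
REVIEW_INTERVAL_DAYS = 365  # register is reviewed at least annually

# Illustrative register entries (in practice, one register spans all in-scope AI systems).
register = [
    {"risk_id": "AIR-001", "ai_system": "claims-triage", "source": "concept drift",
     "likelihood": 3, "consequence": 4, "owner": "Model Risk Lead"},
    {"risk_id": "AIR-002", "ai_system": "claims-triage", "source": "model opacity",
     "likelihood": 2, "consequence": 3, "owner": "Model Risk Lead"},
    {"risk_id": "AIR-003", "ai_system": "support-chatbot", "source": "adversarial prompt misuse",
     "likelihood": 4, "consequence": 4, "owner": "Security Lead"},
]


def treatment_queue(entries, tolerance=ORG_TOLERANCE):
    """Return risks rated above tolerance, highest rating first, for risk-committee reporting."""
    rated = [dict(e, rating=e["likelihood"] * e["consequence"]) for e in entries]
    return sorted((e for e in rated if e["rating"] > tolerance),
                  key=lambda e: e["rating"], reverse=True)


if __name__ == "__main__":
    next_review = (date.today() + timedelta(days=REVIEW_INTERVAL_DAYS)).isoformat()
    for item in treatment_queue(register):
        print(f"{item['risk_id']}  {item['ai_system']:<16} rating={item['rating']:>2}  "
              f"owner={item['owner']}  next_review={next_review}")
```

Each entry returned by the queue would then need a documented treatment plan specifying controls, a responsible owner, and a target completion date, with the supporting evidence retained for regulatory inquiries or third-party audits.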

Frequently Asked Questions

Is ISO/IEC 23894 a certifiable standard or a guidance document?
ISO/IEC 23894 is a guidance document, not a certifiable management system standard. It uses advisory rather than mandatory language and does not support third-party certification. Organisations seeking certifiable AI risk requirements should look to ISO/IEC 42001, which cross-references 23894 and was published later in 2023.
How does ISO/IEC 23894 differ from ISO 31000 for AI risk management purposes?
ISO/IEC 23894 adapts the ISO 31000 risk management process and vocabulary specifically to AI systems, adding guidance on AI-specific risk sources such as dataset bias, model opacity, distributional shift, emergent behaviour, and sociotechnical harms. ISO 31000 alone does not address these AI-specific characteristics.
Does the EU AI Act reference ISO/IEC 23894 as a harmonised standard?
ISO/IEC 23894 is part of the ISO/IEC JTC 1/SC 42 standards ecosystem on which the EU AI Act's harmonised standards process draws. Formal harmonisation designations rest with the European Commission, but compliance with SC 42 standards, including 23894, is widely cited as relevant technical evidence for meeting the AI Act's risk management obligations.
Which sectors are most likely to face regulatory pressure to implement ISO/IEC 23894?
Financial services, healthcare, insurance, and critical infrastructure operators face the strongest supervisory pressure, as regulators in these sectors increasingly expect structured AI risk management aligned with recognised international standards. Organisations in these sectors using AI in high-stakes decision-making should treat 23894 adoption as a baseline compliance expectation rather than a voluntary choice.
How does ISO/IEC 23894 relate to ISO/IEC 42001, and which should an organisation implement first?
ISO/IEC 23894 provides the risk management methodology that feeds directly into ISO/IEC 42001, the certifiable AI management system standard. Organisations pursuing 42001 certification should implement 23894 first to generate the risk outputs that 42001 requires, treating the two standards as complementary rather than alternative frameworks.
Can ISO/IEC 23894 be used in vendor due diligence and third-party AI procurement?
Yes. Procurement and vendor management teams can reference ISO/IEC 23894 in supplier questionnaires and contract requirements to assess whether AI vendors have conducted structured lifecycle risk assessments. Explicitly citing the standard sets a defensible, internationally recognised benchmark for third-party AI risk expectations and supports audit trails for regulatory inquiries.