AI Governance Institute

Practical Governance for Enterprise AI

AI in Critical Infrastructure and Energy

AI deployed in power grids, water systems, telecommunications networks, nuclear facilities, and oil and gas operations operates in environments where failure can have catastrophic, irreversible consequences. The EU AI Act classifies AI systems managing critical infrastructure as high-risk, requiring conformity assessments, human oversight, and robustness testing. Cyber resilience obligations under DORA and the CRA impose additional requirements on digital systems in critical sectors.

Key board-level questions

  1. Are our AI systems managing critical operations classified and governed as high-risk under applicable law?
  2. What fail-safes and human override mechanisms exist for AI systems controlling physical infrastructure?
  3. How do we test AI systems for adversarial robustness and cybersecurity vulnerabilities before operational deployment?
  4. Are third-party AI vendors in our infrastructure supply chain subject to the same security and resilience standards as our own systems?

Regulatory frameworks

EU

EU AI Act: AI Literacy and Prohibited AI Systems Provisions (Applicable 2 February 2025)

The EU AI Act's first major compliance deadline took effect on 2 February 2025, requiring all organizations that develop or deploy AI within the EU to establish AI literacy measures for their workforce. As of that date, the Act's prohibitions on AI systems deemed to pose unacceptable risks also became enforceable. Organizations must have ceased operation of any prohibited AI practices and be able to demonstrate adequate staff competency with AI systems.

EU

EU Cyber Resilience Act

The EU Cyber Resilience Act establishes mandatory cybersecurity requirements for products with digital elements placed on the EU market, including hardware and software incorporating AI components, covering the entire product lifecycle from design through end-of-life.

EU

EU Digital Operational Resilience Act

The EU Digital Operational Resilience Act (DORA), Regulation (EU) 2022/2554, establishes a comprehensive ICT risk management, incident reporting, operational resilience testing, and third-party risk oversight framework for EU financial entities, with direct implications for AI systems deployed in financial services and the technology providers that supply them.

US

NIST Artificial Intelligence Risk Management Framework Playbook

The NIST AI RMF Playbook is a voluntary, use-case-agnostic operational companion to the NIST AI Risk Management Framework (AI RMF 1.0). It provides structured, actionable guidance, suggested actions, and example outputs for implementing the four core AI RMF functions (GOVERN, MAP, MEASURE, and MANAGE) across the AI system lifecycle.

Global

OWASP Top 10 for Large Language Model Applications

The OWASP Top 10 for Large Language Model Applications identifies the ten most critical security risks in LLM-powered systems, including prompt injection, insecure output handling, training data poisoning, model denial of service, and supply chain vulnerabilities. It is among the most widely referenced security frameworks for AI applications and is used by development and security teams globally to prioritize controls.
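To make two of these risk categories concrete, the sketch below illustrates minimal mitigations for prompt injection (screening untrusted input for instruction-override phrases) and insecure output handling (escaping model output before embedding it in HTML). The function names and patterns are illustrative assumptions, not part of the OWASP framework itself, and real defenses would be layered rather than pattern-based.

```python
import html
import re

# Illustrative prompt-injection screen (OWASP LLM01). These patterns are
# assumptions for demonstration; production systems use layered controls.
INJECTION_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"disregard (the )?system prompt",
]

def flag_prompt_injection(user_input: str) -> bool:
    """Return True if the input matches a known injection phrase."""
    lowered = user_input.lower()
    return any(re.search(p, lowered) for p in INJECTION_PATTERNS)

# Insecure output handling (OWASP LLM02): treat model output as untrusted
# and escape it before rendering in HTML, as with any user-supplied data.
def render_safely(model_output: str) -> str:
    return html.escape(model_output)
```

A guardrail like `flag_prompt_injection` would typically run before the model call, while `render_safely` runs on every response that reaches a browser, regardless of how benign the prompt appeared.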

Playbook guidance