AI Governance Institute

Practical Governance for Enterprise AI


AI in Transportation and Logistics

Transportation organizations are deploying AI for autonomous and semi-autonomous vehicle systems, route and fleet optimization, air traffic management, predictive maintenance, and supply chain automation. AI in safety-critical transportation contexts is classified as high-risk under the EU AI Act, triggering mandatory conformity assessment, human oversight requirements, and robustness obligations. Autonomous vehicle regulation is evolving rapidly across jurisdictions with significant variation in permitted use cases and liability frameworks.

Key board-level questions

  1. Which AI systems in our operations meet the threshold for high-risk classification under applicable law, and do we have conformity assessments in place?
  2. What human oversight and intervention capabilities exist for AI systems controlling or influencing physical safety outcomes?
  3. How do we monitor AI performance in real-world operating conditions and detect degradation before safety incidents occur?
  4. Is our liability framework — including insurance and contractual allocation — adequate for AI-related transportation incidents?
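Question 3 — detecting performance degradation before incidents occur — is often implemented as rolling-window monitoring of a deployed model's error rate against escalating alert thresholds. The sketch below is illustrative only: the window size, threshold values, and `DegradationMonitor` class are assumptions for demonstration, not values drawn from any regulation or standard.

```python
from collections import deque

class DegradationMonitor:
    """Illustrative sketch: track a rolling error rate for a deployed AI
    system and escalate before it drifts into unsafe territory.
    Window size and thresholds are hypothetical example values."""

    def __init__(self, window=500, warn_rate=0.02, critical_rate=0.05):
        self.outcomes = deque(maxlen=window)  # 1 = error, 0 = correct
        self.warn_rate = warn_rate
        self.critical_rate = critical_rate

    def record(self, is_error: bool) -> str:
        self.outcomes.append(1 if is_error else 0)
        rate = sum(self.outcomes) / len(self.outcomes)
        if rate >= self.critical_rate:
            return "critical"  # e.g. trigger human takeover / incident process
        if rate >= self.warn_rate:
            return "warn"      # e.g. alert the operations team
        return "ok"

# Synthetic example: a 10% error rate should trip the critical threshold.
monitor = DegradationMonitor(window=100, warn_rate=0.02, critical_rate=0.05)
status = "ok"
for i in range(100):
    status = monitor.record(is_error=(i % 10 == 0))
print(status)  # → critical
```

In practice the monitored signal would be a domain-appropriate safety metric (disengagement rate, prediction error against ground truth, sensor-fusion confidence) rather than a simple error flag, and alerts would feed the human-oversight process required for high-risk systems.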

Regulatory frameworks

EU

EU AI Act: AI Literacy and Prohibited AI Systems Provisions (Applicable 2 February 2025)

The EU AI Act's first major compliance deadline took effect on 2 February 2025, requiring all organizations that develop or deploy AI within the EU to establish AI literacy measures for their workforce. As of that date, the Act's prohibitions on AI systems deemed to pose unacceptable risks also became enforceable. Organizations must have ceased operation of any prohibited AI practices and be able to demonstrate adequate staff competency with AI systems.

EU

EU Cyber Resilience Act

The EU Cyber Resilience Act establishes mandatory cybersecurity requirements for products with digital elements placed on the EU market, including hardware and software incorporating AI components, covering the entire product lifecycle from design through end-of-life.

US

NIST Artificial Intelligence Risk Management Framework Playbook

Voluntary, use-case-agnostic operational companion to the NIST AI Risk Management Framework (AI RMF 1.0) that provides structured, actionable guidance, suggested actions, and example outputs for implementing the four core AI RMF functions (GOVERN, MAP, MEASURE, and MANAGE) across the AI system lifecycle.

ISO/OECD/UN

ISO/IEC 42001:2023 – Information Technology – Artificial Intelligence – Management System

The first certifiable international standard for AI management systems, providing a structured framework for establishing, implementing, maintaining, and continually improving an organization's AI management system (AIMS), with requirements applicable to any organization that develops, provides, or uses AI-based products and services.

Playbook guidance