AI Governance Institute

Practical Governance for Enterprise AI


AI in Healthcare and Life Sciences

Healthcare AI operates under uniquely high stakes: errors affect patient safety, systems process sensitive health data, and regulatory approval pathways are complex. Regulators in the US and EU have issued specific guidance for AI as a medical device, clinical decision support, and health data processing. This topic covers the frameworks, governance controls, and monitoring requirements most critical for healthcare organizations and life sciences companies deploying AI.

Key board-level questions

1. Which of our AI systems qualify as Software as a Medical Device (SaMD) and require FDA or other regulatory clearance?
2. How do we monitor AI clinical decision support tools for drift, bias, and patient safety incidents?
3. Are our health AI systems trained on data that meets HIPAA, GDPR, and applicable health data protection requirements?
4. Do we have a defined process for human clinical oversight of AI-driven diagnoses or treatment recommendations?
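Question 2 above calls for ongoing drift monitoring. One widely used statistical check is the population stability index (PSI), which compares the distribution of a model input or output score in production against a baseline. A minimal sketch in Python (the 0.2 threshold is a common heuristic, not a regulatory requirement, and function names here are illustrative):

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """Compare a baseline (expected) sample of a model score against a
    recent (actual) sample. PSI > 0.2 is a common heuristic threshold
    for investigating distribution drift."""
    # Bin edges taken from the baseline distribution (deciles by default)
    edges = np.percentile(expected, np.linspace(0, 100, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # capture out-of-range values
    exp_counts, _ = np.histogram(expected, bins=edges)
    act_counts, _ = np.histogram(actual, bins=edges)
    # Convert counts to proportions; floor at a small value to avoid log(0)
    exp_pct = np.maximum(exp_counts / len(expected), 1e-6)
    act_pct = np.maximum(act_counts / len(actual), 1e-6)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))
```

In practice a check like this would run on a schedule against each monitored model feature or output, with results logged and breaches routed to the clinical governance process defined under question 4.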

Regulatory frameworks

US

FDA AI/ML Software as Medical Device Guidance

FDA's action plan and associated guidance documents establish a regulatory framework for AI/ML-based Software as a Medical Device (SaMD), introducing a total product lifecycle (TPLC) approach, predetermined change control plans, and transparency and monitoring requirements for adaptive AI/ML algorithms used in clinical settings.

EU

EU AI Act: AI Literacy and Prohibited AI Systems Provisions (Applicable 2 February 2025)

The EU AI Act's first major compliance deadline took effect on 2 February 2025, requiring all organizations that develop or deploy AI within the EU to establish AI literacy measures for their workforce. As of that date, the Act's prohibitions on AI systems deemed to pose unacceptable risks also became enforceable: organizations must have ceased any prohibited AI practices and be able to demonstrate adequate staff competency with AI systems.

US

NIST Artificial Intelligence Risk Management Framework Playbook

Voluntary, use-case-agnostic operational companion to the NIST AI Risk Management Framework (AI RMF 1.0) that provides structured, actionable guidance, suggested actions, and example outputs for implementing the four core AI RMF functions (GOVERN, MAP, MEASURE, and MANAGE) across the AI system lifecycle.

UK

UK ICO Guidance on Artificial Intelligence and Data Protection

The UK ICO's guidance on AI and data protection establishes how the UK GDPR and Data Protection Act 2018 apply to the design, development, and deployment of AI systems that process personal data.

ISO/OECD/UN

ISO/IEC 42001:2023 – Information Technology – Artificial Intelligence – Management System

The first international, certifiable AI management system standard, providing a structured framework for establishing, implementing, maintaining, and continually improving an organization's AI management system (AIMS), with requirements applicable to any organization that develops, provides, or uses AI-based products and services.

Playbook guidance