NACD 2025 Governance Outlook Calls on Boards to Strengthen AI Oversight as Incidents Rise

The National Association of Corporate Directors (NACD) has published its 2025 Governance Outlook, a guidance document directed at corporate boards across the United States that calls for strengthened AI oversight in response to a measurable rise in AI-related incidents. Drawing on data from the AI Incident Database, the NACD reports that AI incidents increased 26% between 2022 and 2023, followed by a further increase exceeding 32% in 2024. The guidance identifies hallucinations, bias, and data privacy failures as the primary risk areas driving this trend, and calls on boards to implement tailored governance frameworks alongside updated reporting structures that give directors meaningful visibility into AI risk.

The publication reflects a broader shift in how institutional governance bodies in the United States are treating AI oversight, moving it from an operational concern to a board-level accountability expectation. The NACD guidance was developed in response to the gap between the pace of enterprise AI adoption and the maturity of existing board oversight mechanisms. Many corporate boards were structured around legacy risk categories such as financial controls and cybersecurity, and the NACD's intervention signals that AI now warrants equivalent structural attention. While the document is non-binding, NACD guidance carries significant weight among directors and institutional investors who use it as a benchmark for evaluating governance adequacy.

Enterprise compliance and risk teams should treat this guidance as an indicator of where director-level accountability expectations are heading and take concrete steps to align internal programs accordingly. Compliance professionals should audit whether current AI governance policies include clear escalation pathways to senior leadership and the board, and document how AI risk is currently reported in board materials. Organizations that have not yet established a responsible AI policy with defined accountability owners should prioritize doing so, with particular attention to the three risk categories the NACD names: hallucinations, bias, and data privacy failures. Given the non-binding but reputationally significant nature of NACD guidance, compliance teams should also prepare to demonstrate board engagement on AI risk in the event of regulatory inquiry, investor scrutiny, or an AI-related incident.

Tags: board oversight, AI governance, AI incidents, risk management, responsible AI