AI Governance Institute

Practical Governance for Enterprise AI


NACD Publishes 'Tuning Corporate Governance for AI Adoption' Guidance Citing 32% Rise in AI Incidents

Source

Tuning Corporate Governance for AI Adoption

National Association of Corporate Directors (NACD)

The National Association of Corporate Directors has released Tuning Corporate Governance for AI Adoption, a guidance document forming part of its 2025 Governance Outlook series targeting US-based corporate boards. The document presents a structured approach for directors to refine existing oversight mechanisms rather than build entirely new governance structures from scratch, emphasizing the integration of AI considerations into established risk, audit, and reporting frameworks. Central to its findings are two data points: a 26% year-over-year increase in AI incidents from 2022 to 2023, followed by an acceleration to more than 32% growth in 2024. The guidance specifically directs boards to assess how AI deployment shifts company-wide risk profiles and to define clear escalation and reporting pathways between management and the board.

The NACD guidance arrives at a moment when board-level accountability for AI is increasingly expected by regulators, investors, and institutional stakeholders, yet formal governance structures at most organizations remain immature or ad hoc. The documented acceleration in AI incident rates reflects a broader pattern visible across regulatory and institutional research, including work by the Financial Stability Board on AI in finance and the OECD's guidance on governing with AI, both of which have flagged the gap between AI adoption velocity and governance readiness. In the US context, the Securities and Exchange Commission has already signaled expectations around material risk disclosure, making board-level AI literacy and structured oversight not merely a best practice but a disclosure risk consideration. The NACD's framing that boards should adapt existing mechanisms, rather than create parallel governance structures, also aligns with emerging ISO 42001 implementation guidance, which similarly encourages integration of AI management into established organizational systems rather than siloed programs.

For enterprise compliance teams, the practical priority is ensuring that AI risk is formally embedded in the existing enterprise risk management cycle and that reporting lines to the board are clearly documented before the next governance review or shareholder engagement season. Legal, risk, and technology teams should collaborate on a current-state inventory of AI systems in production, mapped against the company's existing risk tolerance thresholds, so that any material shifts can be communicated to the board with specificity rather than generality. Because AI incident rates are rising faster than most governance frameworks are being updated, compliance officers should also verify that existing incident response and escalation procedures explicitly cover AI-specific failure modes, including model drift, data integrity failures, and third-party AI vendor incidents. Organizations subject to SEC reporting obligations, state-level AI transparency laws such as those in California and Colorado, or sector-specific AI requirements in financial services or healthcare should treat the NACD guidance as a prompt to audit whether their board reporting on AI meets the standard of specificity and frequency that regulators and institutional investors increasingly expect.
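To make the inventory-to-escalation idea concrete, the sketch below shows one minimal way such a current-state inventory might be structured so that incident counts can be checked against risk tolerance thresholds. All field names, risk ratings, thresholds, and example systems here are illustrative assumptions, not part of the NACD guidance or any particular framework.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AISystemRecord:
    """One entry in a hypothetical current-state AI system inventory."""
    name: str
    business_unit: str
    vendor: Optional[str]       # third-party provider, if any
    risk_rating: str            # illustrative tiers: "low" / "medium" / "high"
    incident_count: int         # incidents in the current reporting period
    escalation_owner: str       # named owner for board escalation

# Illustrative tolerance: maximum incidents accepted per period at each tier.
RISK_TOLERANCE = {"low": 5, "medium": 2, "high": 0}

def needs_board_escalation(record: AISystemRecord) -> bool:
    """Flag systems whose incident count exceeds the tolerance for their rating."""
    return record.incident_count > RISK_TOLERANCE[record.risk_rating]

inventory = [
    AISystemRecord("claims-triage-model", "Insurance Ops", "Acme AI", "high", 1, "CRO"),
    AISystemRecord("marketing-copy-assistant", "Marketing", None, "low", 3, "CMO"),
]

flagged = [r.name for r in inventory if needs_board_escalation(r)]
print(flagged)  # only the high-risk system exceeds its zero-incident tolerance
```

The design point is the mapping itself: once each production system carries a risk rating, an incident count, and a named escalation owner, "material shifts" become a mechanical comparison against thresholds rather than a qualitative judgment assembled at reporting time.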

board governance, AI incident tracking, risk management, corporate oversight, enterprise compliance