Question 22 of 24
How do we apply the three lines of defense model to AI risk?
The three lines of defense model translates directly to AI governance, with first-line business ownership, second-line risk oversight, and third-line independent assurance each requiring AI-specific adaptations.
Adapt the model for AI-specific risks
The three lines of defense model describes three distinct layers of oversight: business units that own and manage risk in the first line, risk management and compliance functions that provide oversight in the second line, and internal audit providing independent assurance in the third. For AI governance, this structure is directly applicable, but each line requires AI-specific adaptations to be effective.
The core challenge in applying three lines of defense to AI is that technical complexity sits in the first line, while governance expertise typically sits in the second and third lines. A second-line risk function that cannot evaluate a model's training data or interpret drift monitoring outputs cannot provide meaningful oversight. Organizations need to close this gap by building technical AI literacy into the second and third lines, or by embedding AI risk specialists within them.
First line: business ownership and embedded controls
The first line of defense for AI consists of the teams that build, deploy, and operate AI systems. Their responsibilities include conducting initial risk assessments before deployment, classifying each system according to the organization's risk framework, maintaining the model registry, monitoring model performance and triggering reviews when thresholds are breached, and escalating incidents.
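The threshold-triggered monitoring described above can be sketched in code. This is a minimal illustration, not a standard implementation: the metric names, limits, and the shape of the `MonitoringThreshold` record are assumptions, since each organization defines its own risk framework and escalation paths.

```python
from dataclasses import dataclass

@dataclass
class MonitoringThreshold:
    metric: str      # e.g. "accuracy" or a drift statistic (names are illustrative)
    limit: float     # the breach boundary set by the risk framework
    direction: str   # "min" = alert when observed falls below; "max" = alert above

def breached(threshold: MonitoringThreshold, observed: float) -> bool:
    """Return True when the observed metric value crosses its limit."""
    if threshold.direction == "min":
        return observed < threshold.limit
    return observed > threshold.limit

def check_model(observations: dict, thresholds: list) -> list:
    """Return the thresholds breached for one model.

    In a first-line process, a non-empty result would trigger a review
    and, if unresolved, an incident escalation.
    """
    return [t for t in thresholds
            if t.metric in observations and breached(t, observations[t.metric])]
```

For example, with a minimum-accuracy threshold of 0.90 and an observed accuracy of 0.87, `check_model` flags the breach and the first line initiates a review.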
First-line controls should be embedded in the development and deployment process, not added as a retrospective review. This means AI risk assessment is part of the project gate before a model goes to production. Bias and performance testing are part of the deployment checklist. Model documentation is produced as the system is built, not assembled after the fact. The first line does not exist to satisfy governance requirements. It exists to catch problems before they reach production.
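An embedded project gate of this kind can be as simple as a check that required first-line artifacts exist before deployment proceeds. The sketch below assumes four hypothetical artifact names; real gates would reflect the organization's own checklist and fail closed on any gap.

```python
# Artifact names are illustrative assumptions, not a standard taxonomy.
REQUIRED_ARTIFACTS = {
    "risk_assessment",      # initial risk assessment completed before deployment
    "risk_classification",  # system classified under the organization's framework
    "bias_test_report",     # bias and performance testing results
    "model_documentation",  # documentation produced as the system was built
}

def gate_check(submitted: set) -> tuple:
    """Return (passed, missing_artifacts) for a deployment request.

    The gate fails closed: any missing artifact blocks promotion to
    production rather than deferring the check to a retrospective review.
    """
    missing = REQUIRED_ARTIFACTS - submitted
    return (not missing, missing)
```

The design point is that the gate runs inside the deployment pipeline, so a missing risk assessment blocks the release rather than surfacing later in an audit.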
Second and third lines: independent oversight and assurance
The second line of defense consists of the risk management and compliance functions that provide independent oversight of first-line activities. For AI, this includes maintaining the enterprise AI risk framework, reviewing first-line risk assessments for completeness and accuracy, monitoring the regulatory landscape and translating new obligations into internal requirements, and providing challenge to business units on high-risk deployments.
The third line, internal audit, provides periodic independent assurance that the overall AI governance framework is effective. Third-line AI audits should assess whether the model registry is complete and accurate, whether risk assessments were conducted properly, whether documented controls are actually operating as intended, and whether incidents were escalated and resolved appropriately. The output of a third-line AI audit is not a finding about a specific model. It is an assessment of whether the governance system itself is working.
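One of the third-line tests above, registry completeness, can be illustrated as a reconciliation between the model registry and systems actually observed in production. The function and field names here are hypothetical; the point is that the audit output characterizes the governance system (coverage, gaps) rather than any single model.

```python
def registry_completeness(registry_ids: set, deployed_ids: set) -> dict:
    """Reconcile the model registry against deployed AI systems.

    Returns audit findings about the registry itself:
      - "unregistered": deployed systems missing from the registry
      - "stale": registered systems no longer found in production
      - "coverage": fraction of deployed systems that are registered
    """
    return {
        "unregistered": deployed_ids - registry_ids,
        "stale": registry_ids - deployed_ids,
        "coverage": (len(registry_ids & deployed_ids) / len(deployed_ids)
                     if deployed_ids else 1.0),
    }
```

A non-empty "unregistered" set is a finding about the governance framework: first-line registration controls are not capturing everything that reaches production.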
