Question 22 of 34
How do we apply a three lines of defense model to AI risk?
Published by AI Governance Institute · Practical Governance for Enterprise AI
The three lines of defense model translates directly to AI governance, with first-line business ownership, second-line risk oversight, and third-line independent assurance each requiring AI-specific adaptations.
If you only do three things, do these:
1. The first line must own AI risk genuinely, not just in the org chart. That means AI risk assessments are part of the project gate, not a compliance exercise after deployment.
2. The second line needs technical AI literacy. A risk function that can't evaluate a model card or interpret drift metrics cannot provide meaningful oversight.
3. The third line should assess whether the governance system is working — not just audit individual models.
The Situation
Who this is for: Chief Risk Officers, internal audit leaders, and compliance function heads applying established risk frameworks to AI
When you need this: When building or maturing an AI governance structure, or when a board or regulator asks how AI risk is overseen
The Decision
Are our three lines of defense configured to provide genuine, technically credible AI risk oversight — or are we applying a legacy structure to a problem it wasn't designed for?
The Steps
1. Audit first-line AI activities: are risk assessments conducted before deployment? Is there a model registry? Are controls embedded in development workflows?
2. Audit second-line AI capabilities: does Risk have the technical literacy to review model documentation and challenge first-line assessments?
3. If second-line technical capability is insufficient, identify options: hire AI risk specialists, embed technical advisors, or develop existing staff
4. Define the AI-specific responsibilities of each line in writing, adapted from your standard RACI
5. Build the third-line AI audit plan: focus on whether the governance system works, not just individual model reviews
6. Report AI risk posture to board or risk committee on a defined cadence using a consistent dashboard
The Artifacts
- Three lines of defense RACI for AI (activities × lines × responsibilities)
- Second-line AI technical literacy assessment
- AI risk committee reporting template (risk posture dashboard)
- Third-line AI audit program (governance effectiveness assessment)
- AI governance maturity model (self-assessment grid)
The Output
A documented three-lines framework for AI, with each line's responsibilities defined, technical capability gaps addressed, and a governance effectiveness audit plan in place.
Adapt the model for AI-specific risks
The three lines of defense model describes three distinct layers of oversight: business units that own and manage risk in the first line, risk management and compliance functions that provide oversight in the second line, and internal audit, which provides independent assurance in the third. For AI governance, this structure is directly applicable, but each line requires AI-specific adaptations to be effective.
The core challenge in applying three lines of defense to AI is that technical complexity sits in the first line, while governance expertise typically sits in the second and third lines. A second-line risk function that cannot evaluate a model's training data or interpret drift monitoring outputs cannot provide meaningful oversight. Organizations need to close this gap by building technical AI literacy into the second and third lines, or by embedding AI risk specialists within them.
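Drift interpretation is one concrete literacy test for the second line. As an illustrative sketch (not part of any mandated framework), the Population Stability Index is one widely used drift metric a reviewer should be able to read; the thresholds in the comments are common rules of thumb, not regulatory values:

```python
import numpy as np

def population_stability_index(expected, actual, bins=10):
    """PSI between a baseline score distribution and a production one.
    Rule-of-thumb reading: < 0.1 stable, 0.1-0.25 moderate drift,
    > 0.25 significant drift (calibrate to your own risk appetite)."""
    # Bin both samples on the baseline's quantiles so each expected bin
    # holds roughly equal mass.
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    exp_pct = np.histogram(np.clip(expected, edges[0], edges[-1]), edges)[0] / len(expected)
    act_pct = np.histogram(np.clip(actual, edges[0], edges[-1]), edges)[0] / len(actual)
    # Floor percentages to avoid log(0) when a production bin is empty.
    exp_pct = np.clip(exp_pct, 1e-6, None)
    act_pct = np.clip(act_pct, 1e-6, None)
    return float(np.sum((act_pct - exp_pct) * np.log(act_pct / exp_pct)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 10_000)  # scores at validation time
shifted = rng.normal(0.5, 1.0, 10_000)   # production scores after a mean shift
print(population_stability_index(baseline, shifted))
```

A second-line reviewer does not need to write this code, but should be able to say what the number means, which baseline it was computed against, and whether the alert threshold matches the system's risk tier.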
First line: business ownership and embedded controls
The first line of defense for AI consists of the teams that build, deploy, and operate AI systems. Their responsibilities include conducting initial risk assessments before deployment, classifying each system according to the organization's risk framework, maintaining the model registry, monitoring model performance and triggering reviews when thresholds are breached, and escalating incidents.
First-line controls should be embedded in the development and deployment process, not added as a retrospective review. This means AI risk assessment is part of the project gate before a model goes to production. Bias and performance testing are part of the deployment checklist. Model documentation is produced as the system is built, not assembled after the fact. The first line does not exist to satisfy governance requirements. It exists to catch problems before they reach production.
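To make "embedded in the process" concrete, a project gate can be expressed as an automated pre-production check. The sketch below is hypothetical: the field names, the risk tiers, and the policy of extra requirements for high-risk systems are illustrative assumptions, not a standard.

```python
from dataclasses import dataclass

@dataclass
class GateSubmission:
    """Hypothetical record a team submits at the project gate."""
    model_id: str
    risk_tier: str                    # e.g. "high", "medium", "low"
    risk_assessment_approved: bool
    bias_test_passed: bool
    model_card_complete: bool
    registered_in_model_registry: bool

def deployment_gate(s: GateSubmission) -> tuple[bool, list[str]]:
    """Run BEFORE production: a failed control blocks deployment
    instead of becoming a retrospective finding."""
    checks = [
        (s.risk_assessment_approved, "risk assessment not approved"),
        (s.registered_in_model_registry, "model not in the model registry"),
    ]
    if s.risk_tier == "high":
        # Illustrative policy: high-risk systems additionally require
        # bias/performance testing and a complete model card.
        checks += [
            (s.bias_test_passed, "bias/performance testing incomplete"),
            (s.model_card_complete, "model card missing required sections"),
        ]
    findings = [msg for ok, msg in checks if not ok]
    return (not findings, findings)

ok, findings = deployment_gate(GateSubmission(
    model_id="credit-scoring-v2", risk_tier="high",
    risk_assessment_approved=True, bias_test_passed=False,
    model_card_complete=True, registered_in_model_registry=True))
print(ok, findings)  # gate blocks deployment until bias testing completes
```

The design point is that the gate consumes artifacts produced during development (the risk assessment, the model card, the registry entry), so passing it requires doing the first-line work as you build, not assembling paperwork afterward.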
Second and third lines: independent oversight and assurance
The second line of defense consists of the risk management and compliance functions that provide independent oversight of first-line activities. For AI, this includes maintaining the enterprise AI risk framework, reviewing first-line risk assessments for completeness and accuracy, monitoring the regulatory landscape and translating new obligations into internal requirements, and providing challenge to business units on high-risk deployments.
The third line, internal audit, provides periodic independent assurance that the overall AI governance framework is effective. Third-line AI audits should assess whether the model registry is complete and accurate, whether risk assessments were conducted properly, whether documented controls are actually operating as intended, and whether incidents were escalated and resolved appropriately. The output of a third-line AI audit is not a finding about a specific model. It is an assessment of whether the governance system itself is working.
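One of these system-level tests can be automated: reconciling the model registry against an independently sourced inventory of deployed systems. The function below is a hypothetical sketch; the field names and the coverage measure are assumptions for illustration.

```python
def registry_completeness(registry_ids: set[str], deployed_ids: set[str]) -> dict:
    """Third-line reconciliation: does the registry reflect what is
    actually running? The deployed inventory should come from a source
    independent of the registry (e.g. infrastructure scans)."""
    unregistered = deployed_ids - registry_ids   # shadow deployments: the key audit finding
    stale = registry_ids - deployed_ids          # retired systems never closed out
    coverage = (len(deployed_ids & registry_ids) / len(deployed_ids)
                if deployed_ids else 1.0)
    return {
        "coverage": coverage,
        "unregistered": sorted(unregistered),
        "stale_entries": sorted(stale),
    }

report = registry_completeness(
    registry_ids={"credit-scoring-v2", "churn-v1", "fraud-v3"},
    deployed_ids={"credit-scoring-v2", "fraud-v3", "support-triage-v1"})
print(report)
```

A coverage gap here is exactly the kind of third-line result the text describes: it says nothing about any single model's quality, but a great deal about whether the governance system is capturing what the first line actually deploys.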
