AI Governance Institute

Practical Governance for Enterprise AI

Human Oversight · HOC-001 · Medium effort · Agent-relevant

AI System Risk Classification

Assign every AI system a risk tier that determines the oversight requirements, review frequency, and documentation standards applied to it.

Objective

Ensure oversight effort is proportionate to harm potential by establishing a documented, consistently applied risk classification scheme for all AI systems.

Maturity Levels

1. Initial: Risk is assessed informally and inconsistently; no documented classification criteria exist.

2. Developing: Some AI systems have been classified, but criteria vary by team and are not formally documented.

3. Defined: A written classification framework with defined risk tiers is applied to all new AI systems at intake.

4. Managed: Classification decisions are reviewed periodically; metrics track the distribution of systems across tiers.

5. Optimizing: Classification triggers downstream controls automatically; feedback from incidents updates tier criteria.

Evidence Requirements

What an auditor or assessor would expect to see for this control.

  • AI risk classification register listing every system with assigned tier, criteria applied, classification date, and approver
  • Written risk classification framework document defining tier criteria and downstream obligations per tier
  • Completed intake forms or checklists showing classification was performed at system onboarding, not retroactively
  • Periodic reclassification review records (meeting notes or updated register entries) showing tier assignments are revisited
  • Distribution-of-systems report by tier, presented to AI Governance Committee or equivalent body

Implementation Notes

Key steps

  • Define at least three tiers (e.g. Critical / Significant / Limited) with explicit criteria for each — decision domain, reversibility of outputs, population affected, and regulatory scope.
  • Require classification at intake, not retroactively; build it into your AI procurement and development checklists.
  • Tie each tier to concrete downstream obligations: what logging is required, how often models are reviewed, and whether human approval gates apply.
  • Revisit classification whenever a system's use case expands; scope creep is the most common cause of misclassification.
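The intake step above can be sketched as a small, explicit rule. This is a minimal illustration only: the tier names follow the Critical / Significant / Limited scheme suggested above, but the criteria fields (regulatory scope, reversibility, automated action) and function names are assumptions, not a prescribed schema.

```python
from dataclasses import dataclass

@dataclass
class IntakeCriteria:
    """Hypothetical intake checklist fields captured at system onboarding."""
    regulatory_scope: bool      # subject to sector regulation (e.g. credit, hiring)
    irreversible_outputs: bool  # outputs cannot practically be undone
    automated_action: bool      # system acts without a human in the loop

def classify(c: IntakeCriteria) -> str:
    """Assign a risk tier at intake from documented, explicit criteria."""
    if c.regulatory_scope or c.irreversible_outputs:
        return "Critical"
    if c.automated_action:
        return "Significant"
    return "Limited"    # advisory outputs only
```

Encoding the criteria this way makes classification repeatable across teams and easy to embed in a procurement or development checklist, rather than leaving the tier call to individual judgment.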

Example Implementation

500-person fintech using ML for automated loan decisioning

AI Risk Classification Register — excerpt

System | Risk Tier | Decision Domain | Reversibility | Review Cadence
Loan Scoring v2 | Critical | Credit decisions | Irreversible | Quarterly
Resume Screener | Significant | Hiring influence | Reversible | Semi-annual
Contract Summarizer | Limited | Advisory only | N/A | Annual

Tier definitions:

  • Critical — automated decisions with regulatory scope or irreversible harm; requires approval gate, quarterly review, full audit trail
  • Significant — influences consequential outcomes; requires human review, semi-annual assessment
  • Limited — advisory outputs, no automated action; standard logging, annual review
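One way to make a register like the excerpt above machine-checkable is to encode the tier definitions as a lookup table and verify each register row against it. The structures and field names below are hypothetical illustrations of the example, not part of the control itself.

```python
# Tier definitions as data, mirroring the example above (field names illustrative).
TIER_OBLIGATIONS = {
    "Critical":    {"review_cadence": "Quarterly",   "approval_gate": True},
    "Significant": {"review_cadence": "Semi-annual", "approval_gate": False},
    "Limited":     {"review_cadence": "Annual",      "approval_gate": False},
}

# Register excerpt from the example, as rows.
REGISTER = [
    {"system": "Loan Scoring v2",     "tier": "Critical",    "review_cadence": "Quarterly"},
    {"system": "Resume Screener",     "tier": "Significant", "review_cadence": "Semi-annual"},
    {"system": "Contract Summarizer", "tier": "Limited",     "review_cadence": "Annual"},
]

def cadence_violations(register, obligations):
    """Return systems whose recorded review cadence does not match their tier."""
    return [row["system"] for row in register
            if row["review_cadence"] != obligations[row["tier"]]["review_cadence"]]
```

A check like this supports the Managed and Optimizing maturity levels: the register can be validated on every update, and mismatches surface automatically instead of waiting for a periodic review.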

Control Details

Control ID
HOC-001
Typical owner
AI Governance Team / Risk
Implementation effort
Medium effort
Agent-relevant
Yes

Tags

risk classification · AI inventory · governance framework · tiering