AI System Risk Classification
Assign every AI system a risk tier that determines the oversight requirements, review frequency, and documentation standards applied to it.
Objective
Ensure oversight effort is proportionate to harm potential by establishing a documented, consistently applied risk classification scheme for all AI systems.
Maturity Levels
Initial
Risk is assessed informally and inconsistently; no documented classification criteria exist.
Developing
Some AI systems have been classified, but criteria vary by team and are not formally documented.
Defined
A written classification framework with defined risk tiers is applied to all new AI systems at intake.
Managed
Classification decisions are reviewed periodically; metrics track the distribution of systems across tiers.
Optimizing
Classification triggers downstream controls automatically; feedback from incidents updates tier criteria.
Evidence Requirements
What an auditor or assessor would expect to see for this control.
- AI risk classification register listing every system with assigned tier, criteria applied, classification date, and approver (a minimal entry schema is sketched after this list)
- Written risk classification framework document defining tier criteria and downstream obligations per tier
- Completed intake forms or checklists showing classification was performed at system onboarding, not retroactively
- Periodic reclassification review records (meeting notes or updated register entries) showing tier assignments are revisited
- Distribution-of-systems report by tier, presented to the AI Governance Committee or equivalent body
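Where teams want the register in machine-readable form, a single structured record per system is enough. The sketch below is a minimal, hypothetical schema in Python: the field names simply mirror the evidence listed above, and the example values echo the register excerpt in the Example Implementation; nothing here is a mandated format.

```python
# Minimal sketch of one register entry; field names mirror the evidence
# list above and are illustrative, not a mandated schema.
from dataclasses import dataclass
from datetime import date


@dataclass
class RegisterEntry:
    system: str                  # name of the AI system
    risk_tier: str               # e.g. "Critical", "Significant", "Limited"
    criteria_applied: list[str]  # which framework criteria drove the tier
    classification_date: date    # recorded at intake, not retroactively
    approver: str                # person or body that signed off


# Example entry matching the loan-scoring system in the register excerpt below.
entry = RegisterEntry(
    system="Loan Scoring v2",
    risk_tier="Critical",
    criteria_applied=["regulatory scope", "irreversible outputs"],
    classification_date=date.today(),
    approver="AI Governance Committee",
)
```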
Implementation Notes
Key steps
- Define at least three tiers (e.g. Critical / Significant / Limited) with explicit criteria for each — decision domain, reversibility of outputs, population affected, and regulatory scope.
- Require classification at intake, not retroactively; build it into your AI procurement and development checklists.
- Map each tier to concrete downstream obligations: what logging is required, how often models are reviewed, and whether human approval gates apply (a minimal mapping is sketched after this list).
- Revisit classification when a system's use case expands; scope creep is the most common cause of misclassification.
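Keeping the tier criteria and their downstream obligations in a small, machine-readable mapping helps intake checklists and downstream controls stay in sync. The sketch below assumes the three example tiers named above; the thresholds, names (IntakeAssessment, TIER_OBLIGATIONS, classify), and obligation labels are illustrative assumptions, not requirements of this control.

```python
# Minimal sketch of a tier framework, assuming the three example tiers above
# (Critical / Significant / Limited); thresholds, field names, and obligation
# labels are illustrative assumptions, not prescribed values.
from dataclasses import dataclass

# Downstream obligations per tier: logging level, review cadence, approval gate.
TIER_OBLIGATIONS = {
    "Critical":    {"logging": "full audit trail", "review_months": 3,  "approval_gate": True},
    "Significant": {"logging": "decision logging", "review_months": 6,  "approval_gate": False},
    "Limited":     {"logging": "standard logging", "review_months": 12, "approval_gate": False},
}


@dataclass
class IntakeAssessment:
    """Answers captured on the intake checklist for a new AI system."""
    decision_domain: str        # e.g. "credit decisions", "advisory only"
    outputs_irreversible: bool  # can an affected person realistically undo the outcome?
    regulated_domain: bool      # credit, employment, health, and similar
    automated_action: bool      # does the system act without human review?
    population_affected: int    # rough count of people exposed to outputs


def classify(a: IntakeAssessment) -> str:
    """Map intake answers to a risk tier; a human approver confirms the result."""
    if a.regulated_domain and (a.outputs_irreversible or a.automated_action):
        return "Critical"
    if a.automated_action or a.population_affected > 1_000:
        return "Significant"
    return "Limited"


# Example: automated loan decisioning lands in the Critical tier.
loan_scoring = IntakeAssessment(
    decision_domain="credit decisions",
    outputs_irreversible=True,
    regulated_domain=True,
    automated_action=True,
    population_affected=50_000,
)
tier = classify(loan_scoring)
print(tier, TIER_OBLIGATIONS[tier])
```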
Example Implementation
500-person fintech using ML for automated loan decisioning
AI Risk Classification Register — excerpt
| System | Risk Tier | Decision Domain | Reversibility | Review Cadence |
|---|---|---|---|---|
| Loan Scoring v2 | Critical | Credit decisions | Irreversible | Quarterly |
| Resume Screener | Significant | Hiring influence | Reversible | Semi-annual |
| Contract Summarizer | Limited | Advisory only | N/A | Annual |
Tier definitions:
- Critical — automated decisions with regulatory scope or irreversible harm; requires approval gate, quarterly review, full audit trail
- Significant — influences consequential outcomes; requires human review, semi-annual assessment
- Limited — advisory outputs, no automated action; standard logging, annual review
Control Details
- Control ID: HOC-001
- Domain: Human Oversight
- Typical owner: AI Governance Team / Risk
- Implementation effort: Medium
- Agent-relevant: Yes
