AI Governance Institute



Question 27 of 34

How do we perform an AI risk assessment?

A practical methodology for assessing AI system risk before deployment, covering data sensitivity, decision impact, regulatory exposure, and the controls each risk level requires.

If you only do 3 things, do this:

  1. Score every system on three dimensions before assigning a risk tier: what data it processes, how consequential its decisions are, and what regulations apply.
  2. Use a consistent scoring rubric across all assessments so that risk tiers are comparable across your portfolio. Ad-hoc assessments are difficult to defend and impossible to aggregate.
  3. Document the assessment, get sign-off, and schedule the next review. An assessment that lives only in someone's memory is not an assessment.

The Situation

Who this is for: Risk, compliance, and product teams evaluating AI systems before deployment or during annual reviews

When you need this: Before any new AI deployment, when a system's use case changes materially, or during annual risk reviews

The Decision

What is the risk level of this AI system, and what controls does that risk level require before we can deploy it?

The Steps

  1. Define the system: what does it do, what decisions does it influence or make, and who does it affect?
  2. Assess data sensitivity: what categories of data are processed? (personal data, special categories, regulated data)
  3. Assess decision impact: how consequential are the decisions? Are they reversible? Can an affected person contest or appeal?
  4. Assess regulatory exposure: which regulations apply based on jurisdiction, sector, and use case?
  5. Score each dimension (1–3) and assign a risk tier using your scoring rubric
  6. Document required controls for the assigned tier and confirm they are in place before deployment
  7. Get sign-off from the appropriate authority and schedule the next assessment
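The record these steps produce can be sketched as a simple data structure. This is an illustrative model only — the field names are assumptions, not taken from any standard assessment form:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class RiskAssessment:
    """One completed assessment for a single AI system (steps 1-7)."""
    system_name: str
    description: str                  # step 1: what it does, whom it affects
    data_sensitivity: int             # step 2: scored 1-3
    decision_impact: int              # step 3: scored 1-3
    regulatory_exposure: int          # step 4: scored 1-3
    risk_tier: str                    # step 5: from the scoring rubric
    required_controls: list = field(default_factory=list)  # step 6
    signed_off_by: str = ""           # step 7
    next_review: Optional[date] = None  # step 7

    @property
    def aggregate_score(self) -> int:
        """Sum of the three dimension scores (ranges 3-9)."""
        return (self.data_sensitivity + self.decision_impact
                + self.regulatory_exposure)
```

Keeping the scores, tier, sign-off, and review date in one record is what makes the assessment auditable and aggregable across a portfolio.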

The Artifacts

  • AI risk assessment form (standardized questionnaire covering all dimensions)
  • Risk scoring rubric (data sensitivity × decision impact × regulatory exposure → risk tier)
  • Risk tier control requirements matrix (what each tier requires)
  • Risk assessment sign-off template
  • Assessment review schedule

The Output

A completed, signed risk assessment for the AI system, a documented risk tier with rationale, required controls identified and confirmed in place, and a review date set.

The five dimensions of AI risk

AI risk is multidimensional. A system that processes highly sensitive data but only makes suggestions to expert users is different from one that processes minimal data but makes automated decisions at scale. Effective risk assessment evaluates five dimensions: the sensitivity of the data processed; the impact and reversibility of the decisions influenced; the scale of deployment (how many people are affected); the degree of human oversight in the workflow; and the regulatory context (which laws apply).

Most organizations simplify to three primary dimensions: data sensitivity, decision impact, and regulatory exposure. This is a reasonable simplification that produces consistent, actionable results. The scoring should be calibrated to your specific context — what counts as "high" data sensitivity at a healthcare company differs from what counts at a marketing analytics firm.

Walking through the assessment

Data sensitivity assessment asks: what data does this system process? Assign a score of 1 if the system only processes aggregate or clearly non-personal data; 2 if it processes personal data without special categories; 3 if it processes special categories (health, biometric, financial, children's data, or other regulated categories).

Decision impact assessment asks: how consequential are the outputs? Assign 1 if the system only informs a human decision with no direct consequence; 2 if it significantly influences decisions that can affect individuals; 3 if it makes or effectively determines consequential decisions (hiring, credit, medical, legal, access to services). Irreversibility raises the score.

Regulatory exposure asks: which regulations specifically address this use case? Assign 1 if minimal regulatory attention applies; 2 if general privacy or sector regulations apply; 3 if specific AI regulations or high-risk classifications under major frameworks apply. Annex III of the EU AI Act, which lists high-risk use cases, is a useful reference for situations that warrant the top score.
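The three dimension scores combine into an aggregate and a tier. A minimal sketch of that rubric, using the tier boundaries described below (7–9 high, 4–6 medium, 3 low) — the function name and signature are illustrative:

```python
def assign_risk_tier(data_sensitivity: int, decision_impact: int,
                     regulatory_exposure: int) -> tuple:
    """Combine three dimension scores (each 1-3) into an aggregate
    score and a risk tier."""
    scores = (data_sensitivity, decision_impact, regulatory_exposure)
    if any(s not in (1, 2, 3) for s in scores):
        raise ValueError("each dimension must be scored 1, 2, or 3")
    aggregate = sum(scores)  # ranges from 3 to 9
    if aggregate >= 7:
        tier = "high"
    elif aggregate >= 4:
        tier = "medium"
    else:
        tier = "low"
    return aggregate, tier
```

For example, a system scoring 3 on data sensitivity, 2 on decision impact, and 3 on regulatory exposure aggregates to 8 and lands in the high-risk tier. A simple additive rubric like this is easy to defend precisely because it is consistent; your own rubric may weight dimensions differently.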

What the risk tier means in practice

High-risk systems (aggregate score 7–9) require the full governance treatment: technical documentation, bias assessment, human oversight arrangements, logging and audit trails, and a pre-deployment sign-off from risk and legal. They should be reviewed annually at minimum, and after any material change.

Medium-risk systems (score 4–6) require a documented risk assessment, basic testing records, a named owner, and a review schedule. They do not require the full documentation burden of high-risk systems, but they are not ungoverned. Low-risk systems (score 3) should appear in the model registry with basic metadata and a risk tier assignment, but do not require extensive supporting documentation.
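The tier-to-controls mapping above can be captured as a simple lookup table — the basis of the control requirements matrix listed in The Artifacts. Control names here are taken from the tiers described above, not from any specific standard:

```python
# Risk tier control requirements matrix (illustrative).
CONTROLS_BY_TIER = {
    "high": [
        "technical documentation",
        "bias assessment",
        "human oversight arrangements",
        "logging and audit trails",
        "pre-deployment sign-off from risk and legal",
        "annual review, and review after any material change",
    ],
    "medium": [
        "documented risk assessment",
        "basic testing records",
        "named owner",
        "review schedule",
    ],
    "low": [
        "model registry entry with basic metadata",
        "risk tier assignment",
    ],
}

def required_controls(tier: str) -> list:
    """Return the controls a system must have in place before deployment."""
    return CONTROLS_BY_TIER[tier]
```

Encoding the matrix once and looking it up per system is what keeps control requirements consistent across the portfolio, rather than renegotiated at each deployment.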