AI Governance Institute

Practical Governance for Enterprise AI

Human Oversight
HOC · Human Oversight · HOC-004 · Medium effort

Automation Bias Prevention

Implement measures to detect and counteract the tendency for human reviewers to defer to AI recommendations without adequate critical evaluation.

Objective

Maintain the integrity of human oversight by ensuring reviewers are actively engaging with AI outputs rather than reflexively approving them.

Maturity Levels

1. Initial: No awareness of automation bias risk; override rates are not tracked.

2. Developing: Training materials mention automation bias, but no structural interventions are in place.

3. Defined: Override rates are tracked by reviewer and use case; anomalously low rates trigger investigation.

4. Managed: Periodic calibration exercises test reviewer accuracy; automation bias is a standing item in governance reviews.

5. Optimizing: Review workflow design is continuously refined based on bias-detection data; A/B testing of interface interventions is routine.

Evidence Requirements

What an auditor or assessor would expect to see for this control.

  • Monthly override rate reports by reviewer, team, and use case showing trend over time
  • Calibration exercise results showing seeded-error detection rates by reviewer, with pass/fail against defined threshold
  • Investigation records for any reviewer or team whose override rate fell below the defined alert threshold
  • Approval workflow UI configuration confirming friction controls are active (mandatory rationale field, one-click approval disabled for low-confidence outputs)
  • Training records showing completion of automation bias awareness module for all active reviewers

Implementation Notes

Key steps

  • Track override rates by reviewer, team, and use case — a sustained rate below 2-3% for high-stakes decisions warrants investigation.
  • Introduce friction by design: require reviewers to input a brief rationale before approving, rather than offering one-click approval.
  • Run periodic calibration tests: present reviewers with known-incorrect AI outputs and measure detection rates.
  • Rotate reviewers on high-volume queues to prevent habituation.
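The override-rate tracking described above can be sketched in a few lines. This is a minimal illustration, not part of any specific product; the names (`override_rates`, `flag_low_override`, `ALERT_THRESHOLD`) and the flat list-of-tuples input format are assumptions for the example.

```python
# Illustrative sketch of per-reviewer override-rate monitoring.
# Assumption: review outcomes arrive as (reviewer, overrode) tuples,
# where overrode is True when the reviewer rejected the AI recommendation.
from collections import defaultdict

ALERT_THRESHOLD = 0.02  # sustained override rate below 2% warrants investigation


def override_rates(records):
    """Return {reviewer: override_rate} for an iterable of (reviewer, overrode)."""
    totals = defaultdict(int)
    overrides = defaultdict(int)
    for reviewer, overrode in records:
        totals[reviewer] += 1
        if overrode:
            overrides[reviewer] += 1
    return {r: overrides[r] / totals[r] for r in totals}


def flag_low_override(rates, threshold=ALERT_THRESHOLD):
    """Reviewers whose rate sits below the alert threshold, for investigation."""
    return sorted(r for r, rate in rates.items() if rate < threshold)
```

In practice the flag should fire only on a sustained low rate (e.g. several consecutive reporting periods), not a single month, to avoid penalizing reviewers on small or unusually clean queues.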

Example Implementation

HR team using AI resume screening across three hiring groups

Automation Bias Monitoring — Resume Screening Queue

Monthly report to AI Governance Committee:

Team                 Queue Volume   Override Rate   Flag Threshold   Status
Engineering Hiring   340            4.1%            < 2%             OK
Sales Hiring         280            1.6%            < 2%             Review required
Operations           190            5.3%            < 2%             OK
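The status column in the report follows directly from comparing each team's override rate to the flag threshold. A hypothetical sketch of that logic, using the figures from the report (the `status` function name is illustrative):

```python
# Illustrative flag logic for the monthly report: a team whose override
# rate falls below the flag threshold is marked for review.
def status(override_rate, flag_threshold=0.02):
    return "Review required" if override_rate < flag_threshold else "OK"

report = [
    ("Engineering Hiring", 340, 0.041),
    ("Sales Hiring", 280, 0.016),
    ("Operations", 190, 0.053),
]
for team, volume, rate in report:
    print(f"{team}: {rate:.1%} -> {status(rate)}")
```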

Friction controls in place:

  • Reviewers must select a disposition before approving: STRONG_FIT | ADEQUATE | BORDERLINE | STRETCH
  • One-click approval disabled for all queues with AI confidence < 0.90
  • Calibration exercise run quarterly: reviewers assess 10 seeded known-incorrect outputs; detection rate < 70% triggers mandatory re-training
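The quarterly calibration rule above (10 seeded outputs, 70% detection threshold) reduces to a simple check. A minimal sketch, assuming the re-training decision is made per reviewer from their detection count; the function and constant names are illustrative:

```python
# Illustrative calibration scoring: reviewers assess seeded known-incorrect
# outputs; a detection rate below the threshold triggers mandatory re-training.
DETECTION_THRESHOLD = 0.70
SEEDED_ITEMS = 10


def needs_retraining(detected, seeded=SEEDED_ITEMS, threshold=DETECTION_THRESHOLD):
    """True when the reviewer caught too few of the seeded errors."""
    return (detected / seeded) < threshold
```

For example, a reviewer who catches 6 of 10 seeded errors (60%) is re-trained, while one who catches 7 of 10 (70%) passes.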

Control Details

Control ID
HOC-004
Typical owner
AI Governance Team / HR / Training
Implementation effort
Medium
Agent-relevant
No

Tags

automation bias · human oversight · reviewer behavior · cognitive bias