
Question 5 of 34

How do we detect and mitigate algorithmic bias?

Published by AI Governance Institute · Practical Governance for Enterprise AI

Testing protocols and audit trails for AI used in hiring, lending, or customer decisions, to defend against discrimination claims.

If you only do three things, do these:

  1. Run a disparate impact analysis using the four-fifths rule on every AI system that influences decisions about people: hiring, credit, benefits, access to services.
  2. Test before deployment and at regular intervals after. A model that passes at launch may drift into biased behavior as your population changes.
  3. Document every finding and every remediation step. The absence of documentation is typically treated as evidence of the absence of testing.

The Situation

Who this is for: AI/ML engineers, compliance teams, and HR or legal functions responsible for AI used in employment, lending, or customer decisions

When you need this: Before deploying any decision-making AI, and at regular intervals for systems already in production

The Decision

Is this AI system producing systematically unfair outcomes for protected groups, and do we have the documentation to defend the answer?

The Steps

  1. Identify all AI systems that influence decisions about individuals (hiring, credit, insurance, benefits, services)
  2. Define which protected characteristics are relevant for each system
  3. Run disparate impact analysis: calculate selection rates by group, apply the four-fifths rule as a screening threshold
  4. Compute confusion matrices by subgroup for binary classifiers (see the sketch after this list); compare score distributions for scoring models
  5. For any system that fails screening, diagnose the source of bias (training data, proxy variables, model design)
  6. Apply remediation (resampling, reweighting, post-processing, or retraining) and re-test
  7. Document all findings, remediation steps, and before/after metrics
The Artifacts

  • Bias testing protocol (step-by-step methodology for your system type)
  • Disparate impact analysis template (selection rates by group, four-fifths rule calculator)
  • Confusion matrix by subgroup template
  • Bias remediation decision tree (which technique to apply, based on the source of bias)
  • Bias testing audit trail template (findings, remediation, before/after metrics)

The Output

A documented bias assessment for every decision-making AI system, with findings, remediation actions taken, and a re-test schedule.

Bias is a legal and ethical risk

Algorithmic bias occurs when an AI system produces outcomes that are systematically less favorable to individuals in protected classes, even if the protected characteristic is not an explicit input. This can occur through proxy variables, historical bias in training data, or model design choices that have disparate impact.

For AI used in hiring, lending, housing, or access to services, algorithmic bias creates exposure under Title VII, the FCRA, the Equal Credit Opportunity Act, the Fair Housing Act, and their state equivalents. Regulators including the EEOC, CFPB, and HUD have all signaled active enforcement interest in AI-driven discrimination.

Testing protocols

Bias testing should be conducted before deployment and at regular intervals thereafter. The standard approach is disparate impact analysis: measure the selection rate for each protected group and compare it to the rate of the most favored group. A ratio below 0.8 (the "four-fifths rule") indicates potential adverse impact requiring further investigation.
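A minimal sketch of the screening calculation in Python, assuming a pandas DataFrame of decisions with illustrative `group` and `selected` columns; it computes selection rates per group, the impact ratio against the most favored group, and flags anything below the 0.8 threshold.

```python
# Sketch: disparate impact screening via the four-fifths rule.
# "group" and "selected" column names are illustrative; 0.8 is the screening threshold.
import pandas as pd

def disparate_impact(df: pd.DataFrame,
                     group_col: str = "group",
                     selected_col: str = "selected",
                     threshold: float = 0.8) -> pd.DataFrame:
    """Selection rate per group, ratio to the most favored group, and a flag."""
    rates = df.groupby(group_col)[selected_col].mean().rename("selection_rate")
    out = rates.to_frame()
    out["impact_ratio"] = out["selection_rate"] / out["selection_rate"].max()
    out["flagged"] = out["impact_ratio"] < threshold
    return out

if __name__ == "__main__":
    decisions = pd.DataFrame({
        "group":    ["A"] * 10 + ["B"] * 10,
        "selected": [1] * 6 + [0] * 4 + [1] * 3 + [0] * 7,   # A: 60%, B: 30%
    })
    print(disparate_impact(decisions))  # B's impact ratio is 0.5, so B is flagged
```

A flag from this screen is not a finding of discrimination; it is the trigger for the deeper diagnosis described in the steps above.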

Testing should cover protected characteristics including race, sex, age, national origin, religion, and disability status, as well as intersectional combinations. Use both the training data and real-world outputs for testing. A model that performs well on historical data may behave differently on the population it is actually deployed against.
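One way to extend the same screening to intersectional combinations is to build a combined group label before computing selection rates. The sketch below assumes illustrative `race`, `sex`, and `selected` columns.

```python
# Sketch: intersectional screening by combining characteristics into one label.
# Column names and values are illustrative assumptions.
import pandas as pd

decisions = pd.DataFrame({
    "race":     ["X", "X", "Y", "Y", "X", "Y"],
    "sex":      ["F", "M", "F", "M", "F", "M"],
    "selected": [1, 1, 0, 1, 0, 1],
})
decisions["group"] = decisions["race"] + " / " + decisions["sex"]

# Reuse the same selection-rate screening, now per intersectional group.
rates = decisions.groupby("group")["selected"].mean()
ratios = rates / rates.max()
print(ratios[ratios < 0.8])  # intersectional groups that fail the four-fifths screen
```

Intersectional cells can be small, so treat ratios computed on thin data with caution before drawing conclusions.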

For high-risk systems, consider engaging an independent third party to conduct bias audits. New York City Local Law 144 requires bias audits by independent auditors for AI used in employment decisions affecting NYC residents, and similar requirements are likely to spread to other jurisdictions.

Mitigation and documentation

When bias is detected, mitigation options include retraining on more representative data, adjusting model thresholds, applying post-processing corrections, or replacing the system. Document every mitigation step taken: what was found, what was changed, and what the outcome was after remediation.
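As one illustration of a post-processing correction, the sketch below picks a separate score cutoff per group so that each group's selection rate lands near a target rate. The column names, target rate, and quantile approach are assumptions for illustration, not a recommendation for any particular system.

```python
# Sketch: post-processing via per-group score thresholds that equalize selection rates.
# Column names ("group", "score") and the target rate are illustrative assumptions.
import numpy as np
import pandas as pd

def per_group_thresholds(df: pd.DataFrame, target_rate: float,
                         group_col: str = "group",
                         score_col: str = "score") -> dict:
    """Choose, per group, the score cutoff whose selection rate is ~target_rate."""
    thresholds = {}
    for group, sub in df.groupby(group_col):
        # The (1 - target_rate) quantile selects roughly target_rate of the group.
        thresholds[group] = float(np.quantile(sub[score_col], 1 - target_rate))
    return thresholds

if __name__ == "__main__":
    scores = pd.DataFrame({
        "group": ["A"] * 5 + ["B"] * 5,
        "score": [0.9, 0.8, 0.7, 0.4, 0.2, 0.6, 0.5, 0.45, 0.3, 0.1],
    })
    cuts = per_group_thresholds(scores, target_rate=0.4)
    scores["selected"] = scores.apply(
        lambda r: r["score"] >= cuts[r["group"]], axis=1)
    print(cuts)
    print(scores.groupby("group")["selected"].mean())  # ~0.4 for both groups
```

Whichever technique is chosen, the re-test after remediation should use the same metrics as the original screen so before/after results are directly comparable.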

Maintain an audit trail that allows you to reconstruct the testing history of any deployed system. If a claim is filed or a regulator investigates, you need to demonstrate that you tested for bias, found the results acceptable, and took action when they were not. The absence of documentation is often treated as evidence of the absence of testing.
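A minimal sketch of what one audit-trail entry could look like, serialized to JSON so the testing history of a system can be reconstructed later; the field names and example values are illustrative assumptions, not a required schema.

```python
# Sketch: a minimal bias-testing audit record. Fields and values are illustrative.
import json
from dataclasses import dataclass, asdict

@dataclass
class BiasTestRecord:
    system_name: str
    test_date: str
    protected_groups: list
    metric: str                  # e.g., "four-fifths impact ratio"
    results_by_group: dict       # group -> metric value at test time
    finding: str                 # "pass", "flagged", etc.
    remediation: str = ""        # what was changed, if anything
    retest_due: str = ""         # next scheduled test

record = BiasTestRecord(
    system_name="resume-screener-v2",
    test_date="2024-01-15",
    protected_groups=["sex", "race", "age_40_plus"],
    metric="four-fifths impact ratio",
    results_by_group={"female": 0.72, "male": 1.0},
    finding="flagged",
    remediation="reweighted training data; post-remediation ratio 0.91",
    retest_due="2024-07-15",
)
print(json.dumps(asdict(record), indent=2))
```

Storing one such record per test, per system, is enough to show a regulator when you tested, what you found, and what you did about it.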

Governance Controls

Operational controls that implement the guidance in this playbook.