aigovernance.com

Global AI Regulation & Framework Directory

← AI Governance Playbook

Question 5 of 24

How do we detect and mitigate algorithmic bias?

Testing protocols and audit trails for AI used in hiring, lending, or customer decisions, to defend against discrimination claims.

Bias is a legal and ethical risk

Algorithmic bias occurs when an AI system produces outcomes that are systematically less favorable to individuals in protected classes, even if the protected characteristic is not an explicit input. This can occur through proxy variables, historical bias in training data, or model design choices that have disparate impact.

For AI used in hiring, lending, housing, or access to services, algorithmic bias creates exposure under Title VII, the FCRA, the Equal Credit Opportunity Act, the Fair Housing Act, and their state equivalents. Regulators including the EEOC, CFPB, and HUD have all signaled active enforcement interest in AI-driven discrimination.

Testing protocols

Bias testing should be conducted before deployment and at regular intervals thereafter. The standard approach is disparate impact analysis: measure the selection rate (the share of individuals who receive the favorable outcome) for each protected group and compare it to the rate of the most favored group. A ratio below 0.8 (the "four-fifths rule") indicates potential adverse impact requiring further investigation.
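The four-fifths calculation is simple arithmetic, which makes it easy to automate. A minimal sketch follows; the group names and counts are hypothetical, and real testing would also need statistical significance checks that this omits.

```python
# Sketch of a disparate impact check using the four-fifths rule.
# Group labels and counts below are illustrative, not from any real system.

def selection_rate(selected: int, total: int) -> float:
    """Fraction of individuals in a group who received the favorable outcome."""
    return selected / total

def impact_ratios(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Ratio of each group's selection rate to the most favored group's rate."""
    rates = {g: selection_rate(s, t) for g, (s, t) in outcomes.items()}
    best = max(rates.values())
    return {g: r / best for g, r in rates.items()}

# (selected, total) per group -- hypothetical numbers
outcomes = {"group_a": (48, 100), "group_b": (30, 100)}
ratios = impact_ratios(outcomes)
# Groups whose ratio falls below the four-fifths threshold need investigation.
flagged = [g for g, r in ratios.items() if r < 0.8]
```

Here group_b's selection rate (0.30) is 62.5% of group_a's (0.48), below the 0.8 threshold, so it would be flagged for further review.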

Testing should cover protected characteristics including race, sex, age, national origin, religion, and disability status, as well as intersectional combinations. Use both the training data and real-world outputs for testing. A model that performs well on historical data may behave differently on the population it is actually deployed against.
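Intersectional testing means computing selection rates for combinations of characteristics, not just each axis alone, since a subgroup can fare badly even when its marginal groups look acceptable. A sketch, with a hypothetical record layout:

```python
# Sketch: extend disparate impact testing to intersectional subgroups.
# The record structure and field names here are hypothetical.
from collections import defaultdict

def subgroup_rates(records: list[dict], keys: tuple[str, ...]) -> dict:
    """Selection rate for every observed combination of values of `keys`."""
    counts = defaultdict(lambda: [0, 0])  # subgroup -> [selected, total]
    for rec in records:
        subgroup = tuple(rec[k] for k in keys)
        counts[subgroup][0] += int(rec["selected"])
        counts[subgroup][1] += 1
    return {sg: sel / tot for sg, (sel, tot) in counts.items()}

# Hypothetical outcome records combining two protected characteristics.
records = [
    {"race": "A", "sex": "F", "selected": True},
    {"race": "A", "sex": "F", "selected": True},
    {"race": "A", "sex": "M", "selected": True},
    {"race": "A", "sex": "M", "selected": False},
    {"race": "B", "sex": "F", "selected": True},
    {"race": "B", "sex": "F", "selected": False},
    {"race": "B", "sex": "M", "selected": False},
    {"race": "B", "sex": "M", "selected": False},
]
rates = subgroup_rates(records, ("race", "sex"))
```

In this toy data, the (B, M) subgroup has a selection rate of zero, a disparity that single-axis testing on sex alone would partially mask.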

For high-risk systems, consider engaging an independent third party to conduct bias audits. New York City Local Law 144 requires bias audits by independent auditors for AI used in employment decisions affecting NYC residents, and similar requirements are likely to spread to other jurisdictions.

Mitigation and documentation

When bias is detected, mitigation options include retraining on more representative data, adjusting model thresholds, applying post-processing corrections, or replacing the system. Document every mitigation step taken: what was found, what was changed, and what the outcome was after remediation.
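One mitigation the paragraph mentions, adjusting model thresholds, can be explored by scanning candidate score cutoffs and measuring the impact ratio at each. The sketch below uses synthetic scores and a single global cutoff (group-specific cutoffs can raise their own legal issues, so this is illustrative only):

```python
# Sketch of a threshold-adjustment mitigation: scan candidate global score
# cutoffs and report the worst-to-best group impact ratio at each, so a
# cutoff meeting the four-fifths threshold can be identified.
# Scores and group names are synthetic illustrations.

def impact_ratio_at(scores: dict[str, list[float]], cutoff: float) -> float:
    """Worst-group / best-group selection rate at a given score cutoff."""
    rates = {g: sum(s >= cutoff for s in ss) / len(ss)
             for g, ss in scores.items()}
    best = max(rates.values())
    return min(rates.values()) / best if best > 0 else 1.0

def compliant_cutoffs(scores: dict[str, list[float]],
                      candidates: list[float],
                      floor: float = 0.8) -> list[float]:
    """Candidate cutoffs whose impact ratio meets or exceeds `floor`."""
    return [c for c in candidates if impact_ratio_at(scores, c) >= floor]

scores = {"group_a": [0.9, 0.8, 0.4], "group_b": [0.7, 0.6, 0.3]}
ok = compliant_cutoffs(scores, [0.75, 0.5])
```

In this toy case a cutoff of 0.75 selects no one from group_b (ratio 0), while 0.5 yields equal rates, so only 0.5 survives the screen. Any cutoff change would still need to be validated against business necessity and documented as described below.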

Maintain an audit trail that allows you to reconstruct the testing history of any deployed system. If a claim is filed or a regulator investigates, you need to demonstrate that you tested for bias, found the results acceptable, and took action when they were not. The absence of documentation is often treated as evidence of the absence of testing.
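A reconstructable testing history can be as simple as an append-only log of structured test records. A minimal sketch, with hypothetical field names; the content digest deters silent after-the-fact edits:

```python
# Sketch of an append-only audit trail for bias test runs.
# Field names and values are illustrative assumptions.
import hashlib
import json
from datetime import datetime, timezone

def audit_record(system_id: str, test_type: str,
                 results: dict, action_taken: str) -> dict:
    """Build a timestamped, digest-sealed record of one bias test run."""
    rec = {
        "system_id": system_id,
        "test_type": test_type,
        "results": results,          # e.g. impact ratios per group
        "action_taken": action_taken,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    # Digest over the sorted record contents makes tampering detectable.
    rec["digest"] = hashlib.sha256(
        json.dumps(rec, sort_keys=True).encode()
    ).hexdigest()
    return rec

def append_audit(path: str, rec: dict) -> None:
    """Append one JSON record per line; never rewrite existing lines."""
    with open(path, "a") as f:
        f.write(json.dumps(rec) + "\n")
```

Each deployed system then has a line-per-test history that can be replayed in order: what was tested, what was found, and what was done about it.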