AI Explainability Documentation
Document how AI systems reach decisions in sufficient detail that affected individuals, reviewers, and regulators can understand and challenge outcomes.
Objective
Enable meaningful oversight and redress by ensuring AI decision logic is documented at a level of detail appropriate to the system's risk and regulatory context.
Maturity Levels
Initial
No explainability documentation exists; AI systems operate as undocumented black boxes.
Developing
High-level descriptions of model purpose exist but decision logic and feature importance are not documented.
Defined
Documented explanations are available for all high-risk AI decisions, including the key factors influencing each outcome.
Managed
Explanation quality is validated by subject matter experts; affected individuals can request and receive explanations.
Optimizing
Explanation methods are continuously improved for accuracy and comprehensibility; user testing informs explanation design.
Evidence Requirements
What an auditor or assessor would expect to see for this control.
- Model-level documentation including model type, key features, known limitations, and evaluation results (model card)
- Instance-level explanation examples in plain language, reviewed by a subject matter expert for accuracy
- Explanation request log showing requests were received and responses delivered within the defined SLA (a minimal log schema is sketched after this list)
- Accuracy validation records confirming explanation methods correctly represent model behavior, not just plausible narratives
- Legal review sign-off confirming explanation design satisfies applicable obligations (e.g., GDPR Art. 22)
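The request log above need not be elaborate. A minimal sketch, assuming a Python tooling stack, hypothetical field names, and the 10-business-day SLA used in the example implementation below:

```python
# Sketch only: a minimal explanation-request log record with SLA tracking.
# Field names and the 10-business-day SLA are illustrative assumptions.
from dataclasses import dataclass
from datetime import date
import numpy as np  # business-day arithmetic via np.busday_count

SLA_BUSINESS_DAYS = 10

@dataclass
class ExplanationRequest:
    request_id: str
    decision_id: str               # links the request to the logged AI decision
    received: date
    responded: date | None = None  # None until a response is delivered

    def within_sla(self) -> bool:
        """True if the response was delivered within the business-day SLA."""
        if self.responded is None:
            return False
        return np.busday_count(self.received, self.responded) <= SLA_BUSINESS_DAYS
```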
Implementation Notes
Key steps
- Distinguish between model-level explainability (how the model works in general) and instance-level explainability (why a specific decision was made); high-risk systems need both. A sketch of both views follows this list.
- For regulated industries, review whether explainability requirements are legally specified (e.g., GDPR Art. 22 requires meaningful information about the logic involved in automated decisions).
- Document explanations in plain language for affected individuals and in technical detail for auditors; these are different documents.
- Test whether explanations are actually accurate, not merely plausible; post hoc explanation methods can produce convincing but unfaithful narratives. A perturbation-based check is sketched after the example implementation below.
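The sketch below illustrates the distinction on a toy XGBoost classifier: global feature importances describe the model in general, while per-row SHAP contributions (XGBoost's `pred_contribs`) explain one decision. All feature names, data, and model settings are synthetic placeholders, not the lender's real system.

```python
# Sketch: model-level vs. instance-level explainability on a toy model.
# Features, labels, and model settings are synthetic placeholders.
import numpy as np
import pandas as pd
import xgboost as xgb

rng = np.random.default_rng(0)
X = pd.DataFrame({
    "repayment_history_score":  rng.uniform(300, 850, 500),
    "debt_to_income_ratio":     rng.uniform(0.05, 0.60, 500),
    "employment_tenure_months": rng.integers(0, 240, 500).astype(float),
    "credit_utilization":       rng.uniform(0.0, 1.0, 500),
})
y = (X["debt_to_income_ratio"] < 0.35).astype(int)  # placeholder label

model = xgb.XGBClassifier(n_estimators=50, max_depth=3).fit(X, y)

# Model-level: global importances (how the model works in general).
model_level = dict(zip(X.columns, model.feature_importances_))

# Instance-level: per-feature SHAP contributions for one applicant
# (why THIS decision); the last entry of each row is the bias term.
contribs = model.get_booster().predict(
    xgb.DMatrix(X.iloc[[0]]), pred_contribs=True)
instance_level = dict(zip(X.columns, contribs[0][:-1]))
```

Dedicated libraries such as SHAP provide the same per-instance decomposition with richer tooling; the point here is only that both views must exist and be documented separately.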
Example Implementation
A lender using AI to assist with personal loan decisions, subject to GDPR Art. 22 and the FCRA
Explainability Documentation — Personal Loan Scoring System
Model-level documentation (for auditors and regulators):
- Model type: Gradient boosted decision tree (XGBoost v1.7)
- Top predictive features: repayment history (38%), debt-to-income ratio (24%), employment tenure (14%), credit utilization (12%), other (12%)
- Known limitations: reduced accuracy for applicants with < 12 months credit history; documented in model card
- Evaluation results: AUC 0.84, demographic parity gap < 3% across monitored groups (last tested 2026-Q1)
Instance-level explanation (delivered to applicant on request — plain language):
Your application was not approved at this time. The main factors were: (1) your current debt payments represent a high proportion of your monthly income, and (2) two late payments in the past 18 months. These factors suggest a higher risk of repayment difficulty based on patterns in historical loan data.
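A plain-language paragraph like the one above can be generated by mapping the strongest adverse per-feature contributions onto pre-approved reason templates, the approach FCRA adverse-action notices typically take. A hedged sketch, with hypothetical template wording and assuming contribution values where negative means "pushed toward denial":

```python
# Sketch: map the strongest adverse contributions to pre-approved templates.
# Template wording is hypothetical and would need legal/underwriting review.
REASON_TEMPLATES = {
    "debt_to_income_ratio": ("your current debt payments represent a high "
                             "proportion of your monthly income"),
    "repayment_history_score": ("late or missed payments in your recent "
                                "credit history"),
    "credit_utilization": "a high share of your available credit is in use",
    "employment_tenure_months": "a comparatively short employment tenure",
}

def top_adverse_reasons(contributions: dict[str, float], n: int = 2) -> list[str]:
    """Return plain-language reasons for the n features that pushed the
    score furthest toward denial (assumes negative = toward denial)."""
    ranked = sorted(contributions.items(), key=lambda kv: kv[1])
    return [REASON_TEMPLATES[name] for name, _ in ranked[:n]
            if name in REASON_TEMPLATES]
```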
Explanation delivery process: Applicant may request explanation within 30 days of decision via [support form]; response delivered within 10 business days; explanation reviewed by underwriter before sending
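To produce the accuracy validation records listed under Evidence Requirements, one simple check is perturbation-based: if the explanation cites a factor, improving that factor with all else held equal should move the model's score in the stated direction. A minimal sketch, assuming the scikit-learn-style model and one-row DataFrame from the earlier toy example:

```python
# Sketch: perturbation check that a cited factor actually moves the score.
# `model` and feature names are carried over from the earlier toy sketch.
import pandas as pd

def explanation_is_faithful(model, applicant: pd.DataFrame, feature: str,
                            improved_value: float,
                            min_gain: float = 0.0) -> bool:
    """True if improving `feature` raises the approval probability, i.e.
    the factor cited in the explanation moves the score as claimed."""
    baseline = model.predict_proba(applicant)[0, 1]
    counterfactual = applicant.copy()
    counterfactual[feature] = improved_value
    return model.predict_proba(counterfactual)[0, 1] - baseline > min_gain
```

A failed check indicates the explanation method is narrating rather than reflecting model behavior and should be recorded as a validation finding.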
Control Details
- Control ID: ALC-004
- Domain: Audit & Logging
- Typical owner: AI Governance Team / Legal
- Implementation effort: Medium
- Agent-relevant: No
