Question 14 of 34
What is our explainability standard for AI decisions?
Published by AI Governance Institute · Practical Governance for Enterprise AI
Defining how much transparency is required at different risk levels, and building the technical and procedural infrastructure to deliver it.
If you only do 3 things, do these:
1. Define your explainability standard by risk tier before deployment. You can't retrofit explainability into a production model at the scale needed for regulatory or legal response.
2. SHAP or LIME will cover most post-hoc explanation needs for structured-data models. For the highest-risk applications, evaluate whether an inherently interpretable model is more appropriate.
3. Build explanation delivery into your decision workflow now. GDPR Article 22 and the FCRA both require that individuals receive meaningful explanations for automated decisions.
The Situation
Who this is for: Data science, legal, and compliance teams designing or auditing AI decision systems
When you need this: When selecting a model architecture, designing the decision workflow, or preparing for regulatory examination
The Decision
What level of explainability is required for each AI system, and do we have the technical and procedural infrastructure to deliver it?
The Steps
1. Apply your risk-tier framework to determine the required explainability level for each system
2. For high-risk systems, evaluate whether an inherently interpretable model is feasible vs. post-hoc explanation
3. For systems requiring post-hoc explanation, implement SHAP or LIME and validate that explanations accurately represent model behavior (see the fidelity sketch after this list)
4. Design the explanation delivery mechanism for each audience: customers, regulators, internal reviewers
5. Build explanation generation into the decision workflow, not as a separate process run on demand
6. Test explanation delivery against GDPR Article 22 and FCRA adverse action requirements as applicable
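Step 3 calls for checking that explanations track real model behavior. One illustrative way to do that is a permutation check: if SHAP attributions are faithful, shuffling the most-attributed feature should perturb predictions far more than shuffling the least-attributed one. Everything in the sketch below (dataset, model, pass criterion) is an assumption for illustration, not a mandated test.

```python
# Illustrative fidelity check for step 3, on synthetic data. The dataset,
# model, and pass criterion are assumptions for this sketch; adapt the
# criterion to your own validation checklist.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(400, 4)),
                 columns=["debt_ratio", "income", "tenure", "noise"])
y = (X["debt_ratio"] - X["income"] > 0).astype(int)  # "noise" is irrelevant
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# Global attribution ranking: mean |SHAP| per feature.
attributions = np.abs(shap.TreeExplainer(model).shap_values(X)).mean(axis=0)

def permuted_shift(feature: str) -> float:
    """Mean change in P(y=1) when one feature is shuffled."""
    Xp = X.copy()
    Xp[feature] = rng.permutation(Xp[feature].to_numpy())
    return float(np.abs(model.predict_proba(X)[:, 1]
                        - model.predict_proba(Xp)[:, 1]).mean())

# Faithful explanations: the most-attributed feature should move
# predictions far more than the least-attributed one.
top = X.columns[attributions.argmax()]
low = X.columns[attributions.argmin()]
assert permuted_shift(top) > permuted_shift(low), "explanations look unfaithful"
```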
The Artifacts
- Explainability requirements matrix (system × risk tier × required explanation type)
- SHAP/LIME implementation guide for your model stack
- Customer-facing explanation template (plain language, adverse action format)
- Regulator/auditor explanation documentation template
- Explainability validation checklist (are explanations accurate representations of model reasoning?)
The Output
A documented explainability approach for every AI system, with explanation delivery mechanisms built into workflows and validated against applicable legal requirements.
Explainability is not one-size-fits-all
The appropriate level of explainability depends on the stakes of the decision, the regulatory requirements that apply, and the audience who needs the explanation. A data scientist debugging model behavior needs a different kind of explanation than a customer asking why their application was denied. Both needs are legitimate and require different technical and procedural approaches.
Build your explainability standard around a risk-tiered framework. For minimal-risk systems, basic documentation of the model's purpose and general behavior may be sufficient. For limited-risk systems, transparency about the fact that AI is being used and its general logic is typically required. For high-risk systems, full technical documentation, audit logs, and the ability to provide individual-level explanations are necessary.
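To make the tiers operational, it helps to encode them as reviewable configuration rather than prose, so every system inherits a concrete set of controls from its tier. A minimal sketch follows; the tier names and the four controls are illustrative assumptions, not a standard taxonomy.

```python
# One way to encode a risk-tiered explainability standard as reviewable
# config. Tier names and controls are illustrative; substitute your
# organization's own framework.
from enum import Enum

class RiskTier(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"

EXPLAINABILITY_REQUIREMENTS = {
    RiskTier.MINIMAL: {
        "model_documentation": True,      # purpose and general behavior
        "ai_use_disclosure": False,
        "individual_explanations": False,
        "audit_logs": False,
    },
    RiskTier.LIMITED: {
        "model_documentation": True,
        "ai_use_disclosure": True,        # users told AI is involved
        "individual_explanations": False,
        "audit_logs": False,
    },
    RiskTier.HIGH: {
        "model_documentation": True,      # full technical documentation
        "ai_use_disclosure": True,
        "individual_explanations": True,  # per-decision explanations
        "audit_logs": True,
    },
}

def required_controls(tier: RiskTier) -> list[str]:
    """Controls a system must implement, given its risk tier."""
    return [name for name, required
            in EXPLAINABILITY_REQUIREMENTS[tier].items() if required]
```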
Technical approaches to explainability
For models where post-hoc explanation is required, SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations) are the most widely adopted techniques. Both provide feature importance scores that describe which inputs most influenced a specific prediction. These techniques work with most model types but have limitations, particularly for complex models where the explanation approximation may not accurately represent the model's actual reasoning.
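As a concrete illustration of the SHAP workflow (LIME follows a similar explain-one-instance pattern), the sketch below produces an individual-level explanation for one prediction. The dataset, feature names, and model are illustrative stand-ins, not part of this playbook.

```python
# A minimal SHAP sketch for an individual-level explanation, assuming a
# scikit-learn gradient boosting model on tabular data.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = pd.DataFrame(rng.normal(size=(500, 4)),
                 columns=["income", "utilization", "tenure", "inquiries"])
y = (X["income"] - X["utilization"] + rng.normal(scale=0.5, size=500) > 0).astype(int)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# TreeExplainer computes exact Shapley values for tree ensembles,
# avoiding the sampling approximation used by model-agnostic methods.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)   # shape: (n_rows, n_features)

# The features that most influenced one specific prediction: the raw
# material for an individual-level explanation.
row = 0
ranked = sorted(zip(X.columns, shap_values[row]),
                key=lambda p: abs(p[1]), reverse=True)
for feature, contribution in ranked[:3]:
    print(f"{feature}: {contribution:+.3f}")
```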
For the highest-risk applications, consider whether the use case can be served by an inherently interpretable model rather than requiring post-hoc explanation. Logistic regression, decision trees, and rule-based systems are natively explainable. The accuracy gap between these approaches and deep learning has narrowed for many structured data applications, and the legal and operational benefits of interpretability can outweigh marginal accuracy differences.
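As a sketch of what "natively explainable" means in practice, consider a standardized logistic regression: the fitted coefficients are the explanation, with no post-hoc approximation step to validate. The data and feature names below are illustrative assumptions.

```python
# Sketch of an inherently interpretable alternative: a logistic regression
# whose coefficients *are* the explanation. Data is synthetic and illustrative.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
X = pd.DataFrame(rng.normal(size=(500, 3)),
                 columns=["utilization", "income", "delinquencies"])
y = (0.8 * X["utilization"] - 0.5 * X["income"]
     + rng.normal(size=500) > 0).astype(int)

pipe = make_pipeline(StandardScaler(), LogisticRegression()).fit(X, y)
coefs = pipe.named_steps["logisticregression"].coef_[0]

# Standardized coefficients read directly as global feature effects:
# in this toy setup, positive values push toward the adverse outcome.
for name, c in sorted(zip(X.columns, coefs), key=lambda p: -abs(p[1])):
    print(f"{name}: {c:+.2f}")
```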
Delivering explanations to affected individuals
GDPR Article 22 and the FCRA both require that individuals who are subject to automated decisions receive meaningful explanations. "Meaningful" has not been precisely defined by courts or regulators, but the emerging standard is that the explanation must be specific enough for the individual to understand and potentially contest the decision.
Build explanation delivery into your decision workflow, not as an afterthought. For credit decisions, this means FCRA-compliant adverse action notices that identify the principal reasons for the decision in plain language. For employment decisions, it means documentation that can be provided if a candidate requests an explanation. For customer decisions, it means a customer service process that can retrieve and communicate the relevant information.
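One way to wire this in is sketched below: the decision function maps the most adverse attributions to plain-language reason codes and stores them with the decision record at decision time, so the notice content exists the moment the decision does. The reason-code text, names, and threshold are hypothetical illustrations; actual adverse action wording should be approved by counsel.

```python
# Sketch: explanation capture inside the decision path. Reason-code text
# and the threshold are hypothetical, not official FCRA language.
from dataclasses import dataclass, field
from datetime import datetime, timezone

REASON_TEXT = {  # hypothetical reason-code dictionary
    "utilization": "Proportion of revolving credit in use is too high",
    "income": "Income is insufficient for the amount requested",
    "delinquencies": "Recent delinquency on one or more accounts",
}

@dataclass
class Decision:
    applicant_id: str
    approved: bool
    principal_reasons: list[str]  # plain-language adverse factors, if denied
    decided_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

def decide(applicant_id: str, score: float,
           contributions: dict[str, float], threshold: float = 0.5) -> Decision:
    """Score a case and capture its explanation in the same step.

    `contributions` maps feature -> attribution toward approval (e.g.,
    SHAP values); the most negative factors become the principal reasons.
    """
    approved = score >= threshold
    reasons = []
    if not approved:
        adverse = sorted(contributions.items(), key=lambda p: p[1])[:3]
        reasons = [REASON_TEXT.get(feature, feature) for feature, _ in adverse]
    return Decision(applicant_id, approved, reasons)

# Usage: the adverse action notice can be rendered from the stored record.
decision = decide("A-1042", score=0.31,
                  contributions={"utilization": -0.22, "income": -0.05,
                                 "delinquencies": -0.14, "tenure": 0.03})
print(decision.principal_reasons)
```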
Governance Controls
Operational controls that implement the guidance in this playbook.
