
Question 14 of 24

What is our explainability standard for AI decisions?

Defining how much transparency is required at different risk levels, and building the technical and procedural infrastructure to deliver it.

Explainability is not one-size-fits-all

The appropriate level of explainability depends on the stakes of the decision, the regulatory requirements that apply, and the audience who needs the explanation. A data scientist debugging model behavior needs a different kind of explanation than a customer asking why their application was denied. Both needs are legitimate, and each calls for its own technical and procedural approach.

Build your explainability standard around a risk-tiered framework. For minimal-risk systems, basic documentation of the model's purpose and general behavior may be sufficient. For limited-risk systems, transparency about the fact that AI is being used and its general logic is typically required. For high-risk systems, full technical documentation, audit logs, and the ability to provide individual-level explanations are necessary.
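One practical way to operationalize the framework is to encode each tier's requirements as configuration that review and deployment processes consult. A minimal sketch, with tier names and requirement fields that are illustrative assumptions rather than any regulation's taxonomy:

```python
# A sketch of a risk-tiered explainability standard as configuration.
# Tier names and requirement fields are illustrative assumptions, not
# taken from any specific regulation.
EXPLAINABILITY_TIERS = {
    "minimal": {
        "model_documentation": True,      # purpose and general behavior
        "ai_use_disclosure": False,
        "audit_logs": False,
        "individual_explanations": False,
    },
    "limited": {
        "model_documentation": True,
        "ai_use_disclosure": True,        # disclose AI use and general logic
        "audit_logs": False,
        "individual_explanations": False,
    },
    "high": {
        "model_documentation": True,      # full technical documentation
        "ai_use_disclosure": True,
        "audit_logs": True,
        "individual_explanations": True,  # per-decision explanations on request
    },
}

def requirements_for(tier: str) -> dict[str, bool]:
    """Look up what an AI system at a given risk tier must provide."""
    return EXPLAINABILITY_TIERS[tier]
```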

Technical approaches to explainability

For models where post-hoc explanation is required, SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations) are the most widely adopted techniques. Both provide feature importance scores that describe which inputs most influenced a specific prediction. These techniques work with most model types but have limitations, particularly for complex models where the explanation approximation may not accurately represent the model's actual reasoning.
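A minimal sketch of what per-decision attribution looks like in practice with SHAP; the model and synthetic data are assumptions for illustration, and a production system would attribute against its real model and feature names:

```python
# A sketch of per-decision feature attribution with SHAP. The model and
# synthetic data are illustrative assumptions; a real system would
# explain its production model against its actual feature names.
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=500, n_features=6, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# TreeExplainer computes exact Shapley values for tree ensembles.
explainer = shap.TreeExplainer(model)
attributions = explainer.shap_values(X[:1])[0]  # one specific prediction

# Each value is that feature's contribution to this prediction,
# relative to the model's baseline expected output.
for i, contribution in enumerate(attributions):
    print(f"feature_{i}: {contribution:+.4f}")
```

LIME follows the same pattern at the point of use: it fits a simple local surrogate around the instance being explained and reports that surrogate's weights, which is where the approximation caveat above applies.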

For the highest-risk applications, consider whether the use case can be served by an inherently interpretable model rather than requiring post-hoc explanation. Logistic regression, decision trees, and rule-based systems are natively explainable. The accuracy gap between these approaches and deep learning has narrowed for many structured data applications, and the legal and operational benefits of interpretability can outweigh marginal accuracy differences.
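For contrast, a minimal sketch of the inherently interpretable route, where the fitted coefficients are the explanation and no post-hoc approximation is involved (the feature names are illustrative assumptions):

```python
# A sketch of an inherently interpretable alternative: a logistic
# regression whose fitted coefficients serve directly as the
# explanation. Feature names are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=500, n_features=4, random_state=0)
X = StandardScaler().fit_transform(X)  # put coefficients on comparable scales
feature_names = ["income", "debt_ratio", "tenure", "utilization"]

model = LogisticRegression().fit(X, y)

# No surrogate model is needed: each coefficient states how a
# one-standard-deviation change in the feature shifts the log-odds.
for name, coef in zip(feature_names, model.coef_[0]):
    print(f"{name}: {coef:+.3f} log-odds per standard deviation")
```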

Delivering explanations to affected individuals

GDPR Article 22 restricts solely automated decisions that produce legal or similarly significant effects, and Articles 13–15 require controllers to provide "meaningful information about the logic involved"; the FCRA likewise requires that consumers receive the principal reasons for adverse credit decisions. What counts as "meaningful" has not been precisely defined by courts or regulators, but the emerging standard is that the explanation must be specific enough for the individual to understand and potentially contest the decision.

Build explanation delivery into your decision workflow, not as an afterthought. For credit decisions, this means FCRA-compliant adverse action notices that identify the principal reasons for the decision in plain language. For employment decisions, it means documentation that can be provided if a candidate requests an explanation. For customer decisions, it means a customer service process that can retrieve and communicate the relevant information.
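One way to build that delivery step is a fixed mapping from the strongest adverse attributions to plain-language reason codes that populate the notice. A minimal sketch; the reason-code table, feature names, and principal_reasons helper are all hypothetical:

```python
# A sketch of turning per-decision attributions into FCRA-style
# "principal reasons." The reason-code table, feature names, and
# principal_reasons helper are all hypothetical.
REASON_CODES = {
    "debt_ratio": "Debt-to-income ratio is too high",
    "utilization": "Proportion of revolving credit in use is too high",
    "tenure": "Length of credit history is insufficient",
    "income": "Income is insufficient for the amount requested",
}

def principal_reasons(attributions: dict[str, float], top_n: int = 4) -> list[str]:
    """Translate the features that pushed hardest toward denial
    (the most negative attributions) into plain-language reasons."""
    adverse = sorted(attributions.items(), key=lambda kv: kv[1])[:top_n]
    return [REASON_CODES.get(name, name) for name, score in adverse if score < 0]

# Example: attributions for one denied application, e.g. the SHAP
# values computed earlier, keyed by feature name.
print(principal_reasons(
    {"debt_ratio": -0.82, "utilization": -0.47, "income": -0.31, "tenure": 0.12}
))
# -> ['Debt-to-income ratio is too high',
#     'Proportion of revolving credit in use is too high',
#     'Income is insufficient for the amount requested']
```

Keeping the reason-code table separate from the model means compliance and legal teams can review and revise the customer-facing language without touching the model itself.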