AI Governance Institute

aigovernance.com — Global AI Regulation & Framework Directory


Question 10 of 24

How do we document AI decision-making for auditability?

Meeting regulatory and litigation demands for explainability through logging, version control, and model cards.

Explainability is a legal requirement, not just a best practice

The ability to explain AI decisions is increasingly required by law. The GDPR (Article 22, read together with Articles 13-15) entitles individuals to meaningful information about the logic involved in automated decisions. The EU AI Act requires technical documentation and logging for high-risk systems. The FCRA requires adverse action notices stating the principal reasons for adverse credit decisions, a requirement that extends to AI-driven credit models.

Beyond regulatory requirements, explainability is a litigation necessity. In employment discrimination cases, fair lending examinations, and consumer protection investigations, regulators and opposing counsel will ask why the system made a specific decision about a specific individual. "The model is a black box" is not an acceptable answer.

What to log and retain

For each AI-assisted decision, log: the inputs provided to the model, the version of the model used, the output or recommendation produced, the confidence score or probability if available, the human reviewer involved (if any), and the final decision taken. Timestamps and unique identifiers should allow you to reconstruct the full decision record from any of its components.
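The fields above can be sketched as a single audit record. This is a minimal illustration, assuming a Python service; every field and identifier name here is illustrative, not a prescribed schema.

```python
import uuid
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
from typing import Optional

@dataclass
class DecisionRecord:
    """One auditable record per AI-assisted decision (field names illustrative)."""
    model_inputs: dict              # inputs provided to the model
    model_version: str              # exact model version/configuration used
    model_output: str               # output or recommendation produced
    confidence: Optional[float]     # probability or score, if the model exposes one
    human_reviewer: Optional[str]   # identity of the human reviewer, if any
    final_decision: str             # decision actually taken
    # Unique ID and timestamp let you reconstruct the record from any component.
    decision_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

record = DecisionRecord(
    model_inputs={"applicant_income": 52000, "requested_amount": 15000},
    model_version="credit-risk-v3.2.1",
    model_output="decline",
    confidence=0.87,
    human_reviewer="analyst-114",
    final_decision="decline",
)
audit_log_entry = asdict(record)  # serialize for append-only audit storage
```

In practice the serialized record would be written to append-only storage so that entries cannot be silently altered after the fact.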

Version control your models and their configurations. A model update that changes behavior is a material change that should be documented, tested, and logged. If a regulatory inquiry focuses on decisions made in a specific time period, you need to be able to identify which version of the model was running, with what parameters, and on what data.
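If each decision record carries the model version (as described above), answering "which versions were running in this period?" becomes a simple query over the audit log. A minimal sketch, with hypothetical log entries:

```python
from datetime import datetime

# Hypothetical decision-log entries; in practice these come from your audit store.
log = [
    {"timestamp": "2024-03-01T09:00:00", "model_version": "credit-risk-v3.1.0"},
    {"timestamp": "2024-03-15T14:30:00", "model_version": "credit-risk-v3.2.0"},
    {"timestamp": "2024-04-02T11:15:00", "model_version": "credit-risk-v3.2.0"},
]

def versions_in_window(entries, start, end):
    """Return the set of model versions that produced decisions in [start, end]."""
    return {
        e["model_version"]
        for e in entries
        if start <= datetime.fromisoformat(e["timestamp"]) <= end
    }

versions = versions_in_window(log, datetime(2024, 3, 10), datetime(2024, 4, 30))
# -> {'credit-risk-v3.2.0'}
```

The same lookup, joined to a model registry keyed by version string, recovers the parameters and training data lineage for each version in scope.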

Model cards and system documentation

Model cards, structured documents describing a model's intended use, performance characteristics, limitations, and bias evaluation results, are becoming a standard tool for AI governance documentation. The format was introduced by Google researchers in the 2019 paper "Model Cards for Model Reporting" (Mitchell et al.) and has been widely adopted as a way of communicating what a model does and does not do.

Maintain model cards for all internally developed or fine-tuned AI systems, and request them from vendors for externally sourced models. A model card that has not been updated since deployment is a liability rather than an asset. Build model card maintenance into your model change management process so documentation stays current.
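One way to keep cards current is to treat them as structured data and check completeness automatically in the change management pipeline. A minimal sketch; the sections and values below are illustrative, loosely following the sections proposed by Mitchell et al.:

```python
from datetime import date

# Illustrative model card content; not a real model or evaluation.
model_card = {
    "model_name": "credit-risk-v3.2.1",
    "intended_use": "Pre-screening of consumer credit applications; "
                    "not for final decisions without human review.",
    "out_of_scope_uses": ["employment screening", "insurance pricing"],
    "performance": {"auc": 0.81, "evaluation_data": "holdout-2024Q1"},
    "limitations": ["Trained on US applicants; performance elsewhere unvalidated."],
    "bias_evaluation": {"demographic_parity_gap": 0.03},
    "last_updated": date(2024, 4, 2),
}

REQUIRED_SECTIONS = {"intended_use", "performance", "limitations",
                     "bias_evaluation", "last_updated"}

def card_is_complete(card):
    """Flag cards missing required sections; a partial or stale card is a liability."""
    return REQUIRED_SECTIONS.issubset(card)
```

A change-management hook can call a check like this (plus a staleness check against `last_updated`) whenever a model version is promoted, so documentation updates gate the deployment rather than trailing it.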

For the most sensitive applications, consider whether you need interpretable models, such as logistic regression or decision trees, rather than high-accuracy black-box models. The accuracy trade-off of interpretable models has narrowed considerably. In contexts where explainability is legally required and the stakes are high, interpretability may be worth more than marginal accuracy gains.
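The explainability benefit of an interpretable model is concrete: with a logistic regression, each feature's contribution to the decision can be read directly off the coefficients, giving the "principal reasons" an adverse action notice requires. A toy sketch with entirely hypothetical coefficients:

```python
import math

# Hypothetical coefficients from a fitted logistic regression credit model.
coefficients = {"debt_to_income": -2.1, "years_employed": 0.4, "prior_defaults": -1.5}
intercept = 1.0

def score(applicant):
    """Return approval probability plus each feature's contribution to the log-odds."""
    contributions = {f: coefficients[f] * applicant[f] for f in coefficients}
    log_odds = intercept + sum(contributions.values())
    return 1 / (1 + math.exp(-log_odds)), contributions

prob, reasons = score({"debt_to_income": 0.6, "years_employed": 2, "prior_defaults": 1})

# Principal reasons for an adverse decision fall directly out of the model:
# the features with the most negative contributions to the log-odds.
principal_reasons = sorted(reasons, key=reasons.get)[:2]
# -> ['prior_defaults', 'debt_to_income']
```

A black-box model can only approximate this kind of per-decision attribution with post-hoc techniques, which is why interpretability carries extra weight where explanations are legally required.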