High-Risk AI Audit Trail
Maintain a comprehensive, tamper-evident audit trail for AI systems operating in regulated domains, covering the full lifecycle from input to decision to outcome.
Objective
Provide regulators, auditors, and affected individuals with verifiable evidence of how AI-driven decisions were made and on what basis.
Maturity Levels
Initial
No audit trail exists for high-risk AI decisions.
Developing
Partial audit records exist but are not tamper-evident and do not cover the full decision lifecycle.
Defined
A complete audit trail is maintained covering inputs, model selection, outputs, human review steps, and final outcomes.
Managed
Audit trail integrity is verified periodically; completeness gaps are tracked and remediated.
Optimizing
Audit trail generation is automated and tested; records meet documented regulatory requirements that have been validated by Legal.
Evidence Requirements
What an auditor or assessor would expect to see for this control.
- Complete audit trail record for a sample of decisions, confirming all required components are present (AI output, input context, human review, outcome)
- WORM or equivalent tamper-evident storage configuration evidence with access control documentation
- Integrity verification records showing periodic hash checks were run and passed
- Quarterly reconstruction exercise records demonstrating sampled decisions can be fully reconstructed from audit records
- Regulatory requirement mapping confirming audit trail design satisfies each applicable legal obligation
Implementation Notes
Key steps
- Scope 'high-risk' using your risk classification (HOC-001) — audit trail requirements should be proportionate to system risk tier.
- Include the human review record alongside the AI output: who reviewed, when, what they saw, and what decision they made.
- Store audit records separately from operational logs with stricter access controls and longer retention periods.
- Test audit trail completeness before regulatory exams, not during — run a dry-run reconstruction of a sample of past decisions.
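The separation of audit records from their integrity evidence (per the steps above) can be sketched as follows. This is a minimal in-memory illustration, not a production implementation: `write_audit_record`, the list-based stores, and the field names are all hypothetical, and a real deployment would write to WORM storage and a separately access-controlled integrity log.

```python
import hashlib
import json
from datetime import datetime, timezone

def write_audit_record(record: dict, audit_store: list, integrity_log: list) -> str:
    """Append an audit record and log its SHA-256 hash separately.

    audit_store and integrity_log are illustrative in-memory lists;
    in production these would be WORM storage and a separate,
    access-controlled integrity log.
    """
    record = {**record, "recorded_at": datetime.now(timezone.utc).isoformat()}
    # Canonical serialization (sorted keys) so the hash is reproducible
    # when integrity is re-verified later.
    payload = json.dumps(record, sort_keys=True)
    digest = hashlib.sha256(payload.encode("utf-8")).hexdigest()
    audit_store.append(payload)    # tamper-evident record store
    integrity_log.append(digest)   # hash kept apart from the record itself
    return digest
```

Keeping the hash in a separate log means tampering with a stored record is detectable even if the record store itself is compromised, which is the point of the periodic hash checks listed under Evidence Requirements.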
Example Implementation
EU-regulated financial firm using AI for automated trading signals reviewed by human traders
High-Risk AI Audit Trail — Trading Signal System
Audit record components (per decision):
- AI output record — signal type, asset, confidence, model version, timestamp
- Input context record — market data snapshot hash, prompt version, retrieved context references
- Human review record — trader ID, review timestamp, decision (execute / modify / reject), rationale code
- Outcome record — whether trade was executed, execution details, outcome (appended post-execution)
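The four components above could be represented as explicit record types so that completeness checks are mechanical rather than manual. The class and field names below are illustrative assumptions that mirror the component list; they are not a prescribed schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AIOutputRecord:
    signal_type: str
    asset: str
    confidence: float
    model_version: str
    timestamp: str

@dataclass
class InputContextRecord:
    market_data_hash: str      # hash of the market data snapshot
    prompt_version: str
    context_refs: list         # references to retrieved context

@dataclass
class HumanReviewRecord:
    trader_id: str
    review_timestamp: str
    decision: str              # "execute" / "modify" / "reject"
    rationale_code: str

@dataclass
class OutcomeRecord:
    executed: bool
    execution_details: Optional[str] = None  # appended post-execution
```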
Tamper-evidence: All records written to WORM (write-once read-many) storage; SHA-256 hash of each record stored in a separate integrity log
Access controls: Read access limited to Compliance, Legal, and Regulators (on request); no write or delete access for any operational role.
Retention: 10 years from decision date (EU AI Act Art. 12; MiFID II record-keeping requirement)
Pre-audit test: Compliance team runs quarterly reconstruction exercise — selects 10 random past decisions and verifies all four record components are present and consistent
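The quarterly reconstruction exercise could be automated along these lines. This is a sketch under stated assumptions: the dict-based stores keyed by `(decision_id, component)` and the function name `reconstruct_decision` are hypothetical, standing in for whatever query interface the real audit store exposes.

```python
import hashlib
import json

# The four required components per decision, from the record list above.
REQUIRED_COMPONENTS = {"ai_output", "input_context", "human_review", "outcome"}

def reconstruct_decision(decision_id: str, audit_store: dict, integrity_log: dict) -> list:
    """Check one sampled decision; return findings (empty list = reconstructs cleanly).

    audit_store and integrity_log are hypothetical dicts keyed by
    (decision_id, component): audit_store holds record dicts, integrity_log
    holds the SHA-256 hex digests recorded at write time.
    """
    findings = []
    for component in sorted(REQUIRED_COMPONENTS):
        key = (decision_id, component)
        record = audit_store.get(key)
        if record is None:
            findings.append(f"{decision_id}: missing {component} record")
            continue
        # Recompute the hash from canonical serialization and compare
        # against the separately stored digest.
        digest = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
        if integrity_log.get(key) != digest:
            findings.append(f"{decision_id}: integrity mismatch on {component}")
    return findings
```

Running this over 10 randomly sampled decision IDs each quarter and archiving the findings would produce exactly the reconstruction-exercise records listed under Evidence Requirements.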
Control Details
- Control ID
- ALC-002
- Domain
- Audit & Logging
- Typical owner
- Compliance / AI Governance Team
- Implementation effort
- High effort
- Agent-relevant
- Yes
