Implementation Layer
AI Governance Controls
Operational controls for real-world enterprise AI systems — organized by domain, mapped to regulations, with maturity levels and implementation guidance.
Human Oversight
Review gates, approval workflows, and override mechanisms for AI decisions.
6 controls
Agentic AI (AGT)
Goal constraints, action boundaries, and escalation paths for autonomous AI agents.
8 controls
Security (SEC)
Adversarial input defense, prompt injection protection, and model access controls.
5 controls
Audit & Logging (ALC)
Immutable records of AI decisions, inputs, outputs, and model versions.
5 controls
Change Management (CHM)
Model release governance, version rollback, and change approval workflows.
5 controls
Data Governance (DGC)
Training data provenance, privacy controls, and data retention policies.
5 controls
Monitoring & Drift (MON)
Performance drift detection, anomaly alerting, and operational dashboards.
5 controls
Safety & Reliability (SAF)
Graceful degradation, fail-safe defaults, and reliability under adversarial inputs.
5 controls
Incident Response (IRC)
Containment, investigation, and remediation procedures for AI system failures.
5 controls
Procurement (PRC)
Third-party AI vendor due diligence, contractual obligations, and offboarding.
5 controls
54 controls across 10 domains
Human Oversight
6 controls
AI System Risk Classification
Assign every AI system a risk tier that determines the oversight requirements, review frequency, and documentation standards applied to it.
Human Approval Gate for Consequential AI Decisions
Require a qualified human to review and approve AI-generated recommendations before they produce irreversible or high-stakes outcomes.
AI Output Review Workflow
Define a structured, documented process for reviewing AI outputs before they are acted upon or distributed.
Automation Bias Prevention
Implement measures to detect and counteract the tendency for human reviewers to defer to AI recommendations without adequate critical evaluation.
Reviewer Competency Requirements
Define minimum competency requirements for humans who review, approve, or override AI-generated outputs in high-risk contexts.
Override and Escalation Procedures
Document the procedures, authority levels, and logging requirements when humans reject, modify, or escalate AI-generated decisions.
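The approval-gate, override, and logging requirements above can be sketched in a few lines. This is a minimal illustration, not a reference implementation: the `Decision` dataclass, the action names, and the status values are all hypothetical.

```python
from dataclasses import dataclass, field

# Hypothetical terminal statuses for each reviewer action.
STATUS_FOR_ACTION = {"approve": "approved", "reject": "rejected", "escalate": "escalated"}

@dataclass
class Decision:
    """An AI recommendation held at the gate until a human acts on it."""
    decision_id: str
    recommendation: str
    risk_tier: str                      # e.g. "low" / "high"; tier names are illustrative
    status: str = "pending"
    audit: list = field(default_factory=list)

def review(decision: Decision, reviewer: str, action: str, reason: str) -> Decision:
    """Apply a reviewer action and append it to the decision's audit record."""
    if action not in STATUS_FOR_ACTION:
        raise ValueError(f"unknown reviewer action: {action!r}")
    if not reason:
        raise ValueError("a documented reason is required for every action")
    decision.status = STATUS_FOR_ACTION[action]
    decision.audit.append({"reviewer": reviewer, "action": action, "reason": reason})
    return decision
```

Requiring a non-empty `reason` on every action is one simple way to make overrides auditable rather than silent.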
Agentic AI
8 controls
Agent Permission Boundaries
Apply least-privilege principles to AI agents by explicitly defining and enforcing the tools, APIs, data sources, and actions each agent is authorized to access.
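A least-privilege tool allowlist can be enforced at the dispatch layer, as in this sketch. The agent names, tool names, and allowlist shape are assumptions for illustration; the key property is default-deny for any (agent, tool) pair not explicitly granted.

```python
# Hypothetical allowlist mapping agent names to the tools they may call.
AGENT_TOOL_ALLOWLIST = {
    "research_agent": {"web_search", "read_document"},
    "billing_agent": {"read_invoice"},
}

class ToolPermissionError(Exception):
    """Raised when an agent attempts a tool outside its allowlist."""

def invoke_tool(agent: str, tool: str, tools: dict, allowlist=AGENT_TOOL_ALLOWLIST):
    """Dispatch a tool call only if the (agent, tool) pair is explicitly allowed."""
    if tool not in allowlist.get(agent, set()):   # default-deny: unknown agents get nothing
        raise ToolPermissionError(f"{agent} is not authorized to call {tool}")
    return tools[tool]()
```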
Agent Prompt Injection Defense
Protect AI agents from prompt injection attacks — adversarial instructions embedded in external content that hijack agent behavior.
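A first-pass heuristic for screening untrusted external content might look like the following. The patterns are naive and illustrative only; real defenses layer classifiers, content provenance, and privilege separation on top of pattern matching.

```python
import re

# Illustrative injection signatures; not an exhaustive or authoritative list.
INJECTION_PATTERNS = [
    r"ignore (all |any )?(previous|prior) instructions",
    r"disregard (your|the) (system )?prompt",
    r"reveal (your|the) (system )?prompt",
]

def scan_external_content(text: str) -> list[str]:
    """Return the injection patterns that match retrieved/untrusted content."""
    lowered = text.lower()
    return [p for p in INJECTION_PATTERNS if re.search(p, lowered)]
```

Flagged content can then be quarantined, stripped, or handled in a lower-privilege context rather than passed verbatim into the agent's instructions.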
Agent Memory and Context Governance
Define policies governing what AI agents store in memory or persistent context, how long it is retained, who can access it, and under what conditions it is deleted.
Multi-Agent Trust Hierarchy
Define explicit rules for which agents can instruct, invoke, or delegate authority to other agents in multi-agent systems.
Human Approval Gate for Irreversible Agent Actions
Require explicit human approval before an AI agent takes actions that are difficult or impossible to reverse, such as sending communications, modifying records, executing transactions, or deleting data.
Agent Action Audit Trail
Log every tool call, decision step, memory read/write, and external interaction made by an AI agent so that the full action sequence can be reconstructed after the fact.
Agent Scope and Task Boundaries
Define and enforce the boundaries of what an AI agent is permitted to do, preventing it from expanding its activity beyond its intended purpose.
Agent Environment Isolation
Run AI agents in isolated execution environments that limit their ability to access host systems, network resources, or data beyond what their task requires.
Security
5 controls
Prompt Injection Prevention
Detect and block adversarial inputs designed to override AI system instructions, extract sensitive information, or cause the model to behave in unintended ways.
AI System Access Controls
Apply authentication, authorization, and role-based access controls to AI systems, APIs, and the sensitive data they process.
Sensitive Data Handling in AI Pipelines
Prevent personally identifiable information, credentials, health data, and other sensitive content from entering AI models, prompts, or logs inappropriately.
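One common building block for this control is redaction at the pipeline boundary, sketched below. The regexes cover only two easy cases (emails and US-SSN-shaped strings) and are illustrative; production systems typically rely on dedicated PII-detection services rather than regexes alone.

```python
import re

# Illustrative patterns only; coverage here is deliberately narrow.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace matched PII with typed placeholders before it enters a prompt or log."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED:{label}]", text)
    return text
```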
AI API Credential Management
Securely manage, rotate, and audit API keys and credentials used to access AI services and model providers.
Adversarial Robustness Testing
Systematically test AI systems against adversarial inputs, edge cases, and known attack techniques before deployment and on a recurring basis.
Audit & Logging
5 controls
AI Decision Logging
Record AI system inputs, outputs, model version, confidence scores, and contextual metadata for every decision that affects individuals or business outcomes.
High-Risk AI Audit Trail
Maintain a comprehensive, tamper-evident audit trail for AI systems operating in regulated domains, covering the full lifecycle from input to decision to outcome.
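Tamper evidence is often achieved by hash-chaining log entries, as in this sketch: each entry's hash covers the previous entry's hash, so any in-place edit breaks verification from that point forward. The record fields are hypothetical.

```python
import hashlib
import json

class TamperEvidentTrail:
    """Append-only log in which each entry commits to its predecessor's hash."""
    GENESIS = "0" * 64

    def __init__(self):
        self.entries = []

    def append(self, record: dict) -> str:
        prev = self.entries[-1]["hash"] if self.entries else self.GENESIS
        payload = json.dumps(record, sort_keys=True)
        digest = hashlib.sha256((prev + payload).encode()).hexdigest()
        self.entries.append({"record": record, "prev": prev, "hash": digest})
        return digest

    def verify(self) -> bool:
        """Recompute the chain; any edited or reordered entry fails the check."""
        prev = self.GENESIS
        for entry in self.entries:
            payload = json.dumps(entry["record"], sort_keys=True)
            expected = hashlib.sha256((prev + payload).encode()).hexdigest()
            if entry["prev"] != prev or entry["hash"] != expected:
                return False
            prev = entry["hash"]
        return True
```

In practice the chain head would also be periodically anchored somewhere the logging system cannot rewrite (e.g. a separate store), since a party who controls the whole log can re-hash it end to end.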
AI Log Retention Policy
Define how long AI decision logs, audit trails, and system logs are retained, in what format, and the procedures for their eventual deletion.
AI Explainability Documentation
Document how AI systems reach decisions in sufficient detail that affected individuals, reviewers, and regulators can understand and challenge outcomes.
Regulatory Audit Readiness
Maintain AI documentation, logs, and governance records in a state that can be produced efficiently in response to a regulatory inquiry or audit.
Change Management
5 controls
AI Model Version Control
Track model versions, configurations, prompts, and deployment history so that any production state can be reproduced and compared.
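A minimal registry that makes every production state reproducible and reversible might look like this; the `Release` fields and rollback semantics (pop the current release, keep earlier history) are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Release:
    """A reproducible production state: model, prompt, and approval identifiers."""
    version: str
    model_id: str
    prompt_hash: str
    approved_by: str

class Registry:
    def __init__(self):
        self.history: list[Release] = []   # full deployment history, oldest first

    @property
    def current(self) -> Release:
        return self.history[-1]

    def deploy(self, release: Release) -> None:
        self.history.append(release)

    def rollback(self) -> Release:
        """Revert to the previous release; fails if there is nothing to revert to."""
        if len(self.history) < 2:
            raise RuntimeError("no earlier release to roll back to")
        self.history.pop()
        return self.current
```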
Model Deployment Gate Process
Require formal approval before new model versions, prompt changes, or configuration updates are deployed to production AI systems.
Model Rollback and Emergency Shutdown
Maintain tested procedures to rapidly revert an AI system to a prior version or disable it entirely in response to detected failures or safety events.
AI Model Change Documentation
Record what changed between model versions, why the change was made, what testing was performed, and who approved the deployment.
Model Deprecation Procedure
Define the process for retiring AI models from production, including notification, data handling, audit trail preservation, and transition planning.
Data Governance
5 controls
Training Data Provenance
Track and document the origin, composition, licensing, and preprocessing history of data used to train or fine-tune AI models.
PII Handling in AI Systems
Establish controls governing how personally identifiable information is handled when it flows through AI inputs, outputs, training pipelines, and logs.
Data Minimization for AI Systems
Ensure AI systems only process the data strictly necessary for their defined purpose, avoiding unnecessary collection, retention, or use of personal information.
AI Output Retention and Deletion
Define and enforce retention schedules and deletion procedures for AI-generated content, decisions, and the personal data contained within them.
Cross-Border Data Transfer Controls for AI
Govern the international transfer of personal data through AI systems, including data sent to AI API providers, training pipelines, and cloud infrastructure in other jurisdictions.
Monitoring & Drift
5 controls
AI Performance Baseline
Establish documented, quantified performance baselines for production AI systems against which ongoing performance can be compared.
Model Drift Detection
Monitor production AI systems for data drift, concept drift, and output distribution shifts that indicate degraded or changed model behavior.
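One widely used distribution-shift metric is the population stability index (PSI), sketched here over pre-binned proportions. The often-quoted alert threshold of roughly 0.25 is a rule of thumb, not a standard, and should be calibrated per system.

```python
import math

def population_stability_index(expected: list[float], actual: list[float],
                               eps: float = 1e-6) -> float:
    """PSI between a baseline and a production distribution (same bins, as proportions).

    eps guards against log(0) when a bin is empty in one distribution.
    """
    return sum((a - e) * math.log((a + eps) / (e + eps))
               for e, a in zip(expected, actual))
```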
AI Bias and Fairness Monitoring
Continuously monitor AI system outputs for discriminatory patterns across protected demographic attributes in production.
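A basic fairness signal is the selection-rate disparity across groups, often checked against the "four-fifths" heuristic (lowest group rate at least 80% of the highest). The sketch below assumes binary outcomes; it is one coarse indicator, not a complete fairness assessment.

```python
from collections import defaultdict

def selection_rates(outcomes: list[tuple[str, bool]]) -> dict[str, float]:
    """Per-group positive-outcome rate from (group, selected) pairs."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, selected in outcomes:
        totals[group] += 1
        positives[group] += selected
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates: dict[str, float]) -> float:
    """Lowest group rate over highest; values below 0.8 trip the four-fifths heuristic."""
    return min(rates.values()) / max(rates.values())
```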
AI Output Anomaly Detection
Automatically detect unusual, unexpected, or potentially harmful AI outputs in production for investigation and response.
Continuous Model Evaluation
Run ongoing evaluation pipelines against held-out test sets and curated adversarial examples to continuously measure model performance in production.
Safety & Reliability
5 controls
Hallucination Detection and Mitigation
Implement controls to detect, reduce, and manage AI-generated factual errors and fabrications before they reach end users or inform decisions.
AI Output Validation
Validate AI-generated outputs against defined quality, safety, and format criteria before they are presented to users or used in downstream processes.
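Structural validation of a model's output against an expected shape can be as simple as the following; the field names and type spec are hypothetical, and real systems often use JSON Schema or similar instead of hand-rolled checks.

```python
def validate_output(obj: dict, spec: dict[str, type]) -> list[str]:
    """Return a list of violations: missing keys, wrong types, or unexpected keys."""
    errors = []
    for key, expected_type in spec.items():
        if key not in obj:
            errors.append(f"missing field: {key}")
        elif not isinstance(obj[key], expected_type):
            errors.append(f"{key}: expected {expected_type.__name__}, "
                          f"got {type(obj[key]).__name__}")
    errors.extend(f"unexpected field: {k}" for k in obj if k not in spec)
    return errors
```

An empty list means the output passed; anything else can block the downstream step or route the output to review.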
AI Graceful Degradation
Define and implement fallback behavior for AI systems when they are unavailable, underperforming, or producing outputs below acceptable quality thresholds.
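The fallback behavior described above is often implemented as a wrapper around the AI call, as in this sketch. The quality check and fallback are caller-supplied assumptions; a real system would also log the degradation event rather than swallowing the exception silently.

```python
def with_fallback(primary, fallback, acceptable):
    """Run the primary AI path; fall back when it errors or fails the quality check."""
    try:
        result = primary()
        if acceptable(result):
            return result
    except Exception:
        pass    # illustrative only: production code should record the failure here
    return fallback()
```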
AI Reliability Testing
Systematically test AI systems for consistency, repeatability, edge-case handling, and behavior under load before deployment and on a recurring basis.
Harmful Content Filtering
Apply input and output filtering to prevent AI systems from generating or acting on harmful, toxic, illegal, or policy-violating content.
Incident Response
5 controls
AI Incident Classification
Define a taxonomy for AI incidents that categorizes events by type and severity, determining the appropriate response urgency and notification requirements.
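Such a taxonomy is often encoded as a severity matrix mapping incident type and impact to a response tier. The types, tiers, and response-time SLAs below are entirely illustrative placeholders, not recommended values.

```python
# Hypothetical severity matrix: (incident type, individuals affected) -> tier.
SEVERITY_MATRIX = {
    ("safety_harm", True): "critical",
    ("safety_harm", False): "high",
    ("data_exposure", True): "critical",
    ("data_exposure", False): "high",
    ("quality_degradation", True): "high",
    ("quality_degradation", False): "medium",
}

RESPONSE_HOURS = {"critical": 1, "high": 4, "medium": 24}   # placeholder SLAs

def classify_incident(incident_type: str, individuals_affected: bool) -> dict:
    """Look up severity and the implied response window, defaulting to 'medium'."""
    severity = SEVERITY_MATRIX.get((incident_type, individuals_affected), "medium")
    return {"severity": severity, "respond_within_hours": RESPONSE_HOURS[severity]}
```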
AI Incident Response Playbook
Document step-by-step procedures for identifying, containing, investigating, and resolving AI system incidents, including role assignments and escalation paths.
AI Harm Notification Procedures
Define procedures for notifying regulators, affected individuals, and other required parties when an AI system causes or contributes to harm.
AI Post-Incident Review
Conduct a structured review after every significant AI incident to identify root causes, contributing factors, and systemic improvements.
AI Incident Log and Tracking
Maintain a centralized, structured log of all AI incidents, near-misses, and governance concerns, accessible to the AI governance function.
Procurement
5 controls
AI Vendor Due Diligence
Assess AI vendors against security, governance, and compliance criteria before procurement and at defined intervals during the vendor relationship.
AI Contractual Requirements
Define minimum contractual provisions that must be present in agreements with AI vendors, covering data handling, transparency, audit rights, and incident notification.
Third-Party AI Model Evaluation
Evaluate third-party AI models against defined performance, safety, and bias criteria before deploying them in enterprise workflows.
Vendor AI Incident Notification Requirements
Require AI vendors to notify the organization of incidents affecting their AI systems within defined timeframes and with specified information.
AI Procurement Risk Assessment
Assess and document the risks of procuring an AI system or service before approval, including technical, legal, privacy, and operational risks.
