Implementation Layer
AI Governance Controls
Operational controls for real-world enterprise AI systems — organized by domain, mapped to regulations, with maturity levels and implementation guidance.
Human Oversight
Review gates, approval workflows, and override mechanisms for AI decisions.
6 controls
Agentic AI (AGT)
Goal constraints, action boundaries, and escalation paths for autonomous AI agents.
8 controls
Security (SEC)
Adversarial input defense, prompt injection protection, and model access controls.
5 controls
Audit & Logging (ALC)
Immutable records of AI decisions, inputs, outputs, and model versions.
5 controls
Change Management (CHM)
Model release governance, version rollback, and change approval workflows.
5 controls
Data Governance (DGC)
Training data provenance, privacy controls, and data retention policies.
5 controls
Monitoring & Drift (MON)
Performance drift detection, anomaly alerting, and operational dashboards.
5 controls
Safety & Reliability (SAF)
Graceful degradation, fail-safe defaults, and reliability under adversarial inputs.
5 controls
Incident Response (IRC)
Containment, investigation, and remediation procedures for AI system failures.
5 controls
Procurement (PRC)
Third-party AI vendor due diligence, contractual obligations, and offboarding.
5 controls
Human Oversight
4 controls
AI System Risk Classification
Assign every AI system a risk tier that determines the oversight requirements, review frequency, and documentation standards applied to it.
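As a minimal sketch, tier assignment can be reduced to a small screening rubric. The criteria, tier names, and the oversight notes in the comments below are illustrative assumptions, not requirements drawn from any specific regulation:

```python
# Hypothetical rubric: tier is driven by impact, reversibility, and autonomy.
# Criteria and thresholds are illustrative placeholders.
def assign_risk_tier(affects_individuals: bool,
                     reversible: bool,
                     autonomous: bool) -> str:
    """Map three screening questions to a risk tier."""
    if affects_individuals and not reversible:
        return "high"      # e.g. mandatory human approval, quarterly review
    if affects_individuals or autonomous:
        return "medium"    # e.g. periodic review, standard documentation
    return "low"           # e.g. lightweight documentation only
```

A real rubric would carry more dimensions (legal exposure, data sensitivity, scale), but even a three-question screen forces every system into a tier with defined obligations.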
Human Approval Gate for Consequential AI Decisions
Require a qualified human to review and approve AI-generated recommendations before they produce irreversible or high-stakes outcomes.
Reviewer Competency Requirements
Define minimum competency requirements for humans who review, approve, or override AI-generated outputs in high-risk contexts.
Override and Escalation Procedures
Document the procedures, authority levels, and logging requirements when humans reject, modify, or escalate AI-generated decisions.
Agentic AI
8 controls
Agent Permission Boundaries
Apply least-privilege principles to AI agents by explicitly defining and enforcing the tools, APIs, data sources, and actions each agent is authorized to access.
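A minimal sketch of enforcement: a default-deny allowlist consulted on every tool invocation. The agent and tool names below are hypothetical:

```python
# Least-privilege tool gating: an agent may only invoke tools on its
# per-agent allowlist. Agent and tool names are illustrative.
ALLOWED_TOOLS = {
    "support-agent": {"search_kb", "draft_reply"},
    "billing-agent": {"lookup_invoice"},
}

def invoke_tool(agent: str, tool: str, call):
    allowed = ALLOWED_TOOLS.get(agent, set())   # default deny: unknown agents get nothing
    if tool not in allowed:
        raise PermissionError(f"{agent} is not authorized to call {tool}")
    return call()
```

Default deny is the key design choice: a newly registered agent can call nothing until a grant is made explicitly.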
Agent Prompt Injection Defense
Protect AI agents from prompt injection attacks — adversarial instructions embedded in external content that hijack agent behavior.
Agent Memory and Context Governance
Define policies governing what AI agents store in memory or persistent context, how long it is retained, who can access it, and under what conditions it is deleted.
Multi-Agent Trust Hierarchy
Define explicit rules for which agents can instruct, invoke, or delegate authority to other agents in multi-agent systems.
Human Approval Gate for Irreversible Agent Actions
Require explicit human approval before an AI agent takes actions that are difficult or impossible to reverse, such as sending communications, modifying records, executing transactions, or deleting data.
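One way to implement this gate, sketched minimally: irreversible action types are routed to an approval queue instead of executing. The action names and ticket fields are assumptions:

```python
# Sketch: actions flagged irreversible are queued for human approval
# rather than executed immediately. Action names are illustrative.
IRREVERSIBLE = {"send_email", "delete_record", "execute_payment"}

pending_approvals = []

def request_action(action: str, payload: dict) -> dict:
    if action in IRREVERSIBLE:
        ticket = {"action": action, "payload": payload, "status": "pending"}
        pending_approvals.append(ticket)   # a human must approve before execution
        return ticket
    return {"action": action, "payload": payload, "status": "executed"}
```

Reversible actions flow through untouched, so the gate adds latency only where the cost of a mistake justifies it.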
Agent Action Audit Trail
Log every tool call, decision step, memory read/write, and external interaction made by an AI agent so that the full action sequence can be reconstructed after the fact.
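As a minimal sketch, each record can carry the hash of its predecessor so that later tampering or deletion breaks the chain. The field names are illustrative, and a production trail would write to an append-only store rather than an in-memory list:

```python
import hashlib
import json
import time

audit_log = []   # in production: an append-only, access-controlled store

def log_action(agent: str, step: str, detail: dict) -> dict:
    """Append one hash-chained record per agent action."""
    prev = audit_log[-1]["hash"] if audit_log else ""
    record = {"ts": time.time(), "agent": agent, "step": step,
              "detail": detail, "prev": prev}
    # Chain each record to the previous one so gaps are detectable.
    record["hash"] = hashlib.sha256(
        (prev + json.dumps(detail, sort_keys=True)).encode()).hexdigest()
    audit_log.append(record)
    return record
```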
Agent Scope and Task Boundaries
Define and enforce the boundaries of what an AI agent is permitted to do, preventing it from expanding its activity beyond its intended purpose.
Agent Environment Isolation
Run AI agents in isolated execution environments that limit their ability to access host systems, network resources, or data beyond what their task requires.
Security
5 controls
Prompt Injection Prevention
Detect and block adversarial inputs designed to override AI system instructions, extract sensitive information, or cause the model to behave in unintended ways.
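A first line of defense can be a heuristic screen, sketched below with a few illustrative deny patterns. Pattern lists alone are easy to evade, so real deployments typically layer them with model-based classifiers and output-side checks:

```python
import re

# Illustrative deny patterns only; not an exhaustive or evasion-proof list.
SUSPECT_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"reveal your (system )?prompt",
    r"you are now",
]

def screen_input(text: str) -> bool:
    """Return True if the input looks like an injection attempt."""
    lowered = text.lower()
    return any(re.search(p, lowered) for p in SUSPECT_PATTERNS)
```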
AI System Access Controls
Apply authentication, authorization, and role-based access controls to AI systems, APIs, and the sensitive data they process.
Sensitive Data Handling in AI Pipelines
Prevent personally identifiable information, credentials, health data, and other sensitive content from entering AI models, prompts, or logs inappropriately.
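A minimal redaction pass, run before text reaches a model or a log, might look like the sketch below. The two patterns (emails and US-style SSNs) are illustrative; real pipelines need far broader coverage (names, phone numbers, credentials, health terms):

```python
import re

# Redaction rules applied in order; coverage here is deliberately minimal.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
]

def redact(text: str) -> str:
    for pattern, token in REDACTIONS:
        text = pattern.sub(token, text)
    return text
```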
AI API Credential Management
Securely manage, rotate, and audit API keys and credentials used to access AI services and model providers.
Adversarial Robustness Testing
Systematically test AI systems against adversarial inputs, edge cases, and known attack techniques before deployment and on a recurring basis.
Audit & Logging
2 controls
AI Decision Logging
Record AI system inputs, outputs, model version, confidence scores, and contextual metadata for every decision that affects individuals or business outcomes.
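A minimal record structure for this control might look like the sketch below; the field names are illustrative rather than drawn from any standard:

```python
import dataclasses
import time
from typing import Any

@dataclasses.dataclass
class DecisionRecord:
    model_version: str
    inputs: dict
    output: Any
    confidence: float
    context: dict                     # e.g. channel, request id, operator
    ts: float = dataclasses.field(default_factory=time.time)

def record_decision(log: list, **fields) -> DecisionRecord:
    rec = DecisionRecord(**fields)
    log.append(rec)
    return rec
```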
High-Risk AI Audit Trail
Maintain a comprehensive, tamper-evident audit trail for AI systems operating in regulated domains, covering the full lifecycle from input to decision to outcome.
Change Management
3 controls
AI Model Version Control
Track model versions, configurations, prompts, and deployment history so that any production state can be reproduced and compared.
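One lightweight way to make production states comparable is to fingerprint the full deployment configuration, as sketched below; the configuration keys are illustrative:

```python
import hashlib
import json

def deployment_fingerprint(state: dict) -> str:
    """Hash a canonical serialization of the deployed configuration."""
    canonical = json.dumps(state, sort_keys=True)   # key order must not matter
    return hashlib.sha256(canonical.encode()).hexdigest()
```

Two deployments match if and only if their fingerprints match, which turns "can we reproduce what was live on that date?" into a simple comparison.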
Model Deployment Gate Process
Require formal approval before new model versions, prompt changes, or configuration updates are deployed to production AI systems.
Model Rollback and Emergency Shutdown
Maintain tested procedures to rapidly revert an AI system to a prior version or disable it entirely in response to detected failures or safety events.
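A minimal sketch of the mechanism: requests route through a mutable version pointer, so rollback is a pointer swap and shutdown is a kill switch. Version names and registry fields are illustrative:

```python
# Routing registry; in production this would live in a shared config store.
registry = {"current": "m-2.0", "previous": "m-1.9", "enabled": True}

def route_request() -> str:
    if not registry["enabled"]:
        raise RuntimeError("AI system disabled by emergency shutdown")
    return registry["current"]

def rollback() -> None:
    registry["current"], registry["previous"] = (
        registry["previous"], registry["current"])

def emergency_shutdown() -> None:
    registry["enabled"] = False
```

The control's "tested" requirement matters as much as the mechanism: a pointer swap that has never been rehearsed is not a rollback procedure.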
Data Governance
2 controls
PII Handling in AI Systems
Establish controls governing how personally identifiable information is handled when it flows through AI inputs, outputs, training pipelines, and logs.
AI Output Retention and Deletion
Define and enforce retention schedules and deletion procedures for AI-generated content, decisions, and the personal data contained within them.
Monitoring & Drift
1 control
Safety & Reliability
5 controls
Hallucination Detection and Mitigation
Implement controls to detect, reduce, and manage AI-generated factual errors and fabrications before they reach end users or inform decisions.
AI Output Validation
Validate AI-generated outputs against defined quality, safety, and format criteria before they are presented to users or used in downstream processes.
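A minimal validation gate for structured output might look like the sketch below; the expected schema (an `answer` string plus a `confidence` in [0, 1]) is hypothetical:

```python
def validate_output(output: dict) -> list:
    """Return a list of validation errors; an empty list means pass."""
    errors = []
    answer = output.get("answer")
    if not isinstance(answer, str) or not answer.strip():
        errors.append("answer must be a non-empty string")
    conf = output.get("confidence")
    if not isinstance(conf, (int, float)) or not 0 <= conf <= 1:
        errors.append("confidence must be a number in [0, 1]")
    return errors
```

Returning the full error list, rather than failing on the first problem, gives downstream handlers enough detail to decide between repair, retry, and rejection.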
AI Graceful Degradation
Define and implement fallback behavior for AI systems when they are unavailable, underperforming, or producing outputs below acceptable quality thresholds.
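One generic shape for this control is a wrapper that falls back when the primary call fails or its output misses a quality bar. All three callables below are placeholders supplied by the caller:

```python
def with_fallback(primary, quality_ok, fallback):
    """Wrap `primary` so failures and low-quality results hit `fallback`."""
    def call(*args, **kwargs):
        try:
            result = primary(*args, **kwargs)
        except Exception:
            return fallback(*args, **kwargs)   # primary unavailable
        if quality_ok(result):
            return result
        return fallback(*args, **kwargs)       # primary below quality bar
    return call
```

The fallback itself should be deterministic and boring (a canned response, a rules-based path, a human handoff), so degradation never depends on the component that just failed.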
AI Reliability Testing
Systematically test AI systems for consistency, repeatability, edge-case handling, and behavior under load before deployment and on a recurring basis.
Harmful Content Filtering
Apply input and output filtering to prevent AI systems from generating or acting on harmful, toxic, illegal, or policy-violating content.
Incident Response
5 controls
AI Incident Classification
Define a taxonomy for AI incidents that categorizes events by type and severity, determining the appropriate response urgency and notification requirements.
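As a minimal sketch, severity can be derived from harm and exposure. The severity names, thresholds, and response notes below are assumptions, not a standard taxonomy:

```python
# Illustrative taxonomy: severity as a function of harm and exposure.
def classify_incident(harm_occurred: bool, users_affected: int) -> str:
    if harm_occurred:
        return "sev1"    # e.g. immediate escalation and notification review
    if users_affected > 100:
        return "sev2"    # e.g. same-day response
    return "sev3"        # e.g. tracked and reviewed in batch
```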
AI Incident Response Playbook
Document step-by-step procedures for identifying, containing, investigating, and resolving AI system incidents, including role assignments and escalation paths.
AI Harm Notification Procedures
Define procedures for notifying regulators, affected individuals, and other required parties when an AI system causes or contributes to harm.
AI Post-Incident Review
Conduct a structured review after every significant AI incident to identify root causes, contributing factors, and systemic improvements.
AI Incident Log and Tracking
Maintain a centralized, structured log of all AI incidents, near-misses, and governance concerns, accessible to the AI governance function.
