Implementation Layer
AI Governance Controls
Operational controls for real-world enterprise AI systems — organized by domain, mapped to regulations, with maturity levels and implementation guidance.
Human Oversight
Review gates, approval workflows, and override mechanisms for AI decisions.
6 controls
AGT · Agentic AI
Goal constraints, action boundaries, and escalation paths for autonomous AI agents.
8 controls
SEC · Security
Adversarial input defense, prompt injection protection, and model access controls.
5 controls
ALC · Audit & Logging
Immutable records of AI decisions, inputs, outputs, and model versions.
5 controls
CHM · Change Management
Model release governance, version rollback, and change approval workflows.
5 controls
DGC · Data Governance
Training data provenance, privacy controls, and data retention policies.
5 controls
MON · Monitoring & Drift
Performance drift detection, anomaly alerting, and operational dashboards.
5 controls
SAF · Safety & Reliability
Graceful degradation, fail-safe defaults, and reliability under adversarial inputs.
5 controls
IRC · Incident Response
Containment, investigation, and remediation procedures for AI system failures.
5 controls
PRC · Procurement
Third-party AI vendor due diligence, contractual obligations, and offboarding.
5 controls
Human Oversight (4 controls)
AI System Risk Classification
Assign every AI system a risk tier that determines the oversight requirements, review frequency, and documentation standards applied to it.
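One way to make tier assignment repeatable is to derive it mechanically from a few risk signals. The sketch below is a minimal illustration: the tier names, the `AISystem` fields, and the scoring rule are all hypothetical, and a real program should derive its tiers from its own regulatory mapping (for example, the EU AI Act's risk categories).

```python
from dataclasses import dataclass

# Hypothetical tier ladder, ordered from least to most oversight.
TIERS = ["minimal", "limited", "high", "critical"]


@dataclass
class AISystem:
    name: str
    affects_individuals: bool  # decisions touch real people
    automated_action: bool     # acts without a human in the loop
    irreversible: bool         # outcomes are hard to undo


def risk_tier(system: AISystem) -> str:
    """Map simple risk signals to a tier; more signals mean stricter oversight."""
    score = sum([system.affects_individuals,
                 system.automated_action,
                 system.irreversible])
    return TIERS[score]
```

A loan-scoring system that affects individuals and acts automatically, but whose decisions can be appealed, would land in the "high" tier under this toy rubric.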
Human Approval Gate for Consequential AI Decisions
Require a qualified human to review and approve AI-generated recommendations before they produce irreversible or high-stakes outcomes.
Automation Bias Prevention
Implement measures to detect and counteract the tendency for human reviewers to defer to AI recommendations without adequate critical evaluation.
Reviewer Competency Requirements
Define minimum competency requirements for humans who review, approve, or override AI-generated outputs in high-risk contexts.
Agentic AI (5 controls)
Agent Prompt Injection Defense
Protect AI agents from prompt injection attacks — adversarial instructions embedded in external content that hijack agent behavior.
Agent Memory and Context Governance
Define policies governing what AI agents store in memory or persistent context, how long it is retained, who can access it, and under what conditions it is deleted.
Human Approval Gate for Irreversible Agent Actions
Require explicit human approval before an AI agent takes actions that are difficult or impossible to reverse, such as sending communications, modifying records, executing transactions, or deleting data.
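In code, this control often takes the form of a gate wrapped around the agent's tool dispatcher. The sketch below assumes a denylist of irreversible tool names and an `approved` callback that blocks until a human signs off; both names are illustrative, not a real framework's API.

```python
from typing import Any, Callable

# Illustrative names; a real deployment would enumerate its own tools.
IRREVERSIBLE_TOOLS = {"send_email", "delete_record", "execute_payment"}


class ApprovalRequired(Exception):
    """Raised when an irreversible action is attempted without sign-off."""


def gated_call(tool_name: str, tool_fn: Callable[..., Any],
               approved: Callable[[str, dict], bool], **kwargs: Any) -> Any:
    """Run a tool call, requiring explicit human approval for irreversible ones."""
    if tool_name in IRREVERSIBLE_TOOLS and not approved(tool_name, kwargs):
        raise ApprovalRequired(f"human approval required for {tool_name}")
    return tool_fn(**kwargs)
```

Routing every tool call through one gate keeps the approval policy in a single auditable place instead of scattered across individual tools.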
Agent Action Audit Trail
Log every tool call, decision step, memory read/write, and external interaction made by an AI agent so that the full action sequence can be reconstructed after the fact.
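A common way to make such a trail tamper-evident is to hash-chain the entries, so that altering any logged event invalidates everything after it. This in-memory sketch illustrates the idea; a production system would persist entries to append-only storage and anchor the chain externally.

```python
import hashlib
import json


class AgentAuditLog:
    """Append-only, hash-chained log of agent events (tool calls, memory ops)."""

    def __init__(self) -> None:
        self.entries: list[dict] = []
        self._prev = "0" * 64  # genesis hash

    def record(self, event_type: str, payload: dict) -> dict:
        body = json.dumps({"type": event_type, "payload": payload,
                           "prev": self._prev}, sort_keys=True)
        digest = hashlib.sha256(body.encode()).hexdigest()
        entry = {"type": event_type, "payload": payload,
                 "prev": self._prev, "hash": digest}
        self.entries.append(entry)
        self._prev = digest
        return entry

    def verify(self) -> bool:
        """Recompute the chain; any tampered entry breaks verification."""
        prev = "0" * 64
        for e in self.entries:
            body = json.dumps({"type": e["type"], "payload": e["payload"],
                               "prev": prev}, sort_keys=True)
            if e["prev"] != prev or hashlib.sha256(body.encode()).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True
```

Reconstructing an incident then reduces to replaying the verified entries in order.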
Agent Scope and Task Boundaries
Define and enforce the boundaries of what an AI agent is permitted to do, preventing it from expanding its activity beyond its intended purpose.
Security (3 controls)
Prompt Injection Prevention
Detect and block adversarial inputs designed to override AI system instructions, extract sensitive information, or cause the model to behave in unintended ways.
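A first, cheap layer is pattern screening of untrusted input before it reaches the model. The patterns below are illustrative only; real defenses layer heuristics with trained classifiers and strict separation of system instructions from untrusted content, since regex screens are easy to evade.

```python
import re

# Illustrative patterns only; not a complete or evasion-resistant list.
INJECTION_PATTERNS = [
    r"ignore (all |any )?(previous|prior|above) instructions",
    r"disregard your (system )?prompt",
    r"reveal (your )?(system prompt|hidden instructions)",
]


def screen_input(text: str) -> list[str]:
    """Return the injection patterns matched by untrusted input (empty = no flags)."""
    lowered = text.lower()
    return [p for p in INJECTION_PATTERNS if re.search(p, lowered)]
```

Flagged inputs can be blocked outright or routed to stricter handling, depending on the system's risk tier.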
AI System Access Controls
Apply authentication, authorization, and role-based access controls to AI systems, APIs, and the sensitive data they process.
Sensitive Data Handling in AI Pipelines
Prevent personally identifiable information, credentials, health data, and other sensitive content from entering AI models, prompts, or logs inappropriately.
Audit & Logging (3 controls)
AI Decision Logging
Record AI system inputs, outputs, model version, confidence scores, and contextual metadata for every decision that affects individuals or business outcomes.
AI Explainability Documentation
Document how AI systems reach decisions in sufficient detail that affected individuals, reviewers, and regulators can understand and challenge outcomes.
Regulatory Audit Readiness
Maintain AI documentation, logs, and governance records in a state that can be produced efficiently in response to a regulatory inquiry or audit.
Change Management (2 controls)
Model Deployment Gate Process
Require formal approval before new model versions, prompt changes, or configuration updates are deployed to production AI systems.
Model Rollback and Emergency Shutdown
Maintain tested procedures to rapidly revert an AI system to a prior version or disable it entirely in response to detected failures or safety events.
Data Governance (3 controls)
PII Handling in AI Systems
Establish controls governing how personally identifiable information is handled when it flows through AI inputs, outputs, training pipelines, and logs.
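One enforcement point is a redaction pass on text before it enters prompts or logs. The sketch below uses a few regex patterns as an illustration; real pipelines should use vetted PII detectors, since regexes miss names, addresses, and context-dependent identifiers.

```python
import re

# Illustrative patterns; not exhaustive and US-centric by design of the example.
PII_PATTERNS = {
    "email": r"[\w.+-]+@[\w-]+\.[\w.-]+",
    "ssn": r"\b\d{3}-\d{2}-\d{4}\b",
    "phone": r"\b\d{3}[-.]\d{3}[-.]\d{4}\b",
}


def redact(text: str) -> str:
    """Replace matched PII spans with typed placeholders before logging or prompting."""
    for label, pattern in PII_PATTERNS.items():
        text = re.sub(pattern, f"[{label.upper()}]", text)
    return text
```

Typed placeholders preserve enough structure for debugging while keeping the raw values out of model context and logs.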
Data Minimization for AI Systems
Ensure AI systems only process the data strictly necessary for their defined purpose, avoiding unnecessary collection, retention, or use of personal information.
AI Output Retention and Deletion
Define and enforce retention schedules and deletion procedures for AI-generated content, decisions, and the personal data contained within them.
Monitoring & Drift (3 controls)
Model Drift Detection
Monitor production AI systems for data drift, concept drift, and output distribution shifts that indicate degraded or changed model behavior.
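A standard metric for input-distribution drift is the Population Stability Index (PSI), computed between a baseline sample and a production sample of the same feature. The binning scheme and thresholds below follow a common rule of thumb rather than any specific product's implementation.

```python
import math


def psi(expected: list[float], actual: list[float], bins: int = 10) -> float:
    """Population Stability Index between a baseline and a production sample.
    Rule of thumb: < 0.1 stable, 0.1-0.25 moderate drift, > 0.25 major drift."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0  # guard against degenerate samples

    def fractions(sample: list[float]) -> list[float]:
        counts = [0] * bins
        for x in sample:
            idx = min(int((x - lo) / width), bins - 1)
            counts[idx] += 1
        # Small floor avoids log(0) for empty bins.
        return [max(c / len(sample), 1e-6) for c in counts]

    e, a = fractions(expected), fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))
```

Running this per feature on a schedule, and alerting when PSI crosses the moderate-drift threshold, gives a simple first line of drift monitoring.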
AI Output Anomaly Detection
Automatically detect unusual, unexpected, or potentially harmful AI outputs in production for investigation and response.
Continuous Model Evaluation
Run ongoing evaluation pipelines against held-out test sets and curated adversarial examples to continuously measure model performance in production.
Safety & Reliability (4 controls)
AI Output Validation
Validate AI-generated outputs against defined quality, safety, and format criteria before they are presented to users or used in downstream processes.
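In practice this is a checklist applied to every generated answer before release. The criteria below (non-empty, length cap, banned terms) are placeholder examples; real validators would also check format contracts such as JSON schemas and domain-specific safety rules.

```python
from dataclasses import dataclass, field


@dataclass
class ValidationResult:
    ok: bool
    failures: list[str] = field(default_factory=list)


def validate_output(text: str, max_chars: int = 2000,
                    banned_terms: tuple = ("password", "api key")) -> ValidationResult:
    """Check a generated answer against illustrative quality/safety criteria."""
    failures = []
    if not text.strip():
        failures.append("empty output")
    if len(text) > max_chars:
        failures.append("exceeds length limit")
    lowered = text.lower()
    failures += [f"contains banned term: {t}" for t in banned_terms if t in lowered]
    return ValidationResult(ok=not failures, failures=failures)
```

Answers that fail validation can be regenerated, routed to a fallback, or escalated to human review rather than shown to the user.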
AI Graceful Degradation
Define and implement fallback behavior for AI systems when they are unavailable, underperforming, or producing outputs below acceptable quality thresholds.
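The fallback logic can be centralized in a wrapper around the model call. In this sketch, `primary`, `fallback`, and `quality_check` are hypothetical callables standing in for a production model, a degraded path (cached answer, simpler model, or an honest "unavailable" message), and an output check.

```python
from typing import Callable


def with_fallback(primary: Callable[[str], str],
                  fallback: Callable[[str], str],
                  quality_check: Callable[[str], bool]) -> Callable[[str], str]:
    """Serve the primary model's answer only when it is available and passes
    a quality check; otherwise degrade to the fallback path."""
    def handler(query: str) -> str:
        try:
            answer = primary(query)
        except Exception:
            return fallback(query)  # primary unavailable: degrade, don't fail
        return answer if quality_check(answer) else fallback(query)
    return handler
```

Keeping both degradation triggers (unavailability and low quality) in one wrapper makes the fallback behavior testable independently of the models themselves.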
AI Reliability Testing
Systematically test AI systems for consistency, repeatability, edge-case handling, and behavior under load before deployment and on a recurring basis.
Harmful Content Filtering
Apply input and output filtering to prevent AI systems from generating or acting on harmful, toxic, illegal, or policy-violating content.
Incident Response (2 controls)
AI Incident Response Playbook
Document step-by-step procedures for identifying, containing, investigating, and resolving AI system incidents, including role assignments and escalation paths.
AI Harm Notification Procedures
Define procedures for notifying regulators, affected individuals, and other required parties when an AI system causes or contributes to harm.
Procurement (3 controls)
AI Vendor Due Diligence
Assess AI vendors against security, governance, and compliance criteria before procurement and at defined intervals during the vendor relationship.
AI Contractual Requirements
Define minimum contractual provisions that must be present in agreements with AI vendors, covering data handling, transparency, audit rights, and incident notification.
AI Procurement Risk Assessment
Assess and document the risks of procuring an AI system or service before approval, including technical, legal, privacy, and operational risks.
