35 Playbooks
How to run AI governance at scale
A practical guide for compliance officers, general counsel, GRC teams, and risk managers navigating the operational realities of enterprise AI governance. These are the questions every compliance team needs to answer.
How do we inventory and classify AI systems by risk level?
A framework for cataloging all AI tools in use, including shadow AI, and assessing risk based on data sensitivity, decision impact, and regulatory exposure.
Who owns AI governance within the organization?
Defining accountability for AI governance, whether in Legal, IT, Risk, or a dedicated AI ethics function, and establishing clear escalation paths.
How do we ensure third-party AI vendors meet our standards?
Extending vendor due diligence to cover model transparency, data handling, bias testing, and contractual liability for AI outputs.
What are our obligations under emerging AI regulations?
Tracking the EU AI Act, U.S. executive orders, SEC guidance, and sector-specific rules to understand what AI compliance actually requires.
How do we detect and mitigate algorithmic bias?
Testing protocols and audit trails for AI used in hiring, lending, or customer decisions, built to defend against discrimination claims.
What does meaningful human oversight look like for high-risk AI decisions?
Defining what "in the loop" means in practice, what level of review satisfies regulatory standards, and how to document it.
How do we handle AI-generated content and hallucinations?
Defining responsibility when AI produces inaccurate outputs used in contracts, reports, or customer communications, and the controls that prevent harm.
How should employees be trained on acceptable AI use?
Covering what tools are approved, what data can be input, and how to handle AI-assisted work product in regulated industries.
How do we maintain data privacy compliance when using AI?
Addressing training data sourcing, data minimization, cross-border transfers, and the right to explanation under GDPR and CCPA.
How do we document AI decision-making for auditability?
Meeting regulatory and litigation demands for explainability through logging, version control, and model cards.
How do we ensure human-in-the-loop review is actually effective?
Moving beyond checkbox approval to build oversight protocols that counter automation bias and give reviewers genuine authority to override AI decisions.
Is our training data compliant with global privacy laws?
Ensuring you have the right to use data for model training, identifying PII in datasets, and navigating GDPR and EU AI Act data obligations.
How do we measure and mitigate algorithmic bias?
Standardized metrics for testing whether a model unfairly discriminates against protected groups, and processes for remediation when bias is found.
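One widely used screening metric is the disparate impact ratio, which compares favorable-outcome rates between groups. The sketch below is illustrative only: the sample outcomes are invented, and the 0.8 threshold comes from the EEOC four-fifths rule of thumb, not from any single statute.

```python
# Illustrative fairness metric for a binary decision model.
# All outcome data below is hypothetical sample data.

def selection_rate(outcomes):
    """Fraction of favorable (positive) decisions in a group."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(protected_group, reference_group):
    """Ratio of selection rates; the EEOC 'four-fifths rule'
    flags ratios below 0.8 as potential adverse impact."""
    return selection_rate(protected_group) / selection_rate(reference_group)

# Hypothetical loan-approval outcomes (1 = approved, 0 = denied)
protected = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]   # 30% approval rate
reference = [1, 1, 0, 1, 1, 0, 1, 1, 0, 1]   # 70% approval rate

ratio = disparate_impact_ratio(protected, reference)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.43, below the 0.8 threshold
```

A ratio this far below 0.8 would typically trigger the remediation process the playbook describes, though passing the four-fifths rule alone does not establish that a model is fair.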
What is our explainability standard for AI decisions?
Defining how much transparency is required at different risk levels, and building the technical and procedural infrastructure to deliver it.
How are we managing third-party AI risks?
Governing the use of external AI APIs and vendor-embedded models, including data handling, documentation requirements, and ongoing monitoring.
Do we have a complete AI inventory?
Building and maintaining a centralized registry of every AI tool in use, including shadow AI discovered through procurement, network, and employee channels.
How does the EU AI Act affect our global operations?
Understanding the Brussels Effect on non-EU organizations, and evaluating whether to adopt the EU risk-based framework as a global internal standard.
What is our process for model drift monitoring?
Defining ownership and cadence for ongoing monitoring of deployed AI models to detect performance degradation, behavioral shifts, and emerging bias after deployment.
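One common drift signal is the Population Stability Index (PSI), which compares a model's baseline score distribution to a recent one. The implementation and the thresholds in the docstring are a conventional rule of thumb, sketched here for illustration rather than prescribed by any regulator.

```python
import math

def psi(expected, actual, bins=10):
    """Population Stability Index between a baseline (expected)
    score distribution and a recent (actual) one. Common rule of
    thumb: < 0.1 stable, 0.1 to 0.25 worth watching, > 0.25 drifted."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0  # guard against a zero-width range

    def frac(data, i):
        # Share of observations in bin i; the last bin is closed
        # on the right so the maximum value is counted.
        in_bin = sum(
            1 for x in data
            if lo + i * width <= x < lo + (i + 1) * width
            or (i == bins - 1 and x == hi)
        )
        return max(in_bin / len(data), 1e-6)  # avoid log(0)

    return sum(
        (frac(actual, i) - frac(expected, i))
        * math.log(frac(actual, i) / frac(expected, i))
        for i in range(bins)
    )
```

Running this on each scheduled monitoring cycle, and alerting when PSI crosses the agreed threshold, gives the ownership-and-cadence process a concrete, auditable trigger.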
How do we handle intellectual property and copyright in AI?
Navigating ownership of AI-generated content, copyright exposure from training data, and the contractual protections needed for AI-assisted work product.
Is our AI red-teaming rigorous enough?
Defining pass/fail criteria for adversarial testing of high-risk AI systems before deployment, covering toxicity, data leakage, jailbreaking, and misuse scenarios.
How do we govern AI agents that take autonomous actions?
Agentic AI systems that can browse the web, execute code, send messages, and interact with external services require governance controls that traditional policy frameworks were never designed to handle.
How do we apply a three lines of defense model to AI risk?
The three lines of defense model translates directly to AI governance, with first-line business ownership, second-line risk oversight, and third-line independent assurance each requiring AI-specific adaptations.
How do we build and maintain an AI model registry?
A model registry is the operational backbone of AI governance: it tracks what models are in production, who owns them, what data they were trained on, what their risk classification is, and when they were last reviewed.
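The fields the playbook lists map naturally onto a structured record. This sketch assumes a simple in-code representation; real registries usually live in a database or GRC platform, and the field names and review window here are illustrative.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum
from typing import Optional

class RiskTier(Enum):
    MINIMAL = "minimal"
    LIMITED = "limited"
    HIGH = "high"

@dataclass
class ModelRecord:
    """One registry entry, mirroring the items the playbook lists:
    production status, ownership, training-data lineage, risk
    classification, and review recency."""
    model_id: str
    owner: str
    in_production: bool
    training_data_sources: list = field(default_factory=list)
    risk_tier: RiskTier = RiskTier.MINIMAL
    last_reviewed: Optional[date] = None

    def review_overdue(self, today: date, max_age_days: int = 365) -> bool:
        """Flag models never reviewed or reviewed too long ago."""
        if self.last_reviewed is None:
            return True
        return (today - self.last_reviewed).days > max_age_days
```

A periodic sweep over all records calling `review_overdue` is one way to operationalize the "when were they last reviewed" question instead of leaving it to memory.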
What does audit-ready AI documentation look like in practice?
Organizations facing regulatory scrutiny, board inquiries, or litigation need to produce evidence that AI systems were built, deployed, and monitored responsibly. Audit readiness is not a documentation exercise. It is an evidence-management discipline built into the AI lifecycle.
How do we comply with the EU AI Act?
A step-by-step compliance guide covering risk tier classification, high-risk system obligations, GPAI model requirements, and the phased enforcement timeline.
What does AI governance look like for a company with under 50 employees?
A lean governance framework for startups that covers the essentials without the overhead — focused on what actually protects you at an early stage.
How do we perform an AI risk assessment?
A practical methodology for assessing AI system risk before deployment, covering data sensitivity, decision impact, regulatory exposure, and the controls each risk level requires.
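The three factors named above can be combined into a simple scoring rubric. The 1-to-3 scale, the tier thresholds, and the rule that high decision impact alone forces the high tier are all illustrative assumptions for this sketch, not requirements from any framework.

```python
# Hypothetical rubric: each factor is rated 1 (low) to 3 (high).
FACTORS = ("data_sensitivity", "decision_impact", "regulatory_exposure")

def risk_tier(scores: dict) -> str:
    """Map factor scores to a risk tier. Thresholds are illustrative."""
    total = sum(scores[f] for f in FACTORS)
    if total >= 8 or scores["decision_impact"] == 3:
        return "high"      # e.g. hiring, lending, or safety decisions
    if total >= 5:
        return "limited"
    return "minimal"

# A tool making consequential decisions lands in the high tier
# regardless of its other scores.
print(risk_tier({"data_sensitivity": 1, "decision_impact": 3,
                 "regulatory_exposure": 1}))  # high
```

In practice the rubric matters less than applying it consistently before deployment and recording the scores, so each system's tier and required controls are defensible later.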
What AI regulations apply to a US-based SaaS company?
Mapping the federal, state, and international AI regulatory requirements that apply to US SaaS companies offering AI features, based on use case and customer location.
How do we build an AI governance program from scratch?
A sequenced guide to standing up an AI governance program — from initial inventory through ongoing operations — for organizations that are starting with nothing.
What AI documentation do we actually need?
A practical guide to which AI documentation is legally required, which is best practice, and which is unnecessary overhead — organized by risk tier.
How do we audit an AI system for compliance?
A methodology for conducting compliance audits of individual AI systems — what to review, what evidence to collect, and how to write findings that actually drive remediation.
How do we manage third-party AI vendors safely throughout the vendor lifecycle?
End-to-end guidance for managing external AI vendor relationships from initial due diligence through ongoing monitoring, covering data handling, contractual protections, and what to do when vendors change their models.
What do we do when an AI system causes harm or fails?
A structured incident response process for AI failures — from initial detection through containment, root cause investigation, regulatory notification, and prevention.
How do we prepare for AI regulation over the next 12 months?
A forward-looking compliance planning guide: identifying what regulations become enforceable in your jurisdictions over the next year, assessing your current gaps, and building a funded remediation roadmap.
