Question 1 of 34
How do we inventory and classify AI systems by risk level?
Published by AI Governance Institute · Practical Governance for Enterprise AI
A framework for cataloging all AI tools in use, including shadow AI, and assessing risk based on data sensitivity, decision impact, and regulatory exposure.
If you do only three things, do these:
1. Run a vendor contract audit, an employee survey, AND a network scan in parallel; shadow AI won't surface from any single method.
2. Score every system on three axes: data sensitivity, decision impact, and regulatory exposure. Map these scores to a risk tier before you try to govern anything.
3. Assign a named owner and a review date to each catalogued system. Without ownership, the inventory will be out of date within six months.
The Situation
Who this is for: GRC teams, risk managers, or compliance leads launching an AI governance program
When you need this: Before any formal compliance work begins, or when a regulator asks what AI systems your organization operates
The Decision
Which AI systems need immediate governance attention, and what level of oversight does each one require?
The Steps
1. Pull all vendor contracts and flag any that include AI features, model access, or "intelligent" functionality.
2. Send an employee survey offering amnesty for undisclosed tool use, with a clear path to get tools approved.
3. Ask IT security to monitor outbound traffic to known AI API endpoints (OpenAI, Anthropic, Google, Cohere).
4. Consolidate the results into a working inventory spreadsheet.
5. Score each system: 1–3 on data sensitivity, 1–3 on decision impact, 1–3 on regulatory exposure.
6. Assign a risk tier based on the aggregate score, then assign a named owner and a next review date to each row.
The Artifacts
- AI inventory spreadsheet template (fields: name, owner, vendor, use case, data processed, risk tier, regulatory frameworks, last reviewed); a schema sketch follows this list
- Risk classification scoring matrix (data sensitivity × decision impact × regulatory exposure)
- Employee survey template with amnesty language
- Vendor AI capability checklist for procurement reviews
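To make the spreadsheet template concrete, here is a minimal sketch of one inventory row as a Python dataclass. The field names come from the template above; the type choices, the tier labels, and the `review_overdue` helper are illustrative assumptions, not part of the template.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AISystemRecord:
    """One row of the AI inventory; field names follow the template above."""
    name: str
    owner: str                        # named individual accountable for the system
    vendor: str
    use_case: str
    data_processed: list[str]         # e.g. ["personal data", "financial records"]
    risk_tier: str                    # assumed labels: "minimal" | "limited" | "high"
    regulatory_frameworks: list[str]  # e.g. ["EU AI Act", "GDPR"]
    last_reviewed: date

    def review_overdue(self, today: date, cadence_days: int = 90) -> bool:
        """True if the quarterly review window has lapsed (illustrative helper)."""
        return (today - self.last_reviewed).days > cadence_days
```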
The Output
A complete AI inventory with a risk tier assigned to every system, each owned by a named individual, with a quarterly review schedule in place.
Start with discovery, not policy
Most organizations underestimate how many AI systems are already in use. Before you can govern AI, you need to know what you have. That means a structured discovery process that goes beyond IT-approved tools to capture shadow AI: models accessed through personal accounts, vendor-embedded AI features, and department-level subscriptions that bypassed procurement.
Effective discovery combines three inputs: a vendor contract review to identify AI capabilities in existing software, an employee survey asking teams what AI tools they use in their workflows, and a network-level audit to surface unauthorized API calls to large language model providers. No single method captures everything. Run all three.
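As an illustration of the network-level audit, the sketch below counts proxy-log hits against hostnames of well-known LLM API endpoints. The hostname list and the `proxy.log` path are assumptions; verify current provider endpoints with your security team before acting on the results.

```python
from collections import Counter

# Illustrative, incomplete hostname list for major LLM providers;
# confirm current endpoints before relying on this.
AI_API_HOSTS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",  # Google Gemini API
    "api.cohere.com",
}

def flag_ai_traffic(log_lines):
    """Count outbound requests to known AI API hosts in raw proxy log lines."""
    hits = Counter()
    for line in log_lines:
        for host in AI_API_HOSTS:
            if host in line:
                hits[host] += 1
    return hits

# Hypothetical usage: feed an exported proxy log, then trace hits to source machines.
with open("proxy.log") as f:
    for host, count in flag_ai_traffic(f).most_common():
        print(f"{count:6d}  {host}")
```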
Build a risk classification matrix
Once you have an inventory, apply a consistent risk classification. The EU AI Act provides a useful four-tier model: unacceptable risk (prohibited), high risk (strict requirements), limited risk (transparency obligations), and minimal risk (no specific obligations). For organizations not yet subject to the EU AI Act, this framework still provides a practical starting point.
Classify each system across three dimensions. First, data sensitivity: does the system process personal data, health information, financial records, or other regulated categories? Second, decision impact: does the system influence consequential decisions about individuals, such as hiring, credit, benefits, or access to services? Third, regulatory exposure: is the system operating in a sector with specific AI rules, such as financial services, healthcare, or employment?
Map these three dimensions to produce a risk tier for each system. High scores across all three categories indicate high-risk systems requiring the most rigorous oversight. The classification should be reviewed annually and whenever a system is updated or its use case changes.
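One way to turn the three 1–3 scores into a tier is a simple aggregate with an escalation rule, as in the sketch below. The thresholds and the decision-impact override are assumptions to tune to your own risk appetite, and prohibited (unacceptable-risk) uses should be screened separately regardless of score.

```python
def risk_tier(data_sensitivity: int, decision_impact: int,
              regulatory_exposure: int) -> str:
    """Map three 1-3 scores to a risk tier; thresholds are illustrative."""
    for score in (data_sensitivity, decision_impact, regulatory_exposure):
        if score not in (1, 2, 3):
            raise ValueError("each score must be 1, 2, or 3")
    total = data_sensitivity + decision_impact + regulatory_exposure
    if total >= 7 or decision_impact == 3:  # consequential decisions escalate
        return "high"
    if total >= 5:
        return "limited"
    return "minimal"

# Example: an HR screening tool handling personal data in a regulated sector.
print(risk_tier(data_sensitivity=3, decision_impact=3, regulatory_exposure=3))  # high
```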
Maintain a living inventory
An AI inventory is not a one-time exercise. Assign ownership to each catalogued system, document its risk tier, and establish a review cadence. New AI deployments should trigger a classification review before they go live, not after. Procurement and vendor management processes should include a checkpoint that asks whether a new tool incorporates AI capabilities.
Store your inventory in a system that allows you to filter by risk tier, business unit, data type, and regulatory framework. When regulators ask what AI systems you operate and how you oversee them, a well-maintained inventory is the foundation of your answer.
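For example, if the inventory is exported to CSV with snake_case versions of the template's fields plus a business_unit column, a regulator-ready filter might look like the sketch below; the file name and the specific column values are assumptions.

```python
import pandas as pd

# Hypothetical CSV export of the inventory described above.
inventory = pd.read_csv("ai_inventory.csv")

# High-risk HR systems that touch personal data and fall under the EU AI Act.
subset = inventory[
    (inventory["risk_tier"] == "high")
    & (inventory["business_unit"] == "HR")
    & (inventory["data_processed"].str.contains("personal data", case=False))
    & (inventory["regulatory_frameworks"].str.contains("EU AI Act"))
]
print(subset[["name", "owner", "last_reviewed"]])
```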