Question 1 of 24
How do we inventory and classify AI systems by risk level?
A framework for cataloging all AI tools in use, including shadow AI, and assessing risk based on data sensitivity, decision impact, and regulatory exposure.
Start with discovery, not policy
Most organizations underestimate how many AI systems are already in use. Before you can govern AI, you need to know what you have. That means a structured discovery process that goes beyond IT-approved tools to capture shadow AI: models accessed through personal accounts, vendor-embedded AI features, and department-level subscriptions that bypassed procurement.
Effective discovery combines three inputs: a vendor contract review to identify AI capabilities in existing software, an employee survey asking teams what AI tools they use in their workflows, and a network-level audit to surface unauthorized API calls to large language model providers. No single method captures everything. Run all three.
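To make the merge step concrete, here is a minimal Python sketch of combining the three inputs into one deduplicated candidate list; the function and field names are illustrative assumptions, and real tool-name matching is messier than the lowercase normalization used here.

```python
def merge_discovery_inputs(vendor_review, employee_survey, network_audit):
    """Union the three discovery inputs, keyed on normalized tool name.

    Each input is an iterable of raw tool-name strings. Returns a dict
    mapping normalized name -> set of sources that reported it, so you
    can see which tools only one method surfaced.
    """
    inventory = {}
    for source, names in [
        ("vendor_review", vendor_review),
        ("employee_survey", employee_survey),
        ("network_audit", network_audit),
    ]:
        for raw in names:
            key = raw.strip().lower()  # crude normalization for the sketch
            inventory.setdefault(key, set()).add(source)
    return inventory

# A tool seen only in the network audit never went through procurement
# or self-reporting -- a strong shadow-AI signal.
merged = merge_discovery_inputs(
    vendor_review=["Copilot"],
    employee_survey=["ChatGPT", "Copilot"],
    network_audit=["chatgpt", "Claude"],
)
shadow_candidates = {name for name, sources in merged.items()
                     if sources == {"network_audit"}}
print(shadow_candidates)  # {'claude'}
```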
Build a risk classification matrix
Once you have an inventory, apply a consistent risk classification. The EU AI Act provides a useful four-tier model: unacceptable risk (prohibited), high risk (strict requirements), limited risk (transparency obligations), and minimal risk (no specific obligations). For organizations not yet subject to the EU AI Act, the same tiering is still a practical starting point.
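If you track the inventory in code or a structured export, the tiers are easy to encode as an ordered enumeration. This is a sketch with assumed names; the Act itself prescribes no such representation.

```python
from enum import IntEnum

class RiskTier(IntEnum):
    """Four tiers loosely modeled on the EU AI Act; higher value = higher risk."""
    MINIMAL = 1       # no specific obligations
    LIMITED = 2       # transparency obligations
    HIGH = 3          # strict requirements
    UNACCEPTABLE = 4  # prohibited: should never reach deployment

# IntEnum gives a natural ordering, so records can be sorted or
# thresholded by tier, e.g. RiskTier.HIGH > RiskTier.LIMITED is True.
```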
Classify each system across three dimensions. First, data sensitivity: does the system process personal data, health information, financial records, or other regulated categories? Second, decision impact: does the system influence consequential decisions about individuals, such as hiring, credit, benefits, or access to services? Third, regulatory exposure: is the system operating in a sector with specific AI rules, such as financial services, healthcare, or employment?
Map these three dimensions to produce a risk tier for each system. High scores across all three categories indicate high-risk systems requiring the most rigorous oversight. The classification should be reviewed annually and whenever a system is updated or its use case changes.
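One way to make the mapping explicit is a small scoring function. In this sketch each dimension is scored 0 (low), 1 (medium), or 2 (high), and the thresholds are illustrative assumptions to adapt, not regulatory rules; the RiskTier enum from the previous sketch is repeated so the example runs on its own.

```python
from enum import IntEnum

class RiskTier(IntEnum):
    MINIMAL = 1
    LIMITED = 2
    HIGH = 3

def classify(data_sensitivity: int, decision_impact: int,
             regulatory_exposure: int) -> RiskTier:
    """Map three 0-2 dimension scores to a risk tier.

    Illustrative rules: prohibited (unacceptable-risk) uses should be
    screened out before classification, so only three tiers are assigned
    here. High scores on all three dimensions, or high decision impact
    combined with any regulated data, land in the HIGH tier.
    """
    scores = (data_sensitivity, decision_impact, regulatory_exposure)
    if any(s not in (0, 1, 2) for s in scores):
        raise ValueError("each dimension score must be 0, 1, or 2")
    if all(s == 2 for s in scores) or (decision_impact == 2 and data_sensitivity >= 1):
        return RiskTier.HIGH
    if any(s >= 1 for s in scores):
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

# A resume-screening model: personal data (2), hiring decisions (2),
# employment-sector rules (2) -> HIGH, the most rigorous oversight.
print(classify(2, 2, 2).name)  # HIGH
```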
Maintain a living inventory
An AI inventory is not a one-time exercise. Assign ownership to each cataloged system, document its risk tier, and establish a review cadence. New AI deployments should trigger a classification review before they go live, not after. Procurement and vendor management processes should include a checkpoint that asks whether a new tool incorporates AI capabilities.
Store your inventory in a system that allows you to filter by risk tier, business unit, data type, and regulatory framework. When regulators ask what AI systems you operate and how you oversee them, a well-maintained inventory is the foundation of your answer.
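As a sketch of what that looks like in practice, here is a minimal record schema and filter; every field name is a hypothetical stand-in for whatever your GRC or asset-management tooling actually uses.

```python
from dataclasses import dataclass

@dataclass
class InventoryRecord:
    # Hypothetical schema; adapt field names to your own tooling.
    system_name: str
    owner: str                 # accountable individual or team
    risk_tier: str             # "high", "limited", or "minimal"
    business_unit: str
    data_types: tuple          # e.g. ("personal", "health")
    frameworks: tuple          # e.g. ("EU AI Act", "GDPR")
    next_review: str           # ISO date of the next scheduled review

def filter_inventory(records, risk_tier=None, business_unit=None,
                     data_type=None, framework=None):
    """Return records matching every filter that was supplied."""
    matches = []
    for r in records:
        if risk_tier and r.risk_tier != risk_tier:
            continue
        if business_unit and r.business_unit != business_unit:
            continue
        if data_type and data_type not in r.data_types:
            continue
        if framework and framework not in r.frameworks:
            continue
        matches.append(r)
    return matches

# "Which high-risk systems process personal data, and who owns them?" --
# the kind of regulator question a well-kept inventory answers in one query.
inventory = [
    InventoryRecord("resume-screener", "HR Ops", "high", "HR",
                    ("personal",), ("EU AI Act", "GDPR"), "2026-01-15"),
    InventoryRecord("ticket-summarizer", "IT", "minimal", "Support",
                    (), (), "2026-06-01"),
]
for r in filter_inventory(inventory, risk_tier="high", data_type="personal"):
    print(r.system_name, "->", r.owner)  # resume-screener -> HR Ops
```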
