AI Governance Institute

AI governance intelligence, tracked daily


Question 25 of 34

How do we comply with the EU AI Act?

A step-by-step compliance guide covering risk tier classification, high-risk system obligations, GPAI model requirements, and the phased enforcement timeline.

If you only do 3 things, do this:

  1. Map every AI system against the EU AI Act's four risk tiers. Annex III lists exactly which use cases are high-risk — check each system against it before deciding it's low-risk.
  2. High-risk system obligations are concrete: technical documentation, conformity assessment, post-market monitoring, human oversight arrangements, and EU database registration. Start with documentation.
  3. Phased deadlines are real. GPAI obligations applied August 2025. High-risk system obligations apply August 2026. If you haven't started, you're already behind for GPAI.

The Situation

Who this is for: Any organization placing AI systems on the EU market, deploying AI that affects EU residents, or operating AI whose output is used in the EU

When you need this: When assessing EU AI Act obligations, building a compliance roadmap, or approaching a compliance deadline

The Decision

Which of our AI systems are high-risk under the EU AI Act, what do those systems require, and can we meet the compliance deadlines?

The Steps

  1. Determine if the EU AI Act applies: do your systems place outputs on the EU market or affect EU residents?
  2. Map each system against the four risk tiers: prohibited, high-risk (Annex III), limited-risk (transparency), minimal-risk
  3. For prohibited-use systems, cease or redesign immediately
  4. For high-risk systems: prepare technical documentation per Annex IV, implement a quality management system, conduct conformity assessment, register in the EU AI database
  5. For GPAI models: assess the systemic risk threshold (>10²⁵ FLOPs), implement transparency and copyright obligations
  6. For limited-risk systems: implement disclosure obligations (chatbot disclosure, deepfake labeling)
  7. Appoint an EU representative if required for non-EU organizations; ensure supply chain compliance representations from providers
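The triage in steps 2 and 5 can be sketched as a simple classifier. This is an illustrative worksheet aid, not legal advice: the domain list is abridged, and the class, field, and function names (`AISystem`, `risk_tier`, `gpai_systemic_risk`) are hypothetical. Only the tier order and the 10²⁵ FLOP systemic-risk threshold come from the Act as described above.

```python
from dataclasses import dataclass
from typing import Optional

# Abridged set of Annex III high-risk areas (see the full Annex for the
# authoritative, use-case-level list).
ANNEX_III_DOMAINS = {
    "critical_infrastructure", "education", "employment",
    "essential_services", "law_enforcement", "migration_asylum",
    "justice_democratic_processes",
}

SYSTEMIC_RISK_FLOPS = 1e25  # GPAI systemic-risk training-compute threshold

@dataclass
class AISystem:
    name: str
    prohibited_practice: bool = False        # e.g. subliminal manipulation
    annex_iii_domain: Optional[str] = None   # None if no Annex III match
    interacts_with_humans: bool = False      # chatbot / deepfake disclosure
    gpai_training_flops: float = 0.0         # 0 for non-GPAI systems

def risk_tier(s: AISystem) -> str:
    """Classify in tier order: prohibited > high-risk > limited > minimal."""
    if s.prohibited_practice:
        return "prohibited"
    if s.annex_iii_domain in ANNEX_III_DOMAINS:
        return "high-risk"
    if s.interacts_with_humans:
        return "limited-risk"
    return "minimal-risk"

def gpai_systemic_risk(s: AISystem) -> bool:
    """Step 5: does a GPAI model cross the systemic-risk compute threshold?"""
    return s.gpai_training_flops > SYSTEMIC_RISK_FLOPS
```

The point of the sketch is the ordering: a system is checked against the prohibited practices first, then Annex III, then transparency triggers, and only defaults to minimal-risk when nothing else matches.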

The Artifacts

  • EU AI Act risk tier classification worksheet (Annex III checklist by use case)
  • High-risk system technical documentation template (Annex IV)
  • Conformity assessment checklist (self-assessment for most Annex III systems)
  • EU AI database registration checklist
  • GPAI compliance checklist (transparency, copyright, systemic risk)
  • EU AI Act compliance timeline (obligations and deadlines by tier)

The Output

A documented risk tier for every AI system, conformity assessments complete for high-risk systems, disclosure mechanisms for limited-risk systems, and a compliance calendar tracking all obligations against deadlines.

Understanding the risk tiers

The EU AI Act uses four risk tiers. Unacceptable risk systems are prohibited outright — these include AI that deploys subliminal manipulation, exploits vulnerable groups, or is used for real-time remote biometric identification in public spaces outside narrow law enforcement exceptions. If any system falls here, it must be ceased or fundamentally redesigned.

High-risk AI systems are defined by Annex III of the Act and cover eight areas: biometric identification and categorisation, critical infrastructure, education and vocational training, employment, essential private and public services (including credit and insurance), law enforcement, migration and asylum, and the administration of justice and democratic processes. Any AI that significantly influences decisions in these areas must meet the Act's full compliance requirements.

Limited-risk systems face transparency obligations only. Chatbots must disclose they are AI. Deepfakes must be labeled. Emotion recognition systems must disclose their use. These requirements are relatively straightforward to implement. Minimal-risk systems — the vast majority of AI applications — face no specific EU AI Act obligations.

What high-risk compliance requires

High-risk AI systems must have technical documentation prepared before market placement, covering system design, training methodology, testing results, performance metrics, and known limitations. This documentation must be maintained and updated throughout the system's lifecycle. It is reviewed during conformity assessments and may be requested by national supervisory authorities.

Conformity assessment is required before placing a high-risk system on the market. For most Annex III systems, providers can conduct self-assessment using the harmonized standards being developed under the Act. For AI systems used in certain high-risk biometric or law enforcement applications, third-party conformity assessment is required.

Post-market monitoring is mandatory: providers must implement a plan to collect and review data on system performance after deployment, report serious incidents and malfunctions to relevant national authorities, and take corrective action when required. High-risk systems must also be registered in the EU AI database before market placement.
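A compliance calendar like the one named in The Output can be kept as a small deadline tracker. A minimal sketch, assuming the commonly cited phased dates (prohibitions February 2025, GPAI August 2025, high-risk Annex III obligations August 2026); verify exact dates and any tier-specific carve-outs against the Official Journal text before relying on them:

```python
from datetime import date

# Phased EU AI Act milestones by tier (illustrative; confirm against the
# published Regulation for your specific obligations).
DEADLINES = {
    "prohibited practices ban": date(2025, 2, 2),
    "GPAI model obligations": date(2025, 8, 2),
    "high-risk (Annex III) obligations": date(2026, 8, 2),
}

def overdue(today: date) -> list[str]:
    """Obligations whose deadline has already passed, oldest first."""
    return [name
            for name, d in sorted(DEADLINES.items(), key=lambda kv: kv[1])
            if d <= today]
```

For example, evaluated at the start of 2026 the tracker flags the prohibition and GPAI deadlines as already passed, matching the "you're already behind for GPAI" warning above.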

Deployer obligations and the supply chain

The EU AI Act places obligations on both providers (those who develop or place AI on the market) and deployers (those who use AI in their operations). Deployers of high-risk AI must ensure they use the system as instructed, assign human oversight to qualified individuals, monitor operation and report issues to the provider, and conduct data protection impact assessments where required.

Organizations that deploy third-party high-risk AI systems cannot outsource compliance by pointing to their vendor. You must verify that the vendor's system meets EU AI Act requirements and that your deployment context is consistent with the intended use. Vendor contracts should include representations of compliance and notification obligations for material changes.
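The vendor-verification duties above lend themselves to a structured due-diligence record. A hypothetical sketch — the record and field names are illustrative, not a form defined by the Act; the items simply mirror the deployer duties described in this section:

```python
from dataclasses import dataclass, fields

# Hypothetical due-diligence record for one third-party high-risk system.
@dataclass
class VendorDueDiligence:
    conformity_assessment_on_file: bool = False   # provider's assessment obtained
    eu_database_registration_verified: bool = False
    intended_use_matches_deployment: bool = False # your context vs. stated use
    contract_has_compliance_reps: bool = False    # representations of compliance
    change_notification_clause: bool = False      # notice of material changes

def open_items(d: VendorDueDiligence) -> list[str]:
    """Checklist items still outstanding, in declaration order."""
    return [f.name for f in fields(d) if not getattr(d, f.name)]
```

A deployment would only proceed once `open_items` is empty for that vendor's system, which is the practical meaning of "you cannot outsource compliance by pointing to your vendor."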