AI Governance Institute

Practical Governance for Enterprise AI

Must Comply · Regulation · EU · Unacceptable risk · High risk · Limited risk · Minimal risk

EU AI Act: AI Literacy and Prohibited AI Systems Provisions (Applicable 2 February 2026)

Issued by

European Commission

Live · Effective 2026-02-02 · EUAIA-Feb26 · Verified April 2026

The EU AI Act's first major compliance deadline takes effect on 2 February 2026, requiring all organizations that develop or deploy AI within the EU to establish AI literacy measures for their workforce. As of this date, the Act's prohibitions on AI systems deemed to pose unacceptable risks also become enforceable. Organizations must have ceased operation of any prohibited AI practices and demonstrated adequate staff competency with AI systems by this date.

Applies To

Large enterprise · SMB · Public sector · AI developer · AI deployer

Overview

The EU Artificial Intelligence Act (Regulation (EU) 2024/1689) entered into force on 1 August 2024 and applies in phased stages. The 2 February 2026 milestone activates two distinct obligation clusters simultaneously. First, Article 4 requires all providers and deployers of AI systems to take reasonable steps to ensure sufficient AI literacy among personnel involved in AI operation and oversight, proportionate to their roles and the risks involved. Second, Chapter II prohibitions on AI systems presenting unacceptable risks become fully enforceable, covering practices such as social scoring (by public or private actors), real-time remote biometric identification in publicly accessible spaces for law enforcement purposes (with narrow exceptions), subliminal manipulation, and exploitation of vulnerable groups. National competent authorities designated by each EU Member State are responsible for supervising and enforcing these provisions. Non-compliance with the prohibitions can result in administrative fines of up to EUR 35 million or 7% of total worldwide annual turnover, whichever is higher.
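The four-tier risk taxonomy described above can be modeled as a simple lookup when triaging an internal AI inventory. The sketch below is illustrative only: the system labels and their tier assignments are hypothetical examples for a fictional inventory, not classifications drawn from the Act, and any real mapping requires legal review.

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers of Regulation (EU) 2024/1689, in descending severity."""
    UNACCEPTABLE = 4  # Chapter II prohibited practices: operation must cease
    HIGH = 3          # high-risk systems: conformity obligations apply later in the phase-in
    LIMITED = 2       # transparency obligations (e.g. disclosing a chatbot is AI)
    MINIMAL = 1       # no specific obligations beyond general AI literacy

# Hypothetical inventory mapping -- system names and tier assignments are
# invented for illustration, not taken from the Act.
EXAMPLE_CLASSIFICATIONS = {
    "employee-social-scoring": RiskTier.UNACCEPTABLE,
    "cv-screening-model": RiskTier.HIGH,
    "customer-chatbot": RiskTier.LIMITED,
    "spam-filter": RiskTier.MINIMAL,
}

def requires_shutdown(system: str) -> bool:
    """Only systems in the prohibited (unacceptable-risk) tier must be decommissioned."""
    return EXAMPLE_CLASSIFICATIONS[system] is RiskTier.UNACCEPTABLE
```

The point of the tiered model is that only the top tier triggers a shutdown obligation at this milestone; the lower tiers carry obligations that phase in separately.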

Key Requirements

  • Cease all operation of AI systems falling under the unacceptable risk prohibitions in Chapter II of the EU AI Act by 2 February 2026, including social scoring systems, prohibited biometric categorization, and subliminal manipulation tools.
  • Implement documented AI literacy programs under Article 4 for all staff involved in operating, overseeing, or procuring AI systems, calibrated to role and risk level, with no prescribed minimum training hours but proportionality expected.
  • Maintain records demonstrating AI literacy measures have been taken, as national competent authorities may request evidence of compliance.
  • Penalties for deploying prohibited AI systems: fines up to EUR 35 million or 7% of global annual turnover, whichever is higher.
  • Organizations using general-purpose AI tools in ways that constitute a prohibited practice remain subject to enforcement regardless of whether the AI system itself was developed in-house or procured from a third party.
  • Member States must have designated national competent authorities capable of supervising these provisions by this date.
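Because Article 4 prescribes proportionality rather than minimum hours, the record-keeping requirement above is easiest to satisfy with a role-tiered completion register. The sketch below is a minimal illustration under assumed role names and module titles (none of which come from the Act); it shows the shape of evidence a competent authority might request, not a mandated format.

```python
from dataclasses import dataclass, field

# Hypothetical role tiers and training modules -- Article 4 defines no
# curriculum, only proportionality to role and risk exposure.
ROLE_MODULES = {
    "ai_operator": ["fundamentals", "system-specific-training", "incident-reporting"],
    "ai_overseer": ["fundamentals", "risk-assessment", "human-oversight"],
    "procurement": ["fundamentals", "vendor-due-diligence"],
    "general_staff": ["fundamentals"],
}

@dataclass
class LiteracyRecord:
    """One employee's completion evidence, retained for authority requests."""
    employee_id: str
    role: str
    completed_modules: list = field(default_factory=list)

    def outstanding(self) -> list:
        """Modules still required for this employee's role tier."""
        return [m for m in ROLE_MODULES[self.role] if m not in self.completed_modules]

    def compliant(self) -> bool:
        """True once every module in the role tier is completed."""
        return not self.outstanding()
```

A register of such records, exportable on request, is one straightforward way to evidence that a structured, role-calibrated program exists and has been implemented.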

What Your Organization Must Do

  • Conduct a full inventory of all AI systems in use across the organization by December 2025, mapping each system against the Chapter II prohibited practices list (social scoring, real-time remote biometric identification, subliminal manipulation, exploitation of vulnerable groups) to identify any systems requiring shutdown before 2 February 2026.
  • Formally decommission any AI systems or practices identified as prohibited, documenting the shutdown process with timestamps, responsible business unit owners, and sign-off from the Chief Compliance Officer or General Counsel, no later than 31 January 2026.
  • Assign a project owner in HR or L&D to design and deploy a role-tiered AI literacy program under Article 4 by 2 February 2026, covering all staff who operate, oversee, procure, or make decisions informed by AI systems, with content calibrated to role-specific risk exposure.
  • Build a compliance evidence repository by 2 February 2026 to store training completion records, competency assessments, and literacy program documentation, structured for rapid production to national competent authorities upon request.
  • Review all third-party AI vendor contracts to assess whether any procured tools are used in ways that constitute a prohibited practice, escalating findings to Legal and issuing termination or modification notices to vendors as needed before the deadline.
  • Identify and monitor the designated national competent authority in each EU Member State where the organization operates, and establish an internal point of contact responsible for responding to regulatory inquiries and tracking enforcement guidance as it emerges post-February 2026.
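The decommissioning step above calls for timestamped records with a named business-unit owner and executive sign-off. One way to sketch that, assuming a JSON-based append-only evidence repository (the field names are illustrative, not mandated by any authority):

```python
import json
from datetime import datetime, timezone

def decommission_entry(system_name: str, owner_unit: str,
                       signed_off_by: str, prohibited_category: str) -> str:
    """Build a timestamped, JSON-serialised shutdown record for the
    compliance evidence repository. All field names are assumptions."""
    record = {
        "system": system_name,
        "prohibited_category": prohibited_category,  # e.g. "social scoring"
        "business_unit_owner": owner_unit,
        "signed_off_by": signed_off_by,              # e.g. CCO or General Counsel
        "decommissioned_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(record, sort_keys=True)
```

Appending one such entry per shutdown, with the UTC timestamp captured at sign-off, gives the repository the rapid-production property the action items call for.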

Playbook Guidance

Step-by-step implementation guidance for compliance teams.

Frequently Asked Questions

What exactly must organizations do before 2 February 2026 under the EU AI Act?
Two obligations activate simultaneously: organizations must have fully ceased operating any AI systems that fall under the Chapter II unacceptable risk prohibitions, and must have implemented documented AI literacy measures under Article 4 for all staff involved in operating, overseeing, or procuring AI systems.
Which specific AI practices are banned as of 2 February 2026?
Prohibited practices include social scoring (by public and private actors alike), real-time remote biometric identification in publicly accessible spaces for law enforcement purposes (with narrow exceptions), subliminal manipulation techniques, and AI systems that exploit vulnerable groups. Any organization still running these practices after 2 February 2026 is exposed to enforcement action.
How many training hours are required to satisfy the Article 4 AI literacy obligation?
The EU AI Act prescribes no minimum training hours. Article 4 requires a proportionate approach calibrated to each employee's role and the level of AI risk they are exposed to. Regulators will expect documented evidence that a structured, role-tiered literacy program exists and has been implemented.
What are the fines for deploying a prohibited AI system after 2 February 2026?
Administrative fines can reach EUR 35 million or 7% of total worldwide annual turnover, whichever is higher. These are the highest fine tier in the EU AI Act, reflecting the severity with which the regulation treats unacceptable-risk AI practices.
Does the prohibition apply if the banned AI system was purchased from a third-party vendor rather than built in-house?
Yes. Organizations that deploy a third-party AI tool in a way that constitutes a prohibited practice remain fully subject to enforcement, regardless of who developed the system. Procurement contracts should be reviewed and vendors notified well before the February 2026 deadline if prohibited use cases are identified.
Who enforces the 2 February 2026 provisions and how should organizations prepare for oversight?
Enforcement is handled by national competent authorities designated by each EU Member State. Organizations operating across multiple Member States should identify the relevant authority in each jurisdiction, maintain a compliance evidence repository with training records and system decommissioning documentation, and designate an internal point of contact for regulatory inquiries.