AI in Government and Public Sector
Government agencies using AI for benefits administration, law enforcement, border control, welfare eligibility, and public service delivery face some of the most stringent governance obligations globally. The EU AI Act prohibits several government AI applications outright and classifies others as high-risk, subject to mandatory conformity assessments. Public sector AI also carries civil rights, transparency, and accountability expectations that exceed those applied to comparable private-sector deployments.
Key board-level questions
1. Which of our AI systems are classified as high-risk or prohibited under applicable law, and what is our compliance timeline?
2. Do affected individuals have meaningful recourse when AI systems make or inform decisions about them?
3. How do we document and publish AI systems in line with government transparency and accountability obligations?
4. Are our AI vendors under contractual obligations consistent with public sector data protection and procurement requirements?
Regulatory frameworks
EU AI Act: AI Literacy and Prohibited AI Practices Provisions (Applicable 2 February 2025)
The EU AI Act's first major compliance deadline takes effect on 2 February 2025, requiring providers and deployers of AI systems in the EU to ensure adequate AI literacy among their workforce. From the same date, the Act's prohibitions on AI practices deemed to pose unacceptable risk become enforceable. Organizations must have ceased any prohibited AI practices and be able to demonstrate adequate staff competency with AI systems by this date.
Executive Order 14179: Removing Barriers to American Leadership in Artificial Intelligence
Executive Order 14179 directs federal agencies to develop a new national AI Action Plan prioritizing US dominance in AI development and deployment. It applies to federal agencies and shapes the regulatory environment for private-sector AI developers and deployers operating in the United States. The order explicitly revokes Executive Order 14110 and its safety-guardrail agenda, reorienting federal AI policy toward deregulation and competitiveness.
NIST Artificial Intelligence Risk Management Framework Playbook
A voluntary, use-case-agnostic operational companion to the NIST AI Risk Management Framework (AI RMF 1.0). The Playbook provides structured, actionable guidance, including suggested actions and example outputs, for implementing the four core AI RMF functions (GOVERN, MAP, MEASURE, and MANAGE) across the AI system lifecycle.
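As a concrete illustration (the field names, statuses, and register structure below are assumptions for this sketch, not terminology prescribed by NIST), an agency could track each AI system's progress against the four functions in a simple machine-readable register:

```python
# Illustrative sketch only: field names and status values are assumptions,
# not labels mandated by the NIST AI RMF or its Playbook.
from dataclasses import dataclass, field

RMF_FUNCTIONS = ("GOVERN", "MAP", "MEASURE", "MANAGE")

@dataclass
class RmfEntry:
    """Tracks one AI system's progress against the four AI RMF functions."""
    system_name: str
    owner: str
    # Status per function, e.g. "not_started", "in_progress", "complete".
    status: dict = field(default_factory=lambda: {f: "not_started" for f in RMF_FUNCTIONS})
    # Links to evidence: policies, bias test reports, monitoring plans.
    evidence: dict = field(default_factory=lambda: {f: [] for f in RMF_FUNCTIONS})

    def outstanding(self) -> list:
        """Return the functions that still need work, for board-level reporting."""
        return [f for f in RMF_FUNCTIONS if self.status[f] != "complete"]

# Example: a benefits-eligibility screening tool midway through implementation.
entry = RmfEntry(system_name="benefits-eligibility-screener", owner="Digital Services")
entry.status["GOVERN"] = "complete"
entry.evidence["GOVERN"].append("AI governance policy v2, approved by the board")
entry.status["MAP"] = "in_progress"
print(entry.outstanding())  # ['MAP', 'MEASURE', 'MANAGE']
```

A register in this spirit also makes the board-level question about compliance timelines answerable directly, since outstanding work per system can be reported at any time.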
OWASP Top 10 for Large Language Model Applications
The OWASP Top 10 for Large Language Model Applications identifies the ten most critical security risks in LLM-powered systems, including prompt injection, insecure output handling, training data poisoning, model denial of service, and supply chain vulnerabilities. It is the most widely referenced security framework for AI applications and is used by development and security teams globally to prioritize controls.
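To ground one of these risks, the sketch below shows the core idea behind mitigating insecure output handling: treat model output as untrusted before it reaches a browser, shell, or database. The function name, patterns, and length limit are illustrative assumptions, not an official OWASP control.

```python
# Minimal sketch: handle LLM output as untrusted input before rendering it.
# Names and thresholds are hypothetical; real deployments layer more defenses.
import html
import re

# Coarse patterns suggesting active content or smuggled instructions
# (a heuristic signal only, not an exhaustive filter).
SUSPICIOUS = re.compile(
    r"<script|javascript:|DROP\s+TABLE|ignore previous instructions",
    re.IGNORECASE,
)

def render_safely(llm_output: str, max_len: int = 4000) -> str:
    """Bound, screen, and escape LLM output before inserting it into an HTML page."""
    text = llm_output[:max_len]          # bound size (basic denial-of-service hygiene)
    if SUSPICIOUS.search(text):
        # Route flagged output to human review instead of rendering it automatically.
        raise ValueError("LLM output flagged for manual review")
    return html.escape(text)             # neutralize any markup before display

print(render_safely("Eligibility decision: <b>approved</b>"))
# -> Eligibility decision: &lt;b&gt;approved&lt;/b&gt;
```

The pattern (bound, screen, and escape model output before any downstream interpreter sees it) matters more than the specific regex, which production systems would replace with layered, context-aware filtering.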
Playbook guidance
- What does meaningful human oversight look like for high-risk AI decisions?
- How do we detect and mitigate algorithmic bias?
- What are our obligations under emerging AI regulations?
- What does audit-ready AI documentation look like in practice?
- How do we ensure human-in-the-loop review is actually effective?
