Governing with Artificial Intelligence
OECD-GovAI · Organisation for Economic Co-operation and Development
This OECD report examines how governments are deploying artificial intelligence in public sector operations, drawing on analysis of real-world use cases across member and partner countries. It finds that 57% of documented government AI applications focus on streamlining public services and 45% on improving administrative decision-making. The report also identifies governance risks including biased training data, insufficient transparency, and institutional overreliance on automated outputs.
Overview
Published in June 2025, this OECD publication provides a cross-country analytical framework for understanding and governing the use of AI within government institutions. It surveys documented AI deployments across public administrations, categorizing applications by function and assessing their governance maturity. The report addresses the structural risks that arise when governments adopt AI without adequate oversight mechanisms, including data quality failures, opacity in algorithmic decisions, and erosion of human accountability. It articulates recommended governance practices covering procurement, oversight, transparency obligations, and risk management for public sector AI systems. While not a binding instrument, the report is intended to inform national policy development and complement the OECD AI Principles, which have been adopted by over 40 countries. It serves as a reference document for governments designing or updating AI governance structures for public administration.
Key Requirements
- Governments should establish clear accountability structures for AI-assisted decisions, ensuring a responsible human official can be identified for each automated output.
- Public sector AI deployments should be subject to documented risk assessments prior to implementation, with particular scrutiny of data representativeness and potential for discriminatory outcomes.
- Transparency mechanisms should be in place so that individuals affected by government AI decisions can understand the basis for those decisions and exercise rights of redress.
- Institutions should implement monitoring processes to detect and address overreliance on AI outputs, particularly in high-stakes administrative contexts such as benefits determination or law enforcement support.
- Procurement and deployment of AI tools by public bodies should align with national AI strategies and existing legal frameworks governing data protection and administrative law.
- Governments are encouraged to report publicly on AI use cases, governance arrangements, and performance outcomes to support accountability and cross-country learning.
Who It Affects
Effective Date
2025-06-01
