AI Governance Institute

Practical Governance for Enterprise AI

Voluntary · Framework · ISO/OECD/UN · High risk · Limited risk

Governing with Artificial Intelligence

Issued by

Organisation for Economic Co-operation and Development

Live · Effective 2025-06-01 · OECD-GovAI · Verified April 2026
Official document →

This OECD report examines how governments are deploying artificial intelligence in public sector operations, drawing on analysis of real-world use cases across member and partner countries. It finds that 57% of documented government AI applications focus on streamlining public services and 45% on improving administrative decision-making. The report also identifies governance risks including biased training data, insufficient transparency, and institutional overreliance on automated outputs.

Applies To

Public sector · AI deployer · Large enterprise

Overview

Published in June 2025, this OECD publication provides a cross-country analytical framework for understanding and governing the use of AI within government institutions. It surveys documented AI deployments across public administrations, categorizing applications by function and assessing their governance maturity. Key provisions address the structural risks that arise when governments adopt AI without adequate oversight mechanisms, including data quality failures, opacity in algorithmic decisions, and erosion of human accountability. The report articulates recommended governance practices covering procurement, oversight, transparency obligations, and risk management for public sector AI systems. While not a binding instrument, it is intended to inform national policy development and complement the OECD AI Principles already adopted by over 40 countries. It serves as a reference document for governments designing or updating AI governance structures for public administration.

Key Requirements

  • Governments should establish clear accountability structures for AI-assisted decisions, ensuring a responsible human official can be identified for each automated output.
  • Public sector AI deployments should be subject to documented risk assessments prior to implementation, with particular scrutiny of data representativeness and potential for discriminatory outcomes.
  • Transparency mechanisms must be in place so that individuals affected by government AI decisions can understand the basis for those decisions and exercise rights of redress.
  • Institutions should implement monitoring processes to detect and address overreliance on AI outputs, particularly in high-stakes administrative contexts such as benefits determination or law enforcement support.
  • Procurement and deployment of AI tools by public bodies should align with national AI strategies and existing legal frameworks governing data protection and administrative law.
  • Governments are encouraged to report publicly on AI use cases, governance arrangements, and performance outcomes to support accountability and cross-country learning.

What Your Organization Must Do

  • Assign a named human official as accountable owner for each AI-assisted decision process in your agency before deploying or continuing any automated output that affects individuals, effective no later than your next procurement or renewal cycle.
  • Conduct and document a risk assessment for every active and planned public sector AI deployment, focusing specifically on training data representativeness and discriminatory outcome potential, prior to go-live or at next scheduled review.
  • Build and publish a transparency mechanism for each AI system that affects citizens, covering the logic used, the data inputs, and the process for individuals to seek explanation or redress, aligned with existing administrative law obligations.
  • Assign a compliance or audit team to run periodic monitoring checks for overreliance on AI outputs in high-stakes contexts such as benefits determination, licensing, or law enforcement support, with findings reported to senior leadership at least annually.
  • Review all AI procurement contracts against your national AI strategy and applicable data protection law, flagging any gaps to legal counsel and updating contract terms at next renewal to incorporate governance and accountability requirements.
  • Prepare and publish an annual public register of government AI use cases, governance arrangements, and performance outcomes to satisfy the transparency expectations outlined in this framework and support cross-agency accountability.
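The checklist above can be made concrete as a structured register record. The sketch below is a minimal Python illustration, not a schema prescribed by the OECD report: every field name (`accountable_official`, `transparency_url`, `last_overreliance_review`, and so on) is an assumed example of how an agency might track the accountability, transparency, and monitoring items listed above, with a simple gap check for the annual review cadence.

```python
from dataclasses import dataclass, asdict
from datetime import date
import json

# Hypothetical register entry for one government AI use case.
# Field names are illustrative assumptions, not terms defined by the OECD report.
@dataclass
class AIUseCaseEntry:
    system_name: str
    agency: str
    purpose: str                    # e.g. "benefits eligibility triage"
    accountable_official: str       # named human owner of the decision process
    risk_assessment_date: date      # documented assessment completed before go-live
    high_stakes: bool               # benefits, licensing, law enforcement support, etc.
    transparency_url: str           # public explanation and redress procedure
    last_overreliance_review: date  # most recent monitoring check

    def missing_controls(self) -> list[str]:
        """Flag gaps against the governance items in the checklist above."""
        gaps = []
        if not self.accountable_official:
            gaps.append("no accountable human official assigned")
        if not self.transparency_url:
            gaps.append("no public transparency mechanism")
        if self.high_stakes and (date.today() - self.last_overreliance_review).days > 365:
            gaps.append("overreliance review older than 12 months")
        return gaps

entry = AIUseCaseEntry(
    system_name="EligibilityAssist",
    agency="Department of Social Services",
    purpose="benefits eligibility triage",
    accountable_official="Director, Benefits Operations",
    risk_assessment_date=date(2025, 5, 1),
    high_stakes=True,
    transparency_url="https://example.gov/ai-register/eligibilityassist",
    last_overreliance_review=date.today(),
)
# Publishing the register could be as simple as serializing entries to JSON.
print(json.dumps(asdict(entry), default=str, indent=2))
print(entry.missing_controls())
```

A real register would likely add versioning, performance-outcome fields, and links to the underlying risk assessment documents; the point here is only that each bullet in the checklist maps to a recorded, auditable field.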

Playbook Guidance

Step-by-step implementation guidance for compliance teams.

Frequently Asked Questions

Is the OECD Governing with Artificial Intelligence report legally binding on member countries?
No. This is a non-binding analytical framework and reference document. It is intended to inform national policy development and complement the OECD AI Principles, but it does not impose enforceable legal obligations on governments or public institutions.
Which countries are expected to align their public sector AI governance with this OECD framework?
The framework targets OECD member and partner countries, which collectively include over 40 nations that have already adopted the OECD AI Principles. Governments designing or updating AI governance structures for public administration are the primary intended audience.
What accountability requirement does the OECD framework set for AI-assisted government decisions?
The framework calls for a named, responsible human official to be identifiable for each automated output affecting individuals. As a practical implementation step, agencies can assign this accountability role no later than their next procurement or contract renewal cycle.
How does this OECD report define the transparency obligations for government AI systems?
Agencies must implement mechanisms that allow affected individuals to understand the logic, data inputs, and basis of AI-assisted decisions, and to seek explanation or redress. These mechanisms should align with existing administrative law and data protection obligations in each jurisdiction.
What governance risks does the OECD framework specifically flag for high-stakes public sector AI use cases?
The report highlights biased or unrepresentative training data, insufficient algorithmic transparency, and institutional overreliance on automated outputs. High-stakes contexts such as benefits determination, licensing, and law enforcement support receive particular scrutiny.
How does the OECD Governing with Artificial Intelligence framework differ from the EU AI Act for public sector compliance officers?
Unlike the EU AI Act, which is a binding regulation with enforceable penalties, this OECD framework is a voluntary reference document with no fines or sanctions. It is best used to benchmark governance maturity and shape procurement policy rather than as a compliance checklist with legal consequences.