AI Governance Institute

Practical Governance for Enterprise AI


GSA Publishes AI Strategies and Compliance Plan, Establishing EDGE Board and AI Oversight Committee Under Updated CIO Directive 2185.1A

Source: AI strategies and compliance plan, General Services Administration (GSA)

The General Services Administration has released its AI Strategies and Compliance Plan, which formalizes a two-tier internal AI governance structure for the agency. At the executive level, the AI Governance Board, designated the EDGE Board, is co-chaired by the Chief Data Officer and the Deputy Administrator, providing senior-level accountability for AI strategy and policy decisions. Beneath it, a cross-functional AI Oversight Committee evaluates all AI use requests submitted within the agency and enforces applicable privacy and security requirements before any AI system is approved for deployment. The plan is accompanied by an update to CIO Directive 2185.1A that materially broadens the governance scope to encompass all AI systems, not only those based on generative AI models, signaling a more comprehensive risk posture across the agency's technology portfolio.

The publication reflects a broader federal push to institutionalize AI governance at the agency level following a series of executive directives and Office of Management and Budget guidance requiring federal entities to assign clear accountability for AI use and demonstrate compliance with applicable risk management standards. OMB Memorandum M-26-04 on unbiased AI principles and prior OMB guidance on responsible AI in government have placed explicit obligations on federal agencies to establish internal review mechanisms, designate senior AI officials, and document their AI inventories. GSA's approach of pairing a strategic board with an operational committee addresses a governance gap common across large organizations: the separation of policy-setting from day-to-day risk adjudication. The decision to extend CIO Directive 2185.1A beyond generative AI also responds to concerns that earlier frameworks were too narrowly scoped, leaving traditional machine learning systems, automated decision tools, and predictive analytics outside formal review processes.

For enterprise compliance teams, particularly those operating under federal contracts or managing AI products that interface with GSA procurement systems, this structure has immediate practical relevance. Organizations that supply AI-enabled services to GSA will need to understand how the EDGE Board and AI Oversight Committee evaluate vendor submissions, including what privacy impact documentation and security control evidence the Oversight Committee expects before approving an AI system for use. Internal compliance teams should also treat this governance model as a reference architecture: an executive co-chaired strategy board paired with a cross-functional review committee holding enforceable controls is a pattern that regulatory bodies and auditors may increasingly expect private sector organizations to mirror. Compliance officers should review whether their own AI intake and review processes are documented with comparable specificity, whether privacy and security controls are assessed before deployment rather than after, and whether senior leadership accountability for AI decisions is formally designated and auditable.

Tags: federal AI governance · oversight structures · risk assessment · government compliance · AI policy