Regulatory · US · 2026-05-05

Microsoft, Google DeepMind, and xAI Sign Pre-Deployment AI Model Review Agreements with U.S. Government

Microsoft, Google DeepMind, and xAI have each entered into formal agreements with the Center for AI Standards and Innovation (CAISI), a division of the National Institute of Standards and Technology within the U.S. Department of Commerce, granting government evaluators access to frontier AI models prior to public release for national security assessment. The agreements were announced on May 5, 2026, with CAISI Director Chris Fall describing independent measurement science as essential to understanding frontier AI's national security implications. Under the arrangements, participating companies provide model versions with safety guardrails partially or fully removed, allowing CAISI reviewers and the TRAINS Taskforce—a cross-agency body of AI national security experts—to probe capabilities and risk profiles without commercial deployment constraints. CAISI reports having completed more than 40 such evaluations to date, including assessments of models that have not yet been released to the public.

These new agreements expand a security review program that previously covered only Anthropic and OpenAI, both of which have renegotiated their CAISI arrangements to align with priorities in President Trump's AI Action Plan, issued in July 2025. The Action Plan directed the Commerce Department to pursue structured engagement with frontier AI developers as a mechanism for assessing national security risks arising from advanced AI capabilities, including large-scale cyberattack planning, autonomous weapons systems, and capabilities that could undermine human oversight of critical infrastructure. The extension of the program to three additional major developers reflects a broader policy consensus that pre-deployment government evaluation should become a standard feature of the U.S. frontier AI development landscape, rather than a voluntary initiative limited to a small number of firms. This approach mirrors elements of the EU AI Act's pre-deployment notification requirements for general-purpose AI models with systemic risk, though the U.S. program remains voluntary and does not carry equivalent enforcement mechanisms.

For enterprise compliance and legal teams, the expansion of CAISI's pre-deployment review program has several practical consequences for how organizations procure, deploy, and govern frontier AI systems. The program's rapid growth signals that government scrutiny of frontier AI capabilities is intensifying across security, defense, and critical infrastructure dimensions—and that developers are increasingly expected to participate in structured government evaluation as a condition of operating at scale in the U.S. market, even where participation remains formally voluntary. Enterprises that integrate frontier models from Microsoft, Google, or xAI into regulated or sensitive use cases should account for government model access arrangements in their third-party AI risk assessments and vendor due diligence processes, particularly where data handling and capability exposure are concerns. Governance and risk teams should also monitor whether the current voluntary framework evolves toward mandatory pre-deployment notification, which would align U.S. practice more closely with EU requirements and could materially affect procurement timelines, contractual obligations, and internal AI governance program design.

Tags: pre-deployment evaluation · national security · CAISI · NIST · frontier AI · government access · AI Action Plan
