Bletchley Declaration on AI Safety
Issued by
Twenty-eight countries and the European Union, including the United States, United Kingdom, China, Australia, Canada, France, Germany, India, Japan, the Republic of Korea, and Saudi Arabia; convened at the UK AI Safety Summit, Bletchley Park
A landmark multilateral political declaration signed at the inaugural AI Safety Summit, in which the signatory governments jointly recognized the risks posed by frontier AI and committed to collaborative international action on AI safety, evaluation, and information sharing.
Overview
The Bletchley Declaration was issued on 1 November 2023, the opening day of the United Kingdom's inaugural AI Safety Summit (1–2 November 2023), held at Bletchley Park in Milton Keynes, England. The Summit convened representatives from 28 countries and the European Union, notably including both the United States and the People's Republic of China, alongside major AI developers and civil society organizations.

The Declaration is a non-binding political commitment and does not carry the force of domestic law or regulation in any signatory jurisdiction. It nonetheless represents the broadest multilateral consensus to date on the nature of risks posed by frontier AI systems and on the need for coordinated international governance.

The Declaration identifies 'frontier AI', meaning the most capable general-purpose AI models at the cutting edge of development, as the source of the most significant near-term risks, and names potential harms to biosecurity, cybersecurity, critical infrastructure, and broader societal stability. It commits signatories to: (1) developing a shared understanding of frontier AI risks; (2) building national and international AI safety testing and evaluation capacity; (3) sharing information on identified risks; and (4) cooperating on appropriate policy responses.

The Declaration launched the process that produced subsequent AI safety agreements, including the Seoul AI Safety Commitments (May 2024) and the International Network of AI Safety Institutes. For enterprise compliance teams, it signals the political direction of travel for binding international AI safety regulation and informs the risk appetite and regulatory horizon of major jurisdictions.
Key Requirements
- Not legally binding; no direct compliance obligations on enterprises
- Signatories commit to developing shared scientific and technical understanding of frontier AI risks
- Signatories commit to building national AI safety testing, evaluation, and red-teaming capabilities
- Commitment to transparent sharing of identified risks among signatory governments
- Support for international cooperation on AI safety research and policy development
- Recognition that frontier AI risks may be 'potentially catastrophic' and require urgent multilateral attention
- Commitment to inclusive international dialogue including emerging economies
- Foundation for subsequent binding and non-binding instruments emerging from the AI Safety Summit process
What Your Organization Must Do
- Monitor successor instruments emerging from the Bletchley Declaration process, specifically the Seoul AI Safety Commitments (May 2024) and outputs from the International Network of AI Safety Institutes, and brief the Chief AI Officer and General Counsel quarterly on binding obligations likely to follow.
- Map your organization's AI portfolio against the Declaration's priority risk domains (biosecurity, cybersecurity, and critical infrastructure) and flag any frontier or general-purpose AI systems for enhanced risk review before the next board-level AI governance cycle.
- Instruct your regulatory affairs team to track national AI safety institute activity in each signatory jurisdiction where you operate, as these bodies will likely conduct evaluations and red-teaming exercises that affect licensing or market-access conditions for frontier AI products.
- Update enterprise AI risk appetite statements to reflect the multilateral consensus that frontier AI risks may be catastrophic, ensuring this language is incorporated into existing risk frameworks and reported to the risk committee no later than the next annual review.
- Engage external counsel in key jurisdictions (US, UK, EU, and any other signatory markets where you operate) to assess how Bletchley-aligned domestic AI safety legislation is progressing and identify a 12-to-18-month horizon for anticipated binding obligations.
- Establish an information-sharing protocol with industry peers or sector associations to receive early signals on frontier AI risk findings that governments begin disclosing under their transparency commitments, and assign a named compliance lead to maintain those relationships.
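The portfolio-mapping step above can be automated in an AI inventory. The sketch below is a minimal, hypothetical illustration: the system names, fields, and the `needs_enhanced_review` rule are assumptions for this example, not criteria taken from the Declaration itself, which does not prescribe any triage logic.

```python
# Hypothetical triage sketch: flag AI systems for enhanced risk review when
# they touch a Bletchley priority risk domain or are general-purpose
# (frontier-style) models. All names and fields are illustrative.
from dataclasses import dataclass

# Priority risk domains highlighted by the Declaration.
PRIORITY_DOMAINS = {"biosecurity", "cybersecurity", "critical_infrastructure"}

@dataclass
class AISystem:
    name: str
    domains: set           # business/risk domains the system touches
    general_purpose: bool = False  # frontier or general-purpose model?

def needs_enhanced_review(system: AISystem) -> bool:
    """True if the system touches a priority domain or is general-purpose."""
    return system.general_purpose or bool(system.domains & PRIORITY_DOMAINS)

def triage(portfolio: list) -> list:
    """Return names of systems to escalate for board-level risk review."""
    return [s.name for s in portfolio if needs_enhanced_review(s)]

portfolio = [
    AISystem("invoice-ocr", {"finance"}),
    AISystem("grid-load-forecaster", {"critical_infrastructure"}),
    AISystem("internal-chat-assistant", {"hr"}, general_purpose=True),
]
print(triage(portfolio))
```

In practice the inventory record and escalation rule would come from your existing AI governance framework; the point is only that the Declaration's three named risk domains give a concrete first filter for enhanced review.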
Frequently Asked Questions
- Is the Bletchley Declaration legally binding on companies or governments?
- No. The Bletchley Declaration is a non-binding political commitment. It does not create enforceable obligations for signatory governments or direct compliance requirements for enterprises. Its significance lies in signaling multilateral political consensus that is expected to inform future binding domestic and international AI safety regulation.
- Which countries signed the Bletchley Declaration?
- Twenty-eight countries plus the European Union signed, including the United States, United Kingdom, China, Australia, Canada, France, Germany, India, Japan, South Korea, and Saudi Arabia. The inclusion of both the US and China was widely noted as a significant diplomatic achievement for the inaugural AI Safety Summit.
- What does the Bletchley Declaration define as frontier AI and why does it matter for compliance teams?
- The Declaration defines frontier AI as the most capable general-purpose AI models at the cutting edge of development. Compliance teams should note this framing because successor instruments, including the Seoul AI Safety Commitments, use similar scope definitions that may anchor future licensing or evaluation requirements for advanced AI products in signatory jurisdictions.
- What specific risk domains does the Bletchley Declaration identify as priority concerns?
- The Declaration highlights biosecurity, cybersecurity, critical infrastructure, and broader societal stability as priority risk domains for frontier AI. Organizations operating AI systems in any of these sectors should flag those systems for enhanced risk review, as national AI safety institutes are likely to focus early evaluation activity on these areas.
- How does the Bletchley Declaration relate to the Seoul AI Safety Commitments and the International Network of AI Safety Institutes?
- The Bletchley Declaration launched the process that produced both instruments. The Seoul AI Safety Commitments, issued in May 2024, built directly on Bletchley's framework, and the International Network of AI Safety Institutes operationalizes the Declaration's commitment to shared safety testing capacity. Compliance teams should treat these as a sequenced regulatory progression rather than isolated documents.
- Should a multinational enterprise update its AI risk appetite statement in response to the Bletchley Declaration?
- Yes. The Declaration reflects the broadest multilateral consensus to date that frontier AI risks may be catastrophic, and risk committees increasingly expect enterprise risk frameworks to align with geopolitical signals of this magnitude. Incorporating this language before the next annual risk review also positions the organization ahead of anticipated binding obligations in key signatory markets.
