Singapore Consensus on Global AI Safety Research Priorities
SG-AI-Safe · Government of Singapore and International Summit Participants
The Singapore Consensus is an international consensus document establishing a coordinated agenda for AI safety research priorities across jurisdictions and institutions. It emerged from a multilateral summit convened by the Singapore Government and reflects agreement among participating governments and organizations on where collaborative safety research efforts should be directed. The document does not impose binding legal obligations but provides a shared reference framework for national AI safety programs and research funding bodies.
Overview
Published in May 2025 following an international AI safety summit hosted by Singapore, the Singapore Consensus outlines agreed global priorities for AI safety research, intended to align efforts across governments, academic institutions, and industry participants. The document functions as a coordination instrument, analogous in structure and intent to the Bletchley Declaration of 2023, and identifies the thematic research areas deemed most critical to mitigating risks from advanced AI systems. Participating jurisdictions and organizations are expected to reference the consensus when designing national safety research programs, allocating funding, and negotiating international cooperation agreements. The framework establishes no binding compliance regime, enforcement mechanism, or penalty structure; adherence is voluntary. Its practical effect is to shape the priorities of national AI safety institutes, multilateral research partnerships, and standards bodies that subsequently translate research findings into technical standards or regulatory guidance. The Singapore Government, as convening authority, holds the secretariat function, though ongoing governance arrangements for the consensus had not been fully detailed at the time of publication.
Key Requirements
- Participating governments and institutions are expected to align national AI safety research agendas with the consensus priority areas identified in the document
- Signatories and endorsing organizations are encouraged to report on research activities and progress against the identified priority themes through existing international coordination channels
- Research funding bodies are invited to reference the consensus when evaluating proposals for AI safety research grants and programs
- International collaboration agreements between AI safety institutes should be informed by the shared priorities established in the consensus
- No binding compliance deadlines, financial thresholds, or penalty provisions are established; the framework operates on a voluntary coordination basis
Who It Affects
The consensus is addressed to the governments and organizations that participated in or endorse it: national AI safety institutes, research funding bodies, academic institutions, and industry participants engaged in AI safety research. It also shapes the work of multilateral research partnerships and standards bodies that translate safety research findings into technical standards or regulatory guidance. Because adherence is voluntary, it creates no obligations for entities outside these participating groups.
Effective Date
2025-05-01
