
Research · Global · 2026-05-11

Future of Life Institute Publishes 2025 AI Safety Index, Rating Seven Frontier AI Companies Across 33 Indicators

Source: 2025 AI Safety Index - Summer 2025, Future of Life Institute

In May 2025, the Future of Life Institute published its 2025 AI Safety Index (Summer 2025 edition), a structured evaluation of seven leading AI companies against 33 measurable indicators organized into six domains: risk ownership, accountability, independent oversight, safety culture, responsible deployment, and transparency. The index names specific companies and assigns assessments at the indicator level, making it one of the most granular third-party benchmarking exercises applied to frontier AI developers to date. Among its concrete findings, the report identifies coordination weaknesses at Google DeepMind, calls out insufficient disclosure around third-party model evaluations, and flags the absence of published whistleblowing policies as a systemic gap across multiple assessed organizations. The index covers companies operating across multiple jurisdictions and is framed as a global assessment rather than one tied to any single regulatory regime.

The publication reflects a broader pattern in AI governance in which civil society organizations and research institutions are filling measurement and accountability gaps that formal regulatory frameworks have not yet addressed. Regulators in the EU, UK, and US have moved toward requiring safety testing, impact assessments, and transparency disclosures for frontier models, but enforceable standards for evaluating corporate safety culture and internal oversight structures remain nascent. The AI Safety Index methodology draws on domains that overlap significantly with requirements emerging from the EU AI Act, the G7 Hiroshima AI Code of Conduct, and the Bletchley Declaration on AI Safety, positioning it as a de facto reference point for companies seeking to demonstrate alignment with multiple frameworks simultaneously. The specific attention to whistleblowing policies and third-party evaluation transparency also connects to ongoing regulatory discussions in the UK and EU about independent auditing and incident reporting obligations for high-capability AI systems.

Compliance and risk teams at enterprises procuring or deploying frontier AI models should treat the index as an input to vendor due diligence, particularly when assessing the safety culture and oversight maturity of model providers. Organizations subject to the EU AI Act's provisions for high-risk AI systems or for general-purpose AI models designated as posing systemic risk should map the index's six domains against their existing supplier assessment criteria, as the indicators on independent oversight and third-party evaluations bear directly on provider obligations under that regulation. Legal and ethics teams should note that the explicit identification of absent whistleblowing policies may become a procurement risk factor, as regulators in multiple jurisdictions increasingly treat internal reporting mechanisms as a baseline governance requirement. Companies building on models from assessed providers should also track whether those providers respond publicly to the index findings: a pattern of non-response or failed remediation could become relevant to enterprise risk disclosures or board-level AI governance reporting.
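As a minimal sketch of what that domain-to-criteria mapping could look like in practice, the Python snippet below encodes the index's six domains against internal supplier-assessment criteria and flags coverage gaps. The six domain names come from the index itself; the criterion identifiers, vendor scores, and pass threshold are illustrative assumptions, not values drawn from the report or from any real assessment framework.

# Hypothetical vendor due-diligence mapping. The six domain names are taken
# from the AI Safety Index; the criterion IDs, scores, and threshold below
# are illustrative placeholders, not data from the report.

INDEX_DOMAINS = [
    "risk ownership",
    "accountability",
    "independent oversight",
    "safety culture",
    "responsible deployment",
    "transparency",
]

# Assumed internal supplier-assessment criteria keyed by index domain.
# Replace these IDs with your organization's actual criteria.
DOMAIN_TO_CRITERIA = {
    "risk ownership": ["SUP-RISK-01"],
    "accountability": ["SUP-GOV-02"],
    "independent oversight": ["SUP-AUDIT-03", "SUP-AUDIT-04"],
    "safety culture": ["SUP-CULT-05"],      # e.g., whistleblowing policy published?
    "responsible deployment": ["SUP-DEPLOY-06"],
    "transparency": ["SUP-DISC-07"],        # e.g., third-party evaluation disclosure
}

def coverage_gaps(vendor_scores: dict[str, float], threshold: float = 0.5) -> list[str]:
    """Return index domains where the vendor's score is missing or below threshold.

    vendor_scores maps a domain name to a normalized 0-1 score derived from the
    index or from an internal assessment; the 0.5 threshold is arbitrary.
    """
    gaps = []
    for domain in INDEX_DOMAINS:
        score = vendor_scores.get(domain)
        if score is None or score < threshold:
            gaps.append(domain)
    return gaps

if __name__ == "__main__":
    # Illustrative scores only; not taken from the actual index.
    example_vendor = {
        "risk ownership": 0.7,
        "accountability": 0.6,
        "independent oversight": 0.3,
        "safety culture": 0.4,
        "responsible deployment": 0.8,
        # "transparency" left unscored, so it is flagged as a gap
    }
    for domain in coverage_gaps(example_vendor):
        print(f"Gap: {domain} -> review criteria {DOMAIN_TO_CRITERIA[domain]}")

Keeping the mapping in a declarative table like this makes the gap check cheap to re-run as providers publish remediations or as future editions of the index are released.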

AI safety · frontier models · responsible AI · third-party evaluation · corporate accountability