Future of Life Institute Releases Summer 2025 AI Safety Index, Rating Seven Major AI Companies Across 33 Indicators
Source: Future of Life Institute

The Future of Life Institute published its Summer 2025 AI Safety Index on July 15, 2025, evaluating seven leading AI companies across 33 indicators of responsible development organized into six domains, including risk ownership, accountability, and oversight. The index does not publicly identify all evaluated companies in its raw findings, but it singles out DeepMind with specific recommendations: better coordination between safety and policy teams, greater transparency in third-party evaluations, and publication of risk assessments in model cards. The report's scope is global, making it relevant to compliance professionals operating across multiple jurisdictions.
The index was published against a backdrop of growing concern that AI developers are not translating stated safety commitments into verifiable operational practices. Regulators in the European Union, United Kingdom, and United States have all signaled interest in whether the voluntary safety frameworks adopted by major AI companies correspond to measurable internal controls, and the index is designed to provide an independent, structured answer to that question. By assessing companies against 33 discrete indicators, the Future of Life Institute's methodology reflects an industry-wide push to move AI governance assessment beyond high-level policy statements toward auditable, documented evidence of practice.
Enterprise compliance teams that procure AI systems or maintain vendor relationships with any of the seven evaluated companies should treat this index as a potential reference point for third-party due diligence assessments, investor inquiries, and regulatory reviews. The specific recommendations directed at DeepMind around model cards and third-party audit transparency signal areas where documentation standards may tighten across the industry; procurement teams should therefore review existing vendor contracts for provisions related to model documentation and audit rights. Compliance officers should also monitor whether future editions of the index, or regulatory bodies citing it, begin to establish de facto disclosure expectations that influence supplier qualification processes or AI system procurement criteria.
