Daily Intelligence
AI Governance News
Regulations, enforcement actions, research, and opportunities — tracked daily.
Anthropic Releases Claude Opus 4.7 with Documented Cyber Capability Reductions and Safety Evaluations
Anthropic has released Claude Opus 4.7, a general-availability model focused on advanced software engineering tasks, including complex long-running workflows, precise instruction following, and self-verification. The release includes documented safety evaluations and a deliberate reduction in cyber capabilities relative to the earlier Mythos Preview model; Anthropic states that the underlying safeguards were tested on less capable models before deployment. The company has publicly disclosed these capability constraints as part of its corporate safety policy, which specifically targets high-risk application areas such as cybersecurity. For enterprise compliance teams, the release is notable because it demonstrates a voluntary, documented model-level risk mitigation practice that aligns with emerging expectations for transparency and pre-deployment safety assessment under frameworks such as the EU AI Act and the NIST AI RMF. Organizations deploying Claude Opus 4.7 in security-sensitive or software development contexts should review Anthropic's published safety evaluations to support their own internal risk documentation and vendor due diligence obligations.
Anthropic, Google, Microsoft, and OpenAI Launch Frontier Model Forum to Set AI Safety Standards
Anthropic, Google, Microsoft, and OpenAI have jointly established the Frontier Model Forum, an industry body dedicated to advancing safety and responsibility in the development of frontier AI models. The forum will focus on producing technical evaluations, safety benchmarks, and shared best practices drawn from member expertise. Its formation follows the voluntary AI safety commitments announced by the White House and signed by seven major technology companies: Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI. For enterprise compliance teams, the forum signals a growing industry-led standard-setting process that may shape expectations around model evaluation, documentation, and risk disclosure ahead of formal regulatory requirements. Organizations deploying or procuring frontier models should monitor the forum's output, as its benchmarks and best practices could be adopted as reference points by regulators and auditors. The voluntary commitment framework also sets a precedent for government-industry coordination on AI safety obligations.
Anthropic Restricts Deployment of Claude Mythos Preview Citing Offensive Cybersecurity Capabilities
Anthropic has announced Claude Mythos Preview, a frontier general-purpose language model that demonstrates advanced performance on cybersecurity tasks, including vulnerability discovery and exploitation. The model's capabilities prompted Anthropic to restrict its deployment and launch Project Glasswing, an initiative aimed at securing critical software in anticipation of advanced AI-enabled threats. Because the model saturated standard benchmarks, Anthropic supplemented those evaluations with novel real-world security tasks to assess its risk profile before any broader release. The decision to limit availability on cybersecurity grounds signals a maturing approach to capability-based deployment controls at the frontier model level. For enterprise compliance and risk teams, the case illustrates how AI developers may unilaterally restrict access to high-capability models, and it establishes a precedent for treating offensive security capability as a gating criterion for deployment. Organizations relying on Anthropic models or building AI-enabled security tooling should monitor access policy changes and assess their supply chain dependencies accordingly.
Anthropic Restricts Claude Mythos Preview Rollout Citing Cybersecurity and Safety Risks
Anthropic has announced a restricted deployment of its Claude Mythos Preview model, citing cybersecurity concerns and safety risks that the company determined made a full public release inadvisable. The model achieved a GPQA score of 0.9, placing it among the highest-performing frontier models assessed to date, but Anthropic concluded that its capabilities posed unacceptable risks without further safeguards. Anthropic described the move as the first voluntary deployment restriction of its kind since OpenAI's limited release of GPT-2 in 2019, which set the precedent for staged rollouts of high-capability systems. For enterprise compliance teams, this decision signals that leading AI developers are applying internal capability thresholds that can result in models being withheld from commercial availability regardless of market demand. Organizations building procurement and vendor risk frameworks should account for the possibility that frontier model access may be contingent on developer-imposed safety determinations, not only regulatory requirements. Compliance professionals monitoring AI supply chain risk and third-party vendor policies should track how Anthropic communicates its safety criteria and any conditions under which broader access may be granted.
Anthropic Restricts Claude Mythos Preview Rollout Over Cybersecurity Safety Concerns
Anthropic has applied deployment restrictions to Claude Mythos Preview, a model in its Claude series with advanced reasoning capabilities comparable to the Opus and Sonnet lines, citing cybersecurity safety concerns identified during red-teaming evaluations. The restricted rollout reflects a deliberate governance decision to limit access ahead of broader release, after internal safety testing flagged potential cybersecurity risks associated with the model's capabilities. For enterprise compliance teams, this action signals that leading AI developers are operationalizing pre-deployment safety gates that can delay or constrain the commercial availability of frontier models. Organizations that have integrated, or plan to integrate, Claude-series models into their workflows should monitor vendor communications to understand which model versions are accessible and under what conditions. The restriction also underscores the growing importance of supplier-side AI governance disclosures as part of third-party risk management programs.
Harvard Law Study Finds Only One-Third of S&P 100 Companies Disclose Both Board AI Oversight Structures and Formal AI Policies
A Harvard Law School analysis of 2025 proxy statements from S&P 100 companies found that 54% disclose board-level AI oversight, but only one-third disclose both oversight structures and formal AI policies, revealing uneven governance practices across large U.S. public companies. Of companies that do disclose board oversight, 63% assign responsibility to specific committees rather than the full board. The research also documents that U.S. institutional investors are increasing expectations for formalized AI governance, with 46% favoring board or committee-based oversight mechanisms. For enterprise compliance teams, the findings establish a de facto market benchmark: companies lacking both a documented oversight structure and a formal AI policy are increasingly out of step with investor expectations and peer disclosure norms. Compliance and governance officers at public companies should assess current proxy disclosures against these emerging standards, particularly as the SEC and institutional shareholders intensify scrutiny of AI risk management disclosures.
Anthropic Removes Founding Safety Pause Commitment in Responsible Scaling Policy Version 3.0
Anthropic released version 3.0 of its Responsible Scaling Policy (RSP) in February 2026, eliminating the company's original commitment to pause AI development if safety could not be guaranteed in advance. The safety pause provision had been a defining feature of Anthropic's voluntary governance framework since the company introduced the RSP in 2023. The removal marks a material shift in how Anthropic's self-imposed development constraints are structured, moving away from a precautionary halt mechanism toward an updated framework whose specific replacement controls have not been fully detailed in public reporting. For enterprise compliance teams, this change is relevant to vendor risk assessments and third-party AI governance reviews, as Anthropic's RSP has been cited by organizations as evidence of supplier-level safety commitments when procuring or integrating Claude-based products. Compliance teams that reference Anthropic's published governance commitments in internal risk documentation, procurement due diligence, or regulatory disclosures should review whether those references remain accurate under the new policy version.
NACD Issues Board-Level Guidance on AI Governance Structures and Emerging Executive Roles
The National Association of Corporate Directors (NACD) has published governance guidance urging U.S. company boards to refine their oversight structures to address the specific risks posed by AI adoption, including deepfakes, data leakage, and algorithmic bias. The guidance frames AI governance as a distinct discipline from conventional IT governance, given that AI systems are probabilistic and require continuous monitoring rather than one-time validation. NACD also forecasts that roles such as Chief Data Officer and Chief AI Officer will become standard components of corporate leadership by 2025, signaling an expectation of dedicated executive accountability for AI risk. For enterprise compliance teams, the guidance reinforces that board-level AI oversight is increasingly viewed as a governance baseline, not an optional enhancement. Compliance officers should anticipate requests from boards for structured AI risk reporting frameworks and clear accountability mapping across AI-related functions.
UK Renames AI Safety Institute as AI Security Institute, Shifting Focus to National Security Risks
The UK government rebranded its AI Safety Institute as the AI Security Institute in February 2025, signaling a strategic reorientation toward national security threats and AI misuse risks. The institution, originally established following the Bletchley Park AI Safety Summit in November 2023, retains its research and evaluation functions but will now prioritize risks tied to hostile state actors, critical infrastructure threats, and the potential weaponization of AI systems. The rebrand reflects an evolving posture within UK AI governance, moving from a broad safety mandate toward a harder-edged security focus. For enterprise compliance teams operating in or with the UK, this shift indicates that future government guidance, evaluations, and enforcement priorities may increasingly address security-specific AI risks rather than general safety or ethics concerns. Organizations in sectors such as defense, critical national infrastructure, and financial services should monitor how this institutional pivot influences upcoming UK AI policy instruments and procurement requirements.
NACD Calls on Boards to Overhaul AI Oversight Structures Amid Deepfake and Data Leak Risks
The National Association of Corporate Directors (NACD) published guidance in January 2025 urging U.S. corporate boards to refine existing oversight mechanisms to address AI-specific governance failures. The guidance cites real-world incidents involving AI-generated deepfakes, confidential data leaks, and algorithmic bias as evidence that current board structures are inadequate for AI risk. NACD identifies a cross-functional leadership model as central to effective AI governance, placing the Chief AI Officer in coordination with the Chief Risk Officer, Chief Compliance Officer, Chief Legal Officer, and Chief Data Officer. For enterprise compliance teams, the guidance signals growing boardroom pressure to formalize AI accountability chains and integrate AI risk into existing enterprise risk management frameworks. Compliance professionals should expect boards to request clearer reporting lines, defined AI risk tolerances, and documented incident response protocols as standard governance requirements.
NACD 2025 Governance Outlook Calls on Boards to Strengthen AI Oversight as Incidents Rise
The National Association of Corporate Directors (NACD) has published its 2025 Governance Outlook, urging corporate boards in the United States to adapt their oversight structures for AI adoption in response to a measurable rise in AI-related incidents. According to the AI Incident Database, AI incidents increased 26% between 2022 and 2023, with a further increase exceeding 32% in 2024. The guidance identifies hallucinations, bias, and data privacy failures as the primary risk areas and calls for tailored governance frameworks and updated board reporting structures to address them. While non-binding, the guidance signals growing director-level accountability expectations that enterprise compliance and risk teams should factor into internal AI governance programs. Compliance professionals should note that board-level engagement on AI risk is increasingly treated as a baseline governance expectation, with implications for how responsible AI policies are documented, escalated, and reported to senior leadership.
22 Chinese AI Companies Sign Voluntary Security and Safety Commitments Under China AI Industry Alliance
In December 2024, the China AI Industry Alliance coordinated the signing of AI Security and Safety Commitments by 17 major Chinese technology companies, a group that has since grown to 22 signatories. Participants include Alibaba, Baidu, ByteDance, Huawei, and Tencent. The commitments establish a voluntary framework for managing AI risks across development and deployment; 18 of the signatories have publicly disclosed their AI security and safety practices. While non-binding, the initiative signals that Chinese regulators and industry bodies are using voluntary disclosure mechanisms alongside China's existing binding AI regulations, creating a layered compliance environment that multinational enterprises operating in or partnering with Chinese AI companies should monitor.
NTIA Releases AI Accountability Policy Report Recommending Mandatory Audits, Disclosures, and Red Teaming Standards
The National Telecommunications and Information Administration (NTIA) published its AI Accountability Policy Report in March 2024, setting out U.S. government recommendations to strengthen oversight of artificial intelligence systems. The report calls for mandatory AI audits, public disclosures, and liability rules, and advocates federal investment in tools, standards, and research supporting AI testing, evaluation, and red teaming. NTIA also recommends amending existing regulations to require these practices across sectors, signaling a potential shift toward binding accountability mechanisms at the federal level. Although the report is non-binding, it represents an authoritative statement of policy direction that enterprise compliance teams should track as a precursor to formal rulemaking. Organizations operating AI systems in U.S. markets should use the report's framework to benchmark their current audit, disclosure, and testing practices against emerging federal expectations.
