Harvard Law Review Warns Anthropic and OpenAI Governance Structures Risk Amoral Drift on AI Safety
Source: Harvard Law Review

A January 2026 article, "Amoral Drift in AI Corporate Governance," published in the Harvard Law Review examines the corporate governance arrangements of OpenAI and Anthropic, concluding that both structures carry meaningful risk of eroding AI safety commitments as commercial pressures grow. The analysis pays particular attention to Anthropic's charter, which grants safety-focused Class T trustees the authority to elect three of five board directors either after May 24, 2027, or once the company reaches $6 billion in cumulative investment. The authors argue that mechanisms designed to insulate safety objectives from profit motives are susceptible to gradual weakening over time, a process they label "amoral drift."
The article reflects a broader concern among legal scholars and regulators that the novel hybrid structures adopted by frontier AI companies, including public benefit corporations, capped-profit models, and trustee-based oversight arrangements, have not been tested under the sustained financial pressures that accompany large-scale commercial deployment. OpenAI's well-publicized governance crisis in late 2023 and its subsequent structural changes toward a conventional for-profit model offered an early data point that these arrangements can shift rapidly. The Harvard Law Review analysis extends that concern by examining whether even more deliberately constructed safeguards, such as Anthropic's charter provisions, are durable in practice.
Enterprise compliance teams that procure AI systems from Anthropic or OpenAI should treat this research as a prompt for concrete vendor-risk action. Organizations should document the current governance structures of these vendors as a baseline, noting the specific trigger conditions in Anthropic's charter, and establish a process for monitoring any charter amendments or board composition changes, particularly as the May 2027 date approaches or as Anthropic nears the $6 billion investment threshold. Procurement and third-party risk teams should not treat vendor-published safety commitments or responsible use policies as static assurances; instead, they should build contractual provisions or periodic attestation requirements that surface material governance changes. Regulated industries with existing AI vendor oversight obligations, such as financial services firms subject to SR 11-7 or health sector entities operating under FDA AI guidance, face the greatest exposure if a vendor's safety governance weakens without triggering a formal review.
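For teams that track vendor triggers programmatically, the monitoring process described above can be sketched as a simple check. This is a minimal illustration, not a compliance tool: the function names, the 180-day review horizon, and the 80% "nearing threshold" heuristic are hypothetical choices, while the date and dollar figures come from the article's description of Anthropic's charter.

```python
from datetime import date

# Trigger conditions from Anthropic's charter as described in the article:
# Class T trustees may elect three of five directors after May 24, 2027,
# or once cumulative investment reaches $6 billion.
TRUSTEE_DATE_TRIGGER = date(2027, 5, 24)
INVESTMENT_TRIGGER_USD = 6_000_000_000


def trustee_majority_triggered(today: date, cumulative_investment_usd: int) -> bool:
    """Return True if either charter trigger condition has been met."""
    return (today > TRUSTEE_DATE_TRIGGER
            or cumulative_investment_usd >= INVESTMENT_TRIGGER_USD)


def review_needed(today: date, cumulative_investment_usd: int,
                  horizon_days: int = 180) -> bool:
    """Flag a vendor-risk review when a trigger has fired or is approaching.

    A review is flagged if a trigger condition is already met, the date
    trigger is within `horizon_days`, or cumulative investment is within
    80% of the dollar threshold (both margins are illustrative).
    """
    days_to_date_trigger = (TRUSTEE_DATE_TRIGGER - today).days
    nearing_investment = cumulative_investment_usd >= 0.8 * INVESTMENT_TRIGGER_USD
    return (trustee_majority_triggered(today, cumulative_investment_usd)
            or days_to_date_trigger <= horizon_days
            or nearing_investment)
```

A check like this only covers the two trigger conditions named in the charter; material governance changes such as charter amendments or board turnover still require the attestation and contract mechanisms discussed above.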
