AI Governance Institute


Research · Global · 2026-05-06

Seattle University Law Review Article Identifies Three Structural Gaps Where AI Undermines Existing Corporate Governance Frameworks

The article "AI, New Technologies, and Corporate Governance: Three Phenomena," published in Volume 47, Issue 5 of the Seattle University Law Review, presents a legal-structural analysis of how AI deployment is outpacing the assumptions embedded in corporate governance doctrine. The article focuses on three discrete phenomena that collectively destabilize traditional notions of firm accountability: first, the erosion of firm boundaries as organizations increasingly rely on externally provided AI services rather than internally developed systems; second, the emergence of strategic access arrangements in which enterprises use AI capabilities without owning the underlying assets or models; and third, the hybrid nature of online platforms that simultaneously operate as infrastructure providers and competitive market actors. The authors contend that each phenomenon creates accountability gaps that neither corporate law nor existing sector-specific regulatory frameworks adequately address. The analysis is not centered on any single jurisdiction; the article frames these as global structural conditions.

The article enters a crowded but still-developing conversation about the limits of existing regulatory architecture when applied to AI-driven enterprises. The problem it addresses is not new in substance but is growing in practical urgency: as enterprise AI deployment increasingly relies on third-party foundation models, cloud-hosted inference services, and platform intermediaries, the legal and governance structures designed around ownership, control, and clear firm boundaries become progressively less effective. This structural mismatch is reflected across multiple active regulatory efforts, including the EU AI Act's attempts to assign liability along supply chains and the emerging body of third-party AI risk guidance from financial regulators. The article's framing of platforms as both facilitators and market participants is directly relevant to ongoing discussions under the EU Digital Services Act and proposed frameworks such as the EU Digital Networks Act, which are grappling with similar dual-role conflicts. The scholarship adds a corporate governance lens to what has predominantly been a public law and consumer protection conversation.

For enterprise compliance teams, the article surfaces several concrete risk areas that warrant immediate attention. Legal and governance functions should audit which AI capabilities their organizations access through third-party services rather than own, and assess whether existing vendor contracts, liability clauses, and oversight mechanisms are sufficient given the accountability gaps the article describes. Risk teams at organizations that rely on platform intermediaries for AI services should examine whether those platforms' dual market roles create conflicts of interest that could affect service reliability, pricing, or data handling, and whether those risks are reflected in enterprise risk registers. Board-level governance committees and general counsel offices should consider whether their current frameworks for AI oversight assume a model of internal AI development that no longer matches operational reality, and whether updates to governance charters, audit committee mandates, or third-party risk policies are warranted. Organizations operating across multiple jurisdictions should track how legislative developments, particularly in the EU and at the U.S. federal level, are beginning to codify supply chain accountability in ways that will create affirmative compliance obligations tied directly to the structural conditions this article analyzes.

corporate governance · third-party AI risk · platform regulation · AI governance · supply chain accountability