Harvard Ethics Center Introduces 'Boundaries of Tolerance' Framework as US AI Deregulation Shifts Risk to Private Sector
Source
Harvard Ethics

The Harvard Business School Ethics Center published AI Governance Crossroads: America's AI Action Plan and Its Impact on Businesses on November 1, 2025, examining how the US AI Action Plan restructures AI risk responsibility by moving it away from federal oversight and onto private sector organizations. The analysis argues that the current US deregulatory posture leaves companies without mature regulatory guardrails and increases the burden on internal governance functions. To address this gap, the research introduces the Boundaries of Tolerance Framework, a structured methodology designed to help organizations identify, document, and operationalize the AI-related risks they are prepared to accept across three dimensions: ethics, accountability, and governance.
The publication reflects a broader tension in US AI policy that has intensified throughout 2025. The AI Action Plan, which replaced the Biden-era executive order on AI safety, substantially reduced federal prescriptive requirements and emphasized innovation over regulatory constraint. The Harvard Ethics Center analysis responds directly to this shift, noting that the absence of binding federal rules does not eliminate organizational risk exposure. Instead, it transfers the task of defining acceptable risk thresholds from regulators to compliance and governance teams inside individual organizations, a dynamic that the framework is designed to make tractable.
For enterprise compliance teams, the immediate practical priority is to avoid treating the absence of federal mandates as permission to defer internal AI governance work. Organizations should use the Boundaries of Tolerance Framework as a starting point to formally document their risk tolerance positions across ethics, accountability, and governance dimensions before those positions are tested by an incident or a regulator in another jurisdiction. Multinational organizations face a compounding challenge: the US deregulatory stance must be reconciled with the EU AI Act, which imposes binding obligations on prohibited uses, high-risk systems, and transparency requirements with enforcement deadlines beginning in 2025 and extending through 2027. Compliance teams operating across both jurisdictions should map their AI inventories against both frameworks and identify where internal thresholds will need to meet or exceed the more prescriptive EU standard.
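The cross-jurisdiction mapping exercise described above can be sketched in code. The sketch below is illustrative only: the tier labels loosely mirror the EU AI Act's risk categories, but the specific names, rankings, and example systems are hypothetical assumptions, not the Act's legal text or the Harvard framework's own schema.

```python
from dataclasses import dataclass

# Hypothetical ordinal ranking of risk tiers, loosely mirroring the EU AI Act's
# categories (prohibited > high-risk > transparency-only > minimal). The labels
# and ordering are assumptions for illustration.
EU_TIERS = {"prohibited": 3, "high_risk": 2, "transparency": 1, "minimal": 0}

@dataclass
class AISystem:
    name: str
    eu_tier: str        # tier the system would fall under per an EU AI Act assessment
    internal_tier: str  # tier implied by the organization's documented risk tolerance

def governance_gaps(inventory):
    """Return systems whose internal threshold is weaker than the EU tier,
    i.e. where internal governance must be raised to at least match the Act."""
    return [
        s.name for s in inventory
        if EU_TIERS[s.internal_tier] < EU_TIERS[s.eu_tier]
    ]

# Hypothetical inventory entries for a multinational deployment.
inventory = [
    AISystem("resume-screener", eu_tier="high_risk", internal_tier="transparency"),
    AISystem("support-chatbot", eu_tier="transparency", internal_tier="transparency"),
]
print(governance_gaps(inventory))  # → ['resume-screener']
```

A real mapping would draw tiers from a legal assessment of each system rather than hard-coded labels, but the comparison logic, flagging every system where the internal threshold falls below the more prescriptive EU standard, is the core of the exercise.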
