AI Governance Institute

Practical Governance for Enterprise AI

Voluntary Guideline · US

FTC AI Enforcement Policy

Issued by

Federal Trade Commission (FTC), United States

Effective 2021-01-01 · Verified April 2026
Official document →

The FTC's AI enforcement posture, articulated through policy statements, guidance documents, and enforcement actions, applies existing consumer protection and competition statutes to AI-related harms including deceptive AI claims, discriminatory automated decisions, and unfair data practices underpinning AI systems.

Applies To

US-based enterprises and foreign companies subject to FTC jurisdiction that develop, market, or deploy AI systems affecting US consumers. Particularly high exposure for: consumer-facing technology companies; financial services firms using AI in credit or insurance decisions; employers using AI in hiring or workforce management; healthcare companies using AI in patient-facing applications; and companies making public claims about AI product capabilities. Applies to both AI developers and deploying businesses.

Overview

The Federal Trade Commission does not administer a single codified AI statute. Instead, it has developed an AI enforcement posture through a series of authoritative policy statements, guidance publications, and enforcement actions that apply the FTC's existing statutory authorities, principally Section 5 of the FTC Act (prohibiting unfair or deceptive acts or practices), the Fair Credit Reporting Act (FCRA), the Equal Credit Opportunity Act (ECOA), and the Children's Online Privacy Protection Act (COPPA), to AI-related conduct.

The FTC's AI enforcement framework crystallized through several key publications: 'Using Artificial Intelligence and Algorithms' (April 2020), 'Aiming for Truth, Fairness, and Equity in Your Company's Use of AI' (April 2021), 'Loot Boxes, AI, and Kids' (2022), and the comprehensive FTC report 'Protecting Privacy in an Era of Generative AI.' The Commission has also brought enforcement actions with explicit AI dimensions, including actions related to AI-powered facial recognition, algorithmic price-fixing, and deceptive claims about AI product capabilities.

The FTC's AI enforcement priorities cluster around five areas: deceptive or unsubstantiated claims about AI capabilities; discriminatory outcomes from algorithmic systems in credit, employment, housing, and public accommodation; privacy violations in the collection, use, and retention of data that trains or informs AI systems; AI-enabled manipulation of consumers, particularly vulnerable populations; and anti-competitive conduct involving AI, including exclusive dealing arrangements around AI inputs such as compute, data, and talent. The FTC has also issued specific guidance warning against the use of AI to engage in deceptive synthetic media, impersonation, and fake review generation. Enforcement authority is supplemented at the federal level by sector-specific AI guidance from the CFPB, EEOC, and DOJ, with whom the FTC has issued joint statements on AI and algorithmic discrimination.

Key Requirements

  • Ensure all public claims about AI system capabilities, accuracy, and safety are truthful, substantiated, and not misleading under Section 5 of the FTC Act
  • Conduct pre-deployment bias and discrimination audits for AI systems used in credit, employment, housing, or public accommodation contexts to satisfy FCRA, ECOA, and analogous authorities
  • Maintain robust data minimization and purpose limitation practices for personal data used to train or operate AI systems, consistent with FTC privacy guidance
  • Implement human review mechanisms for adverse action decisions driven by algorithmic systems, consistent with FCRA adverse action notice requirements
  • Do not use AI systems to generate fake reviews, fake endorsements, or synthetic testimonials in violation of FTC endorsement and testimonial guidance
  • Do not deploy AI-driven dark patterns or manipulative design to exploit consumer decision-making
  • Disclose material AI use to consumers where non-disclosure would constitute deception, including in customer service, content generation, and decision-support contexts
  • Assess and mitigate anti-competitive effects of AI-related exclusive dealing, data hoarding, or bundling practices
  • Implement vendor due diligence for third-party AI tools and datasets to avoid derivative liability for discriminatory or deceptive AI outputs
  • Retain records sufficient to demonstrate compliance with applicable consumer protection and anti-discrimination obligations in AI contexts

What Your Organization Must Do

  • Audit all marketing materials, product documentation, and public statements referencing AI capabilities against Section 5 substantiation standards; assign the Chief Marketing Officer and General Counsel to certify accuracy of AI claims on a quarterly basis.
  • Commission pre-deployment bias audits for any AI system used in credit, employment, housing, or public accommodation decisions, using independent third-party auditors where possible, and document findings before any system goes live.
  • Implement FCRA-compliant adverse action notice workflows for all algorithmically driven decisions, including a human review step that can override automated outputs and generate individualized notices within the statutory timeframe.
  • Establish a data governance policy that enforces data minimization and purpose limitation for all personal data used to train or operate AI systems; require the Chief Privacy Officer to sign off on new training dataset acquisitions before use.
  • Conduct vendor due diligence reviews for all third-party AI tools and datasets, including contractual representations on non-discrimination and accuracy, and schedule annual reassessments to identify inherited liability risks.
  • Prohibit use of AI to generate fake reviews, synthetic endorsements, or deceptive impersonations by issuing a written internal policy, distributing it to all product and marketing teams, and logging acknowledgment of receipt within 30 days.
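One common screening heuristic for the pre-deployment bias audits described above is the four-fifths (80%) rule from the EEOC's Uniform Guidelines on Employee Selection Procedures: a group whose selection rate falls below 80% of the highest group's rate is flagged for further review. A minimal sketch, with hypothetical group labels; a real audit would go well beyond this single ratio:

```python
def selection_rate(selected: int, total: int) -> float:
    """Fraction of applicants in a group who received the favorable outcome."""
    return selected / total if total else 0.0

def adverse_impact_ratios(group_rates: dict[str, float]) -> dict[str, float]:
    """Each group's selection rate relative to the highest-rate group."""
    best = max(group_rates.values())
    return {g: (r / best if best else 0.0) for g, r in group_rates.items()}

def flagged_groups(ratios: dict[str, float], threshold: float = 0.8) -> list[str]:
    """Groups falling below the four-fifths threshold, for human follow-up."""
    return [g for g, r in ratios.items() if r < threshold]
```

A flag from this screen is a trigger for deeper statistical and legal analysis, not a finding of discrimination; conversely, passing the four-fifths screen does not by itself establish compliance.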

Playbook Guidance

Step-by-step implementation guidance for compliance teams.

Frequently Asked Questions

Does the FTC AI Enforcement Policy apply to foreign companies selling AI products to US consumers?
Yes. The FTC's jurisdiction extends to foreign companies whose AI products or services affect US consumers. If your company markets or deploys AI systems that reach US consumers, FTC Act Section 5 and associated statutes such as the FCRA and ECOA can apply regardless of where you are incorporated.
What specific AI claims can trigger an FTC enforcement action under Section 5 of the FTC Act?
Any public claim about an AI system's accuracy, safety, fairness, or capabilities that is unsubstantiated or misleading can trigger Section 5 liability. This includes overstating a model's accuracy rates, falsely claiming bias audits were conducted, or misrepresenting how consumer data is used to train the system.
Does the FTC require companies to notify consumers of adverse decisions made by AI or algorithmic systems?
Yes, where the FCRA applies. Companies using algorithmic systems in credit, insurance, employment, or tenant screening must provide adverse action notices that identify the reasons for the decision. The FTC has signaled that human review mechanisms capable of overriding automated outputs are expected as part of a compliant workflow.
How does FTC AI enforcement differ from the EU AI Act?
The EU AI Act is a single codified statute with tiered risk classifications, mandatory conformity assessments, and a dedicated enforcement authority. FTC AI enforcement applies existing statutes such as the FTC Act and FCRA to AI conduct on a case-by-case basis, with no formal risk tier system and no standalone AI compliance certification requirement.
Can a company face FTC liability for discriminatory outputs produced by a third-party AI tool it did not build?
Yes. The FTC's guidance makes clear that deploying businesses bear responsibility for discriminatory or deceptive outcomes from third-party AI tools used in their products or decisions. Vendor due diligence, contractual protections, and ongoing monitoring of third-party AI outputs are all expected risk mitigation steps.
What penalties can the FTC impose for AI-related violations?
The FTC can seek civil penalties of up to $51,744 per violation per day for certain statutory violations, injunctive relief, and consumer redress through federal court. In practice, FTC AI-related settlements have included consent orders requiring algorithmic audits, deletion of improperly collected data, and restrictions on future AI product deployments.