aigovernance.com

Global AI Regulation & Framework Directory

FTC AI Enforcement Policy

FTC AI Enforcement · Federal Trade Commission (FTC), United States

The FTC's AI enforcement posture, articulated through policy statements, guidance documents, and enforcement actions, applies existing consumer protection and competition statutes to AI-related harms, including deceptive AI claims, discriminatory automated decisions, and unfair data practices underpinning AI systems.

Overview

The Federal Trade Commission does not administer a single codified AI statute. Instead, it has developed an AI enforcement posture through a series of authoritative policy statements, guidance publications, and enforcement actions that apply the FTC's existing statutory authorities, principally Section 5 of the FTC Act (prohibiting unfair or deceptive acts or practices), the Fair Credit Reporting Act (FCRA), the Equal Credit Opportunity Act (ECOA), and the Children's Online Privacy Protection Act (COPPA), to AI-related conduct.

The FTC's AI enforcement framework crystallised through several key publications: 'Using Artificial Intelligence and Algorithms' (April 2020), 'Aiming for Truth, Fairness, and Equity in Your Company's Use of AI' (April 2021), 'Loot Boxes, AI, and Kids' (2022), and the comprehensive FTC report 'Protecting Privacy in an Era of Generative AI.' The Commission has also brought enforcement actions with explicit AI dimensions, including actions related to AI-powered facial recognition, algorithmic price-fixing, and deceptive claims about AI product capabilities.

The FTC's AI enforcement priorities cluster around five areas:

  • deceptive or unsubstantiated claims about AI capabilities;
  • discriminatory outcomes from algorithmic systems in credit, employment, housing, and public accommodation;
  • privacy violations in the collection, use, and retention of data that trains or informs AI systems;
  • AI-enabled manipulation of consumers, particularly vulnerable populations; and
  • anti-competitive conduct involving AI, including exclusive dealing arrangements around AI inputs such as compute, data, and talent.

The FTC has also issued specific guidance warning against the use of AI for deceptive synthetic media, impersonation, and fake review generation. Its enforcement authority is supplemented at the federal level by sector-specific AI guidance from the CFPB, EEOC, and DOJ, with which the FTC has issued joint statements on AI and algorithmic discrimination.

Key Requirements

  • Ensure all public claims about AI system capabilities, accuracy, and safety are truthful, substantiated, and not misleading under Section 5 of the FTC Act
  • Conduct pre-deployment bias and discrimination audits for AI systems used in credit, employment, housing, or public accommodation contexts to satisfy FCRA, ECOA, and analogous authorities
  • Maintain robust data minimisation and purpose limitation practices for personal data used to train or operate AI systems, consistent with FTC privacy guidance
  • Implement human review mechanisms for adverse action decisions driven by algorithmic systems, consistent with FCRA adverse action notice requirements
  • Do not use AI systems to generate fake reviews, fake endorsements, or synthetic testimonials in violation of FTC endorsement and testimonial guidance
  • Do not deploy AI-driven dark patterns or manipulative design to exploit consumer decision-making
  • Disclose material AI use to consumers where non-disclosure would constitute deception, including in customer service, content generation, and decision-support contexts
  • Assess and mitigate anti-competitive effects of AI-related exclusive dealing, data hoarding, or bundling practices
  • Implement vendor due diligence for third-party AI tools and datasets to avoid derivative liability for discriminatory or deceptive AI outputs
  • Retain records sufficient to demonstrate compliance with applicable consumer protection and anti-discrimination obligations in AI contexts
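The pre-deployment bias audit above can be illustrated with a minimal sketch. One common screening heuristic is the 'four-fifths rule' drawn from EEOC selection guidance: flag for review any system whose selection rate for the least-favoured group falls below 80% of the rate for the most-favoured group. The function names, threshold, and example data here are purely illustrative; this is a first-pass screen under stated assumptions, not an FTC-mandated methodology or a legal test.

```python
from collections import Counter

def selection_rates(outcomes):
    """Compute per-group selection rates from (group, selected) pairs."""
    totals, selected = Counter(), Counter()
    for group, was_selected in outcomes:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def adverse_impact_ratio(rates):
    """Ratio of the lowest group selection rate to the highest.

    Values below 0.8 flag potential disparate impact under the
    four-fifths rule (a screening heuristic, not a legal test).
    """
    lo, hi = min(rates.values()), max(rates.values())
    return lo / hi if hi else 0.0

# Hypothetical loan-approval outcomes by applicant group
outcomes = ([("A", True)] * 60 + [("A", False)] * 40
            + [("B", True)] * 40 + [("B", False)] * 60)
rates = selection_rates(outcomes)   # A: 0.60, B: 0.40
ratio = adverse_impact_ratio(rates) # 0.40 / 0.60 ≈ 0.67 → flag for review
```

A ratio at or above 0.8 does not establish compliance, and one below it does not establish a violation; in practice such a screen would be one input to the broader audit, documentation, and human-review practices the list above describes.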

Effective Date

2021-01-01
