AI Governance Institute

Practical Governance for Enterprise AI

Must Comply · Regulation · US

California AI Transparency Act

Issued by

California State Legislature; administered by the California Attorney General

Live · Effective 2026-01-01 · CAITA · Verified April 2026
Official document →

California's AI Transparency Act (SB 942) requires developers of generative AI systems that reach defined usage thresholds to provide AI detection tools and disclosure mechanisms so that users and consumers can identify AI-generated content. It establishes baseline transparency obligations for covered AI providers operating in or targeting California.

Applies To

Developers and operators of large-scale generative AI systems, including text, image, audio, and video generation platforms, that meet the one million monthly user threshold and whose systems are accessible to California residents. This includes major technology companies, AI platform providers, social media companies with generative AI features, and enterprise software vendors with embedded generative AI capabilities. Downstream enterprises deploying covered third-party AI systems should also assess whether their vendor agreements address CAITA compliance obligations.

Overview

California Senate Bill 942, known as the California AI Transparency Act (CAITA), was signed into law by Governor Gavin Newsom in September 2024. The Act is one of several AI-focused legislative measures California enacted that year, and it represents the state's effort to address the proliferation of AI-generated content, including synthetic text, images, audio, and video, through mandated transparency and disclosure mechanisms.

The law targets developers of generative AI systems with over one million monthly users, requiring them to implement tools that enable users (and, in some contexts, downstream recipients of AI-generated content) to determine whether content was produced by AI. The Act introduces obligations around AI content provenance, requiring covered developers to make available free-of-charge AI detection tools or to embed detectable signals (such as digital watermarks or metadata) in AI-generated outputs. Covered developers must also publish a manifest or disclosure describing the AI systems covered and the detection capabilities available.

CAITA is distinct from and operates alongside other California AI legislation, including AB 2013 (AI training data transparency) and AB 2602 (digital replica protections); SB 1047, which would have imposed safety requirements on large AI models, was vetoed. Enforcement is vested in the California Attorney General, with civil penalty provisions for non-compliant developers. The Act reflects California's broader posture as a de facto national standard-setter for AI regulation in the United States, given the state's market size and the frequency with which California statutes influence enterprise compliance programs nationally and globally.

Key Requirements

  • Covered developers—those operating generative AI systems with one million or more monthly users—must provide a free, publicly accessible AI detection tool capable of identifying content generated by their systems.
  • Covered developers must embed latent disclosure signals (e.g., watermarks, metadata, or provenance information) in AI-generated content where technically feasible.
  • Developers must publish and maintain a disclosure manifest identifying covered AI systems and describing available detection and disclosure mechanisms.
  • Detection tools must be made available at no cost to users and third parties seeking to verify content provenance.
  • Developers must update detection tools and disclosures as their AI systems are materially updated.
  • Civil penalties may be imposed for violations, with enforcement by the California Attorney General.
  • Obligations apply to developers whose systems are accessed by California residents, regardless of the developer's state of incorporation or headquarters location.
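CAITA does not prescribe a technical format for latent disclosures or detection tools, so the sketch below is purely illustrative: it attaches a hypothetical JSON provenance record to generated text, signed with an HMAC so a companion detection check can verify origin and spot tampering. The field names, key handling, and `embed_provenance`/`detect` functions are assumptions for illustration, not statutory requirements.

```python
import hashlib
import hmac
import json

# Hypothetical provider-managed signing key; real deployments would use a
# proper key-management service, not a hard-coded value.
SIGNING_KEY = b"replace-with-provider-managed-key"

def embed_provenance(content: str, system_id: str) -> dict:
    """Attach a latent disclosure record to AI-generated text.

    The record names the generating system and carries an HMAC over the
    record so a detection tool can later verify origin. The schema is
    illustrative only; CAITA does not mandate one.
    """
    record = {
        "generator": system_id,
        "content_sha256": hashlib.sha256(content.encode()).hexdigest(),
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"content": content, "provenance": record}

def detect(output: dict) -> bool:
    """Detection-tool check: verify the provenance record matches the content."""
    record = dict(output["provenance"])
    sig = record.pop("signature")
    payload = json.dumps(record, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    content_ok = (
        record["content_sha256"]
        == hashlib.sha256(output["content"].encode()).hexdigest()
    )
    return hmac.compare_digest(sig, expected) and content_ok
```

In practice, covered developers would more likely adopt an established provenance standard (such as C2PA-style content credentials) than a bespoke scheme, but the shape of the obligation is the same: embed a machine-verifiable signal, and expose a free tool that checks it.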

What Your Organization Must Do

  • Audit all generative AI products and features by Q3 2025 to determine whether any system meets or approaches the one million monthly user threshold, assigning ownership of this assessment to the Chief Compliance Officer and relevant product leads.
  • Engage engineering and product teams immediately to design and implement latent disclosure signals such as watermarks or provenance metadata in AI-generated outputs, ensuring technical feasibility assessments are completed well before the January 1, 2026 effective date.
  • Build and publish a free, publicly accessible AI detection tool for each covered system prior to January 1, 2026, and establish a process for updating these tools whenever the underlying AI system undergoes a material change.
  • Draft and publish a disclosure manifest for each covered generative AI system that identifies the system, describes its detection capabilities, and links to the free detection tool, incorporating a review cycle tied to product release schedules.
  • Review all third-party generative AI vendor contracts by Q3 2025 to confirm that CAITA compliance obligations are addressed, adding representations, warranties, and indemnification provisions where absent.
  • Establish a monitoring protocol with Legal and Government Affairs to track California Attorney General enforcement guidance and any regulatory clarifications, and schedule an annual CAITA compliance review to incorporate updated thresholds or technical standards.
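The disclosure manifest called for in the steps above could start as a small machine-readable document tied to the product release cycle. The JSON schema below is an assumption for illustration; the Act requires the disclosure itself, not any particular file format, and the field names are hypothetical.

```python
import json
from datetime import date

def build_manifest(system_name: str, version: str, detection_tool_url: str) -> str:
    """Draft a CAITA-style disclosure manifest as JSON.

    Identifies the covered system, the latent disclosure mechanisms in use,
    and the free public detection tool. Schema is illustrative only.
    """
    manifest = {
        "system": system_name,
        "version": version,
        "detection_tool": detection_tool_url,       # must be free and public
        "latent_disclosures": ["watermark", "provenance-metadata"],
        "last_reviewed": date.today().isoformat(),  # tie to release review cycle
    }
    return json.dumps(manifest, indent=2)
```

Regenerating and republishing this document on each material system update gives compliance teams a concrete artifact to review against the Act's ongoing-maintenance obligations.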

Playbook Guidance

Step-by-step implementation guidance for compliance teams.

Frequently Asked Questions

Which generative AI developers are covered under CAITA?
Developers of generative AI systems with one million or more monthly users whose systems are accessible to California residents. This applies regardless of where the developer is incorporated or headquartered, so non-California and non-US companies can be covered if their platforms reach California users.
What exactly does CAITA require developers to provide by January 1, 2026?
Covered developers must offer a free, publicly accessible AI detection tool, embed latent disclosure signals such as watermarks or provenance metadata in AI-generated outputs where technically feasible, and publish a disclosure manifest identifying covered systems and available detection mechanisms.
Does CAITA apply to companies headquartered outside California?
Yes. Obligations attach to any developer whose generative AI system is accessible to California residents and meets the one million monthly user threshold, regardless of the company's state of incorporation or physical location.
What are the penalties for failing to comply with the California AI Transparency Act?
CAITA authorizes civil penalties enforced by the California Attorney General. The Act does not create a private right of action, so enforcement runs through the AG's office rather than individual consumer lawsuits.
How does CAITA differ from SB 1047 and AB 2013 from the same legislative session?
CAITA focuses specifically on content transparency and detection obligations for consumer-facing generative AI systems. AB 2013 addresses training data disclosure, while SB 1047 would have imposed safety requirements on large AI model developers but was vetoed. CAITA and AB 2013 operate independently and can apply simultaneously to the same company.
Do enterprise companies that deploy third-party generative AI tools need to worry about CAITA compliance?
Potentially yes. If a third-party generative AI system embedded in an enterprise product meets the usage threshold, the developer of that underlying system bears the primary obligation. However, enterprise buyers should confirm CAITA compliance in vendor contracts and assess whether their deployment materially affects the vendor's user counts.