
Question 28 of 34

What AI regulations apply to a US-based SaaS company?

Mapping the federal, state, and international AI regulatory requirements that apply to US SaaS companies offering AI features, based on use case and customer location.

If you only do 3 things, do this:

  1. Your EU customer base determines whether the EU AI Act applies to you. If you have EU customers, map your AI features against the risk tiers; this is not optional.
  2. Use case drives US obligations more than geography. Employment, credit, healthcare, and education AI are each governed by specific rules regardless of where your company is based.
  3. Colorado SB 205, Illinois BIPA, and NYC Local Law 144 are the state-level rules most likely to affect US SaaS companies today, and more are coming.

The Situation

Who this is for: Legal, compliance, and product teams at US SaaS companies offering AI features or building AI-powered products

When you need this: When building AI features, entering new markets, or when a customer asks about regulatory compliance

The Decision

Which AI regulations actually apply to our product and customer base, and what do they require us to do?

The Steps

  1. Map your customer locations: US-only, EU, UK, or global; this determines jurisdictional scope
  2. Map your AI use cases: employment, credit, healthcare, education, and financial services each have sector-specific rules
  3. For US customers: check applicable state laws (Colorado SB 205 for high-risk AI deployers; Illinois BIPA for biometric data; NYC Local Law 144 for hiring AI)
  4. For EU customers: assess EU AI Act risk tiers for your features and review GDPR automated decision-making obligations under Article 22
  5. For any use case involving personal data decisions: assess FCRA, CCPA, and applicable state privacy laws
  6. Build a regulatory applicability matrix and update it when you add use cases or enter new markets (a minimal sketch follows this list)
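To make step 6 concrete, here is a minimal sketch of what a regulatory applicability matrix could look like in code, assuming a simple in-house representation. The `FeatureProfile` fields, trigger conditions, and the short rule list are illustrative assumptions drawn from the laws named in this playbook; they are not exhaustive and are not legal advice.

```python
from dataclasses import dataclass

# Illustrative sketch of a regulatory applicability matrix.
# The trigger conditions below are simplified assumptions, not legal advice.

@dataclass
class FeatureProfile:
    name: str
    use_cases: set[str]        # e.g. {"employment", "credit", "marketing"}
    jurisdictions: set[str]    # where affected customers are, e.g. {"US-CO", "US-NYC", "EU"}
    processes_biometrics: bool = False

# Each entry pairs a regulation with a rough trigger condition.
RULES = [
    ("Colorado SB 205", lambda f: "US-CO" in f.jurisdictions and bool(
        f.use_cases & {"employment", "education", "credit", "housing", "healthcare", "legal"})),
    ("Illinois BIPA", lambda f: "US-IL" in f.jurisdictions and f.processes_biometrics),
    ("NYC Local Law 144", lambda f: "US-NYC" in f.jurisdictions and "employment" in f.use_cases),
    ("EU AI Act", lambda f: "EU" in f.jurisdictions),
    ("GDPR Art. 22", lambda f: "EU" in f.jurisdictions and bool(
        f.use_cases & {"employment", "credit", "insurance"})),
    ("FCRA", lambda f: bool(f.use_cases & {"credit", "employment", "housing", "insurance"})),
]

def applicable_regulations(feature: FeatureProfile) -> list[str]:
    """Return the regulations whose trigger conditions match this feature."""
    return [name for name, triggers in RULES if triggers(feature)]

if __name__ == "__main__":
    resume_screener = FeatureProfile(
        name="resume-screening",
        use_cases={"employment"},
        jurisdictions={"US-CO", "US-NYC", "EU"},
    )
    print(applicable_regulations(resume_screener))
    # ['Colorado SB 205', 'NYC Local Law 144', 'EU AI Act', 'GDPR Art. 22', 'FCRA']
```

Keeping the matrix in a single reviewable file like this makes the update step straightforward: adding a feature or entering a new market is a one-line diff, and the output can be exported into the documented regulatory map described below.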

The Artifacts

  • US AI regulatory landscape map (federal + key state laws)
  • Sector-specific AI rule checklist (financial services, healthcare, employment)
  • EU AI Act applicability assessment for SaaS providers
  • Regulatory applicability matrix template (product features × applicable regulations)

The Output

A documented regulatory map for your specific product and customer base, identifying which rules apply, what they require, and a process to update it as you expand.

Federal-level rules and agency guidance

At the federal level, there is no comprehensive US AI regulation equivalent to the EU AI Act. Instead, existing laws apply to AI through agency guidance and enforcement. The FTC has issued guidance on AI in advertising, consumer products, and fraud, and has brought enforcement actions against AI-related deceptive practices. The EEOC has published guidance on AI in employment decisions under Title VII and the ADA. The CFPB applies FCRA and ECOA requirements to AI used in credit decisions.

For SaaS companies, the most immediately relevant federal rules are the FCRA (if your AI influences credit, employment, housing, or insurance decisions), Title VII and ADA guidance (if your AI is used in hiring or HR contexts), and FTC consumer protection requirements (if your AI makes consumer-facing claims). The SEC has signaled that AI-related disclosures in public filings are subject to existing materiality requirements.

State laws that matter today

Colorado's AI Act (SB 205) is the most comprehensive US state AI law enacted to date. It applies to developers and deployers of high-risk AI systems that affect Colorado consumers, requiring impact assessments, transparency notices, and protections against algorithmic discrimination. If your product makes consequential decisions about Colorado residents in employment, education, credit, housing, healthcare, or legal services, Colorado SB 205 likely applies.

Illinois' Biometric Information Privacy Act (BIPA) applies to AI that processes biometric data, including facial recognition. Violations have resulted in significant class action settlements. NYC Local Law 144 requires bias audits by independent auditors for automated employment decision tools used in hiring decisions affecting New York City residents. California has multiple AI-related bills progressing through the legislature. Building to the most stringent applicable standard — typically Colorado plus NYC for US-focused SaaS — positions you well against most incoming state requirements.

EU and international exposure

If you have European customers, the EU AI Act applies to you regardless of where your company is based. The key question is whether your AI features would be classified as high-risk under Annex III. Features used in employment decisions (resume screening, performance evaluation), credit or insurance decisions, or education are likely high-risk. Features used in customer service, marketing, or productivity tools are typically limited or minimal risk, with only transparency obligations.
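As a rough illustration of that triage, the sketch below maps the use-case categories mentioned above to a provisional tier. The category lists and obligation summaries are simplifying assumptions; an actual Annex III classification needs legal review.

```python
# Rough first-pass triage of EU AI Act exposure by use case.
# Category lists are assumptions drawn from the Annex III examples above,
# not an authoritative classification.

HIGH_RISK = {"employment", "credit", "insurance", "education"}
TRANSPARENCY_ONLY = {"customer_service", "marketing", "productivity"}

def eu_ai_act_tier(use_case: str) -> str:
    """Return a provisional risk tier and the obligations it implies."""
    if use_case in HIGH_RISK:
        return "high-risk (Annex III): full provider/deployer obligations"
    if use_case in TRANSPARENCY_ONLY:
        return "limited or minimal risk: transparency obligations only"
    return "unclassified: review against Annex III before launch"

print(eu_ai_act_tier("employment"))   # high-risk (Annex III): ...
print(eu_ai_act_tier("marketing"))    # limited or minimal risk: ...
```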

GDPR Article 22 restricts solely automated decision-making that produces legal or similarly significant effects on EU data subjects. If your SaaS product makes or significantly influences decisions about EU users, you need a compliant legal basis, a process for human review on request, and the ability to explain the decision. Many SaaS companies have this obligation but have not implemented the required processes.