Global AI Regulation & Framework Directory

Colorado AI Act (SB 205)

Colorado AI Act (SB 205) · Colorado General Assembly; signed by Governor Jared Polis

Colorado's SB 205 is the first US state statute imposing affirmative obligations on developers and deployers of high-risk AI systems, requiring algorithmic impact assessments, transparency notices, and consumer rights for consequential decisions.

Overview

Senate Bill 205, signed into law by Governor Jared Polis on May 17, 2024, makes Colorado the first US state to enact a comprehensive, risk-based AI governance statute. The Act takes effect February 1, 2026. Modeled in part on the EU AI Act's risk-stratified approach, SB 205 focuses on 'high-risk artificial intelligence systems', defined as AI systems that make or substantially influence 'consequential decisions' affecting Colorado consumers in areas such as education, employment, financial services, essential government services, healthcare, housing, insurance, and legal services.

The legislation imposes obligations on both 'developers' (those who create or substantially modify high-risk AI) and 'deployers' (those who use high-risk AI to make consequential decisions about consumers). Developers must provide deployers with documentation, use-case guidance, and risk information. Deployers must implement AI risk management policies, conduct annual impact assessments, provide pre-adverse-action notices, and offer consumers a meaningful opportunity to appeal or seek human review of adverse consequential decisions.

The Attorney General of Colorado holds exclusive enforcement authority; the statute explicitly states that it does not create a private right of action. Governor Polis's signing statement expressed concerns about the Act's breadth and signaled support for legislative amendments before the effective date to address compliance complexity for small businesses and startups. Subsequent legislative sessions have considered but not yet finalized amendments.

Key Requirements

  • Developers of high-risk AI must provide deployers with: intended uses, known limitations, documentation of training data, evaluation metrics, and risk mitigation guidance
  • Deployers must adopt and implement a written AI risk management policy aligned to a recognized framework (e.g., NIST AI RMF)
  • Annual algorithmic impact assessments required for each high-risk AI system, with results retained and available to the Attorney General upon request
  • Pre-adverse-action notice to consumers before a consequential decision takes effect, disclosing that AI was used and the principal reason(s) for the decision
  • Consumers have the right to appeal adverse consequential decisions and to request human review
  • Deployers must disclose to consumers when they interact with a high-risk AI system
  • Duty to use reasonable care to protect consumers from known or reasonably foreseeable risks of algorithmic discrimination in consequential decisions
  • Developers and deployers must publish a statement on their websites summarizing the types of high-risk AI systems they develop or deploy
  • Record retention obligations for impact assessments and related documentation
  • Exclusive enforcement by the Colorado Attorney General; no private right of action

Who It Affects

  • Developers of AI systems that are used for consequential decisions affecting Colorado consumers
  • Enterprises deploying AI systems in employment, credit, insurance, healthcare, education, housing, or government services contexts in Colorado
  • HR technology vendors providing AI-driven hiring, promotion, or termination tools
  • Financial institutions using AI for credit underwriting or insurance risk scoring
  • Healthcare organizations using AI for clinical decision support
  • Small businesses and startups developing or deploying AI (potential amendment carve-outs under discussion)
  • Multinational enterprises with operations or customers in Colorado

Effective Date

February 1, 2026
