Colorado AI Act (SB 205)
Issued by
Colorado General Assembly; signed by Governor Jared Polis
Colorado's SB 205 is the first US state statute imposing affirmative obligations on developers and deployers of high-risk AI systems, requiring algorithmic impact assessments, transparency notices, and consumer rights for consequential decisions.
Overview
Senate Bill 205, signed into law by Governor Jared Polis on May 17, 2024, makes Colorado the first US state to enact a comprehensive, risk-based AI governance statute. The Act takes effect February 1, 2026. Modeled in part on the EU AI Act's risk-stratified approach, SB 205 focuses on "high-risk artificial intelligence systems": AI systems that make, or substantially influence, "consequential decisions" affecting Colorado consumers in areas such as education, employment, financial services, essential government services, healthcare, housing, insurance, and legal services.

The legislation imposes obligations on both "developers" (those who create or substantially modify high-risk AI) and "deployers" (those who use high-risk AI to make consequential decisions about consumers). Developers must provide deployers with documentation, use-case guidance, and risk information. Deployers must implement AI risk management policies, conduct annual impact assessments, provide pre-adverse-action notices, and offer consumers a meaningful opportunity to appeal or seek human review of adverse consequential decisions.

The Colorado Attorney General holds exclusive enforcement authority; the statute explicitly creates no private right of action. Governor Polis's signing statement expressed concern about the Act's breadth and signaled support for legislative amendments before the effective date to ease compliance for small businesses and startups. Subsequent legislative sessions have considered, but not yet finalized, such amendments.
Key Requirements
- Developers of high-risk AI must provide deployers with intended uses, known limitations, documentation of training data, evaluation metrics, and risk mitigation guidance
- Deployers must adopt and implement a written AI risk management policy aligned to a recognized framework (e.g., the NIST AI RMF)
- Annual algorithmic impact assessments are required for each high-risk AI system, with results retained and available to the Attorney General upon request
- Pre-adverse-action notice must reach consumers before a consequential decision takes effect, disclosing that AI was used and the principal reason(s) for the decision
- Consumers have the right to appeal adverse consequential decisions and to request human review
- Deployers must disclose to consumers when they are interacting with a high-risk AI system
- Developers and deployers must use reasonable care to protect consumers from algorithmic discrimination in consequential decisions
- Developers and deployers must publicly disclose their high-risk AI systems on their websites
- Record retention obligations apply to impact assessments and related documentation
- Enforcement is exclusive to the Colorado Attorney General; there is no private right of action
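The notice-and-appeal requirements above amount to a gate in the decision pipeline: an adverse consequential decision should not be finalized until the consumer has received a notice disclosing AI involvement and the principal reasons, with appeal and human-review paths available. The sketch below is one hypothetical way a deployer might enforce that ordering in code; the names (`DecisionOutcome`, `send_pre_adverse_action_notice`, `finalize_decision`) and the payload fields are illustrative assumptions, not statutory terms, and actual notice content should be reviewed by counsel.

```python
from dataclasses import dataclass


@dataclass
class DecisionOutcome:
    """An AI-influenced consequential decision about one consumer."""
    consumer_id: str
    adverse: bool
    principal_reasons: list  # plain-language reasons driving the outcome


def send_pre_adverse_action_notice(outcome: DecisionOutcome) -> dict:
    """Builds the notice payload; delivery (mail, email, portal) is out of scope."""
    return {
        "consumer_id": outcome.consumer_id,
        "ai_system_used": True,                 # required AI-use disclosure
        "principal_reasons": outcome.principal_reasons,
        "appeal_available": True,               # right to appeal
        "human_review_available": True,         # right to human review
    }


def finalize_decision(outcome: DecisionOutcome, notice_sent: bool) -> bool:
    """Blocks an adverse consequential decision until the notice has gone out."""
    if outcome.adverse and not notice_sent:
        raise RuntimeError("pre-adverse-action notice must precede the decision")
    return True
```

The design choice here is to make the notice a hard precondition rather than a parallel step, so a pipeline bug cannot finalize an adverse outcome with the notice still queued.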
What Your Organization Must Do
- Conduct an inventory of all AI systems by June 2025 to identify which qualify as high-risk under SB 205, focusing on systems that influence consequential decisions in employment, credit, insurance, healthcare, housing, education, or government services for Colorado consumers, and assign a named compliance owner for each system.
- Establish or update a written AI risk management policy aligned to a recognized framework such as the NIST AI RMF before February 1, 2026, ensuring the policy explicitly covers high-risk AI systems and is approved by a senior officer or the board.
- Complete baseline algorithmic impact assessments for every identified high-risk AI system before the February 1, 2026 effective date, then schedule annual reassessments with results retained in a documented record available for Colorado Attorney General review upon request.
- Build pre-adverse-action notice workflows into decision pipelines by January 2026, ensuring notices disclose AI involvement and the principal reason(s) for adverse outcomes before any consequential decision takes effect against a Colorado consumer.
- Implement an appeals and human review process for Colorado consumers who receive adverse consequential decisions, including documented escalation paths, trained reviewers, and defined response timeframes, and test the process end-to-end before go-live.
- If your organization develops high-risk AI, audit vendor and customer contracts by Q3 2025 to ensure required developer disclosures covering intended uses, known limitations, training data documentation, evaluation metrics, and risk mitigation guidance are provided to all deployers, and publish required system disclosures on your public website before February 1, 2026.
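The inventory and triage steps above can be sketched as a minimal compliance helper. Everything here is an illustrative assumption, not statutory language: the schema (`AISystemRecord`), the triage function, and the gap checks are one possible shape for an internal tracker, and a "yes" from the triage means "route to legal review", not a determination that the system is high-risk under SB 205.

```python
from dataclasses import dataclass, field
from enum import Enum, auto


class ConsequentialArea(Enum):
    """Decision areas listed in SB 205's definition of 'consequential decision'."""
    EDUCATION = auto()
    EMPLOYMENT = auto()
    FINANCIAL_SERVICES = auto()
    ESSENTIAL_GOVERNMENT_SERVICES = auto()
    HEALTHCARE = auto()
    HOUSING = auto()
    INSURANCE = auto()
    LEGAL_SERVICES = auto()


@dataclass
class AISystemRecord:
    """One row in the AI system inventory (illustrative schema)."""
    name: str
    compliance_owner: str                       # named owner per system
    affects_colorado_consumers: bool
    decision_areas: set = field(default_factory=set)
    substantially_influences_decision: bool = False
    last_impact_assessment: str = ""            # ISO date of latest assessment, "" if none


def is_potentially_high_risk(record: AISystemRecord) -> bool:
    """First-pass triage only: flags systems that touch a listed decision area,
    substantially influence outcomes, and affect Colorado consumers."""
    return (record.affects_colorado_consumers
            and record.substantially_influences_decision
            and bool(record.decision_areas))


def open_compliance_gaps(record: AISystemRecord) -> list:
    """Lists missing prerequisites for a flagged system."""
    gaps = []
    if is_potentially_high_risk(record):
        if record.compliance_owner == "":
            gaps.append("assign a named compliance owner")
        if record.last_impact_assessment == "":
            gaps.append("complete a baseline impact assessment")
    return gaps
```

A tracker like this makes the "available to the Attorney General upon request" obligation tractable, since every flagged system carries its owner and latest assessment date in one place.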
Frequently Asked Questions
- When does the Colorado AI Act SB 205 take effect and are amendments still possible before then?
- SB 205 takes effect February 1, 2026. Governor Polis's signing statement flagged concerns about compliance complexity, and subsequent legislative sessions have considered amendments, particularly for small businesses and startups. As of now, no finalized amendments have been enacted, so compliance planning should proceed against the current text.
- Which AI systems qualify as high-risk under Colorado SB 205?
- An AI system is high-risk if it makes or substantially influences consequential decisions affecting Colorado consumers in education, employment, financial services, essential government services, healthcare, housing, insurance, or legal services. Systems that merely assist humans without substantially influencing outcomes may fall outside the definition, but that line requires careful legal analysis.
- Does Colorado SB 205 apply to companies headquartered outside Colorado?
- Yes. The Act applies to any developer or deployer whose AI systems make consequential decisions affecting Colorado consumers, regardless of where the company is incorporated or headquartered. Multinational enterprises and out-of-state vendors serving Colorado residents are within scope.
- What is the difference between a developer's obligations and a deployer's obligations under SB 205?
- Developers must supply deployers with documentation covering intended uses, known limitations, training data, evaluation metrics, and risk mitigation guidance, and must publicly disclose high-risk systems on their websites. Deployers must adopt a written AI risk management policy, conduct annual impact assessments, deliver pre-adverse-action notices, and provide consumers with appeal and human review rights.
- Who enforces Colorado SB 205 and can consumers sue companies directly?
- Enforcement is exclusive to the Colorado Attorney General. SB 205 explicitly does not create a private right of action, so individual consumers cannot bring civil suits against developers or deployers for violations. However, companies remain exposed to AG investigations and potential enforcement actions.
- How does Colorado SB 205 compare to the EU AI Act for compliance teams already working on EU obligations?
- Both laws use a risk-stratified approach focused on high-risk AI in similar sectors and require impact assessments, transparency, and human oversight. Key differences include SB 205's exclusive reliance on state AG enforcement rather than a dedicated regulator, no tiered fine structure published in the statute, and a narrower geographic trigger limited to decisions affecting Colorado consumers rather than EU market placement.
