The First Comprehensive US State AI Law Is About to Be Gutted and Rebuilt
Here’s what might change, what it means, and what you should actually do about it.
The Colorado AI Policy Work Group, convened by Governor Polis, just released a unanimous framework (yes, it was released via Google Drive) to repeal and replace the original Colorado AI Act (SB 24-205). The Governor’s office announcement frames it as a consensus win. And on the surface, it looks like a simple cleanup. It’s not.
This is the most significant shift in the US AI regulatory landscape since the original law was signed in May 2024. And if you’re building an AI governance program, the implications are both practical and strategic. Let’s walk through what happened, what actually changed, what’s surprising, and what you should do about it.
The Backstory (Quick Version)
In May 2024, Colorado signed SB 24-205 into law. It was modeled loosely on the EU AI Act, focused on preventing “algorithmic discrimination” in high-risk AI systems making consequential decisions in housing, lending, employment, healthcare, education, and insurance.
The law required developers and deployers to exercise “reasonable care” to prevent algorithmic discrimination, conduct annual impact assessments, implement risk management programs aligned with frameworks like NIST AI RMF or ISO 42001, disclose risks to the AG within 90 days, and give consumers robust post-decision transparency and appeal rights. Compliance was tied to a rebuttable presumption of reasonable care.
Governor Polis signed it reluctantly. In the signing letter, he asked legislators to come back and fix it. Industry groups pushed back. The US Chamber of Commerce objected. Palantir eventually cited the law as a factor in moving its headquarters from Denver to Miami.
Then came the special session drama. In August 2025, what was supposed to be a substantive rewrite collapsed after a week of intense lobbying and late-night Capitol negotiations that the ABA’s Business Law Today described as a dramatic showdown complete with backroom deals and last-minute collapses. Despite multiple bills, the only thing that passed was SB 25B-004, a simple find-and-replace: “February 1, 2026” became “June 30, 2026.”
So Polis convened a working group. Consumer advocates, hospitals, school districts, tech companies, venture capitalists—all at the same table, meeting weekly since October, behind closed doors. And yesterday, they delivered.
What Actually Changed
The duty of care is gone. SB 24-205’s core obligation, “reasonable care” to prevent algorithmic discrimination, has been replaced by procedural requirements. Developers provide documentation. Deployers give notice and post-adverse disclosures within 30 days. Consumers get data correction rights and meaningful human review. The operative theory shifted from “prevent discrimination” to “tell people what you’re doing and give them recourse.” This is now a transparency regime, not an anti-discrimination regime.
Impact assessments are gone. No pre-deployment assessment. No annual review. No 90-day modification trigger. Three-year record retention is the new accountability mechanism. If you’ve been building an impact assessment program for Colorado specifically, that mandate just evaporated (EU AI Act, California bias testing rules, and basic defensibility all still demand it).
The scope got surgically narrower. “High-risk AI system” becomes “Covered ADMT” (Automated Decision-Making Technology) that must “materially influence” a consequential decision, defined as acting as a non-de minimis factor that affects the outcome. General-purpose tools like ChatGPT are excluded if they’re not configured for consequential decisions and carry an acceptable use policy prohibiting that use. Your scoping exercise just went from “does this AI touch a consequential decision” to “does it materially change the outcome of one.”
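To make the new scoping test concrete, here’s a minimal sketch of how a governance team might triage its system inventory under the “materially influences” standard. Every name and field here is hypothetical, and the logic reflects my reading of the framework as described above, not statutory language:

```python
from dataclasses import dataclass

# Hypothetical triage helper reflecting my reading of the framework's
# "Covered ADMT" test -- illustrative only, not statutory language.

CONSEQUENTIAL_DOMAINS = {
    "housing", "lending", "employment", "healthcare", "education", "insurance",
}

@dataclass
class AISystem:
    name: str
    domain: str                     # area the system operates in
    materially_influences: bool     # non-de minimis factor affecting the outcome
    configured_for_decisions: bool  # is a general-purpose tool configured for this use?
    aup_prohibits_use: bool         # acceptable use policy bars consequential use

def is_covered_admt(system: AISystem) -> bool:
    """Rough triage: does this system fall in scope under the new framework?"""
    # General-purpose tools are excluded if they're not configured for
    # consequential decisions and their AUP prohibits that use.
    if not system.configured_for_decisions and system.aup_prohibits_use:
        return False
    # Otherwise: in scope only if it materially influences a consequential
    # decision in a covered domain.
    return (system.domain in CONSEQUENTIAL_DOMAINS
            and system.materially_influences)

# A resume ranker that changes outcomes is likely in scope; a general-purpose
# chatbot with a prohibiting AUP is likely not.
screener = AISystem("resume-ranker", "employment", True, True, False)
chatbot = AISystem("general-chatbot", "employment", False, False, True)
print(is_covered_admt(screener))  # True
print(is_covered_admt(chatbot))   # False
```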
Liability got split. Instead of the joint-and-several liability that torpedoed the special session, the framework allocates fault based on relative responsibility under existing anti-discrimination law. Developers are only liable when their tool was used as intended and documented. But here’s the provision that should trigger contract renegotiations: indemnification clauses shielding a party from its own discriminatory acts are void as against public policy. You can’t contract your way out of discrimination you caused.
Enforcement is AG-only with a 90-day cure. No private right of action. The AG gets exclusive authority but must give 90 days to cure before seeking penalties (unless the violation was knowing or repeated). Post-adverse disclosure rules will be defined through AG rulemaking by December 31, 2026.
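Operationally, those clocks are easy to track once you write them down. Here’s a sketch of the framework’s deadlines as calendar logic, with the caveat that the actual trigger events for post-adverse disclosures await AG rulemaking; the functions and constants are mine:

```python
from datetime import date, timedelta

# Hypothetical deadline calculator based on the framework as described above.
# Exact trigger definitions await AG rulemaking (due December 31, 2026).

POST_ADVERSE_DISCLOSURE = timedelta(days=30)   # deployer disclosure window
AG_CURE_PERIOD = timedelta(days=90)            # cure before penalties
RECORD_RETENTION_YEARS = 3                     # replaces impact assessments

def disclosure_deadline(adverse_decision: date) -> date:
    """Latest date to send the post-adverse disclosure to the consumer."""
    return adverse_decision + POST_ADVERSE_DISCLOSURE

def cure_deadline(ag_notice: date) -> date:
    """End of the cure window after AG notice (absent knowing/repeated violations)."""
    return ag_notice + AG_CURE_PERIOD

def retention_end(record_created: date) -> date:
    """Approximate end of the three-year record retention period."""
    return record_created.replace(year=record_created.year + RECORD_RETENTION_YEARS)

print(disclosure_deadline(date(2027, 3, 1)))  # 2027-03-31
print(cure_deadline(date(2027, 3, 1)))        # 2027-05-30
print(retention_end(date(2027, 3, 1)))        # 2030-03-01
```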
What’s Unexpected (Or Revealing)
The consumer advocates agreed. They traded a duty of care, mandatory impact assessments, and algorithmic discrimination as a standalone concept for a transparency-and-notice regime. That’s a massive concession.
“Algorithmic discrimination” is gone from the statute. The term that made Colorado’s law unique doesn’t appear in the new framework. Discrimination liability now flows entirely through existing civil rights law (the Colorado Anti-Discrimination Act). That’s a fundamentally different theory of harm and it makes Colorado look a lot more like every other state.
The timing is strategic. This drops right as the Department of Commerce was supposed to deliver its report identifying “onerous” state AI laws per the Trump EO. By slimming down from an anti-discrimination framework to a transparency regime, Colorado may be making itself a harder target for federal preemption.
What to Do With This
Keep doing impact assessments. The EU AI Act requires them. California makes bias testing relevant to discrimination claims. NIST AI RMF assumes them. One state dropping a mandate doesn’t change the calculus for a defensible program.
Build around transparency as the floor. Colorado is converging with Illinois (AI disclosure, effective January 2026), California’s CCPA automated decision-making rules, and NYC’s Local Law 144. Notice, recourse, and audit trails are the common denominator. Build for that and you’re covered in most places.
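One practical way to act on that: design a single decision record that feeds notice, recourse, and audit obligations everywhere. A minimal sketch with an invented schema, not drawn from any statute:

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# Hypothetical audit-trail record covering the common denominator across
# Colorado's framework, Illinois disclosure, CCPA ADMT rules, and NYC LL 144.
# The schema is illustrative only.

@dataclass
class DecisionRecord:
    system_name: str      # which ADMT produced the output
    decision_type: str    # e.g. "employment screening"
    outcome: str          # e.g. "adverse", "favorable"
    notice_sent_at: str   # when the consumer was told AI was involved
    human_reviewer: str   # who performed meaningful human review
    appeal_channel: str   # how the consumer can contest / correct data

record = DecisionRecord(
    system_name="resume-ranker",
    decision_type="employment screening",
    outcome="adverse",
    notice_sent_at=datetime.now(timezone.utc).isoformat(),
    human_reviewer="hiring-manager@example.com",
    appeal_channel="https://example.com/appeals",
)

# Persist as JSON so the same record can feed disclosure letters and audits.
print(json.dumps(asdict(record), indent=2))
```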
Renegotiate your vendor contracts. Fault allocation based on relative responsibility is where this is heading, and the framework voids indemnification clauses that shield a party from its own discriminatory acts. Your AI procurement contracts need shared accountability, not one-sided risk shifting.
Plan for two timelines. The new framework targets January 1, 2027. But if the bill doesn’t pass the legislature, the original SB 24-205 takes effect June 30, 2026.
Don’t mistake deregulation for derisking. The new framework says it explicitly: using AI doesn’t excuse noncompliance with any existing law. The AI-specific layer got thinner. The foundation didn’t move.
The Bigger Picture
Colorado was the proof of concept for comprehensive state-level AI regulation in the US. Two years later, the comprehensive part is likely being stripped out. The strategic read: the US is not getting a Colorado-style duty of care or mandatory impact assessment regime at the state level anytime soon. Not because nobody wants it but because federal preemption threats, industry lobbying, and interstate competition for tech companies make it politically unsustainable.
What the US is getting is a floor of transparency, notice, and existing civil rights law applied to AI. That means your governance program needs a different foundation than what practitioners expected 18 months ago. The organizations that navigate this well will treat transparency as an accelerant for trust.