
Colorado's landmark AI anti-discrimination law — the most comprehensive state-level AI consumer protection measure in the United States — will never protect anyone. Before it ever took effect, the law was gutted by a combination of a federal lawsuit from Elon Musk's xAI, an unprecedented Department of Justice intervention, and a replacement bill the Colorado legislature passed 57-6 and 34-1 in the final days of its 2026 session. Governor Jared Polis signed the replacement, Senate Bill 26-189, into law on May 14, 2026. In doing so, he erased the law he had signed two years earlier, replacing it with a framework his office called "a model for the rest of the country."
The law now dying without ever having been enforced is Senate Bill 24-205, signed in May 2024 as the first comprehensive AI governance statute in the country. It would have required companies deploying "high-risk" AI systems — the kind that decide whether you get a job interview, a mortgage, or a healthcare benefit — to conduct bias audits, implement risk management programs, and exercise a duty of care to prevent algorithmic discrimination. None of those obligations will take effect. The replacement swaps all of that for a disclosure framework: companies will be required to tell you, after the fact, that an AI was involved in a decision that went against you.
A Six-Week Collapse: From Imminent Law to Effective Repeal
The original law had already been delayed twice before it collapsed. It was supposed to take effect February 1, 2026, then pushed to June 30, 2026, after an August 2025 special legislative session failed to produce consensus revisions. A working group convened by Governor Polis spent months producing a framework to rewrite it. Then the legal pressure arrived all at once.
On April 9, 2026, xAI — the developer of the Grok large language model — filed suit in the U.S. District Court for the District of Colorado, case number 1:26-cv-01515, challenging the law on First Amendment, Equal Protection, and Dormant Commerce Clause grounds. The company argued that the law's requirement to prevent "algorithmic discrimination" would force it to redesign Grok's outputs to conform to Colorado's preferred viewpoint on fairness, constituting compelled speech. On April 24, 2026, the Department of Justice moved to intervene — the first time the federal government has ever moved to invalidate a state AI law.
The DOJ's complaint focused on the Equal Protection Clause, framing the law's algorithmic fairness requirements as constitutionally mandated discrimination: it argued that any system requiring companies to prevent disparate outcomes for protected groups necessarily forces race- and sex-conscious decisions in violation of the Fourteenth Amendment. Attorney General Pam Bondi stated that "the Justice Department will not stand on the sidelines while states such as Colorado coerce our nation's technological innovators into producing harmful products that advance a radical, far left worldview at odds with the Constitution."
Three days later, on April 27, 2026, a federal magistrate judge stayed enforcement of the law — suspending it entirely while litigation proceeds. The Colorado Attorney General's office, having concluded that no enforcement would occur before rulemaking was complete, stipulated to the stay alongside the plaintiffs. With the June 30 effective date suspended and the legislature running out of session time, Colorado lawmakers moved fast. On May 1, Senate Bill 26-189 was introduced; both chambers passed it within eight days.
What the Replacement Actually Does — and What It Doesn't
Senate Bill 26-189 replaces the "high-risk AI system" framework with a narrower set of rules for what it terms "automated decision-making technology," or ADMT. The shift is not cosmetic.
Gone from the original law: the duty of care for developers and deployers to avoid algorithmic discrimination, mandatory annual impact assessments, risk management program requirements aligned to NIST or ISO standards, and mandatory reporting of discrimination risks to the Colorado Attorney General. Consumer advocates spent years fighting to keep these provisions. Robert Lindgren of the Colorado AFL-CIO put the loss plainly during Senate testimony: "Gone are the risk management requirements, impact assessments, annual reviews and discrimination reporting. It introduces a cure period that lets developers delay accountability and allows discriminatory practices to continue under current law."
The Electronic Privacy Information Center said in a statement after the bill passed that the new law removes "many important safety and testing requirements" from the original, including the duty of care, risk management programs, impact assessments, and reporting requirements. None of those provisions appear in the replacement.
What remains under the new law: deployers of covered ADMT must notify consumers before AI-assisted consequential decisions are made and, within 30 days of an adverse outcome, must provide a plain-language explanation of the system's role and offer a path to human review. The law takes effect January 1, 2027. It is enforced exclusively by the Colorado Attorney General under the Colorado Consumer Protection Act — and it creates no private right of action for individuals.
Senate Majority Leader Robert Rodriguez, the Democratic lawmaker who authored the original 2024 law and sponsored the replacement, said in remarks published by the Governor's office that the new measure "strikes an appropriate balance of protecting consumers while not being onerous on developers or the businesses who use AI technology." In a separate exchange during legislative debate, he described the new bill as "more of a notice bill" compared with the comprehensive approach he had originally pursued.
Democratic Representative Brianna Titone, who sponsored the original 2024 bill and voted for the replacement, acknowledged that workers and consumer rights groups faced an insurmountable obstacle: Governor Polis would not sign a bill without the tech industry's support.
Why the Original Protections Mattered
The obligations that will not take effect were designed specifically around documented harms that AI-driven decisions have already caused.
Michelle Dally, a Colorado resident with a law degree and a doctorate in veterinary medicine, testified before the state Senate Business, Labor, and Technology Committee about her experience searching for work at age 60. "I thought I could pivot into scientific writing and editing, animal public policy or regulatory science," she told lawmakers. "What I didn't bank on was AI standing between me and the hiring committee."
Her experience is not isolated. In Mobley v. Workday, Inc., a federal court in California certified a nationwide collective action in May 2025, allowing millions of job applicants over age 40 to pursue claims that Workday's AI screening platform produced discriminatory outcomes. Workday disclosed in filings that approximately 1.1 billion applications had been rejected through its system during the relevant period. A separate case involving SafeRent Solutions — settled in 2024 for more than $2.2 million — involved a tenant-screening algorithm that had effectively barred a class of minority housing voucher holders from rental housing.
Colorado's original law was constructed to require the companies running systems like these to audit and account for their outcomes before deploying them in consequential decisions. The replacement does not impose that obligation. It requires disclosure after the decision has already been made.
The Federal Preemption Campaign That Also Stalled
The Trump administration's broader campaign to replace the "patchwork" of state AI laws with a single federal standard has had no more legislative success than Colorado's consumer protections had legal durability.
Congress rejected legislative preemption of state AI laws twice. The Senate voted 99-1 to strip a 10-year state AI moratorium from the One Big Beautiful Bill Act before President Trump signed it into law on July 4, 2025. A second attempt, attached to the fiscal year 2026 National Defense Authorization Act, also failed. The administration responded on March 20, 2026, by releasing its National Policy Framework for Artificial Intelligence — a legislative recommendation to Congress calling for a unified federal standard that would bar states from regulating AI development. The framework is not law and creates no immediate compliance obligations.
Executive Order 14365, signed by Trump in December 2025, directed the creation of an AI Litigation Task Force to challenge state laws; the DOJ's intervention in xAI's lawsuit put that directive into action. But the executive order itself cannot preempt state law — federal preemption requires an act of Congress. The administration's other pressure point — conditioning access to $42.45 billion in broadband equity funding on states abandoning "onerous AI laws" — has not produced any confirmed enforcement action, and its constitutionality as a spending condition remains untested in court.
The xAI Lawsuit Isn't Over — and Colorado's New Law Could Face It Too
The xAI lawsuit remains live. Under the terms of the April 27 enforcement stay, xAI may file a preliminary injunction motion within 28 days of the Colorado Attorney General completing rulemaking under either the original law or any successor legislation — which now means SB 26-189. The Electronic Privacy Information Center noted that while the replacement removes many of the specific provisions xAI challenged, the litigation is not necessarily closed.
Across the country, state-level AI governance continues to develop regardless of Colorado's retreat. California's Civil Rights Council finalized regulations addressing AI discrimination in employment, effective October 2025. Illinois amended its Human Rights Act to address AI in employment. Texas passed the Responsible AI Governance Act. Connecticut passed legislation addressing AI in employment decisions. The broader regulatory picture remains fragmented — exactly the condition the Trump administration argues justifies federal action, and exactly what state advocates say proves the need for protections that a deregulatory federal standard would not provide.
For consumers in Colorado today, protections against AI-driven discrimination in hiring, lending, and housing now rest on the same legal foundation they did before the 2024 law passed: existing state and federal anti-discrimination statutes, with no AI-specific audit requirements, no mandatory impact assessments, and no duty of care. The new law requires transparency after adverse decisions. It does not require companies to prevent those decisions from being discriminatory in the first place.
© 2026 TECHTIMES.com All rights reserved. Do not reproduce without permission.