Colorado Is Gutting Its Own AI Consumer Protection Law — and Industry Is Cheering
April 13, 2026 · 3 min read
Colorado's AI Policy Work Group wants to repeal and replace the state's landmark AI consumer protection law with a weaker framework. Critics say industry lobbying is winning over consumer privacy.
Colorado made headlines in 2024 when Governor Jared Polis signed the Colorado Artificial Intelligence Act (SB24-205), making it one of the most comprehensive AI consumer protection laws in the United States. Now, less than two years later, the state is moving to repeal and replace its own landmark legislation — and privacy advocates say the replacement is a step backward.
What Happened
On March 17, 2026, the Colorado AI Policy Work Group — convened by the governor himself — proposed a new legislative framework that would replace the existing AI Act. The proposal shifts the regulatory focus from "high-risk AI systems" to a broader but arguably weaker category called "Covered Automated Decision-Making Technology" (Covered ADMT).
The original law imposed a duty of care on developers and deployers of high-risk AI systems to protect consumers from algorithmic discrimination. It required risk management policies, impact assessments, annual reviews, consumer notifications, and opportunities for data correction. Enforcement was originally slated to begin February 1, 2026, before lawmakers pushed the effective date to June 30, 2026.
Why Critics Are Concerned
Privacy and consumer advocacy groups worry that the replacement framework waters down key protections. The shift from specifically targeting "high-risk AI" to the broader "Covered ADMT" category could create ambiguity about which systems are actually covered. And the new framework's requirements, while still including transparency and reasonable care standards, may lack the teeth of the original law's specific mandates.
The timing is also suspect. The delayed law was set to take effect on June 30, 2026. Repealing and replacing it now effectively resets the compliance clock, giving AI companies still more time to operate without oversight.
The Industry Influence
Tech industry lobbying has been intense. AI companies argued that the original law was too prescriptive, too expensive to comply with, and would drive innovation out of Colorado. The Policy Work Group's proposal appears to reflect many of these industry concerns.
This pattern — pass a strong law, face industry pushback, then weaken it before enforcement begins — is becoming disturbingly common in tech regulation. It raises the question of whether any state-level AI protection can survive a sustained lobbying campaign.
What This Means
Colorado's retreat is a cautionary tale. Even in states with the political will to regulate AI, industry pressure can erode protections before they ever take effect. Combined with the federal push to preempt state laws entirely, the regulatory landscape for AI privacy is getting bleaker.
This is exactly why platform-level privacy matters. Laws can be weakened, repealed, or preempted. But if your AI platform doesn't collect your data in the first place, no change in legislation can retroactively expose your conversations. That's the GPTAnon approach: privacy through architecture, not regulation.