gptAnon
AI Privacy Blog

The States Aren't Waiting: A Roundup of AI Laws Coming for Your Chatbots, Jobs, and Data in 2026

April 9, 2026 · 6 min read

While Congress debates, states are acting. Colorado, California, Washington, and South Dakota are all moving on AI regulation — from algorithmic discrimination rules to chatbot protections for kids. Here's what you need to know.

Washington Is Broken. The States Are Not.

If you're waiting for Congress to pass comprehensive AI regulation, you might be waiting a long time. But that doesn't mean nothing is happening. Across the country, state legislators are crafting their own AI rules — and some of them will directly affect how you use chatbots, how companies make decisions about your life, and what protections your kids have online.

Here's where things stand in the states leading the charge.

Colorado: The Algorithmic Discrimination Law

What it is: Colorado's SB 24-205, signed in May 2024, is the first comprehensive, risk-based AI regulation in the United States. It targets "high-risk" AI systems — those used to make consequential decisions about people's employment, housing, healthcare, education, insurance, financial services, or legal matters.

What it requires: Companies that build or use high-risk AI must implement risk management policies, conduct bias audits, complete impact assessments, notify consumers when AI is used in decisions affecting them, and provide a process to contest adverse AI decisions. If a company discovers algorithmic discrimination, it must notify the state attorney general within 90 days.

The delay: Originally scheduled to take effect in February 2026, the law was pushed back to June 30, 2026, after lawmakers acknowledged businesses needed more time to prepare. Further revisions are still being considered.

Why it matters to you: If you apply for a job, a loan, an apartment, or insurance in Colorado, and an AI system plays a role in the decision, you'll have the right to know about it and to challenge the outcome. Violations are enforceable by the attorney general under the state's consumer protection law, with penalties of up to $20,000 per violation.

The wildcard: In December 2025, President Trump signed an executive order that could preempt state AI laws deemed inconsistent with federal policy. Colorado's law was specifically called out. The legal battle over federal vs. state authority on AI regulation is just beginning.

California: Transparency and Frontier AI Rules

California didn't pass just one AI law in 2025 — it passed seven. The two most significant:

The Training Data Transparency Act (AB 2013): Effective January 1, 2026, this law requires developers of generative AI systems intended for public use in California to publish high-level information about their training data. This is a big deal because training data has been one of the most closely guarded secrets in the AI industry. Companies like OpenAI and Google have been reluctant to disclose what text, images, and other content they used to train their models.

The Frontier AI Transparency Act (SB 53): Also effective January 1, 2026, this targets the biggest AI developers — those with annual revenue exceeding $500 million. It requires disclosure of risk management protocols and transparency reports about their most powerful models.

Why it matters to you: California's laws tend to set the standard nationally because companies find it easier to comply everywhere than to maintain separate systems for one state. If these transparency requirements stick, you'll eventually have much more visibility into how the AI tools you use were built and what data they were trained on.

Washington: Protecting Kids from AI Companions

Washington's HB 2225, signed by Governor Bob Ferguson on March 24, 2026, is one of the first laws specifically targeting AI companion chatbots — the kind of conversational AI that people (especially young people) form emotional relationships with.

Key protections for minors: Companies must disclose to users under 18 that they are talking to an AI (not a human) at least once every hour. If an operator knows a user is a minor, it must prevent the chatbot from generating explicit sexual content or using manipulative engagement techniques. AI chatbots cannot encourage self-harm, suicide, or eating disorders, and companies must create protocols for flagging dangerous conversations and connecting users with real mental health services.

The enforcement date: January 1, 2027 — giving companies time to implement the required changes.

Why it matters to you: If you have kids who use AI chatbots (and statistically, many teenagers do), Washington is setting the template for how these tools should be regulated. Expect other states to follow with similar legislation. The law also includes a private right of action, meaning families can sue companies that violate these protections.

The concern: Critics note the law is vague about age verification requirements. It imposes duties when an operator "knows" a user is a minor but doesn't specify what steps companies should take to determine age — a gap that could be exploited.

South Dakota: Deepfakes and AI-Generated Abuse

South Dakota might not be the first state you'd expect on the AI regulation frontier, but it's been surprisingly active:

Election deepfakes (2025): Governor Rhoden signed a law targeting AI deepfakes of political candidates within 90 days of an election. Violations are a Class 1 misdemeanor, punishable by up to one year of imprisonment and a $2,000 fine. Importantly, the law exempts satire and parody.

AI-generated pornography (2026): The state unanimously approved a bill prohibiting the creation and distribution of AI-generated pornographic material depicting non-consenting individuals. This is a direct response to the explosion of non-consensual deepfake pornography, which disproportionately targets women.

Why it matters to you: South Dakota's approach focuses on the most clearly harmful uses of AI — election manipulation and non-consensual intimate imagery. These are areas where broad public consensus exists, and the laws are likely to be replicated widely.

The Federal Wildcard

All of these state efforts exist under a growing cloud of uncertainty. President Trump's December 2025 executive order proposed a uniform federal AI policy framework that would preempt state laws deemed inconsistent. The executive order specifically mentioned Colorado's SB 24-205 and could potentially invalidate other state AI regulations as well.

The question of whether federal policy will preempt state AI laws is heading for the courts. In the meantime, states keep legislating — partly because they don't trust Washington to act, and partly because their constituents are demanding protection now.

What to Watch For

The patchwork of state AI laws creates a complicated compliance landscape for companies, but it also means innovation in regulation. Different states are trying different approaches, and we'll learn which ones actually work.

For regular people, the key takeaway is that your rights regarding AI are increasingly determined by where you live. If you're in Colorado, you have algorithmic discrimination protections. If you're in California, you'll get training data transparency. If you're in Washington, your kids have chatbot safety protections. And if you're in a state that hasn't acted yet? You're largely on your own — at least until Congress decides to do something about it.

Read without being tracked

GPTAnon lets you chat with AI models — ChatGPT, Claude, Gemini, and more — without creating accounts or having your conversations logged.

Start chatting anonymously →