First Conviction Under the TAKE IT DOWN Act: Ohio Man Sentenced for AI Deepfake Abuse
April 13, 2026 · 3 min read
James Strahler II of Ohio became the first person convicted under the federal TAKE IT DOWN Act for creating AI deepfakes of adults and minors. The landmark case signals real enforcement of AI abuse laws.
In a landmark moment for AI accountability, James Strahler II, a 37-year-old man from Columbus, Ohio, became the first person convicted under the federal TAKE IT DOWN Act on April 8, 2026. The conviction marks the first real enforcement of a law designed to combat the growing epidemic of AI-generated non-consensual intimate imagery.
The Crime
Between December 2024 and June 2025, Strahler used artificial intelligence to create pornographic videos and images targeting real people in his community. He generated explicit deepfake content of adult victims and distributed it to their coworkers as a form of harassment. Even more disturbingly, he used the faces of local minors to create child sexual abuse material.
Over the course of his crimes, Strahler installed more than 24 AI applications and more than 100 AI models on his devices, generating and uploading more than 700 illicit images to websites dedicated to child sexual exploitation.
The Law
The TAKE IT DOWN Act was introduced by Senator Ted Cruz in June 2024, passed both chambers of Congress by near-unanimous votes, and was signed into law by President Trump on May 19, 2025. The law criminalizes the knowing publication of non-consensual intimate imagery, including AI-generated deepfakes, and requires online platforms to remove such content within 48 hours of a valid request from a victim.
First Lady Melania Trump, who championed the legislation, praised the conviction as proof that the law is working as intended.
Why This Matters for AI Privacy
The Strahler conviction sends a powerful message: AI-generated abuse carries real legal consequences. For too long, deepfake technology has outpaced the law, leaving victims with little recourse. The case establishes that federal prosecutors are willing and able to bring charges under the new statute.
But the case also highlights a darker reality: the tools Strahler used are widely available. More than 24 AI apps and 100 models, all within reach of an ordinary person, were enough to generate hundreds of pieces of abusive content. As long as these tools remain this accessible, the problem will only grow.
The Bigger Picture
As AI becomes more powerful and accessible, the potential for abuse grows exponentially. The TAKE IT DOWN Act is an important step, but it addresses symptoms rather than root causes. We need broader conversations about AI safety, content authentication, and the responsibility of AI developers to build safeguards into their tools.
At GPTAnon, we believe technology should empower people, not exploit them. That means building AI tools that respect privacy, dignity, and consent by design.