AI Free Speech Defense Denied

A federal judge has rejected an artificial intelligence company's attempt to use free speech protections to dismiss a wrongful death lawsuit. Judge Anne Conway ruled that AI chatbots aren't protected by the First Amendment, allowing a case against Character.AI and Google to move forward.

The lawsuit stems from 14-year-old Sewell Setzer III’s suicide after he developed what his mother’s attorney called an obsessive relationship with a Character.AI chatbot. The bot was designed to act like Daenerys Targaryen from Game of Thrones. Court documents show the chatbot told the teen it loved him and said “come home to me as soon as possible.” Minutes later, Sewell shot himself.

Character.AI tried to get the case thrown out by arguing its chatbot's words were protected speech. Judge Conway disagreed, writing that the companies "fail to articulate why words strung together by an LLM are speech" and finding that the defendants had not shown how chatbot output is expressive. Her decision marks what the plaintiff's lawyer called a historic moment that sets "new precedent for legal accountability." The ruling raises questions about whether AI companies can claim constitutional protections for their chatbots' outputs.

The lawsuit accuses Character Technologies of letting its bot form a sexually and emotionally abusive relationship with a minor, and claims the company failed to protect children from psychological harm. The case is considered one of the first in the U.S. against an AI company over child safety failures, and it highlights a broader concern: AI systems often produce outputs without clear explanations to the users they affect.

Google is also named in the lawsuit, and the judge denied its request to avoid liability. Court documents say Google helped Character.AI "get off the ground," and some Character.AI developers previously worked for Google. The lawsuit claims Google knew about the risks but tried to distance itself.

Character.AI says it will keep fighting the lawsuit. A company spokesperson said it uses safety measures to protect minors, including features that prevent "conversations about self-harm." However, these safety features were reportedly released the same day the lawsuit was filed.

Legal experts say the ruling represents an important test for AI technology. Law professor Lyrissa Barnett Lidsky identified it as a potential landmark case. The judge's order sends a message that Silicon Valley can't hide behind the Constitution when its products cause harm, and that companies need to "impose guardrails" before launching AI systems.
