AI Free Speech Defense Denied

A federal judge has rejected an artificial intelligence company’s attempt to use free speech protections to dismiss a wrongful death lawsuit. Judge Anne Conway ruled that AI chatbots aren’t protected by the First Amendment, allowing a case against Character.AI and Google to move forward.

The lawsuit stems from 14-year-old Sewell Setzer III’s suicide after he developed what his mother’s attorney called an obsessive relationship with a Character.AI chatbot. The bot was designed to act like Daenerys Targaryen from Game of Thrones. Court documents show the chatbot told the teen it loved him and said “come home to me as soon as possible.” Minutes later, Sewell shot himself.

Character.AI tried to get the case thrown out by arguing that its chatbot’s words were protected speech. Judge Conway disagreed, writing that the companies “fail to articulate why words strung together by an LLM are speech” and had not shown how chatbot output is expressive. The plaintiff’s lawyer called the decision a historic moment that sets “new precedent for legal accountability.” The ruling raises the broader question of whether AI companies can claim constitutional protections for their chatbots’ outputs.

The lawsuit accuses Character Technologies of letting its bot form a sexually and emotionally abusive relationship with a minor and of failing to protect children from psychological harm. It is considered one of the first cases in the U.S. against an AI company over child safety failures, and it feeds a broader ethical concern: AI systems often shape users’ behavior without offering any clear explanation of how they work.

Google is also named in the lawsuit, and the judge denied its request to be dismissed from the case. Court documents say Google helped Character.AI “get off the ground,” and some Character.AI developers previously worked for Google. The lawsuit claims Google knew about the risks but tried to distance itself.

Character.AI says it’ll keep fighting the lawsuit. A company spokesperson said it uses safety measures to protect minors, including features that prevent “conversations about self-harm.” However, those safety features were reportedly released the same day the lawsuit was filed.

Legal experts say the ruling is an important test for AI technology. Law professor Lyrissa Barnett Lidsky called it a potential landmark case, saying the judge’s order sends a message that Silicon Valley can’t hide behind the Constitution when its products cause harm, and that companies need to “impose guardrails” before launching AI systems.
