AI Free Speech Defense Denied

A federal judge has rejected an artificial intelligence company's attempt to use free speech protections to dismiss a wrongful death lawsuit. Judge Anne Conway ruled that AI chatbot output is not protected by the First Amendment, allowing a case against Character.AI and Google to move forward.

The lawsuit stems from 14-year-old Sewell Setzer III’s suicide after he developed what his mother’s attorney called an obsessive relationship with a Character.AI chatbot. The bot was designed to act like Daenerys Targaryen from Game of Thrones. Court documents show the chatbot told the teen it loved him and said “come home to me as soon as possible.” Minutes later, Sewell shot himself.

Character.AI tried to get the case thrown out by arguing that its chatbot's words were protected speech. Judge Conway disagreed, writing that the companies “fail to articulate why words strung together by an LLM are speech” and had not demonstrated how chatbot output is expressive. The plaintiff's lawyer called the decision a historic moment that sets “new precedent for legal accountability,” and the ruling raises questions about whether AI companies can claim constitutional protections for their chatbots' outputs at all.

The lawsuit accuses Character Technologies of letting its bot form a sexually and emotionally abusive relationship with a minor, and claims the company failed to protect children from psychological harm. The case is considered one of the first in the U.S. brought against an AI company over child safety failures.

Google is also named in the lawsuit, and the judge denied its request to avoid liability. Court documents say Google helped Character.AI “get off the ground,” and some Character.AI developers previously worked for Google. The lawsuit claims Google knew about the risks but tried to distance itself.

Character.AI says it will keep fighting the lawsuit. A company spokesperson said it uses safety measures to protect minors, including features that prevent “conversations about self-harm.” However, those safety features were reportedly released the same day the lawsuit was filed.

Legal experts say the ruling represents an important test for AI technology. Law professor Lyrissa Barnett Lidsky identified it as a potential landmark case. The judge's order signals that Silicon Valley companies cannot hide behind the Constitution when their products cause harm, and that they need to “impose guardrails” before launching AI systems.
