US Judge Rejects AI Free Speech Defence in Teen Chatbot Death Lawsuit

A U.S. federal judge has ruled that a wrongful death lawsuit against AI firm Character.AI can proceed following the suicide of a teenage boy.

The lawsuit was filed by Megan Garcia, a mother from Florida. She claims that her 14-year-old son, Sewell Setzer III, was drawn by one of the firm’s chatbots into an emotionally and sexually abusive relationship that ultimately led to his death by suicide.

The lawsuit alleges that in the final months of his life, Setzer became increasingly isolated from reality as he engaged in sexualised conversations with the bot, which was patterned after a fictional character from the television show ‘Game of Thrones’.

In his final moments, the bot told Setzer it loved him and urged the teenager to “return home to me at your earliest convenience,” according to screenshots of their conversation.

Moments after receiving the message, Setzer took his own life, according to legal filings.

Meetali Jain from the Tech Justice Law Project, who represents Garcia, stated that the judge’s directive conveys a strong message to Silicon Valley to “pause and reflect and establish safeguards prior to releasing their products into the marketplace.”

Character.AI states it holds safety in high regard.

The firm attempted to claim protection under the First Amendment of the US Constitution, which safeguards essential liberties for Americans, including the right to free speech.

Lawyers for the developers sought to have the case dismissed, arguing that chatbots are entitled to these First Amendment protections and warning that a ruling to the contrary could stifle innovation in the AI sector.

In her ruling on Wednesday, US Senior District Judge Anne Conway rejected some of the defendants’ free speech claims, saying she is not prepared to conclude that the content generated by these chatbots qualifies as speech “at present”.

A representative from Character.AI said the firm has introduced several safety measures, including safeguards for minors and suicide prevention resources, which were announced on the same day the lawsuit was filed.


The well-being of our users matters greatly to us, and we aim to offer an environment that is both captivating and secure.

Character.AI statement

“The well-being of our users matters greatly to us, and we aim to offer an environment that is both captivating and secure,” the statement read.

The lawsuit targeting Character Technologies, the firm behind Character.AI, additionally implicates several individual developers along with Google as defendants.

José Castañeda, speaking for Google, conveyed to the Associated Press that the company “firmly disagrees” with Judge Conway’s ruling.

The statement noted that Google and Character.AI operate entirely independently, and that Google had no role in creating, designing, or managing Character.AI’s app or any of its components.

A possible ‘case study’ for wider AI concerns

The case has drawn the attention of legal experts and AI watchers in the U.S. and internationally, as the fast-moving technology reshapes jobs, markets, and relationships even as specialists warn of potentially existential risks.


This serves as a cautionary message to parents that social media platforms and generative AI tools aren’t necessarily benign.

Lyrissa Barnett Lidsky
Legal Studies Professor, University of Florida

“The ruling definitely positions this as a possible precedent for larger concerns related to artificial intelligence,” said Lyrissa Barnett Lidsky, a law professor at the University of Florida specialising in the First Amendment and AI. “This serves as a cautionary message to parents that social media platforms and generative AI tools aren’t necessarily benign.”

Regardless of how the lawsuit unfolds, Lidsky says the case highlights “the risks of handing over our emotional and mental well-being to artificial intelligence firms.”

