Character AI, a platform that enables users to engage in roleplay with AI chatbots, has filed a motion to dismiss a lawsuit brought against it by the parent of a teenager who committed suicide, allegedly after becoming emotionally attached to a chatbot on the platform.
The lawsuit, filed in October by Megan Garcia in the U.S. District Court for the Middle District of Florida, Orlando Division, claims that Character AI's technology contributed to the death of her 14-year-old son, Sewell Setzer III. According to Garcia, her son developed an emotional attachment to a chatbot named "Dany" and began to pull away from the real world, ultimately leading to his suicide.
In response to the lawsuit, Character AI has rolled out new safety features, including improved detection, response, and intervention related to chats that violate its terms of service. However, Garcia is pushing for additional guardrails, including changes that might result in chatbots on Character AI losing their ability to tell stories and personal anecdotes.
In its motion to dismiss, Character AI's counsel argues that the platform is protected from liability by the First Amendment, much as computer code has been treated as protected speech. The filing asserts that the First Amendment bars tort liability against media and technology companies for allegedly harmful speech, including speech that allegedly results in suicide. The only difference here, according to Character AI, is that some of the speech involves AI, and that difference does not change the First Amendment analysis.
Notably, the motion does not address whether Character AI might be held harmless under Section 230 of the Communications Decency Act, the federal safe-harbor law that shields social media and other online platforms from liability for third-party content. The law's authors have implied that Section 230 does not cover output generated by AI systems such as Character AI's chatbots, but the question is far from legally settled.
Character AI's counsel also claims that Garcia's real intention is to "shut down" Character AI and prompt legislation regulating technologies like it. Should the plaintiffs prevail, counsel argues, the result would have a "chilling effect" on both Character AI and the entire nascent generative AI industry.
This lawsuit is just one of several that Character AI is facing relating to how minors interact with the AI-generated content on its platform. Other suits allege that Character AI exposed a 9-year-old to "hypersexualized content" and promoted self-harm to a 17-year-old user. Additionally, Texas Attorney General Ken Paxton has launched an investigation into Character AI and 14 other tech firms over alleged violations of the state's online privacy and safety laws for children.
Character AI is part of a booming industry of AI companionship apps whose mental health effects remain largely unstudied. Some experts have expressed concerns that these apps could exacerbate feelings of loneliness and anxiety. Character AI, founded in 2021 by former Google AI researcher Noam Shazeer, says it continues to take steps to improve safety and moderation, including rolling out new safety tools, a separate AI model for teens, blocks on sensitive content, and more prominent disclaimers notifying users that its AI characters are not real people.
The outcome of this lawsuit and the ongoing scrutiny of Character AI's safety and moderation practices will carry significant implications for the broader AI industry, as well as for the millions of users who engage with AI-generated content on the platform. As the case proceeds, it will be worth watching how Character AI's legal arguments evolve and how the court ultimately rules.