Character AI Faces Lawsuits, Unveils New Teen Safety Tools Amid Criticism

Riley King

December 12, 2024 · 4 min read

Character AI, a Google-backed startup, is facing at least two lawsuits that accuse the company of contributing to a teen's suicide and exposing a 9-year-old to "hypersexualized content." Amid these ongoing legal battles and widespread user criticism, the company has announced a suite of new teen safety tools aimed at improving safety and accountability on the platform.

The new features include a separate model for under-18 users, designed to reduce the likelihood of teens receiving inappropriate responses. The company is also implementing input and output blocks on sensitive topics, as well as a notification that alerts users after prolonged continuous use. Additionally, Character AI will display more prominent disclaimers reminding users that its AI characters are not real people.

The platform, which allows users to create and interact with AI characters over calls and texts, has faced criticism for its handling of sensitive topics. With more than 20 million monthly users, the company has come under fire for allegedly promoting self-harm and inappropriate content to minors. The lawsuits, which have emerged in recent months, underscore the need for Character AI to take concrete steps to address these concerns.

One of the most significant changes announced today is the new model for under-18 users, which will dial down its responses to certain topics such as violence and romance. The company claims this new model will reduce the likelihood of teens receiving inappropriate responses. Character AI is also developing new classifiers on both the input and output ends, especially for teens, to block sensitive content.
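Mechanically, an input/output block of this kind usually means running a moderation classifier over both the user's message and the model's draft reply before either is shown. The Python sketch below is a minimal, hypothetical illustration of that pattern; the `classify` function, label set, and thresholds are assumptions for the sake of the example, not Character AI's actual implementation.

```python
# Hypothetical sketch of an input/output moderation gate; the classifier,
# labels, and thresholds are illustrative assumptions, not Character AI's system.

BLOCKED_LABELS = {"self_harm", "sexual_content", "graphic_violence"}
TEEN_THRESHOLD = 0.30   # stricter cutoff for under-18 accounts
ADULT_THRESHOLD = 0.70


def classify(text: str) -> dict[str, float]:
    """Toy stand-in for a trained moderation model: scores each label by
    keyword presence. A production system would use an ML classifier."""
    keywords = {
        "self_harm": ("hurt myself", "suicide"),
        "sexual_content": ("explicit",),
        "graphic_violence": ("gore",),
    }
    lowered = text.lower()
    return {
        label: 1.0 if any(word in lowered for word in words) else 0.0
        for label, words in keywords.items()
    }


def is_allowed(text: str, is_minor: bool) -> bool:
    threshold = TEEN_THRESHOLD if is_minor else ADULT_THRESHOLD
    scores = classify(text)
    return all(scores.get(label, 0.0) < threshold for label in BLOCKED_LABELS)


def respond(user_message: str, generate, is_minor: bool = True) -> str:
    # Input-side block: screen the prompt before it reaches the model.
    if not is_allowed(user_message, is_minor):
        return "This topic isn't available in this conversation."
    draft = generate(user_message)
    # Output-side block: screen the reply in case the model drifts there anyway.
    if not is_allowed(draft, is_minor):
        return "This topic isn't available in this conversation."
    return draft
```

Applying a stricter threshold for minors mirrors the company's description of a more conservative under-18 model, though how it actually tunes that trade-off has not been disclosed.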

In addition to these content tweaks, the startup is also working on improving ways to detect language related to self-harm and suicide. In some cases, the app may display a pop-up with information about the National Suicide Prevention Lifeline. Character AI is also releasing a time-out notification that will appear when a user engages with the app for 60 minutes. In the future, the company plans to let adult users adjust the time limit associated with the notification.
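The time-out notification itself is a simpler mechanism: track how long a session has run and surface a reminder once it crosses the threshold. Below is a rough sketch under that assumption; the 60-minute default comes from the announcement, while the `SessionTimer` class and its adult-only adjustable limit are hypothetical.

```python
import time

DEFAULT_LIMIT_MINUTES = 60  # threshold cited in the announcement


class SessionTimer:
    """Hypothetical session tracker, not Character AI's implementation."""

    def __init__(self, limit_minutes: int = DEFAULT_LIMIT_MINUTES, is_adult: bool = False):
        # Per the announcement, only adult users may eventually adjust the limit.
        self.limit_seconds = (limit_minutes if is_adult else DEFAULT_LIMIT_MINUTES) * 60
        self.started_at = time.monotonic()
        self.notified = False

    def check(self) -> str | None:
        """Call on each interaction; returns a one-time reminder once
        continuous usage passes the limit."""
        elapsed = time.monotonic() - self.started_at
        if not self.notified and elapsed >= self.limit_seconds:
            self.notified = True
            return "You've been chatting for a while. Consider taking a break."
        return None
```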

According to data from analytics firm Sensor Tower, the average Character AI app user spent 98 minutes per day on the app throughout this year, which is much higher than the 60-minute notification limit. This level of engagement is on par with TikTok (95 minutes/day), and higher than YouTube (80 minutes/day), Talkie and Chai (63 minutes/day), and Replika (28 minutes/day).

Users will also see new disclaimers in their conversations, aimed at clarifying that AI characters are not real people. The company will now show language indicating that users shouldn't rely on these characters for professional advice. The move comes in response to a recently filed lawsuit, which included evidence of characters telling users they were real.

In the coming months, Character AI is set to launch its first set of parental controls, offering insights into time spent on the platform and which characters children talk to the most. Acting CEO Dominic Perella characterized Character AI as an entertainment company rather than an AI companion service, emphasizing the need to evolve its safety practices to be "first class."

Perella noted that the company is trying to create more multicharacter storytelling formats, which should reduce the likelihood of users forming a bond with any single character. He acknowledged that it's okay to have a more personal conversation with an AI in certain cases, such as rehearsing a tough conversation with a parent or talking about coming out to someone. However, he emphasized the importance of guarding against conversations that take a problematic or dangerous direction.

The platform's head of trust and safety, Jerry Ruoti, emphasized that the company intends to create a safe conversation space, continuously building and updating classifiers to block topics like non-consensual sexual content or graphic descriptions of sexual acts. Although Character AI positions itself as a platform for storytelling and entertainment, its guardrails can't prevent users from having deeply personal conversations altogether. That means the company's only real option is to keep refining its AI models to identify potentially harmful content, while hoping to avoid serious mishaps.
