Character.AI Retrains Chatbots to Restrict Romantic Conversations With Minors
Character.AI is rolling out a separate teen-focused model, crisis-resource redirection, and upcoming parental controls after lawsuits alleged its bots contributed to self-harm among young users.
Taylor Brooks
Character.AI, a chatbot service, has taken significant steps to ensure the safety of its teenage users. In a recent announcement, the company revealed that it has retrained its chatbots to prevent them from engaging in romantic or sensitive conversations with minors. This move comes after the platform faced scrutiny and lawsuits alleging that it contributed to self-harm and suicide among its young users.
The new safety measures include the development of a separate large language model (LLM) specifically designed for users under 18. This teen LLM is programmed to place more conservative limits on how bots can respond, particularly when it comes to romantic content. The system will also more aggressively block output that could be sensitive or suggestive, and attempt to better detect and block user prompts that are meant to elicit inappropriate content.
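Character.AI has not published implementation details, but age-gated model routing with tiered output filtering of the kind described above can be sketched roughly as follows. All names, thresholds, and scoring functions here are hypothetical illustrations, not the company's actual system:

```python
from dataclasses import dataclass

# Hypothetical moderation thresholds; actual values are not public.
ADULT_ROMANCE_THRESHOLD = 0.8   # adults: block only highly explicit output
TEEN_ROMANCE_THRESHOLD = 0.2    # minors: block far more conservatively

@dataclass
class User:
    user_id: str
    age: int

def select_model(user: User) -> str:
    """Route users under 18 to a separately trained, more restrictive model."""
    return "teen-llm" if user.age < 18 else "general-llm"

def should_block(user: User, romance_score: float) -> bool:
    """Block output whose romantic/suggestive score exceeds the age tier's limit.

    romance_score is assumed to come from an upstream classifier in [0, 1].
    """
    limit = TEEN_ROMANCE_THRESHOLD if user.age < 18 else ADULT_ROMANCE_THRESHOLD
    return romance_score > limit
```

The key design point mirrored from the article is that the same response can be acceptable for an adult but blocked for a minor, because the limit is chosen per age tier rather than globally.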
In addition, Character.AI has implemented a feature that directs users to the National Suicide Prevention Lifeline if the system detects language referencing suicide or self-harm. This change was previously reported by The New York Times. Minors will also be prevented from editing bots' responses, an option that previously allowed users to rewrite conversations to add content that Character.AI might otherwise block.
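A crisis-language check that redirects users to a hotline, as described above, might look like the following minimal sketch. The pattern list and message text are illustrative assumptions; a production system would rely on a trained classifier rather than keyword matching:

```python
import re
from typing import Optional

# Hypothetical pattern list; a real system would use a far more robust detector.
CRISIS_PATTERNS = [r"\bsuicide\b", r"\bself[- ]harm\b", r"\bkill myself\b"]

LIFELINE_MESSAGE = (
    "If you are having thoughts of suicide or self-harm, help is available: "
    "in the U.S., call or text 988 to reach the Suicide & Crisis Lifeline."
)

def crisis_response(message: str) -> Optional[str]:
    """Return a crisis-resource message if the input references suicide or
    self-harm, otherwise None so the normal chat flow continues."""
    text = message.lower()
    if any(re.search(pattern, text) for pattern in CRISIS_PATTERNS):
        return LIFELINE_MESSAGE
    return None
```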
Furthermore, the company is working on features that address concerns about addiction and confusion over whether the bots are human. A notification will appear once a user has spent an hour in a session with the bots, and an old disclaimer that "everything characters say is made up" is being replaced with more detailed language. For bots whose descriptions include terms like "therapist" or "doctor," an additional note will warn that they can't offer professional advice.
On Character.AI, every bot now includes a small note reading "This is an A.I. chatbot and not a real person. Treat everything it says as fiction. What is said should not be relied upon as fact or advice." This warning is intended to clarify the nature of the interactions and discourage users from relying on the bots for professional advice or guidance.
Parental control options are also being introduced in the first quarter of next year, which will allow parents to monitor their child's activity on the platform. These controls will provide information on how much time a child is spending on Character.AI and which bots they interact with most frequently. The changes are being made in collaboration with several teen online safety experts, including the organization ConnectSafely.
Character.AI, founded by ex-Googlers who have since returned to Google, allows users to interact with bots built on a custom-trained LLM and customized by users. These range from chatbot life coaches to simulations of fictional characters, many of which are popular among teens. The site allows users who identify themselves as age 13 and over to create an account.
The lawsuits against Character.AI allege that while some interactions with the platform are harmless, at least some underage users become compulsively attached to the bots, whose conversations can veer into sexualized territory or topics like self-harm. The lawsuits claim that Character.AI failed to direct users to mental health resources when they discussed self-harm or suicide.
In response to these concerns, Character.AI has acknowledged the need for its approach to safety to evolve alongside the technology that drives its product. The company has committed to continuously improving its policies and product to create a platform where creativity and exploration can thrive without compromising safety.
The introduction of these new safety measures marks a significant step forward for Character.AI in addressing the concerns surrounding its platform. As the company continues to evolve and improve, it will be important to monitor its progress and ensure that it is meeting its commitment to protecting its young users.
Copyright © 2024 Starfolk. All rights reserved.