Character.AI is retraining its chatbots so they no longer advise teenagers to kill their parents

The AI service Character.AI will soon get a separate version for teenagers with built-in parental controls. It will impose “more conservative” limits on chatbot responses, “especially when it comes to romantic content.” The move is the company’s response to intense media attention surrounding two lawsuits accusing Character.AI of “facilitating self-harm and suicide.”

Image source: Character.AI

Character.AI, founded by former Google employees, lets visitors chat with bots built on a custom-trained large language model. These range from coaching bots to simulations of fictional characters, many of which are popular with teenagers. The site allows anyone who identifies as 13 years of age or older to create an account.

The lawsuits filed against Character.AI allege that at least some minor users become compulsively attached to bots that engage them in conversations about sex, self-harm, violence against others, and even suicide. In one case, a chatbot hinted to a teenager whose parents forbade him from spending long periods at the computer that a “reasonable measure” would be to kill them. Another teenager became “very angry and unstable” after talking to a chatbot for some time. The plaintiffs argue that in such situations Character.AI bots should direct minor users to psychological support resources.

The teen version will run on a dedicated large language model, will more aggressively block output that may be “sensitive or suggestive,” and will also try harder to detect and block user requests for inappropriate content. Users who show an interest in suicide or self-harm will be redirected to the National Suicide Prevention Lifeline.

Minors will also be prohibited from editing bot responses, a feature that could otherwise be used to slip in content Character.AI would normally block. The company is also trying to curb compulsive use among users. In addition, measures will be taken to keep bots from being mistaken for real people, including a separate warning stating that bots cannot offer professional advice.

Parental control features will arrive in the first quarter of next year. They will show parents how much time their child spends on Character.AI and which bots the child interacts with most often. All of the changes are being made in collaboration with “several teen online safety experts,” including the organization ConnectSafely.

“We recognize that our approach to safety must evolve with the technology that powers our product, creating a platform where creativity and exploration can flourish without compromising safety,” Character.AI said in a press release. “This set of changes is part of our long-term commitment to continually improve our policies and our product.”
