The AI service Character.AI will soon launch a separate version for teenagers with built-in parental controls. It will place “more conservative” limits on chatbot responses, “especially when it comes to romantic content.” The move is the company’s response to intense media attention surrounding two lawsuits that accuse Character.AI of “facilitating self-harm and suicide.”
Character.AI, founded by former Google employees, lets visitors interact with bots built on a custom-trained large language model. These range from tutoring bots to simulations of fictional characters, many of which are popular with teenagers. The site allows anyone who claims to be 13 or older to create an account.
The lawsuits filed against Character.AI allege that at least some minor users become compulsively attached to bots that engage in conversations about sex, self-harm, violence against others, and even suicide. In one case, a chatbot suggested to a teenager whose parents had restricted his time at the computer that a “reasonable measure” would be to kill them. Another teenager reportedly became “very angry and unstable” after extended conversations with a chatbot. The plaintiffs argue that in such situations Character.AI bots should direct minor users to mental-health support resources.
The teen version runs on a dedicated large language model and will block output that may be “sensitive or suggestive” more aggressively; it will also attempt to better detect and block user requests for inappropriate content. If a user is identified as having an interest in suicide or self-harm, they will be redirected to the National Suicide Prevention Lifeline.
Minors will also be prohibited from editing bot responses, a feature that can otherwise be used to slip in content Character.AI would normally block. The company is additionally trying to combat addictive use of the service. Measures will also be taken to keep the bots from being perceived as real people, and a separate warning will state that bots cannot offer professional advice.
Parental control features will arrive in the first quarter of next year. They will show parents how much time a child spends on Character.AI and which bots they interact with most often. All the changes are being made in collaboration with “several teen online safety experts,” including the organization ConnectSafely.
“We recognize that our approach to safety must evolve with the technology that powers our product, creating a platform where creativity and exploration can thrive without compromising safety,” Character.AI said in a press release. “This set of changes is part of our long-term commitment to continually improve our policies and our product.”