Content warning: this story discusses sexual abuse, self-harm, suicide, eating disorders and other disturbing topics.
Earlier this week, Futurism reported that two families in Texas had filed a lawsuit accusing the Google-backed AI chatbot company Character.AI of sexually and emotionally abusing their school-aged children.
The plaintiffs alleged that the startup's chatbots encouraged a teenage boy to cut himself and sexually abused an 11-year-old girl.
The troubling accusations highlight the highly problematic content being hosted on Character.AI. Chatbots hosted by the company, we've found in previous investigations, have engaged underage users on alarming topics including pedophilia, eating disorders, self-harm, and suicide.
Now, seemingly in reaction to the latest lawsuit, the company has promised to prioritize "teen safety." In a blog post published today, the venture says that it has "rolled out a suite of new safety features across nearly every aspect of our platform, designed especially with teens in mind."
Character.AI is hoping to improve the situation by tweaking its AI models and improving its "detection and intervention systems for human behavior and model responses," in addition to introducing new parental control features.
But whether these new changes will prove effective remains to be seen.
For one, the startup's track record isn't exactly reassuring. It issued a "community safety update" back in October, vowing that it "takes the safety of our users very seriously and we are always looking for ways to evolve and improve our platform."
The post was in response to a previous lawsuit, which alleged that one of the company's chatbots had played a role in the tragic suicide of a 14-year-old user.
Not long after, Futurism found that the company was still hosting dozens of suicide-themed chatbots, suggesting its efforts to strengthen its guardrails had fallen short.
Then in November, Character.AI issued a "roadmap," promising a safer user experience and the rollout of a "separate model for users under the age of 18 with stricter guidelines."
Weeks later, Futurism discovered that the company was still hosting chatbots encouraging its largely underage user base to engage in self-harm and eating disorders.
Sound familiar? Now Character.AI is saying it's rolled out a "separate model specifically for our teen users."
"The goal is to guide the model away from certain responses or interactions, reducing the likelihood of users encountering, or prompting the model to return, sensitive or suggestive content," reads the announcement. "This initiative has resulted in two distinct models and user experiences on the Character.AI platform — one for teens and one for adults."
The company is also planning to roll out "parental controls" that will give "parents insight into their child's experience on Character.AI, including time spent on the platform and the Characters they interact with most frequently."
The controls will be made available sometime early next year, it says.
The company also promised to inform users when they've spent more than an hour on the platform and issue regular reminders that its chatbots "are not real people."
"We have evolved our disclaimer, which is present on every chat, to remind users that the chatbot is not a real person and that what the model says should be treated as fiction," the announcement reads.
In short, whether Character.AI can convince users and parents that it's capable of effectively moderating the experience for underage users remains unclear at best.
It also remains to be seen whether the company's distinct model for teens will fare any better — or if it'll stop underage users from starting new accounts and listing themselves as adults.
Meanwhile, Google has attempted to actively distance itself from the situation, telling Futurism that the two companies are "completely separate" and "unrelated."
But that's hard to believe. The search giant poured a whopping $2.7 billion into Character.AI earlier this year to license its tech and hire dozens of its employees — including both its cofounders, Noam Shazeer and Daniel de Freitas.
More on Character.AI: Character.AI Was Google Play’s “Best with AI” App of 2023