Character.ai bans teens from talking to its chatbots

Chatbot platform Character.ai is banning teenagers from having conversations with its virtual characters, after facing heavy criticism over the kinds of interactions young people were having with AI companions online.

Millions use the platform, which was founded in 2021, to talk to AI-powered chatbots.

But it faces several lawsuits in the United States from parents, including one over the death of a teenager, with some describing the platform as a “clear and present danger” to young people.

Now, Character.ai says that starting November 25, users under 18 will no longer be able to talk to its chatbots as they currently can; instead, they will only be able to create content, such as videos, with their characters.

Online safety campaigners welcomed the move, but said the feature should not have been available to children in the first place.

Character.ai said it was making the changes after “reports and feedback from regulators, safety experts and parents,” which highlighted concerns about chatbot interactions with teens.

Experts have previously warned that AI chatbots can make things up, be overly encouraging, and feign empathy, all of which could pose risks to young and vulnerable people.

“Today’s announcement is a continuation of our overall belief that we need to continue to build the safest AI platform on the planet for entertainment purposes,” Karandeep Anand, head of Character.ai, told BBC News.

He said AI safety was a “moving target” but that the company took an “aggressive” approach to it.

Parental controls and guardrails

Internet safety group Internet Matters welcomed the announcement, but said safety measures should have been taken from the start.

“Our own research shows that children are exposed to harmful content and put at risk when engaging with AI, including chatbots,” the group said.

Character.ai has been criticized in the past for hosting potentially harmful or abusive chatbots that children can talk to.

Avatars impersonating British teenagers Brianna Ghey, who was murdered in 2023, and Molly Russell, who took her own life at age 14 after viewing suicide material online, were found on the site in 2024 before being taken down.

Later, in 2025, the Bureau of Investigative Journalism (TBIJ) found a chatbot based on the pedophile Jeffrey Epstein which had logged more than 3,000 conversations with users.

The outlet reported that the “Bestie Epstein” avatar continued to flirt with its reporter after they said they were a child. It was one of several bots flagged by TBIJ that were later taken down by Character.ai.

The Molly Rose Foundation – set up in memory of Molly Russell – questioned the platform’s motives.

“Once again, it took sustained pressure from the media and politicians to get a tech company to do the right thing, and Character AI appears to be choosing to act now before regulators act,” said Andy Burrows, the foundation’s chief executive.

Anand said the company’s new focus for teens was on providing “deeper gameplay” and “role-play storytelling” features, adding these would be “much safer than what they might be able to do with an open-ended bot.”

New methods for age verification will also be coming, and the company will fund a new AI safety research lab.

Matt Navarra, a social media expert, said it was a “wake-up call” for the AI industry, which is moving “from permissionless innovation to post-crisis regulation.”

He told BBC News: “When a platform building an experience for teens pulls the plug, it means filtered chats are not enough when the emotional pull of technology is strong.”

“It’s not about content filters. It’s about how AI bots emulate real relationships and blur the lines for young users,” he added.

Mr. Navarra also said that a big challenge for Character.ai would be to create an engaging AI platform that teens would still want to use, rather than moving to “less safe alternatives.”

Meanwhile, Dr Nomisha Kurian, who has researched AI safety, said it was a “reasonable step” to restrict teenagers’ use of chatbots.

“It helps separate creative play from personal and emotionally sensitive exchanges,” she said.

“This is very important for young users who are still learning how to navigate emotional and digital boundaries.

“The new measures taken by Character.ai may reflect a maturing of the AI industry, where children’s safety is increasingly recognized as an urgent priority for responsible innovation.”
