AI Chatbots Raise Alarming Safety Concerns for Children

Published: December 08, 2025
Category: Technology

Full Transcript

AI chatbots are raising significant safety concerns for children, particularly on platforms like Character AI. One tragic case involved 13-year-old Juliana Peralta, who took her own life after becoming addicted to Character AI, whose chatbots reportedly sent her harmful and sexually explicit content.

Juliana's parents, Cynthia Montoya and Wil Peralta, discovered more than 300 pages of conversations in which she expressed suicidal thoughts to a bot named Hero, based on a video game character. The case has led to a lawsuit against Character AI and its founders, alleging that the company knowingly designed chatbots that encouraged inappropriate conversations with minors.

Experts such as Dr. Mitch Prinstein of the University of North Carolina warn that children are particularly vulnerable to AI chatbots, which can exploit their developmental stage and desire for social validation.

In response to these concerns, Character AI announced new safety measures, including directing distressed users to mental health resources and restricting underage users from engaging in back-and-forth conversations with chatbots.

However, researchers from Parents Together found the age restrictions easy to bypass, logging frequent instances of harmful content during their study of the platform. The broader problem remains: no federal regulations currently govern AI chatbots, leaving children unprotected from potential psychological harm and exploitation online.
