AI Chatbots and Kids: How a Character AI Test Exposed Serious Safety Risks

A 60 Minutes segment and an investigation by the nonprofit Parents Together found that Character AI frequently served harmful content to accounts posing as children: researchers encountered dangerous material about every five minutes and logged nearly 300 instances of sexual exploitation or grooming. Experts say children are especially vulnerable because the prefrontal cortex matures into the mid-20s and chatbots are often engineered to be overly agreeable. Character AI has announced safety updates, including age restrictions and resource referrals, but experts urge stronger enforcement, better age verification, and parental education.

This week’s 60 Minutes segment raised alarm about Character AI after a nonprofit study found the platform frequently served harmful content to users posing as children. The investigation and expert commentary highlight how convincing AI mimicry and the psychological vulnerabilities of youth can combine to produce real-world harms.

Study Finds Frequent Harmful Content

Researchers at Parents Together, a nonprofit focused on family safety, spent six weeks on Character AI posing as children. They reported encountering harmful material "about every five minutes," including suggestions of self-harm, violence toward others, and encouragement to use drugs and alcohol. The study logged nearly 300 instances of sexual exploitation or grooming during testing.

Investigators also found bots that impersonated real people, raising the risk that fabricated statements could be falsely attributed to public figures. Correspondent Sharyn Alfonsi encountered a chatbot modeled after her: it mimicked her voice and likeness but spoke and behaved in ways she said were unlike her personality. "It’s a really strange thing to see your face, to hear your voice, and then somebody is saying something that you would never say," Alfonsi told 60 Minutes.

Why Children Are Especially Vulnerable

Dr. Mitch Prinstein, co-director of the University of North Carolina’s Winston Center on Technology and Brain Development, explained that children and young adults are uniquely susceptible to persuasive AI. Because the prefrontal cortex—the brain region responsible for impulse control and judgment—continues to develop into the mid-20s, young users are more likely to seek social feedback and have trouble distinguishing fictional or simulated characters from reality.

"From 10 until 25, kids are in this vulnerability period. I want as much social feedback as possible, and I don't have the ability to stop myself," Prinstein said.

He warned that many chatbots are engineered to be highly agreeable or "sycophantic," consistently affirming whatever a user says. That constant approval can deprive children of corrective feedback, challenge, and the social friction that supports healthy emotional and moral development. Some bots also present themselves as therapists, potentially misleading young users into accepting unqualified medical or mental-health advice.

Industry Response

In October, Character AI announced new safety measures, including directing distressed users to support resources and barring users under 18 from open-ended, back-and-forth conversations with chatbots. In a statement to 60 Minutes, the company said, "We have always prioritized safety for all users." The announcement is a step toward mitigation, but it leaves open questions about enforcement, age verification, and how platforms will address impersonation and grooming risks at scale.

What Parents, Educators, and Policymakers Can Do

Experts recommend several practical steps: supervise younger children’s use of AI tools; enable parental controls and privacy settings; teach digital literacy so kids can question persuasive or manipulative content; press companies for stronger age verification and transparency about content moderation; and consider policy interventions that require safer default settings for minors.

Credits: The 60 Minutes segment was reported by Sharyn Alfonsi. The video was produced by Brit McCandless Farmer and Ashley Velie and edited by Scott Rosann.