AI Chatbots and Kids: How a Character AI Test Exposed Serious Safety Risks

A 60 Minutes and Parents Together investigation reveals that Character AI frequently served harmful content to accounts posing as children: researchers encountered dangerous material roughly every five minutes and logged nearly 300 instances of sexual exploitation or grooming. Experts say children are especially vulnerable because the prefrontal cortex does not fully mature until the mid-20s and chatbots are often engineered to be overly agreeable. Character AI has announced safety updates, including age limits and resource referrals, but experts urge stronger enforcement, better age verification, and parental education.
