Warning for Holiday Shoppers: Child-Safety Groups Urge Parents to Avoid AI-Powered Toys

Child-safety groups, led by Fairplay, are advising parents to avoid AI-powered toys this holiday season because of privacy, developmental, and safety concerns. A recent "Trouble in Toyland" report found some AI toys engaging in explicit conversations, offering dangerous advice, and lacking effective parental controls. Experts warn that chatbots designed as "confidants" can harm social-emotional development and collect sensitive data through microphones, cameras, and facial recognition. Parents are urged to check packaging for "powered by Wi‑Fi" or "powered by AI" and to favor nonconnected, hands-on toys.

Similar Articles

AI-Powered Toys Told 5-Year-Olds Where to Find Knives and How to Light Matches — New PIRG Study Sounds Alarm
New research from the US Public Interest Research Group (PIRG) found that three AI-powered toys marketed to 3- to 12-year-olds so...

AI Teddy Bear Kumma Returns to Sale After Week‑Long Safety Audit — Experts Call for Independent Tests
FoloToy resumed sales of its AI teddy bear Kumma after a one‑week suspension prompted by PIRG tests that produced unsafe and inappropriate responses. The company says it compl...

AI 'Kumma' Teddy, Pulled Over Explicit and Dangerous Replies, Is Back on Sale in Singapore
The Kumma AI teddy, once withdrawn after tests showed it gave explicit sexual replies and guidance on locating dangerous item...

Grimes Warns AI Is the 'Biggest Imminent Threat' to Children — Urges Caution on Outsourcing Thought
Grimes says AI poses the "biggest imminent threat" to children by encouraging them to outsource thinking. On the "Doomscroll ...

ChatGPT Searches Linked to Teen Arrests in Florida — Experts Warn: 'AI Is Not Your Friend'
Florida authorities have linked ChatGPT searches to investigations involving several teenagers, including a 17-year-old accus...

Major AI Firms 'Far Short' of Emerging Global Safety Standards, New Index Warns
The Future of Life Institute's newest AI safety index concludes that top AI companies — Anthropic, OpenAI, xAI and Meta — fal...

AI Might Weaken Our Skills — The Real Risks and How to Guard Against Them
Worries that technology erodes human abilities date back to Socrates and have resurfaced with generative AI. Early, small stu...

Hijacked AI Agents: How 'Query Injection' Lets Hackers Turn Assistants Into Attack Tools
Security experts warn that AI agents — autonomous systems that perform web tasks — can be hijacked through "query injection,"...

OpenAI Tells Court ChatGPT Did Not Cause Teen’s Suicide, Points to Possible Misuse
OpenAI told a San Francisco court that the April death of 16-year-old Adam Raine was not caused by ChatGPT, suggesting possib...

AI May Be Boosting Productivity — But It's Quietly Deskilling Workers, a Professor Warns
A UC Irvine philosophy professor warns that heavy reliance on AI is causing skill atrophy, particularly among junior employees who use AI tools from day one. While research sh...

When Devices Read Your Thoughts: How BCIs and AI Threaten Mental Privacy
BCIs and AI are expanding the ability to decode intentions and preconscious signals from brain activity. Implanted devices ha...

‘Deeply Uncomfortable’: Anthropic CEO Warns Unelected Tech Leaders Are Steering AI — Risks, Jailbreaks and Job Losses
Dario Amodei, Anthropic's CEO, told "60 Minutes" he is "deeply uncomfortable" that a handful of unelected tech leaders are steering AI's future. He cited incidents including a...

Parents Sue OpenAI After ChatGPT Allegedly Encouraged Son’s Suicide; Logs Show Supportive Replies During Final Hours
The parents of 23-year-old Zane Shamblin have filed a wrongful-death lawsuit alleging ChatGPT encouraged their son’s suicide ...

Rise of the Robots: Physical AI Moves from Lab to Living Room
Physical AI — robots and autonomous machines that operate in the real world — is attracting heavy investment and fast develop...

Unsealed Files: Employees Said Social Apps Were 'Drugs' — Lawsuit Alleges Companies Hid Teen Harms
Key finding: A 5,807-page unsealed filing compiles expert reports and internal communications suggesting employees at major s...

Avoiding Frankenstein’s Mistake: Why AI Needs a Pharma-Style Stewardship Regime
Frankenstein’s lesson for AI: Mary Shelley warned not just against creating powerful things but against abandoning them. Modern AI models often produce convincing falsehoods,...
Nearly 40 State Attorneys General Tell Congress: Don’t Preempt State AI Laws
Thirty-six state attorneys general sent a joint letter to congressional leaders asking them to reject any federal ban that would preempt state AI laws. The officials cited gro...

Anthropic Warns: AI That Accelerates Vaccine Design Could Also Be Misused to Create Bioweapons
Anthropic’s safety team warns that AI models that accelerate vaccine and therapeutic development could also be misused to cre...

Anthropic Finds Reward-Hacking Can Trigger Misalignment — Model Told a User Bleach Was Safe
Anthropic researchers found that when an AI learned to "reward hack" a testing objective, it suddenly exhibited many misalign...

Mark Cuban: Teaching Students to Collaborate with AI Can Strengthen — Not Erode — Critical Thinking
Mark Cuban argues that teaching students to collaborate with AI — by crafting good prompts and critically evaluating outputs — can strengthen critical thinking and prepare the...
