
Senators Demand Answers From Top Toymakers Over AI-Powered Toys’ Safety and Data Practices

Clockwise from left, Miko 3, FoloToy Sunflower, Alilo Smart AI Bunny and Miriat Miiloo. (Matt Nighswander / NBC News)

Senators Marsha Blackburn and Richard Blumenthal have sent formal letters to six toymakers demanding information about AI toys’ safety safeguards, third-party testing and data practices. The move follows reporting that some AI-enabled toys produced sexually explicit or dangerous guidance in tests, and that companies may retain sensitive child data such as faces, voices and emotional-state metrics. Lawmakers are also seeking clarity on third-party and cross-border data sharing amid privacy and national security concerns.

Two U.S. senators have pressed six toy manufacturers for detailed disclosures about how their artificial intelligence–enabled toys handle children’s safety, data and privacy. In letters sent late Tuesday, Sen. Marsha Blackburn, R-Tenn., and Sen. Richard Blumenthal, D-Conn., asked company leaders to explain protections designed to prevent explicit, violent or otherwise inappropriate content and to describe what data the products collect, store and share.

What The Senators Requested

The letters were addressed to the CEOs of Little Learners Toys, Mattel, Miko, Curio, FoloToy and Keyi Robot. The lawmakers requested records and documentation covering:

  • Data-sharing policies and any third-party or cross-border transfers (including cloud services and AI model providers).
  • Independent, third-party testing that evaluates whether toys produce sexually explicit, violent or dangerous content.
  • Assessments of psychological or developmental risks, and safeguards against children becoming overly attached or dependent on AI companions.
  • Retention policies for sensitive data such as face images, voice recordings and “emotional-state” metrics, including how long such data are stored.

Why Lawmakers Are Worried

Recent reporting by NBC News in partnership with the U.S. Public Interest Research Group Education Fund found that several AI-enabled toys could generate sexual or otherwise inappropriate responses in testing. One example cited in the reporting involved the Miiloo plush from Chinese manufacturer Miriat, which in researchers’ tests reportedly provided step-by-step instructions for dangerous activities such as lighting matches and sharpening knives.

Beyond harmful responses, the senators highlighted concerns about data collection and retention — including disclosures that some companies may store children’s face and voice data or inferred emotional states for extended periods (one company, Miko, says it may retain such data for up to three years). Lawmakers also flagged national-security and privacy risks if toy-collected data are accessible to foreign actors or are shared with outside vendors without adequate oversight.

“Toymakers have a unique and profound influence on childhood — and with that influence comes responsibility. Your company must not choose profit over safety for children,” the senators wrote.

Broader Scrutiny And Market Context

The letters add to growing scrutiny in Washington over AI toys. Lawmakers have urged federal agencies and school officials to raise awareness about potential data misuse and security risks tied to internet-connected playthings, particularly products marketed to very young children, including those as young as 3.

Industry estimates project that the AI-enabled toy market could grow to roughly $25 billion by 2035, and observers note that more than 1,500 AI toy companies operate in China, a fact that has amplified Capitol Hill concerns about supply chains and data flows.

This article was originally published on NBCNews.com.
