CRBC News

Advocate Urges Federal AI Rules to Shield Children as White House Weighs Nationwide Order

Clinical psychologist Eileen Kennedy-Moore says federal regulation of AI is urgently needed to protect children and support families. The White House is considering an executive order to create a single national AI standard that would preempt state rules; President Trump said he would sign such an order this week. Child-safety advocates warn a federal-only approach could limit stronger state protections, highlighting risks from data-privacy exposures, easy access to age-inappropriate content, and distorted social development among young users.

ATLANTA — The White House is considering an executive order that would establish a single federal framework for artificial intelligence and block states from imposing their own, separate rules. The administration argues a uniform standard would prevent a confusing patchwork of state laws and reduce burdens on innovation and compliance.

Why Child-Safety Advocates Are Worried

Child-safety groups have reacted with alarm, warning that a federal-only approach could limit states’ ability to adopt stronger protections for minors. Several states — including California, Utah and Arkansas — have already enacted or proposed laws aimed at restricting certain AI uses for young people.

Experts Call For Immediate Guardrails

Clinical psychologist Eileen Kennedy-Moore told the TV program Raising America that urgent federal regulation is essential to protect children and support families navigating fast-growing AI adoption.

“This is not optional. This is not a red-versus-blue issue. We all love our children, that’s something that we all have in common. So no, these companies do not get to exploit our children.”

Kennedy-Moore said her priority is helping families cope with rapid technological change and an increasing reliance on AI that could adversely shape a generation. She urged policymakers to consult scientists and professional groups — including the American Psychological Association — to craft common-sense protections companies must follow.

Key Risks Highlighted

Advocates say the concerns extend beyond data privacy. While federal agencies and child-safety organizations have warned about privacy risks for minors, Kennedy-Moore emphasized the need for clear guardrails around content access. Without robust safeguards, young users can quickly encounter explicit or age-inappropriate material generated by AI systems.

She also warned that AI-driven interactions can distort children’s expectations about real relationships. Unlike human relationships, which include disagreement, compromise and emotional nuance, AI often provides excessive validation and overly agreeable responses.

“I think the biggest danger for these, and I’ve heard third graders say, ‘Oh, it helps my social skills and it builds my confidence.’ No, it does not. It is just so awful that sycophantic, ‘Oh, you’re so smart, you’re so amazing.’ That is terrible. That promotes narcissism in real relationships. In a real relationship, we have the opportunity to explain and to understand and to compromise or accept, and that’s what helps us grow.”

According to administration officials, the contemplated executive order would set a national standard for AI oversight. President Trump said Monday he would sign such an order this week. The announcement has intensified the national debate over whether AI protections for minors should be set by states or established uniformly at the federal level.

Experts and advocates are calling for a balanced approach that ensures child safety, protects privacy, and promotes responsible innovation — with clear accountability and input from scientists, educators and mental-health professionals.
