CRBC News

OpenAI Tightens Safety Rules for Teens — Experts Say Real-World Enforcement Will Be The Test

OpenAI announced tougher safety rules for teen users as pressure grows on tech companies to prove AI can protect young people online.

OpenAI has updated its Model Spec and released resources aimed at protecting users aged 13–17, introducing stricter limits on romantic and sexual roleplay and extra caution around body-image and self-harm topics. The company says it deploys real-time classifiers across text, images and audio and may involve trained reviewers and parental notifications for serious risk. Advocates praise the transparency, but experts stress that independent audits and consistent enforcement are needed to prove these changes work in live interactions. Parents are urged to confirm account ages, enable parental controls and MFA, and keep open conversations about healthy AI use.

OpenAI has announced a tighter set of safeguards for users aged 13–17, updating its Model Spec and publishing new AI-literacy resources for parents and teens. The policy expands existing restrictions and adds new limits on roleplay and sensitive topics, while promising real-time risk detection and additional parental tools. Advocates welcomed the transparency, but many experts say enforcement, independent oversight, and measurable outcomes will determine whether these changes actually protect vulnerable young users.

What Changed

Expanded Protections: The revised Model Spec keeps existing bans on sexual content involving minors and on material that normalizes self-harm, delusions, or manic behavior, and adds stricter rules specifically for accounts of users aged 13–17. Models must avoid immersive romantic roleplay, first-person intimate scenarios, and violent or sexual roleplay (even when non-graphic), and must exercise extra care when discussing body image or eating behaviors.

Safety-First Guidance: When conversations raise safety concerns, the guidance instructs models to prioritize protection over user autonomy and to avoid offering advice that would help teens hide risky behavior from caregivers. These limits apply even when prompts are framed as fictional, historical, or educational.

Principles Driving The Policy

  • Put teen safety first: Safety can trump unrestricted freedom in conversations with minors.
  • Encourage real-world support: Models should nudge users toward family, friends, or professionals.
  • Communicate respectfully: Speak warmly while recognizing teens are not adults.
  • Be transparent: Remind users the AI is not a human and responses may be inaccurate.

Detection, Intervention And Parental Tools

OpenAI says it now deploys real-time classifiers across text, image, and audio inputs to flag serious risk. When detectors trigger, trained human reviewers may intervene and in some cases parents can be notified. The company also offers parental controls, account age checks (so teen accounts receive the stronger safeguards), break reminders for long sessions, and guidance on healthy use.

Practical Steps For Parents: Confirm your teen’s account age, review and enable parental controls, turn on multi-factor authentication (MFA), discuss how AI is used, look for signs of overreliance (isolation, emotional dependence, or treating AI as an authority), and involve trusted adults or professionals if a teen shows distress. Experts also recommend limiting late-night access and keeping devices out of bedrooms to protect sleep and reduce unhealthy patterns.

Why Critics Remain Wary

Despite the policy upgrades, safety experts warn that written rules are not the same as consistent behavior in live conversations. Past incidents — including reports that a teen who later died by suicide had prolonged, reinforcing exchanges with a chatbot — highlight gaps in earlier retrospective moderation systems. Critics want independent audits, measurable enforcement data, and transparent reporting to verify that the new safeguards work in practice.

Regulatory Pressure And Next Steps

Public pressure is rising: attorneys general from 42 U.S. states have urged technology companies to bolster protections, and federal lawmakers are considering legislation that could impose stricter limits on minors’ use of chatbots. Observers say progress will require both industry action and outside oversight to ensure policies translate into safer outcomes for teens.

Bottom Line

OpenAI’s updated rules and resources mark a meaningful shift toward more cautious handling of teen users, but their effectiveness depends on consistent enforcement, independent evaluation, and active family involvement. For parents and caregivers, the immediate focus should be on using available controls, enabling MFA, talking with teens about healthy AI use, and seeking professional help when needed—because no AI safety system can replace real-world support.
