CRBC News

Australia Bans Under-16s From 10 Major Social Platforms — How It Will Be Enforced and What Could Go Wrong

Australia has enacted a world-first law banning under-16s from 10 major social platforms, forcing companies to verify ages or face fines up to AUD 49.5 million. Platforms are using varied timetables and verification methods — from video selfies to ID checks — and some services remain outside the ban but could be added later. Officials will monitor outcomes including sleep, social life and mental health, while Stanford researchers and an international academic panel will evaluate the policy. Critics warn of a likely migration to unregulated apps and an ongoing cat-and-mouse challenge.

From Wednesday, millions of Australian children will find accounts on major social platforms inaccessible under a world-first law designed to protect anyone under 16 from addictive algorithms, online predators and digital bullying.

The law covers 10 large services — Instagram, Facebook, Threads, Snapchat, YouTube, TikTok, Kick, Reddit, Twitch and X — and requires platforms to take "reasonable steps" to deactivate accounts used by under-16s and to block new underage registrations. Companies that fail to comply can face fines up to 49.5 million Australian dollars (about $32 million).

How Platforms Are Responding

Companies are complying on different timelines and with different methods:

  • Meta (Instagram, Facebook, Threads) began removing accounts belonging to under-16s on December 4 and invited affected users to download their content so it can be restored when they turn 16.
  • Snapchat will suspend accounts for three years or until the user turns 16.
  • YouTube will automatically sign affected users out on December 10; channels will be hidden but data will be retained so accounts can be reactivated at 16. Children can still watch YouTube without signing in.
  • TikTok says it will deactivate accounts it determines are used by under-16s on December 10, using age-verification technology focused on actual users rather than account names or emails.
  • Twitch will block new under-16 sign-ups from December 10, but existing under-16 accounts will not be deactivated until January 9; the company has not publicly explained the staggered timeline.
  • Reddit says it will suspend under-16 accounts and block new ones. X has criticised the law and not provided details on compliance. Kick has not responded publicly.

Age Verification And Privacy Concerns

The law moves platforms beyond passive date-of-birth collection to active age verification. Methods being used include live video selfies, email checks and official identity documents. Identity vendor Yoti, used by some platforms, says most people opt for a video selfie that estimates age from facial data points. The government’s Age Assurance Technology Trial earlier this year concluded checks could be done without undermining privacy, though some adults remain uneasy about being asked to verify their age.

Services Not Currently Included

The legislation lists a set of services not currently covered but subject to review as they evolve. They include Discord, GitHub, Google Classroom, LEGO Play, Messenger, Pinterest, Roblox, Steam and Steam Chat, WhatsApp and YouTube Kids. The omission of Roblox drew attention after reporting that alleged predators had targeted children inside games; eSafety Commissioner Julie Inman Grant says Roblox agreed to roll out new chat controls that require age verification and limit chats to similar-age peers.

Concerns, Risks And The Research Plan

Critics warn the move could prompt teens to migrate to smaller, less regulated apps or to "darker" corners of the web, creating a persistent "whack-a-mole" enforcement challenge. Youth counsellors worry that children who rely on social media for social connection may end up in spaces with far fewer safeguards.

"We’ve said very clearly that this won’t be perfect… but it’s the right thing to do for society to express its views, its judgment, about what is appropriate," Prime Minister Anthony Albanese said.

The government and the eSafety Commissioner plan to monitor a range of outcomes to judge the law’s effects — from sleep and social interaction to antidepressant use and outdoor activity. Six researchers from Stanford University’s Social Media Lab will work with Australia’s eSafety office to gather data, and an independent Academic Advisory Group of 11 international scholars will review methods and findings. Stanford has pledged to publish methods and results for global scrutiny and hopes other countries can draw lessons from the evidence.

What To Watch Next

Key developments to follow include how reliably platforms can verify age without breaching privacy, whether children and parents bypass the rules, which additional services are added to the ban, and where younger users migrate if access is blocked. The government has not introduced criminal penalties for children or parents who continue using banned services, focusing enforcement on platforms rather than families.