From December 10, Australia will require major social platforms to take reasonable steps to stop users under 16 from accessing their services or face fines of up to A$49.5 million (about US$32 million). The rule targets sites that meet the government’s definition of an “age‑restricted social media platform,” and the list includes Snapchat, Facebook, Instagram, Kick, Reddit, Threads, TikTok, Twitch, X and YouTube.
How companies are responding
Meta has said it will begin deactivating existing under‑16 accounts and blocking new account creation on Facebook, Instagram and Threads from December 4, and is encouraging younger users to download and save their content. Snap says under‑16 users can deactivate accounts for up to three years or until they turn 16 — a change that also ends Snap streaks, the daily photo exchanges many teens use to show continued contact.
Verification, workarounds and limits
Age verification methods vary. Companies such as Yoti and Verifymy offer tools including phone or email checks, ID documents, and facial age estimation from a short selfie video. The legislation does not require users to upload government ID, but platforms may ask for identity documents in borderline cases. Anti‑spoofing technologies and liveness checks are increasingly used to detect fakes, while VPNs remain a possible but imperfect workaround, since platforms can still infer a user's location from account history, connections and other contextual signals.
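To illustrate why facial age estimation leads to "borderline cases" that trigger an ID request, here is a minimal sketch of the decision logic such a gate might use. This is purely hypothetical: the class, function names and thresholds are invented for illustration and do not reflect any vendor's actual API. The idea is that an estimator returns an age plus an error margin, and only users who are clearly over or clearly under the threshold get an automatic answer; everyone in between is escalated to a stronger check.

```python
# Hypothetical age-gate decision flow (illustrative only; not a real vendor API).
# Assumes an upstream estimator (e.g. facial age estimation from a selfie video)
# returns a point estimate plus a stated error margin in years.

from dataclasses import dataclass

MIN_AGE = 16          # the Australian threshold under the new rules
BUFFER_YEARS = 2.0    # widen the uncertain band to reduce false "allows"

@dataclass
class AgeEstimate:
    years: float      # estimator's point estimate
    margin: float     # estimator's error margin in years

def gate(estimate: AgeEstimate) -> str:
    """Return 'allow', 'deny', or 'escalate' (request stronger proof, e.g. ID)."""
    lower = estimate.years - estimate.margin
    upper = estimate.years + estimate.margin
    if lower >= MIN_AGE + BUFFER_YEARS:
        return "allow"      # clearly over 16 even at the low end of the estimate
    if upper < MIN_AGE:
        return "deny"       # clearly under 16 even at the high end
    return "escalate"       # borderline: ask for an ID document check

print(gate(AgeEstimate(25.0, 3.0)))  # → allow
print(gate(AgeEstimate(12.0, 2.5)))  # → deny
print(gate(AgeEstimate(16.5, 3.0)))  # → escalate
```

The buffer reflects a real trade-off reported with these systems: the wider the uncertain band, the fewer under-16s slip through, but the more legitimate adults get asked for documents.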
Wider effects on young people and families
The timing — just before Australia’s long summer break — means many teens will face an eight‑week holiday without social feeds, direct messaging and algorithmic content. For some, like 14‑year‑old Maxine who deleted her social apps voluntarily, the break can feel liberating and open up more real‑life socialising. For others, especially isolated or marginalised young people who use platforms to find support or communities, the ban risks cutting off important connections.
Safety advocates say the measure is long overdue. Cyber‑safety campaigner Kirra Pendergast of Ctrl+Shft has urged young people to back up photos and content and to use the transition as an opportunity to explore safer spaces online. At the same time, some parents and school leaders worry enforcement will be inconsistent, and that children will shift to less regulated platforms like Roblox, Discord or gaming networks where risks persist.
Politics, legal challenges and global interest
The law grew from public concern about young people’s mental health and online harm. Critics say the bill was rushed and politically motivated; the Digital Freedom Project has already filed a High Court challenge arguing it infringes young Australians’ speech rights. Communications Minister Anika Wells has defended the law and vowed to press on despite legal and political pushback.
Other countries are watching closely. Several European nations and the UK have rolled out stricter rules to protect children online, and some US states have enacted laws addressing youth social media use. Australia’s approach is among the most sweeping so far and could influence policy debates abroad.
What families and teens can do now
Practical steps include downloading and backing up important content, reviewing privacy settings on other apps, and identifying safer ways for teens to stay connected — including supervised platforms, local communities and offline activities. Advocates also urge policymakers and platforms to ensure outreach to isolated and vulnerable youths so the rule does not drive them to riskier, unregulated spaces.
Bottom line: Australia’s ban is an ambitious attempt to limit children’s exposure to potentially harmful content and addictive design. It will test how effectively tech companies can verify age at scale, how families adapt, and whether this model shapes future policy elsewhere.