TikTok Rolls Out Enhanced Age‑Verification Across the EU to Curb Under‑13 Accounts

TikTok is rolling out enhanced age‑verification technology across the EU after a year‑long pilot that led to the removal of thousands of suspected under‑13 accounts. The system combines account‑provided information with behavioral signals to predict a user's likely age, and flagged accounts are reviewed by human moderators before any action is taken. Users can appeal with a government‑issued ID, a facial age‑estimation tool, or a credit‑card authorization, and TikTok says the process complies with EU privacy rules. The company also applies age‑appropriate defaults, including blocked direct messages for under‑16s and a 60‑minute daily screen‑time limit for under‑18s.

TikTok said Friday it will begin rolling out enhanced age‑verification technology across the European Union to better enforce its minimum age policy and keep children under 13 off the platform.
The deployment, scheduled over the coming weeks, follows a year‑long pilot during which the system automatically flagged accounts suspected of belonging to users under 13 for human review. TikTok said the pilot resulted in the removal of thousands of underage accounts.
How the System Works
The new system combines information provided by account holders, such as profile details, with behavioral signals, such as the types of videos a user uploads and other in‑app actions, to predict whether an account is likely to belong to someone under 13. Accounts the algorithm flags are then reviewed by human moderators before any enforcement action is taken.
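To make that flow concrete, the sketch below models a flag‑then‑review pipeline of this general kind in Python. The signal names, weights, and threshold are illustrative assumptions, not details TikTok has disclosed.

```python
# Illustrative sketch only: a simplified flag-then-review pipeline of the kind
# described above. The signal names, weights, and threshold are assumptions,
# not details TikTok has disclosed.
from dataclasses import dataclass

@dataclass
class AccountSignals:
    declared_age: int        # age implied by the birth date entered at sign-up
    profile_score: float     # 0-1: profile details suggesting a younger user
    content_score: float     # 0-1: types of uploaded videos
    activity_score: float    # 0-1: in-app actions and usage patterns

def likely_under_13(signals: AccountSignals, threshold: float = 0.7) -> bool:
    """Combine behavioral signals into a single likelihood score."""
    score = (0.4 * signals.profile_score
             + 0.3 * signals.content_score
             + 0.3 * signals.activity_score)
    return score >= threshold

def process_account(signals: AccountSignals, review_queue: list) -> None:
    # No automatic enforcement at this stage: accounts that claim to be 13 or
    # older but score as likely underage are only referred for human review.
    if signals.declared_age >= 13 and likely_under_13(signals):
        review_queue.append(signals)
```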
Appeals and Age‑Verification Options
Users whose accounts are referred by the automated system can appeal. TikTok said accepted verification methods include presenting a government‑issued ID, using a facial age‑estimation tool, or confirming an authorization on a credit card in the user’s name. The company described these measures as additional guardrails layered on top of its existing checks for false birth dates.
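A rough sketch of that appeal flow appears below, with the three accepted verification routes modeled as interchangeable checks. The verifier stub and function names are hypothetical and do not reflect TikTok's internal systems.

```python
# Illustrative sketch only: the appeal flow described above, with the three
# accepted verification routes modeled as interchangeable checks. The verifier
# stub and function names are hypothetical, not TikTok's internal API.
from enum import Enum, auto
from typing import Callable, Dict

class VerificationMethod(Enum):
    GOVERNMENT_ID = auto()
    FACIAL_AGE_ESTIMATION = auto()
    CREDIT_CARD_AUTHORIZATION = auto()

def _stub_check(evidence: dict) -> bool:
    # Placeholder: a real system would validate the document, the face scan,
    # or the card authorization out of band.
    return bool(evidence.get("verified"))

VERIFIERS: Dict[VerificationMethod, Callable[[dict], bool]] = {
    VerificationMethod.GOVERNMENT_ID: _stub_check,
    VerificationMethod.FACIAL_AGE_ESTIMATION: _stub_check,
    VerificationMethod.CREDIT_CARD_AUTHORIZATION: _stub_check,
}

def handle_appeal(method: VerificationMethod, evidence: dict) -> str:
    """Reinstate the account only if the chosen verification route succeeds."""
    return "account_reinstated" if VERIFIERS[method](evidence) else "appeal_rejected"
```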
Privacy and Compliance
TikTok emphasized that the system is designed to comply with European Union data‑protection and privacy rules. It said information used to predict whether an account may belong to someone underage will be processed solely to determine whether to refer the account to moderators and will not be repurposed for other uses.
“By adopting this approach, we are able to deliver safety for teens in a privacy‑preserving manner. We take our responsibility to protect our community, and teens in particular, incredibly seriously,” TikTok said in a blog post.
Age‑Appropriate Defaults and Limits
Beyond initial checks, TikTok said it applies automatic, age‑appropriate settings for teen accounts. The company noted that teen profiles come with more than 50 preset safety, privacy and security features. Additional measures include blocking direct messages for users under 16, enforcing a 60‑minute daily screen‑time limit for under‑18s, and disabling notifications after a designated bedtime.
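One way to picture those defaults is as a simple settings lookup keyed by age, as in the sketch below. The thresholds mirror the article; the field names and structure are assumptions, not TikTok's configuration format.

```python
# Illustrative sketch only: the age-based defaults above expressed as a settings
# lookup. The thresholds mirror the article; the field names and structure are
# assumptions, not TikTok's configuration format.
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class TeenDefaults:
    direct_messages_enabled: bool
    daily_screen_time_limit_minutes: Optional[int]
    nighttime_notifications_muted: bool

def defaults_for_age(age: int) -> TeenDefaults:
    if age < 16:
        # Direct messages blocked for under-16s; teen limits also apply.
        return TeenDefaults(False, 60, True)
    if age < 18:
        # 60-minute daily screen-time limit and notifications muted after bedtime.
        return TeenDefaults(True, 60, True)
    return TeenDefaults(True, None, False)
```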
TikTok also uses predictive techniques to estimate whether a user falls within a narrower age band (for example, 13–15). If that estimate conflicts with the date of birth entered at sign‑up, moderators can move the account into a more suitable age experience.
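That band check might look roughly like the sketch below, which compares a predicted age band with the declared date of birth and flags mismatches for a moderator. The band representation and helper names are hypothetical.

```python
# Illustrative sketch only: comparing a predicted age band (for example 13-15)
# with the declared date of birth and flagging mismatches for moderators. The
# helper names and band representation are hypothetical.
from datetime import date
from typing import Tuple

def declared_age(date_of_birth: date, today: date) -> int:
    """Age implied by the birth date entered at sign-up."""
    had_birthday = (today.month, today.day) >= (date_of_birth.month, date_of_birth.day)
    return today.year - date_of_birth.year - (0 if had_birthday else 1)

def needs_moderator_review(predicted_band: Tuple[int, int],
                           date_of_birth: date,
                           today: date) -> bool:
    """True when the predicted band conflicts with the declared age."""
    low, high = predicted_band
    return not (low <= declared_age(date_of_birth, today) <= high)

# Example: an account with a declared age of 17 but a predicted band of 13-15
# would be routed to a moderator for a possible move to a teen experience.
print(needs_moderator_review((13, 15), date(2008, 5, 1), date(2025, 6, 1)))  # True
```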
Regulatory and Political Context
The announcement comes as EU regulators examine how age‑verification technologies square with data‑protection rules, and as international debate over restricting teen social media use continues, including discussions in Australia about an under‑16 rule. In the U.K., Prime Minister Keir Starmer said he would consider a similar restriction amid concerns about excessive smartphone use by children and teens, while noting earlier reservations about enforcement and unintended consequences.
TikTok framed the rollout as a move to strengthen child safety while respecting privacy and legal safeguards. The company said the measures are part of its broader effort to ensure younger users receive a more protective experience on the platform.