Australia Just Banned Social Media for Kids Under 16. Here's What That Means

Australia has begun enforcing a world-first ban on social media use by anyone under 16, starting December 10, 2025. Platforms must verify users' ages using multiple methods or face fines of up to $49.5 million AUD.

Martin Holloway · Published 2w ago · 4 min read · Based on 19 sources
On December 10, 2025, Australia became the first country to enforce a ban on social media use by anyone under 16. The law blocks roughly 1 million young people from accessing Facebook, Instagram, Snapchat, Threads, TikTok, X, YouTube, Reddit, Twitch, and Kick. Social media companies that fail to comply face fines up to $49.5 million AUD (about $34.4 million USD).

The responsibility falls on the platforms themselves. They have until December 2026 to set up systems that can verify users' ages. Unlike the past, when apps simply asked "How old are you?" and took users at their word, these new systems will require multiple forms of proof—think of it like needing an ID check at a bar, rather than just saying you're old enough.

How Platforms Are Responding

Google quickly announced that anyone under 16 in Australia would be logged out of YouTube on December 10. Meta did the same for Facebook, Instagram, and Threads. Signed-out users lose access to personalized features like saved playlists and feeds.

Some services remain available. Apps that are primarily for messaging—like WhatsApp—are still allowed. So are educational tools such as Google Classroom, gaming platforms, health services like Headspace, and YouTube Kids, which is designed specifically for younger viewers.

The platforms must report monthly to Australia's eSafety Commissioner on how many children's accounts they've closed. These reports come with privacy rules to protect young people's personal information during the age-checking process.

Legal Fights and Real-World Challenges

Not everyone is accepting the ban without a fight. Reddit took the Australian government to court in December 2025 to challenge the law. By March 2026, Australia was investigating Meta, TikTok, YouTube, and Snapchat for potentially breaking the new rules.

Meanwhile, this law is part of a bigger conversation happening around the world about social media and children. In March 2026, a U.S. court ordered Meta to pay $375 million after finding the company failed to prevent child exploitation on Facebook, Instagram, and WhatsApp. Another U.S. court ruled that Meta and Google designed their apps in ways that can harm young people.

This Is Now Happening Elsewhere Too

Australia's move has inspired other countries to act. Malaysia announced a ban starting in 2026. Spain's leader said the country would restrict social media for under-16s. Greece plans to ban it for anyone under 15 beginning in January 2027. Slovenia and Denmark are drafting similar laws. The UK is studying whether to do the same, and Canada is weighing similar restrictions.

France, Spain, and Greece jointly asked the European Union in May 2025 to coordinate restrictions across all EU countries. This suggests the momentum is real and global.

How the Age-Checking Will Actually Work

The technical part matters because it affects your privacy. Platforms can't simply ask your age and trust the answer. Instead, they'll use methods like scanning an ID document, estimating age from biometric data such as facial scans, or relying on a third-party identity-verification service. Each method involves tradeoffs: all of them can work, but all of them collect personal information.
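Conceptually, the gate platforms must build is a policy check over one or more age signals. Here is a minimal sketch of that idea; the method names, confidence threshold, and data model are all hypothetical assumptions for illustration, not any platform's actual implementation:

```python
# Hypothetical sketch of combining age-verification signals.
# Method names, thresholds, and the data model are illustrative only.
from dataclasses import dataclass

@dataclass
class AgeCheck:
    method: str         # e.g. "id_document", "facial_estimate", "third_party"
    estimated_age: int  # age the method reports
    confidence: float   # 0.0-1.0, how reliable the method claims to be

MIN_AGE = 16
MIN_CONFIDENCE = 0.9

def allow_access(checks: list[AgeCheck]) -> bool:
    """Grant access only if at least one sufficiently confident
    check puts the user at or above the minimum age."""
    return any(
        c.estimated_age >= MIN_AGE and c.confidence >= MIN_CONFIDENCE
        for c in checks
    )

# A facial-age estimate alone may be too uncertain,
# but a high-confidence ID-document check clears the bar.
checks = [
    AgeCheck("facial_estimate", 17, 0.7),
    AgeCheck("id_document", 17, 0.98),
]
print(allow_access(checks))  # True
```

The point of the sketch is the tradeoff the article describes: the more reliable the signal (an ID document), the more personal information the platform has to collect to get it.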

The broader context is that this Australian law is part of a shift in how governments regulate tech companies. For decades, social media platforms largely made their own rules. Now governments are stepping in with legal requirements and real financial penalties. If this sounds like the way Europe's privacy law (GDPR) changed how tech companies operate worldwide, that's because it works the same way: rules that start in one jurisdiction often reshape how platforms work everywhere.

The 12-month timeline gives platforms time to build age-verification systems that work reliably without asking for more personal information than necessary. Whether they'll succeed in doing this fairly remains to be seen.