Australia is preparing to roll out a world-first policy banning users under 16 from major social media platforms starting December 10, but YouTube has sharply criticized the move, calling it “rushed” and warning it may make young people less safe online instead of protecting them.
The new policy blocks under-16 users from platforms including Facebook, Instagram, TikTok, and YouTube. YouTube had originally been expected to be exempt because of its educational content, but the government reversed that position in July, saying younger audiences must be protected from “predatory algorithms.”
YouTube’s public policy manager, Rachel Lord, expressed strong concerns, saying the law “will not fulfil its promise to make kids safer online, and will, in fact, make Australian kids less safe on YouTube.”
She added that many parents and educators share these concerns, especially because the ban will automatically sign out all Australian users under 16 based on the age listed in their Google accounts. These users may still access YouTube without an account, but they will lose safety features such as wellbeing controls, filtering tools, and content restrictions.
Lord argued that policymakers have misunderstood how young Australians use the platform, saying, “At YouTube, we believe in protecting kids in the digital world, not from the digital world.”
To soften the transition, YouTube will archive under-16 accounts, allowing users to reactivate them once they turn 16. No content or data will be deleted.
However, Australian Communications Minister Anika Wells fired back, calling YouTube’s criticism “outright weird,” insisting that if the platform itself claims unsafe content is available to minors, then “that’s a problem YouTube needs to fix.”
The world is now watching closely to see whether Australia’s sweeping regulations will hold. The government acknowledges the rollout won’t be perfect and expects some underage users to slip through as systems adjust. Platforms could face fines of up to Aus$49.5 million (US$32 million) for failing to take “reasonable steps” to comply.
Meta, the parent company of Facebook and Instagram, has already begun deactivating accounts flagged as underage based on registration details.
At the same time, critics are fighting back. The Digital Freedom Project recently filed a High Court challenge, arguing the laws are an “unfair” restriction on freedom of speech.
As the countdown to the policy’s start date begins, Australia finds itself at the center of a global debate about digital safety, freedom, and responsibility. Behind the legal and corporate arguments lies a deeper truth: children’s online safety is too important to be rushed, ignored, or decided without real-world perspective. Humans, not algorithms, policies, or platforms, must remain at the center of this conversation.