

The End of Online Anonymity? Australia’s New Law Pushes Digital ID for Everyone To Ban Kids From Social Media


Australia is gearing up to roll out some of the world’s strictest social media rules, with Parliament having pushed through legislation to bar anyone under 16 from creating accounts on platforms like Facebook, Instagram, Snapchat, and TikTok. It’s a sweeping measure but, as the ink dries, the questions are piling up.

Prime Minister Anthony Albanese’s Labor government and the opposition teamed up on Thursday to pass the new restrictions with bipartisan enthusiasm. And why not? Opinion polls show a whopping 77% of Australians are behind the idea. Protecting kids online is an easy sell, which is why it’s often used to usher in the most draconian of laws. Still, the devil—as always—is in the details.

Proof of Age, But at What Cost?

Here’s the crux of the new law: to use social media, Australians will need to prove they’re old enough. That means showing ID, effectively ending the anonymity that’s long been a feature (or flaw, depending on your perspective) of the online experience. In theory, this makes sense—keeping kids out of online spaces designed for adults is hardly controversial. But in practice, it’s like using a sledgehammer to crack a walnut.

For one, there’s no clear blueprint for how this will work. Will social media platforms require passports and birth certificates at sign-up? Who’s going to handle and secure this flood of personal information? The government hasn’t offered much clarity and, until it does, the logistics look shaky.

And then there’s the matter of enforcement. Teenagers are famously tech-savvy, and history has shown that banning them from a platform is more of a speed bump than a roadblock. With VPNs, fake IDs, and alternate accounts already standard fare for navigating internet restrictions, how effective can this law really be?

The Hasty Debate

Critics on both sides of Parliament flagged concerns about the speed with which this legislation moved forward. But the Albanese government pressed ahead, arguing that urgent action was needed to protect young people. Their opponents in the Liberal-National coalition, not wanting to appear soft on tech regulation, fell in line. The result? A law that feels more like a political statement than a well-thought-out policy.

There’s no denying the appeal of bold action on Big Tech. Headlines about online predators and harmful content make it easy to rally public support. But there’s a fine line between decisive governance and reactionary policymaking.

Big Questions, Few Answers

The most glaring issue is privacy. Forcing users to hand over ID to access social media opens up a Pandora’s box of security concerns. Centralizing sensitive personal data creates a tempting target for hackers, and Australia’s track record with large-scale data breaches isn’t exactly reassuring.

There’s also the question of what happens when kids inevitably find workarounds. Locking them out of mainstream platforms doesn’t mean they’ll stop using the internet—it just pushes them into less regulated, potentially more harmful digital spaces. Is that really a win for online safety?

A Global Watch Party

Australia’s bold move is already drawing attention from abroad. Governments worldwide are grappling with how to regulate social media, and this legislation could set a precedent. But whether it becomes a model for others or a cautionary tale remains to be seen.

For now, the Albanese government has delivered a strong message: protecting children online is a priority. But the lack of clear answers about enforcement and privacy leaves the impression that this is a solution in search of a strategy.

All on the Platforms

Under the new social media law, the responsibility for enforcement doesn’t rest with the government, but with the very companies it targets. Platforms like Facebook, TikTok, and Instagram will be tasked with ensuring no Australian under 16 manages to slip through the digital gates. If they fail?

They’ll face fines of up to A$50 million (about $32.4 million USD). That’s a steep price for failing to solve a problem the government itself hasn’t figured out how to address.

The legislation offers little in the way of specifics, leaving tech giants to essentially guess how they’re supposed to pull off this feat. The law vaguely mentions taking “reasonable steps” to verify age but skips the critical part: defining what “reasonable” means.

The Industry Pushback

Tech companies, predictably, are not thrilled. Meta, in its submission to a Senate inquiry, called the law “rushed” and out of touch with the current limitations of age-verification technology. “The social media ban overlooks the practical reality of age assurance technology,” Meta argued. Translation? The tools to make this work either don’t exist or aren’t reliable enough to enforce at scale.

X didn’t hold back either. The platform warned of potential misuse of the sweeping powers the legislation grants to the minister for communications. X CEO Linda Yaccarino’s team even raised concerns that these powers could be used to curb free speech — another way of saying that regulating who gets to log on could quickly evolve into regulating what they’re allowed to say.

And it’s not just the tech companies pushing back. The Human Rights Law Centre questioned the lawfulness of the bill, highlighting how it opens the door to intrusive data collection while offering no safeguards against abuse.

Promises, Assurances, and Ambiguities

The government insists it won’t force people to hand over passports, licenses, or tap into the contentious new digital ID system to prove their age. But here’s the catch: there’s nothing in the current law explicitly preventing that, either. The government is effectively asking Australians to trust that these measures won’t lead to broader surveillance—even as the legislation creates the infrastructure to make it possible.

This uncertainty was laid bare during the bill’s rushed four-hour review. Liberal National Senator Matt Canavan pressed for clarity, and while the Coalition managed to extract a promise for amendments preventing platforms from demanding IDs outright, it still feels like a band-aid on an otherwise sprawling mess.

A Law in Search of a Strategy

Part of the problem is that the government itself doesn’t seem entirely sure how this law will work. A trial of age-assurance technology is planned for mid-2025—months after the law’s passage. The communications minister, Michelle Rowland, will ultimately decide what enforcement methods apply to which platforms, wielding what critics describe as “expansive” and potentially unchecked authority.

It’s a power dynamic that brings to mind a comment from Rowland’s predecessor, Stephen Conroy, who once bragged about his ability to make telecommunications companies “wear red underpants on [their] head” if he so desired. Tech companies now face the unenviable task of interpreting a vague law while bracing for whatever decisions the minister might make in the future.

The list of platforms affected by the law is another moving target. Government officials have dropped hints in interviews—YouTube, for example, might not make the cut—but these decisions will ultimately be left to the minister. This pick-and-choose approach adds another layer of uncertainty, leaving tech companies and users alike guessing at what’s coming next.

The Bigger Picture

The debate around this legislation is as much about philosophy as it is about enforcement. On one hand, the government is trying to address legitimate concerns about children’s safety online. On the other, it’s doing so in a way that raises serious questions about privacy, free speech, and the limits of state power over the digital realm.

Australia’s experiment could become a model for other countries grappling with the same challenges—or a cautionary tale of what happens when governments legislate without a clear plan. For now, the only certainty is uncertainty. In a year’s time, Australians might find themselves proving their age every time they try to log in—or watching the system collapse under the weight of its own contradictions.



Meta Pushes for a Digital ID Revolution


Meta is coming out as a supporter of age verification, and the proposal the giant is putting forward exposes and sums up many of the points critics have been consistently making.

A blog post by Meta VP and Global Head of Safety Antigone Davis proposes implementing age verification at the operating system and app store level.

Although the narrative around child safety and the difficulties of parenting “in the digital age” dominates the post, the meat of it lies in the implications this approach brings with it: namely, it creates a situation where, down the line, people would be forced to link their real-world identity to their phone’s operating system (OS).

And everything they do using the phone is exposed to that OS.

Davis goes into how the EU (notably via the Digital Services Act) is trying to resolve the problem of age verification, but doesn’t think any existing methods are good enough; instead, new regulation is needed, the Meta exec argues: regulation that “applies to all apps.”

It means incorporating “the point of approval” into the OS or app stores. Parents would be notified when their child downloads an app, which would allow them to approve it. (The idea seems to be that if a jurisdiction has laws prohibiting a certain category of minors from using certain apps, it would be the parents’ job to “enforce” that.)
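To illustrate what such an approval gate could look like in practice, here is a minimal sketch. Nothing in Meta’s post specifies an API, so every name, type, and threshold below is a hypothetical assumption, not Meta’s design.

```python
# Hypothetical sketch of an OS/app-store "point of approval": the OS holds a
# parent-linked profile, notifies the parent when a minor requests an app,
# and holds the install until approval. All names here are illustrative.
from dataclasses import dataclass

MINIMUM_SELF_APPROVAL_AGE = 16  # assumed; would vary by jurisdiction

@dataclass
class Profile:
    user_id: str
    birth_year: int
    parent_contact: str | None  # present when the account is linked to a parent

def notify_parent(contact: str, app_id: str) -> None:
    # Placeholder: a real OS would deliver a push notification or message.
    print(f"Notification sent to {contact}: approve download of {app_id}?")

def request_app_install(profile: Profile, app_id: str, current_year: int) -> str:
    """Decide whether an install proceeds, is blocked, or waits on a parent."""
    age = current_year - profile.birth_year
    if age >= MINIMUM_SELF_APPROVAL_AGE:
        return "installed"
    if profile.parent_contact is None:
        return "blocked: no linked parent available to approve this install"
    notify_parent(profile.parent_contact, app_id)
    return "pending parental approval"

# Example: a 14-year-old's request is held until the linked parent responds.
child = Profile(user_id="u123", birth_year=2011, parent_contact="parent@example.com")
print(request_app_install(child, "social_app", current_year=2025))
```

Even this toy version shows why critics focus on the linkage: for the gate to work at all, the OS has to know the user’s age and hold a verified parent relationship, which is precisely the tie between real-world identity and the operating system described above.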

It might not sound like a very reliable way to ensure compliance, but it would achieve some goals, in the grand scheme of things, quite separate from what the “think of the children” argument seeks to present as the reason for the age verification push.

Meta is trying to lead the way here in introducing “industry standards” – the proposal looks to embed the technology into different operating systems and app stores.

When it comes to what a social media company should consider age-appropriate content, Meta is again urging common “standards” that would be observed by everyone.

And some countries already require that parents provide government-issued ID to app stores in order for their children to use a phone and set up accounts. Meta wants the EU to mandate this “by a legislative framework that applies across all member states and for all apps teens use.”



Australia passes social media ban for kids under 16 sparking online surveillance concerns


From LifeSiteNews

By Andreas Wailzer


Australia has passed a social media ban for children under the age of 16, a seemingly prudent move but one that has raised serious concerns about online surveillance.

On Thursday, November 28, the Australian Senate passed the bill with a 34-19 vote, making it the world’s first social media ban for under-16s.

The “Online Safety Amendment Bill 2024” threatens social media companies with fines of up to 50 million AUD (about 32 million USD) if they fail to comply with the requirement to verify the age of their users.

While the official goal of the bill is to protect the mental health of children and adolescents, critics have raised concerns that the bill would establish an online surveillance system for all Australians, similar to Communist China.

“Seems like a backdoor way to control access to the Internet by all Australians,” Elon Musk wrote on X.

Journalist and free speech advocate Michael Shellenberger said that “this bill is a Trojan horse to create digital IDs, which is a giant leap into the totalitarian dystopia depicted in ‘Black Mirror,’ and already in place in China.”

The bill, which was rushed through parliament, does not give any details about how age verification will work and will not come into force until the end of next year. On November 26, the Australian Senate’s Environment and Communications Legislation Committee approved the bill under the condition that social media platforms must not force their users to give them their personal data, including information from government-issued IDs.

While this provision appears to rule out the use of digital IDs for now, the question of how the law will be enforced remains. The Guardian reports that supporters of the bill have said platforms may use biometric methods, such as facial scans, to verify the age of their users. This would, of course, mean that social media companies would collect the biometric data of all their users in Australia.

The explanatory memorandum to the bill says that there will be “robust” privacy protections, “including prohibiting platforms from using information collected for age assurance purposes for any other purpose unless explicitly agreed to by the individual.”

However, the memorandum also explains that “compliance with the minimum age obligation” will likely require platforms “to implement systems and procedures to monitor and respond to age-restricted users circumventing age assurance.”

This suggests that social media companies could continually monitor users while they use the platform, for instance by repeatedly running face scans to ensure that the person is still the same and at least 16 years old.
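To make concrete what that kind of ongoing “age assurance” could involve, here is a minimal sketch of a periodic re-verification loop. It is purely illustrative: neither the bill nor the memorandum specifies an interval, a biometric method, or any API, so every name and threshold here is an assumption.

```python
# Illustrative-only sketch: re-check a session at a fixed interval and flag it
# if a (simulated) age estimate comes back under the minimum age.
import random
import time

RECHECK_INTERVAL_SECONDS = 30 * 60  # assumed: re-verify every 30 minutes
MINIMUM_AGE = 16

def estimate_age_from_face_scan() -> int:
    # Stand-in for a biometric age-estimation call; real systems return an
    # estimate with an error margin, which is part of what critics object to.
    return random.randint(12, 40)

def recheck_session(now: float, last_check: float) -> tuple[bool, float]:
    """Return (session_still_allowed, timestamp_of_last_passed_check)."""
    if now - last_check < RECHECK_INTERVAL_SECONDS:
        return True, last_check   # no re-check due yet
    if estimate_age_from_face_scan() < MINIMUM_AGE:
        return False, now         # flag or lock the session for review
    return True, now              # passed; restart the interval

# Example: simulate a re-check two hours after the last successful one.
last_verified = time.time() - 2 * 60 * 60
allowed, last_verified = recheck_session(time.time(), last_verified)
print("session still allowed:", allowed)
```

Even this toy loop makes the trade-off visible: continuous compliance only works if the platform can trigger a biometric check on demand, which is exactly the kind of routine data collection the privacy objections are about.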

The vaguely worded bill also does not specify which companies will be affected by the age restriction. Communications Minister Michelle Rowland said that TikTok, Instagram, X, Reddit, Facebook, and Snapchat will likely be included, while YouTube will be excluded due to its educational purpose.

In addition to the under-16 social media ban requiring age verification of users, the Australian government also sought to curb speech online via a draconian “Misinformation and Disinformation Bill.” However, the government had to abandon the controversial bill after facing significant cross-party opposition in the Senate. The bill would have forced social media companies to remove information that was “reasonably verifiable as false” if the “misinformation and disinformation” could cause serious harm. The vague definitions of these terms would have allowed social media companies or the government to arbitrarily censor content they deemed unwanted.
