
The End of Online Anonymity? Australia’s New Law Pushes Digital ID for Everyone To Ban Kids From Social Media


Australia is gearing up to roll out some of the world’s strictest social media rules, with Parliament having pushed through legislation to bar anyone under 16 from creating accounts on platforms like Facebook, Instagram, Snapchat, and TikTok. It’s a sweeping measure but, as the ink dries, the questions are piling up.

Prime Minister Anthony Albanese’s Labor government and the opposition teamed up on Thursday to pass the new restrictions with bipartisan enthusiasm. And why not? Opinion polls show a whopping 77% of Australians are behind the idea. Protecting kids online is an easy sell, which is why it’s often used to usher in the most draconian of laws. Still, the devil—as always—is in the details.

Proof of Age, But at What Cost?

Here’s the crux of the new law: to use social media, Australians will need to prove they’re old enough. That means showing ID, effectively ending the anonymity that’s long been a feature (or flaw, depending on your perspective) of the online experience. In theory, this makes sense—keeping kids out of online spaces designed for adults is hardly controversial. But in practice, it’s like using a sledgehammer to crack a walnut.

For one, there’s no clear blueprint for how this will work. Will social media platforms require passports and birth certificates at sign-up? Who’s going to handle and secure this flood of personal information? The government hasn’t offered much clarity and, until it does, the logistics look shaky.

And then there’s the matter of enforcement. Teenagers are famously tech-savvy, and history has shown that banning them from a platform is more of a speed bump than a roadblock. With VPNs, fake IDs, and alternate accounts already standard fare for navigating internet restrictions, how effective can this law really be?

The Hasty Debate

Critics on both sides of Parliament flagged concerns about the speed with which this legislation moved forward. But the Albanese government pressed ahead, arguing that urgent action was needed to protect young people. Their opponents in the Liberal-National coalition, not wanting to appear soft on tech regulation, fell in line. The result? A law that feels more like a political statement than a well-thought-out policy.

There’s no denying the appeal of bold action on Big Tech. Headlines about online predators and harmful content make it easy to rally public support. But there’s a fine line between decisive governance and reactionary policymaking.

Big Questions, Few Answers

The most glaring issue is privacy. Forcing users to hand over ID to access social media opens up a Pandora’s box of security concerns. Centralizing sensitive personal data creates a tempting target for hackers, and Australia’s track record with large-scale data breaches isn’t exactly reassuring.

There’s also the question of what happens when kids inevitably find workarounds. Locking them out of mainstream platforms doesn’t mean they’ll stop using the internet—it just pushes them into less regulated, potentially more harmful digital spaces. Is that really a win for online safety?

A Global Watch Party

Australia’s bold move is already drawing attention from abroad. Governments worldwide are grappling with how to regulate social media, and this legislation could set a precedent. But whether it becomes a model for others or a cautionary tale remains to be seen.

For now, the Albanese government has delivered a strong message: protecting children online is a priority. But the lack of clear answers about enforcement and privacy leaves the impression that this is a solution in search of a strategy.

All on the Platforms

Under the new social media law, the responsibility for enforcement doesn’t rest with the government, but with the very companies it targets. Platforms like Facebook, TikTok, and Instagram will be tasked with ensuring no Australian under 16 manages to slip through the digital gates. If they fail?

They’ll face fines of up to A$50 million (about $32.4 million USD). That’s a steep price for failing to solve a problem the government itself hasn’t figured out how to address.

The legislation offers little in the way of specifics, leaving tech giants to essentially guess how they’re supposed to pull off this feat. The law vaguely mentions taking “reasonable steps” to verify age but skips the critical part: defining what “reasonable” means.

The Industry Pushback

Tech companies, predictably, are not thrilled. Meta, in its submission to a Senate inquiry, called the law “rushed” and out of touch with the current limitations of age-verification technology. “The social media ban overlooks the practical reality of age assurance technology,” Meta argued. Translation? The tools to make this work either don’t exist or aren’t reliable enough to enforce at scale.

X didn’t hold back either. The platform warned of potential misuse of the sweeping powers the legislation grants to the minister for communications. X CEO Linda Yaccarino’s team even raised concerns that these powers could be used to curb free speech — another way of saying that regulating who gets to log on could quickly evolve into regulating what they’re allowed to say.

And it’s not just the tech companies pushing back. The Human Rights Law Centre questioned the lawfulness of the bill, highlighting how it opens the door to intrusive data collection while offering no safeguards against abuse.

Promises, Assurances, and Ambiguities

The government insists it won’t force people to hand over passports, licenses, or tap into the contentious new digital ID system to prove their age. But here’s the catch: there’s nothing in the current law explicitly preventing that, either. The government is effectively asking Australians to trust that these measures won’t lead to broader surveillance—even as the legislation creates the infrastructure to make it possible.

This uncertainty was laid bare during the bill’s rushed four-hour review. Liberal National Senator Matt Canavan pressed for clarity, and while the Coalition managed to extract a promise for amendments preventing platforms from demanding IDs outright, it still feels like a band-aid on an otherwise sprawling mess.

A Law in Search of a Strategy

Part of the problem is that the government itself doesn’t seem entirely sure how this law will work. A trial of age-assurance technology is planned for mid-2025—long after the law is expected to take effect. The communications minister, Michelle Rowland, will ultimately decide what enforcement methods apply to which platforms, wielding what critics describe as “expansive” and potentially unchecked authority.

It’s a power dynamic that brings to mind a comment from one of Rowland’s predecessors, Stephen Conroy, who once bragged about his ability to make telecommunications companies “wear red underpants on [their] head” if he so desired. Tech companies now face the unenviable task of interpreting a vague law while bracing for whatever decisions the minister might make in the future.

The list of platforms affected by the law is another moving target. Government officials have dropped hints in interviews—YouTube, for example, might not make the cut—but these decisions will ultimately be left to the minister. This pick-and-choose approach adds another layer of uncertainty, leaving tech companies and users alike guessing at what’s coming next.

The Bigger Picture

The debate around this legislation is as much about philosophy as it is about enforcement. On one hand, the government is trying to address legitimate concerns about children’s safety online. On the other, it’s doing so in a way that raises serious questions about privacy, free speech, and the limits of state power over the digital realm.

Australia’s experiment could become a model for other countries grappling with the same challenges—or a cautionary tale of what happens when governments legislate without a clear plan. For now, the only certainty is uncertainty. In a year’s time, Australians might find themselves proving their age every time they try to log in—or watching the system collapse under the weight of its own contradictions.


US Expands Biometric Technology in Airports Despite Privacy Concerns

Biometric systems promise efficiency at airports, but concerns over data security and transparency persist.


Biometric technology is being rolled out at US airports at an unprecedented pace, with plans to extend these systems to hundreds more locations in the coming years. The Transportation Security Administration (TSA) is driving a significant push toward facial recognition and other biometric tools, claiming improved efficiency and security. However, the expansion has sparked growing pushback, with privacy advocates and lawmakers voicing concerns about data security, transparency, and the potential for misuse of the technology.

US Customs and Border Protection (CBP) has already implemented its Biometric Facial Comparison system at 238 airports, including 14 international locations. This includes all CBP Preclearance sites and several major departure hubs. CBP says its Biometric Exit program is rapidly gaining traction, with new airport partners joining monthly and positive feedback reported from passengers.

Meanwhile, the TSA has equipped nearly 84 airports with its next-generation Credential Authentication Technology (CAT-2) scanners, which incorporate facial recognition. This rollout is part of a broader effort to bring biometrics to over 400 airports nationwide. These advancements are detailed in a TSA fact sheet aimed at building public awareness of the initiative.

Opposition and Privacy Concerns

Despite assurances from TSA and CBP, critics remain skeptical. Some lawmakers, led by Senator Jeff Merkley, argue that the TSA has yet to justify the need for biometric systems when previous technologies already authenticated IDs effectively. Privacy advocates warn that the widespread use of facial recognition could set a dangerous precedent, normalizing surveillance and threatening individual freedoms.

The debate is closely tied to the federal REAL ID Act, introduced two decades ago to standardize identification requirements for air travel. As of now, many states have failed to fully implement REAL ID standards, and only a portion of Americans have acquired compliant credentials. Reports indicate that fewer than half of Ohio residents and just 32 percent of Kentuckians have updated their IDs, even as the May 7, 2025, deadline approaches.

Biometric Adoption on the Global Stage

Beyond the US, biometric systems are gaining momentum worldwide. India’s Digi Yatra program has attracted 9 million active users, adding 30,000 new downloads daily. The program processes millions of flights while emphasizing privacy by storing data on users’ mobile devices rather than centralized databases. Plans are underway to expand the program further, including international pilots scheduled for mid-2025.
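The detail worth pausing on in reports about Digi Yatra is the architecture: credentials are said to live on the traveller’s phone rather than in a central database. The sketch below illustrates that general decentralized-credential pattern only, not Digi Yatra’s actual protocol. Every name is invented for illustration, and the shared HMAC secret stands in for the public-key signatures a real deployment would use.

```python
# General sketch of an on-device credential: the issuer signs it once,
# the phone stores it, and a checkpoint later verifies the signature
# instead of querying a central database. Illustrative only; a real
# system would use public-key signatures, not a shared HMAC secret.
import hashlib
import hmac
import json

ISSUER_SECRET = b"demo-secret"  # stand-in for the issuer's signing key


def issue_credential(passenger_id: str, flight: str) -> dict:
    """Issued once, then held only on the passenger's device."""
    body = {"passenger": passenger_id, "flight": flight}
    payload = json.dumps(body, sort_keys=True).encode()
    tag = hmac.new(ISSUER_SECRET, payload, hashlib.sha256).hexdigest()
    return {"body": body, "signature": tag}


def verify_at_gate(credential: dict) -> bool:
    """The checkpoint checks the signature locally, with no central lookup."""
    payload = json.dumps(credential["body"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["signature"])


if __name__ == "__main__":
    cred = issue_credential("traveller-123", "AI-101")  # stored on the phone
    print("accepted at gate:", verify_at_gate(cred))
```

The design choice this illustrates is the one the program’s defenders point to: the checkpoint never needs to consult a central store of biometric records, which is the claimed privacy advantage over a centralized database.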

While biometric technology offers alleged benefits, such as faster boarding and enhanced security, it also poses serious risks. Privacy advocates caution against unchecked implementation, especially since, one day, this form of check-in is likely to be mandatory.

The TSA’s aggressive push for biometrics places the United States at the forefront of this global shift.


Age of online privacy coming to an end as Australia adopts digital ID


Australia’s eSafety Commissioner Defends Controversial Online Age Verification Digital ID Methods

Julie Inman Grant, Australia’s eSafety Commissioner (to her critics, the country’s chief censor), has attempted to explain how the Online Safety Amendment (Social Media Minimum Age) Bill 2024 will be enforced.

The bill mandates online age verification and bans minors under 16 from using social platforms, in what is described as “the strictest crackdown” yet anywhere in the world – with many other governments, no doubt, watching how things pan out in Australia before making their own restrictive moves.

The “small” question that remains to be answered Down Under now is this: how does the government propose to determine the age of a person using an online platform before platforms are ordered to ban them?

Grant may be trying to sell one method as less invasive, less potentially harmful, and less controversial than another – but they appear to be as bad as each other, only in different ways.

“There are really only three ways you can verify someone’s age online, and that’s through ID, through behavioral signals, or through biometrics,” she told NPR.

The “ID” route means that every internet user would have to provide government-issued documents to platforms, revealing their real-world identity to these platforms and anyone else they’re in business with (such as governments and data brokers) and ending online anonymity for everyone.

And that, in fact, is the only sure-fire way to determine someone’s age. The other two produce estimates. The biometrics Grant mentions refer to uploading selfies to companies like Yoti, who then guess a user’s age.


Better than the “ID” method – that is, if you believe it’s a good idea for minors, or anyone, to just hand over biometric data to third parties.

Then, there are “behavioral signals” – and it sounds positively bonkers that a government would entertain the idea of deploying such technology on/against its citizens.

Grant said she met with yet another third party in the US – “an age assurance provider.” This unnamed company doesn’t monitor and analyze your facial features, but your hand gestures. For age verification.

Like so: “Say you do a peace sign then a fist to the camera. It follows your hand movements. And medical research has shown that based on your hand movement, it can identify your age.”
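For readers trying to picture what platforms would actually have to build, here is a minimal, purely hypothetical sketch of the three routes Grant lists. The law specifies no mechanism, so every function, value, and provider call below is invented for illustration and stands in for whatever vendor a platform might contract.

```python
# Hypothetical sketch of the three age-assurance routes described above.
# None of the provider calls are real APIs; the bodies are placeholders
# for a document check, a selfie-based age estimate, or a behavioural
# (e.g. hand-gesture) estimate.
from dataclasses import dataclass
from enum import Enum


class Method(Enum):
    GOVERNMENT_ID = "id"            # exact age, but ends anonymity
    BIOMETRIC_ESTIMATE = "face"     # selfie upload, returns an estimate
    BEHAVIOURAL_SIGNAL = "gesture"  # hand-movement analysis, also an estimate


@dataclass
class AgeCheckResult:
    estimated_age: float
    is_exact: bool  # only the document route can claim this
    method: Method


def check_age(method: Method, payload: bytes) -> AgeCheckResult:
    """Dispatch a sign-up to one of the three routes (all stubbed here)."""
    if method is Method.GOVERNMENT_ID:
        # Placeholder: parse a scanned document and read the date of birth.
        return AgeCheckResult(estimated_age=17.0, is_exact=True, method=method)
    if method is Method.BIOMETRIC_ESTIMATE:
        # Placeholder: send a selfie to an age-estimation vendor.
        return AgeCheckResult(estimated_age=15.2, is_exact=False, method=method)
    # Placeholder: score a short video of hand gestures.
    return AgeCheckResult(estimated_age=16.8, is_exact=False, method=method)


def may_register(result: AgeCheckResult, minimum_age: int = 16) -> bool:
    """Apply the under-16 rule; estimates leave room for error either way."""
    return result.estimated_age >= minimum_age


if __name__ == "__main__":
    outcome = check_age(Method.BIOMETRIC_ESTIMATE, payload=b"selfie-bytes")
    print(outcome, "allowed:", may_register(outcome))
```

Even in this toy form, the trade-off is visible: only the document route returns an exact answer, and it is also the route that ends anonymity; the other two return estimates that will sometimes wave minors through and sometimes lock adults out.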

One way to look at all this is that tech is being developed to step up online surveillance, while a flurry of “think of the children” laws may be here to legitimize and “legalize” that tech’s use.
