
Internet censorship laws lead a majority of Canadians to believe free speech is threatened: poll


From LifeSiteNews

By Anthony Murdoch

Amid a barrage of new internet censorship laws passed or introduced by the federal government of Prime Minister Justin Trudeau, a new survey has revealed that a majority of Canadians feel their freedom of speech is under attack.

According to results from a Leger survey conducted April 26-28 that sampled responses from 1,610 Canadians, 57 percent think their freedom of speech is being threatened, while 36 percent do not believe this to be true.

Not surprisingly, those with conservative voting intentions were the most likely, at about 76 percent, to feel that their free speech is under attack, while 70 percent of that group, as well as those over 55, feel that Canada is not as free as it once was.

The survey results also show that 62 percent of Canadians think it is “tougher to voice their opinion in their country, while 27% think it is easier.”

“Conservative voters (70%) and Canadians aged 55 or older (70%) are more likely to think that it is tougher now to express their opinion,” Leger noted in its survey.

Liberal voters, meanwhile, were the most supportive of placing limits on free speech, with 64 percent agreeing with the following: “There should be limits on freedom of speech to ensure that things such as hate speech, speeches preaching a form of intolerance, or speeches against democracy be prevented from reaching the public.”

The survey also revealed that about one in four conservative voters believe that their views are not socially acceptable.

Sixty percent of conservative voters said that free speech should never be limited in any manner and that one should be able to express their opinions publicly without issue.

As for why free speech is under attack, 11 percent blamed politicians for stirring up more hate, eight percent said “right-wing” extremists were to blame, and seven percent pointed to woke-minded thinking as the issue. Twenty-nine percent of Canadians felt that a growing lack of respect is to blame, and 13 percent thought it is due to “a degradation of the moral fibre in the country.”

When it comes to internet censorship laws, the most recent one introduced in the House of Commons is a federal government bill that could lead to large fines or jail time for vaguely defined online “hate speech” infractions under Liberal Attorney General Arif Virani’s Bill C-63, or the Online Harms Act.

LifeSiteNews recently reported how well-known Canadian psychologist Jordan Peterson and Queen’s University law professor Bruce Pardy blasted Trudeau and his government over Bill C-63.

Peterson noted that in his view, Bill C-63 is “designed … to produce a more general regime for online policing.”

“To me, that’s what it looks like,” he said.

Two other Trudeau bills dealing with freedom on the internet have already become law. The first, Bill C-11, or the Online Streaming Act, mandates that Canada’s broadcast regulator, the Canadian Radio-television and Telecommunications Commission (CRTC), oversee the regulation of online content on platforms such as YouTube and Netflix to ensure that those platforms promote content in accordance with a variety of its guidelines.

Trudeau’s other internet censorship law, the Online News Act, was passed by the Senate in June 2023.

The law mandates that Big Tech companies pay Canadian news outlets for news content posted on their platforms. As a result, Meta, the parent company of Facebook and Instagram, blocked all access to news content in Canada. Google has promised to do the same rather than pay the fees laid out in the new legislation.

Critics of the recent laws, such as tech mogul Elon Musk, have said they show that “Trudeau is trying to crush free speech in Canada.”


EU Tightens Social Media Censorship Screw With Upcoming Mandatory “Disinformation” Rules

From Reclaim The Net

What started out as the EU’s “voluntary code of practice” concerning “disinformation” – affecting tech/social media companies – is now set to turn into a mandatory code of conduct for the most influential and widely-used ones.

The news was revealed by the Irish media regulator, specifically an official of its digital services, Paul Gordon, who spoke to journalists in Brussels. The EU Commission has yet to confirm that January will be the date when the current code will be “formalized” in this way.

The legislation that would enable the “transition” is the controversial Digital Services Act (DSA), which critics often refer to as the “EU online censorship law,” the enforcement of which started in February of this year.

The “voluntary” code is at this time signed by 44 tech companies, and should it become mandatory in January 2025, it will apply to those the EU defines as Very Large Online Platforms (VLOPs), meaning those with at least 45 million monthly active users in the 27-nation bloc.

Currently, the number of such platforms is said to be 25.

In its present form, the DSA’s provisions obligate online platforms to carry out “disinformation”-related risk assessments and reveal what measures they are taking to mitigate any risks revealed by these assessments.

But when the code switches from “voluntary” to mandatory, these obligations will also include other requirements: demonetizing the dissemination of “disinformation”; having platforms, civil society groups, and fact-checkers “effectively cooperate” during elections, once again to address “disinformation”; and “empowering” fact-checkers.

This refers not only to spreading “fact-checking” across the EU member countries but also to making VLOPs finance these groups. This is despite the fact that many of the most prominent “fact-checkers” have been consistently accused of fostering censorship rather than checking content for accuracy in an unbiased manner.

The code was first introduced (in its “voluntary” form) in 2022, with Google, Meta, and TikTok among the prominent signatories – while these rules originate from a “strengthened” EU Code of Practice on Disinformation based on the Commission’s Guidance issued in May 2021.

“It is for the signatories to decide which commitments they sign up to and it is their responsibility to ensure the effectiveness of their commitments’ implementation,” the EU said when the code was introduced; that was the “voluntary” element, and the Commission said at the time that it had not “endorsed” the code.

It appears the EC is now about to “endorse” the code, and then some – there are active preparations to make it mandatory.

They Are Scrubbing the Internet Right Now

From the Brownstone Institute

By Jeffrey A. Tucker and Debbie Lerman

Instances of censorship are growing to the point of normalization. Despite ongoing litigation and more public attention, mainstream social media has been more ferocious in recent months than ever before. Podcasters know for sure what will be instantly deleted and debate among themselves over content in gray areas. Some, like Brownstone, have given up on YouTube in favor of Rumble, sacrificing vast audiences if only to see their content survive at all.

It’s not always about being censored or not. Today’s algorithms include a range of tools that affect searchability and findability. For example, the Joe Rogan interview with Donald Trump racked up an astonishing 34 million views before YouTube and Google tweaked their search engines to make it hard to discover, while even presiding over a technical malfunction that disabled viewing for many people. Faced with this, Rogan went to the platform X to post all three hours.

Navigating this thicket of censorship and quasi-censorship has become part of the business model of alternative media.

Those are just the headline cases. Beneath the headlines, there are technical events taking place that are fundamentally affecting the ability of any historian even to look back and tell what is happening. Incredibly, the service Archive.org, which has been around since 1996, has stopped taking images of content on all platforms. For the first time in nearly three decades, we have gone a long stretch of time, since October 8-10, without this service chronicling the life of the Internet in real time.

As of this writing, we have no way to verify content posted during the three weeks of October leading up to the most contentious and consequential election of our lifetimes. Crucially, this is not about partisanship or ideological discrimination. No websites on the Internet are being archived in ways that are available to users. In effect, the whole memory of our main information system is just a big black hole right now.

The trouble at Archive.org began on October 8, 2024, when the service was suddenly hit with a massive distributed denial-of-service (DDoS) attack that not only took down the service but introduced a level of failure that nearly took it out completely. Working around the clock, Archive.org came back as a read-only service, where it stands today. However, you can only read content that was posted before the attack. The service has yet to resume any public display of newly mirrored sites.

In other words, the only source on the entire World Wide Web that mirrors content in real time has been disabled. For the first time since the invention of the web browser itself, researchers have been robbed of the ability to compare past with present content, a comparison that is a staple of research into government and corporate actions.

It was this service, for example, that enabled Brownstone researchers to discover precisely what the CDC had said about Plexiglas, filtration systems, mail-in ballots, and rental moratoriums. That content was all later scrubbed from the live Internet, so accessing archived copies was the only way we could know and verify what was true. It was the same with the World Health Organization and its disparagement of natural immunity, which was later changed. We were able to document the shifting definitions thanks only to this tool, which is now disabled.
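
For readers who want to try this kind of verification themselves, the Internet Archive exposes a public "availability" lookup that returns the closest archived snapshot of a given URL. The short Python sketch below is illustrative only: the endpoint is real, but the example page and date are placeholders rather than anything cited in this article, and the lookup will naturally return only captures made before the outage described here.

import json
import urllib.parse
import urllib.request

def closest_snapshot(url, timestamp=None):
    # Query the Internet Archive's public availability endpoint for the
    # archived capture of `url` closest to `timestamp` (format YYYYMMDD).
    params = {"url": url}
    if timestamp:
        params["timestamp"] = timestamp
    query = urllib.parse.urlencode(params)
    with urllib.request.urlopen(
        "https://archive.org/wayback/available?" + query, timeout=30
    ) as resp:
        data = json.load(resp)
    # The response carries an "archived_snapshots" object; "closest" is
    # absent when no capture exists for the requested URL.
    return data.get("archived_snapshots", {}).get("closest")

# Hypothetical example: ask for a CDC page roughly as it stood in mid-2020.
snap = closest_snapshot("https://www.cdc.gov/", "20200601")
if snap:
    print(snap["url"], snap["timestamp"])
else:
    print("No archived snapshot found.")

If the lookup succeeds, the returned URL points to a stored capture on web.archive.org that can be opened in any browser and compared against the live page.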

What this means is the following: Any website can post anything today and take it down tomorrow and leave no record of what they posted unless some user somewhere happened to take a screenshot. Even then there is no way to verify its authenticity. The standard approach to know who said what and when is now gone. That is to say that the whole Internet is already being censored in real time so that during these crucial weeks, when vast swaths of the public fully expect foul play, anyone in the information industry can get away with anything and not get caught.

We know what you are thinking. Surely this DDOS attack was not a coincidence. The time was just too perfect. And maybe that is right. We just do not know. Does Archive.org suspect something along those lines? Here is what they say:

Last week, along with a DDOS attack and exposure of patron email addresses and encrypted passwords, the Internet Archive’s website javascript was defaced, leading us to bring the site down to access and improve our security. The stored data of the Internet Archive is safe and we are working on resuming services safely. This new reality requires heightened attention to cyber security and we are responding. We apologize for the impact of these library services being unavailable.

Deep state? As with all these things, there is no way to know, but the effort to blast away the ability of the Internet to have a verified history fits neatly into the stakeholder model of information distribution that has clearly been prioritized on a global level. The Declaration of the Future of the Internet makes that very clear: the Internet should be “governed through the multi-stakeholder approach, whereby governments and relevant authorities partner with academics, civil society, the private sector, technical community and others.”  All of these stakeholders benefit from the ability to act online without leaving a trace.

To be sure, a librarian at Archive.org has written that “While the Wayback Machine has been in read-only mode, web crawling and archiving have continued. Those materials will be available via the Wayback Machine as services are secured.”

When? We do not know. Before the election? In five years? There may be technical reasons, but if web crawling is continuing behind the scenes, as the note suggests, it would seem that material could be made available in read-only mode now. It is not.

Disturbingly, this erasure of Internet memory is happening in more than one place. For many years,  Google offered a cached version of the link you were seeking just below the live version. They have plenty of server space to enable that now, but no: that service is now completely gone. In fact, the Google cache service officially ended just a week or two before the Archive.org crash, at the end of September 2024.

Thus the two available tools for searching cached pages on the Internet disappeared within weeks of each other and within weeks of the November 5th election.

Other disturbing trends are also turning Internet search results increasingly into AI-controlled lists of establishment-approved narratives. The web standard used to be for search result rankings to be governed by user behavior, links, citations, and so forth. These were more or less organic metrics, based on an aggregation of data indicating how useful a search result was to Internet users. Put very simply, the more people found a search result useful, the higher it would rank. Google now uses very different metrics to rank search results, including what it considers “trusted sources” and other opaque, subjective determinations.
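
The article does not name a specific algorithm, but the best-known example of the organic, link-derived ranking it describes is PageRank, in which a page ranks higher when many well-ranked pages link to it. The sketch below is a simplified illustration of that idea over a made-up three-site link graph, not a description of how any search engine ranks results today.

def pagerank(links, damping=0.85, iterations=50):
    # links: dict mapping each page to the list of pages it links to.
    pages = set(links) | {p for targets in links.values() for p in targets}
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}

    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page in pages:
            targets = links.get(page, [])
            if targets:
                # Each page passes a damped share of its rank to the
                # pages it links to.
                share = damping * rank[page] / len(targets)
                for target in targets:
                    new_rank[target] += share
            else:
                # A page with no outlinks spreads its weight evenly.
                for other in pages:
                    new_rank[other] += damping * rank[page] / n
        rank = new_rank
    return rank

# Made-up link graph: three small sites citing one another.
graph = {
    "siteA": ["siteB", "siteC"],
    "siteB": ["siteC"],
    "siteC": ["siteA"],
}
for page, score in sorted(pagerank(graph).items(), key=lambda x: -x[1]):
    print(page, round(score, 3))

Modern engines layer many additional, opaque signals on top of anything like this, which is exactly the shift described above.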

Furthermore, the most widely used service that once ranked websites based on traffic is now gone. That service was called Alexa. The company that created it was independent. Then one day in 1999, it was bought by Amazon. That seemed encouraging because Amazon was well-heeled. The acquisition seemed to codify the tool that everyone was using as a kind of metric of status on the web. It was common back in the day to take note of an article somewhere on the web and then look it up on Alexa to see its reach. If it was important, one would take notice, but if it was not, no one particularly cared.

This is how an entire generation of web technicians functioned. The system worked as well as one could possibly expect.

Then, in 2014, years after acquiring the ranking service Alexa, Amazon did a strange thing. It released its home assistant (and surveillance device) with the same name. Suddenly, everyone had them in their homes and would find out anything by saying “Hey Alexa.” Something seemed strange about Amazon naming its new product after an unrelated business it had acquired years earlier. No doubt there was some confusion caused by the naming overlap.

Here’s what happened next. In 2022, Amazon actively took down the web ranking tool. It didn’t sell it. It didn’t raise the prices. It didn’t do anything with it. It suddenly made it go completely dark.

No one could figure out why. It was the industry standard, and suddenly it was gone. Not sold, just blasted away. No longer could anyone figure out the traffic-based website rankings of anything without paying very high prices for hard-to-use proprietary products.

All of these data points, which might seem unrelated when considered individually, are actually part of a long trajectory that has shifted our information landscape into unrecognizable territory. The Covid events of 2020-2023, with massive global censorship and propaganda efforts, greatly accelerated these trends.

One wonders if anyone will remember what it was once like. The hacking and hobbling of Archive.org underscores the point: there will be no more memory.

As of this writing, fully three weeks of web content have not been archived. What we are missing and what has changed is anyone’s guess. And we have no idea when the service will come back. It is entirely possible that it will not come back, that the only real history to which we can take recourse will be pre-October 8, 2024, the date on which everything changed.

The Internet was founded to be free and democratic. It will require herculean efforts at this point to restore that vision, because something else is quickly replacing it.

Authors

Jeffrey A. Tucker

Jeffrey Tucker is Founder, Author, and President at Brownstone Institute. He is also Senior Economics Columnist for Epoch Times, author of 10 books, including Life After Lockdown, and many thousands of articles in the scholarly and popular press. He speaks widely on topics of economics, technology, social philosophy, and culture.
