
Censorship Industrial Complex

New WEF report suggests leveraging ESG scoring to enforce globalist ideas on online platforms


From LifeSiteNews

By Tim Hinchliffe

Unelected globalists like those at the World Economic Forum are attempting to associate ‘disinformation’ and ‘hate speech’ with human rights abuses to empower themselves and silence dissent online.

In a new report, the World Economic Forum (WEF) says that environmental, social, and governance (ESG) metrics can prove valuable for evaluating platforms on their handling of disinformation, hate speech, and abuse material.

Published on June 6, 2024, the WEF white paper, “Making a Difference: How to Measure Digital Safety Effectively to Reduce Risks Online,” says that, “In an increasingly interconnected world, it is essential to measure digital safety in order to understand risks, allocate resources and demonstrate compliance with regulations.”

If measuring digital safety is considered to be essential, what then are the actual online harms that would necessitate measuring digital safety?

The latest white paper only gives three examples: disinformation, hate speech, and abuse material – as if they were all equal under the banner of online harm.

“ESG metrics present another valuable perspective for evaluating online safety” — How to Measure Digital Safety Effectively to Reduce Risks Online, WEF, June 2024

One method for evaluating online safety described in the latest WEF white paper is to leverage ESG scoring, which is essentially a social credit system for companies, designed to make them fall in line with unelected globalist ideologies even when those ESG policies are detrimental to their bottom line.

“Within ESG investing, companies are assessed based on their environmental impact, social responsibility and corporate governance practices,” the report reads.

Similarly, online platforms could be evaluated based on their efforts to promote a safe and inclusive online environment, and the transparency of content moderation policies.

Online platforms can also be evaluated based on their processes, tools and rules designed to promote the ‘safe use’ of their services in a manner that mitigates harm to vulnerable non-user groups.

And who will be evaluating online platforms in this Orwellian dystopia? Why, the unelected globalists themselves, of course!

Best to leave these decisions and all the power to bureaucrats who have our best interests at heart, for the greater, collectivist good.

“An increase in the speed of content removals may reflect proactive moderation efforts, but it could also hint at overzealous censorship that stifles free expression” — How to Measure Digital Safety Effectively to Reduce Risks Online, WEF, June 2024

The WEF considers disinformation, hate speech, and abuse material as all being online harms that need to be measured and rectified.

But why do they lump everything together under this vague, blanket term of digital safety?

It is so that unelected globalist NGOs like the WEF can have more power and influence over government regulators concerning what type of information people are allowed to access through their service providers.

According to the report:

Digital safety metrics reinforce accountability, empowering NGOs and regulators to oversee service providers effectively.

They also serve as benchmarks for compliance monitoring, enhancing user trust in platforms, provided they are balanced with privacy considerations and take into account differentiation among services.

For the unelected globalist bureaucrats, measuring digital safety is about empowering themselves and forcing people into compliance with unelected globalist ideologies (with the help of regulators), all while balancing privacy considerations that are antithetical to everything they’re trying to achieve with the great reset and the fourth industrial revolution.

WEF founder Klaus Schwab has stated on numerous occasions that the so-called fourth industrial revolution will lead to the fusion of our physical, biological, and digital identities.

Schwab openly talks about a future in which people’s brain activity will be decoded to reveal how they are feeling and what they are thinking, and in which their digital avatars will live on after death, their brains replicated using artificial intelligence.

How’s that for balancing privacy considerations in the digital world?

“Digital safety decisions must be rooted in international human rights frameworks” — Typology of Online Harms, WEF, August 2023

While the latest WEF white paper only lists disinformation, hate speech, and abuse material, it builds upon an August 2023 insight report entitled “Toolkit for Digital Safety Design Interventions and Innovations: Typology of Online Harms,” which expands the scope of what constitutes online harm into various categories:

  • Threats to personal and community safety,
  • Harm to health and well-being,
  • Hate and discrimination,
  • Violation of dignity,
  • Invasion of privacy,
  • Deception and manipulation.

Many of the harms listed in last year’s report involve heinous acts against people of all ages and identities, but in that same list of online harms the WEF highlights misinformation and disinformation without giving a single, solitary example of either one.

With misinformation and disinformation, the typology report states that “[b]oth can be used to manipulate public opinion, interfere with democratic processes such as elections or cause harm to individuals, particularly when it involves misleading health information.”

In the same report, the unelected globalists admit that it’s almost impossible “to define or categorize common types of harm.”

The authors say that “there are regional differences in how specific harms are defined in different jurisdictions and that there is no international consensus on how to define or categorize common types of harm.

“Considering the contextual nature of online harm, the typology does not aim to offer precise definitions that are universally applicable in all contexts.”

By not offering precise definitions, they are deliberately making “online harm” a vague concept that can be left wide open to just about any interpretation, which makes quashing dissent and obfuscating the truth even easier because these “online harms,” in their eyes, must be seen as human rights abuses:

By framing online harms through a human rights lens, this typology emphasizes the impacts on individual users and aims to provide a broad categorization of harms to support global policy development.

Once again, the authors are deliberately putting misinformation, disinformation, and so-called hate speech in the same category as abuse, harassment, doxing, and criminal acts of violence under this “broad categorization of harms.”

That way, they can treat anyone they deem as a threat for speaking truth to power in the same manner as they would for people who commit the most egregious crimes known to humanity.

The title of the latest white paper suggests that it’s all about measuring digital safety, but that can be misleading.

It’s like what lawmakers do when they introduce bills like the Inflation Reduction Act, which had nothing to do with reducing inflation and everything to do with advancing the green agenda, decarbonization, and net-zero policies.

Similarly, the WEF’s latest white paper may have little or nothing to do with reducing risks online, as the title suggests.

But it does have a lot to do with making sure that misinformation, disinformation, and hate speech are associated with human rights abuses and other acts of real criminality.

In doing so, the ESG proponents can swoop in and consolidate more power for their public-private partnerships – the fusion of corporation and state.

Reprinted with permission from The Sociable.




UNESCO launches course aimed at ‘training’ social media influencers to ‘report hate speech’


From LifeSiteNews

By Tim Hinchliffe

UNESCO bills its new ‘training’ initiative as empowering participants to be more credible and resilient, while simply turning independent content creators into talking heads for the establishment.

UNESCO and the Knight Center for Journalism have launched training courses, e-books, and surveys on disinformation and hate speech for influencers and content creators, big and small.

Last month, UNESCO published the results of a survey called “Behind the Screens: Insights from Digital Content Creators,” which found that, among 500 content creators in 45 countries who had a minimum of 1,000 followers, 62 percent said they did “not carry out rigorous and systematic fact-checking of information prior to sharing it,” while 73 percent expressed “the wish to be trained to do so.”

And lo and behold! UNESCO and the Knight Center for Journalism in the Americas have launched a re-education course to brainwash independent creators into thinking like unelected globalists and the legacy media, whose credibility is at an all-time low:

The journalism industry is on high alert as news audiences continue to migrate away from legacy media to social media, and many young people place more trust in TikTokers than journalists working at storied news outlets

“Respondents to the survey expressed interest in taking UNESCO’s free online course designed to equip participants with media and information literacy skills and knowledge,” the report states.

To get an idea of the make-up of those 500 content creators that were surveyed in the UNESCO study:

  • 68 percent were nano-influencers – those with 1,000 to 10,000 followers
  • 25 percent were micro-influencers – those with 10,000 to 100,000 followers
  • 4 percent were macro-influencers – those with 100,000 to 1,000,000 followers
  • 6 percent were mega-influencers – those with over 1,000,000 followers

Only 12.2 percent of the 500 people surveyed produced content under the category of “current affairs/politics and economy” while the majority covered “fashion/lifestyle” (39.3 percent), “beauty” (34 percent), “travel and food” (30 percent), and “gaming” (29 percent).

Equip yourself to combat online misinformation, disinformation, hate speech, and harmful AI content. Collaborate with fellow journalists and content creators to promote transparency and accountability on digital platforms, empowering your audience with the media and information literacy skills they need to navigate today’s information landscape.

In addition to the survey and the online course called “Digital Content Creators and Journalists: How to Be a Trusted Voice Online,” UNESCO and the Knight Center also published an e-book in October called “Content Creators and Journalists: Redefining News and Credibility in the Digital Age.”

This pyramid of propaganda is billed as empowering influencers to be more credible and resilient, but these efforts are also aimed at turning independent content creators into talking heads for the establishment.


Despite their expanding outreach, many digital content creators who work independently face significant challenges including the lack of institutional support, guidance, and recognition. — UNESCO, Behind the Screens: Insights from Digital Content Creators, November 2024

How can an independent content creator remain independent if he or she needs institutional support, guidance, and recognition?

This is an attempt by the United Nations to take independence out of the equation, so that independent creators’ messaging becomes indistinguishable from mainstream, establishment narratives.

And between the survey and the e-book, there is not one, single, solitary example of disinformation or hate speech – save perhaps the claim that denying official climate change narratives is considered disinformation, but that’s highly debatable.

Threats to collective climate action are often perpetuated not only by individual creators but by industries, like fossil fuels, that actively shape public discourse to their advantage.

Speaking of climate change, the e-book contains a lengthy chapter called “Content Creators and Climate Change” that is entirely dedicated to pushing climate activism while claiming climate change disinformation is often perpetuated by coordinated campaigns from fossil fuel industries.

The UNESCO documents place heavy emphasis on disclosing who’s funding content creators while ignoring the alleged influence of its own partner, the Chinese Communist Party (CCP), over UNESCO:

The Chinese Communist Party uses UNESCO to “rewrite history” and to “legitimize the party’s rule over regions with large ethnic minorities.”

When held to a mirror, UNESCO comes off as little more than hypocritical, with massive conflicts of interest of its own:

One of the biggest ethical questions is knowing from where content creators derive their income.


At the same time, UNESCO points readers towards organizations like factcheck.org, which itself is funded by the likes of the U.S. State Department and the Robert Wood Johnson Foundation, the latter of which holds approximately $2 billion of stock in COVID vaccine manufacturer J&J, according to U.S. Rep. Thomas Massie.

In January 2021, UNESCO, the WHO, UNDP, EU, and the Knight Center for Journalism in the Americas ran a similar propaganda campaign training journalists on so-called COVID vaccine disinformation, just as they are now doing with so-called climate change disinformation for content creators.

Another goal of UNESCO and the Knight Center is to create an environment where content creators snitch on one another under the guise of “hate speech”:

Among those targeted by hate speech, most chose to ignore it (31.5%). Only one-fifth (20.4%) reported it to social media platforms. This indicates an area where UNESCO and its partners could provide valuable training for digital content creators on how to effectively address and report hate speech.

In other words, the U.N. is partnering with journalists to teach influencers how to become victims that need protection.

Hey! Content creators. Were you aware that any criticism against the propaganda that we’ve planted within you means that you were a victim of hate speech? No? Well, climb on board and let’s “effectively address and report hate speech!”

Reprinted with permission from The Sociable.


TikTok Battles Canada’s Crackdown, Pitching Itself as a “Misinformation” Censorship Ally




TikTok challenges Canada’s decision to shut down its operations, citing its role in combating “misinformation” as a reason the government should let it stay in the country.

In Canada, TikTok is attempting to get the authorities to reverse the decision to shut down its business operations by going to court – but also by recommending itself as a proven and reliable ally in combating “harmful content” and “misinformation.”

Canada last month moved to shut down TikTok’s operations, without banning the app itself. All this is happening ahead of federal elections amid the government’s efforts to control social media narratives, always citing fears of “misinformation” and “foreign interference” as the reasons.

TikTok, owned by China’s ByteDance, was accused of – via its parent company – representing “specific national security risks” when the decision regarding its corporate presence was made in November; no details have been made public regarding those alleged risks, however.

Now TikTok Canada’s director of public policy and government affairs, Steve de Eyre, is telling the local press that the newly created circumstances are making it difficult for the company to work with election regulators and “civil society” to ensure election integrity – something de Eyre said was previously done successfully.

In 2021, he noted, TikTok initiated collaboration with Elections Canada (the agency that organizes elections and has the power to flag social media content) which included TikTok adding links to all election-related videos that directed users toward “verified information.”

And the following year, TikTok was engaged in monitoring its platform for “potentially violent” content during the Freedom Convoy protests against COVID mandates.

More recently, TikTok was also on the lookout for “foreign interference and hateful content” related to the Brampton clashes between Sikhs and Hindus.

This approach, de Eyre argues, is now jeopardized because TikTok no longer has employees in Canada who could inform the platform’s decisions about the country’s political and cultural “context.”

And the political context is that of the Trudeau government playing the election misinformation card indirectly and directly, to put pressure on social sites.

Even though the decision regarding the company’s business operations has been described by Foreign Minister Melanie Joly as “a message to China” – it’s really a message to TikTok, since the app remains available, but has been “put on notice.”

