

World Economic Forum lists ‘disinformation’ and ‘climate change’ as most severe threats in 2024


From LifeSiteNews

By Joe Kovacs

The World Economic Forum’s Global Risks Report 2024 says the world is ‘plagued by a duo of dangerous crises: climate and conflict,’ which are ‘set against a backdrop of rapidly accelerating technological change and economic uncertainty.’

The World Economic Forum, the group of global elites whom those on the political right love to hate, has just issued its report on the biggest threats in 2024 and beyond.

And at the top of its list of risks is not climate, at least not in the short term.

The WEF, based in Davos, Switzerland, says the biggest short-term risk stems from fake news.

“While climate-related risks remain a dominant theme, the threat from misinformation and disinformation is identified as the most severe short-term threat in the 2024 report,” the group indicated.

“The cascading shocks that have beset the world in recent years are proving intractable. War and conflict, polarized politics, a continuing cost-of-living crisis and the ever-increasing impacts of a changing climate are destabilizing the global order.”

“The report reveals a world ‘plagued by a duo of dangerous crises: climate and conflict.’ These threats are set against a backdrop of rapidly accelerating technological change and economic uncertainty.”

The globalists say “the growing concern about misinformation and disinformation is in large part driven by the potential for AI, in the hands of bad actors, to flood global information systems with false narratives.”

The report states that over the next two years, “foreign and domestic actors alike will leverage misinformation and disinformation to widen societal and political divides.”

It indicates the threat is heightened by a heavy election calendar, with more than 3 billion people heading to the polls in 2024 and 2025 in countries including the U.S., Britain, and India.

The report suggests the spread of mis- and disinformation could result in civil unrest, but could also prompt government censorship, domestic propaganda, and controls on the free flow of information.

Rounding out the top ten risks for the next two years are: extreme weather events, societal polarization, cyber insecurity, interstate armed conflict, lack of economic opportunity, inflation, involuntary migration, economic downturn and pollution.

The ten-year list of risks puts extreme weather events at No. 1, followed by critical change to Earth systems, biodiversity loss and ecosystem collapse, natural resource shortages, misinformation and disinformation, adverse outcomes of AI technologies, involuntary migration, cyber insecurity, societal polarization and pollution.

Reprinted with permission from the WND News Center.

 




EU Tightens Social Media Censorship Screw With Upcoming Mandatory “Disinformation” Rules


From Reclaim The Net



What started out as the EU’s “voluntary code of practice” concerning “disinformation” – affecting tech/social media companies – is now set to turn into a mandatory code of conduct for the most influential and widely used ones.

The news was revealed by Paul Gordon, a digital services official at the Irish media regulator, who spoke to journalists in Brussels. The EU Commission has yet to confirm that January will be the date when the current code will be “formalized” in this way.

The legislation that would enable the “transition” is the controversial Digital Services Act (DSA), which critics often refer to as the “EU online censorship law,” the enforcement of which started in February of this year.

The “voluntary” code has so far been signed by 44 tech companies, and should it become mandatory in January 2025, it will apply to those the EU defines as Very Large Online Platforms (VLOPs) – platforms with at least 45 million monthly active users in the 27-nation bloc.

Currently, the number of such platforms is said to be 25.

In its present form, the DSA’s provisions obligate online platforms to carry out “disinformation”-related risk assessments and reveal what measures they are taking to mitigate any risks revealed by these assessments.

But when the code switches from “voluntary” to mandatory, these obligations will also include other requirements: demonetizing the dissemination of “disinformation”; having platforms, civil society groups, and fact-checkers “effectively cooperate” during elections, once again to address “disinformation”; and “empowering” fact-checkers.

This refers not only to spreading “fact-checking” across the EU member countries but also to making VLOPs finance these groups – despite the fact that many of the most prominent “fact-checkers” have been consistently accused of fostering censorship rather than checking content for accuracy in an unbiased manner.

The code was first introduced (in its “voluntary” form) in 2022, with Google, Meta, and TikTok among the prominent signatories – while these rules originate from a “strengthened” EU Code of Practice on Disinformation based on the Commission’s Guidance issued in May 2021.

“It is for the signatories to decide which commitments they sign up to and it is their responsibility to ensure the effectiveness of their commitments’ implementation,” the EU said at the time – that would have been the “voluntary” element – while the Commission said it had not “endorsed” the code.

It appears the EC is now about to “endorse” the code, and then some – there are active preparations to make it mandatory.



Joe Rogan Responds To YouTube Censorship of Trump Interview


From Reclaim The Net


Joe Rogan has accused YouTube of making it difficult for users to find his recent interview with former President Donald Trump, saying that the platform initially only displayed short clips from mainstream media instead of the full episode. Rogan sarcastically remarked on YouTube’s actions, saying, “I’m sure it was a mistake at YouTube where you couldn’t search for it. Yeah. I’m sure it was a mistake. It’s just a mistake.”

In episode 2200, Rogan explained that even though his team contacted YouTube multiple times, the episode remained difficult to find. X owner Elon Musk intervened, contacting Spotify CEO Daniel Ek about the issue. (Spotify exclusively licenses The Joe Rogan Experience but allows the show on third-party platforms like YouTube.)


Rogan noted the explosive viewership once the content was available, with the episode racking up “six and a half million views on mine and eight plus million on his.”

Emphasizing the episode’s broad reach, Rogan expressed frustration with the initial suppression, stating, “You can’t suppress shit. It doesn’t work. This is the internet. This is 2024. People are going to realize what you’re doing.” He pointed to the significance of this episode’s reach, asking, “If one show has 36 million downloads in two days, like that’s not trending? Like what’s trending for you? Mr. Beast?”

Describing the power of YouTube’s algorithmic influence, Rogan claimed the algorithm worked against the interview’s visibility, surfacing only clips instead of the full conversation. According to him, even after YouTube initially fixed the issue, users had to enter highly specific search terms, like “Joe Rogan Trump interview,” to find the episode.

Rogan argued that YouTube’s gatekeeping reflected an ideological stance, remarking, “They hate it because ideologically they’re opposed to the idea of him being more popular.” He suggested that major tech platforms, such as YouTube and Facebook, which hold significant influence, often push agendas that favor specific narratives, stating, “They didn’t like that this one was slipping away. And so they did something.”

In a telling moment, Rogan noted the impact of the initial suppression, explaining how “the interactions…dropped off a cliff because people couldn’t find it.” He claimed that this caused viewers either to give up or settle for short clips, leading to a dip in views before the episode gained traction on Spotify and X.

 


