
Brownstone Institute

They Are Scrubbing the Internet Right Now


From the Brownstone Institute

By Jeffrey A. Tucker and Debbie Lerman

For the first time in 30 years, a long swath of time – everything since October 8-10 – has passed without this service chronicling the life of the Internet in real time.

Instances of censorship are growing to the point of normalization. Despite ongoing litigation and more public attention, mainstream social media has been more ferocious in recent months than ever before. Podcasters know for sure what will be instantly deleted and debate among themselves over content in gray areas. Some, like Brownstone, have given up on YouTube in favor of Rumble, sacrificing vast audiences just to see their content survive at all.

It’s not always a matter of being censored outright. Today’s algorithms include a range of tools that affect searchability and findability. For example, the Joe Rogan interview with Donald Trump racked up an astonishing 34 million views before YouTube and Google tweaked their search engines to make it hard to discover, while also presiding over a technical malfunction that disabled viewing for many people. Faced with this, Rogan went to the platform X to post all three hours.

Navigating this thicket of censorship and quasi-censorship has become part of the business model of alternative media.

Those are just the headline cases. Beneath the headlines, technical events are taking place that fundamentally affect the ability of any historian even to look back and tell what happened. Incredibly, the service Archive.org, which has been around since 1994, has stopped taking images of content on all platforms. For the first time in 30 years, a long swath of time – everything since October 8-10 – has passed without this service chronicling the life of the Internet in real time.

As of this writing, we have no way to verify content that was posted during the three weeks of October leading up to the most contentious and consequential election of our lifetimes. Crucially, this is not about partisanship or ideological discrimination. No website on the Internet is being archived in a way that users can access. In effect, the whole memory of our main information system is just a big black hole right now.

The trouble at Archive.org began on October 8, 2024, when the service was suddenly hit with a massive distributed denial-of-service (DDoS) attack that not only took down the service but introduced a level of failure that nearly destroyed it completely. Working around the clock, Archive.org came back as a read-only service, where it stands today. However, you can only read content that was posted before the attack. The service has yet to resume public mirroring of any site on the Internet.

In other words, the only source on the entire World Wide Web that mirrors content in real time has been disabled. For the first time since the invention of the web browser itself, researchers have been robbed of the ability to compare past with present content, a comparison that is a staple of research into government and corporate actions.
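What that loss means is easy to demonstrate. Below is a minimal sketch, in Python, of the kind of check researchers routinely run against the Internet Archive’s public availability API; the target URL and date are illustrative, and during the outage a query for any post-October 8 date could at best return the last pre-attack capture.

```python
# Minimal sketch: ask the Wayback Machine for the capture of a page
# closest to a given date, via the public availability API. The target
# URL is illustrative. While the service is down or read-only, dates
# after October 8, 2024 can only return pre-attack captures -- or
# nothing at all.
import json
import urllib.parse
import urllib.request

def closest_snapshot(url: str, timestamp: str) -> dict:
    """Query archive.org/wayback/available for the capture nearest to timestamp (YYYYMMDD)."""
    api = ("https://archive.org/wayback/available?url="
           + urllib.parse.quote(url, safe="") + "&timestamp=" + timestamp)
    with urllib.request.urlopen(api) as resp:
        return json.load(resp)

# Illustrative check: what is the closest capture to October 20, 2024?
result = closest_snapshot("cdc.gov", "20241020")
print(result.get("archived_snapshots", {}).get("closest", "no capture found"))
```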

It was using this service, for example, that enabled Brownstone researchers to discover precisely what the CDC had said about Plexiglas, filtration systems, mail-in ballots, and eviction moratoriums. That content was all later scrubbed off the live Internet, so accessing archive copies was the only way we could know and verify what was true. It was the same with the World Health Organization and its disparagement of natural immunity, which was later changed. We were able to document the shifting definitions thanks only to this tool, which is now disabled.
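The comparison workflow behind that kind of research can be sketched in a few lines. The Wayback Machine’s standard addressing scheme, web.archive.org/web/&lt;timestamp&gt;/&lt;url&gt;, serves the capture closest to the requested timestamp; the page and date below are hypothetical placeholders, not a real citation.

```python
# Sketch of the before/after comparison described above: fetch an
# archived capture and diff it against the live page. The Wayback
# addressing scheme web.archive.org/web/<timestamp>/<url> is standard;
# the target page and timestamp are hypothetical placeholders.
import difflib
import urllib.request

def fetch_lines(url: str) -> list[str]:
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode("utf-8", errors="replace").splitlines()

live = "https://www.example.org/covid-guidance"        # hypothetical page
archived = "https://web.archive.org/web/20200601000000/" + live

# A unified diff shows exactly what was added, reworded, or scrubbed.
for line in difflib.unified_diff(fetch_lines(archived), fetch_lines(live),
                                 fromfile="archived", tofile="live",
                                 lineterm=""):
    print(line)
```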

What this means is the following: any website can post anything today and take it down tomorrow, leaving no record of what it posted unless some user somewhere happened to take a screenshot. Even then, there is no way to verify its authenticity. The standard approach to knowing who said what and when is now gone. That is to say, the whole Internet is already being censored in real time, so that during these crucial weeks, when vast swaths of the public fully expect foul play, anyone in the information industry can get away with anything and not get caught.

We know what you are thinking. Surely this DDoS attack was not a coincidence. The timing was just too perfect. And maybe that is right. We just do not know. Does Archive.org suspect something along those lines? Here is what they say:

Last week, along with a DDOS attack and exposure of patron email addresses and encrypted passwords, the Internet Archive’s website javascript was defaced, leading us to bring the site down to access and improve our security. The stored data of the Internet Archive is safe and we are working on resuming services safely. This new reality requires heightened attention to cyber security and we are responding. We apologize for the impact of these library services being unavailable.

Deep state? As with all these things, there is no way to know, but the effort to blast away the Internet’s ability to have a verified history fits neatly into the stakeholder model of information distribution that has clearly been prioritized on a global level. The Declaration for the Future of the Internet makes that very clear: the Internet should be “governed through the multi-stakeholder approach, whereby governments and relevant authorities partner with academics, civil society, the private sector, technical community and others.” All of these stakeholders benefit from the ability to act online without leaving a trace.

To be sure, a librarian at Archive.org has written that “While the Wayback Machine has been in read-only mode, web crawling and archiving have continued. Those materials will be available via the Wayback Machine as services are secured.”

When? We do not know. Before the election? In five years? There might be technical reasons for the delay, but if web crawling is continuing behind the scenes, as the note suggests, those captures could presumably be made available in read-only mode now. They are not.

Disturbingly, this erasure of Internet memory is happening in more than one place. For many years, Google offered a cached version of the page you were seeking just below the live link. Google has plenty of server space to keep that going, but no: the service is now completely gone. In fact, the Google cache service officially ended just a week or two before the Archive.org crash, at the end of September 2024.

Thus the two available tools for searching cached pages on the Internet disappeared within weeks of each other and within weeks of the November 5th election.

Other disturbing trends are also turning Internet search results increasingly into AI-controlled lists of establishment-approved narratives. The web standard used to be for search result rankings to be governed by user behavior, links, citations, and so forth. These were more or less organic metrics, based on an aggregation of data indicating how useful a search result was to Internet users. Put very simply, the more people found a search result useful, the higher it would rank. Google now uses very different metrics to rank search results, including what it considers “trusted sources” and other opaque, subjective determinations.
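For readers who never saw the old web from the inside, here is a toy illustration of what “organic,” link-based ranking meant: a bare-bones PageRank-style iteration over a made-up link graph. It illustrates the principle only; it is not a reconstruction of any search engine’s actual algorithm, past or present.

```python
# Toy PageRank-style iteration: rank pages purely from the link graph,
# the kind of "organic" signal described above. A page is ranked highly
# when other (highly ranked) pages link to it. Illustration only.
links = {  # hypothetical web: page -> pages it links to
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
    "d": ["c"],
}
pages = list(links)
rank = {p: 1.0 / len(pages) for p in pages}  # start with uniform rank
damping = 0.85

for _ in range(50):  # iterate toward the fixed point
    new = {p: (1 - damping) / len(pages) for p in pages}
    for p, outgoing in links.items():
        share = damping * rank[p] / len(outgoing)
        for q in outgoing:
            new[q] += share  # each page passes rank to pages it links to
    rank = new

print(sorted(rank.items(), key=lambda kv: -kv[1]))  # most-linked-to first
```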

Furthermore, the most widely used service that once ranked websites based on traffic is now gone. That service was called Alexa. The company that created it was independent. Then one day in 1999, it was bought by Amazon. That seemed encouraging because Amazon was well-heeled. The acquisition seemed to codify the tool that everyone was using as a kind of metric of status on the web. It was common back in the day to take note of an article somewhere on the web and then look it up on Alexa to see its reach. If it was important, one would take notice, but if it was not, no one particularly cared.

This is how an entire generation of web technicians functioned. The system worked as well as one could possibly expect.

Then, in 2014, years after acquiring the ranking service Alexa, Amazon did a strange thing. It released its home assistant (and surveillance device) under the same name. Suddenly, everyone had one in their home and could find out anything by saying “Hey Alexa.” Something seemed strange about Amazon naming its new product after an unrelated business it had acquired years earlier. No doubt the naming overlap caused some confusion.

Here’s what happened next. In 2022, Amazon actively took down the web ranking tool. It didn’t sell it. It didn’t raise the prices. It didn’t do anything with it. It suddenly made it go completely dark.

No one could figure out why. It was the industry standard, and suddenly it was gone. Not sold, just blasted away. No longer could anyone figure out the traffic-based website rankings of anything without paying very high prices for hard-to-use proprietary products.

All of these data points, which might seem unrelated when considered individually, are actually part of a long trajectory that has shifted our information landscape into unrecognizable territory. The Covid events of 2020-2023, with massive global censorship and propaganda efforts, greatly accelerated these trends.

One wonders if anyone will remember what it was once like. The hacking and hobbling of Archive.org underscores the point: there will be no more memory.

As of this writing, fully three weeks of web content have not been archived. What we are missing and what has changed is anyone’s guess. And we have no idea when the service will come back. It is entirely possible that it will not come back, that the only real history to which we can take recourse will be pre-October 8, 2024, the date on which everything changed.

The Internet was founded to be free and democratic. It will require herculean efforts at this point to restore that vision, because something else is quickly replacing it.

Authors

Jeffrey A. Tucker

Jeffrey Tucker is Founder, Author, and President at Brownstone Institute. He is also Senior Economics Columnist for Epoch Times, author of 10 books, including Life After Lockdown, and many thousands of articles in the scholarly and popular press. He speaks widely on topics of economics, technology, social philosophy, and culture.

Brownstone Institute

Net Zero: The Mystery of the Falling Fertility


From the Brownstone Institute

By Tomas Fürst

If you want to argue that a mysterious factor X is responsible for the drop in fertility, you will have to explain (1) why the factor affected only the vaccinated, and (2) why it started affecting them at about the time of vaccination.

In January 2022, the number of children born in the Czech Republic suddenly decreased by about 10%. By the end of 2022, it had become clear that this was a signal: All the monthly numbers of newborns were mysteriously low.

In April 2023, I wrote a piece for the Czech investigative platform InFakta suggesting that this unexpected phenomenon might be connected to the aggressive vaccination campaign that had started approximately 9 months before the drop in natality. Deník N – a Czech equivalent of the New York Times – immediately came forward with a “devastating takedown” of my article, labeled me a liar, and claimed that the pattern could be explained by demographics: there were fewer women in the population, and they were getting older.

To compare fertility across countries (and over time), the so-called Total Fertility Rate (TFR) is used. Roughly speaking, it is the average number of children born to a woman over her lifetime. TFR is independent of the number of women and of their age structure. Figure 1 below shows the evolution of TFR in several European countries between 2001 and 2023. I selected countries that experienced a drop in TFR in 2022 similar to the Czech Republic’s.

Figure 1. The evolution of Total Fertility Rate in selected European countries between 2000 and 2023. The data corresponding to a particular year are plotted at the end of the column representing that year.
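For readers who want the mechanics: TFR is essentially the sum of age-specific fertility rates – births to women of a given age divided by the number of women of that age – which is why it cannot be moved by the size or age structure of the female population. Here is a minimal sketch with invented numbers:

```python
# Minimal sketch of the Total Fertility Rate: sum the age-specific
# fertility rates (births per woman at each age in a given year).
# Because every rate is normalized by the number of women of that age,
# TFR is unaffected by the size or age structure of the population.
# Figures are invented; only three of the usual 5-year groups shown.
births_by_group = {"20-24": 25_000, "25-29": 40_000, "30-34": 35_000}
women_by_group  = {"20-24": 300_000, "25-29": 320_000, "30-34": 340_000}

# Each group spans 5 single-year ages, so each group rate is weighted by 5.
tfr = sum(5 * births_by_group[g] / women_by_group[g] for g in births_by_group)
print(round(tfr, 2))  # average children per woman, for the groups shown
```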

So, by the end of 2023, the following two points were clear:

  1. The drop in natality in the Czech Republic in 2022 could not be explained by demographic factors. The Total Fertility Rate – which is independent of the number of women and their age structure – dropped sharply in 2022 and has been decreasing ever since. The data for 2024 show that the Czech TFR has decreased further, to 1.37.
  2. Many other European countries experienced the same dramatic and unexpected decrease in fertility starting at the beginning of 2022. I selected some of them for Figure 1, but there are more: the Netherlands, Norway, Slovakia, Slovenia, and Sweden. On the other hand, some countries do not show a sudden drop in TFR but rather a steady decline over a longer period (e.g., Belgium, France, the UK, Greece, and Italy). Notable exceptions are Bulgaria, Spain, and Portugal, where fertility has increased (albeit from very low numbers). The Human Fertility Project database has all the numbers.

This data pattern is so striking and unexpected that even the mainstream media in Europe cannot avoid the problem completely. From time to time, talking heads with many academic titles appear and push one of the politically correct narratives: It’s Putin! (Spoiler alert: the war started in February 2022, but the children missing from 2022 would have been conceived in 2021, before it began.) It’s the inflation caused by Putin! (Sorry, that came even later.) It’s the demographics! (Nope, see above; TFR is independent of the demographics.)

Thus, the “v” word keeps creeping back into people’s minds, and the Web’s Wild West is rife with speculation. We decided not to speculate but to wrestle some more data from the Czech government. For many months, we tried to acquire the number of newborns in each month, broken down by the age and vaccination status of the mother. The post-socialist health-care system of our country is a double-edged sword: on one hand, the state collects far more data about citizens than an American would believe. On the other hand, we have an equivalent of the FOIA, and we are not afraid to use it. After many months of fruitless correspondence with the authorities, we turned to Jitka Chalánková – a Czech Ron Johnson in skirts – who finally managed to obtain an invaluable data sheet.

To my knowledge, the datasheet (now publicly available with an English translation here) is the only officially released dataset containing a breakdown of newborns by the Covid-19 vaccination status of the mother. We requested much more detailed data, but this is all we got. The data contain the number of births per month, between January 2021 and December 2023, to women (aged 18-39) who were vaccinated, i.e., had received at least one Covid vaccine dose by the date of delivery, and to women who were unvaccinated, i.e., had not received any dose of any Covid vaccine by the date of delivery.

Furthermore, we were given the numbers of births per month to women vaccinated with one or more doses during pregnancy. This enabled us to estimate the number of women who were vaccinated before conception. Then, we used open data on the Czech population structure by age, and open data on Covid vaccination by day, sex, and age.

Combining these three datasets, we were able to estimate the rates of successful conceptions (i.e., conceptions that led to births nine months later) by the pre-conception vaccination status of the mother. Those interested in the technical details of the procedure may read the Methods section of the newly released paper. It is worth mentioning that the paper was rejected without review by six high-ranking scientific journals. In Figure 2, we reprint the main finding of our analysis.

Figure 2A. Histogram showing the percentage of women in the Czech Republic aged 18–39 years who were vaccinated with at least one dose of a Covid-19 vaccine by the end of the respective month. Figure 2B. Estimates of the number of successful conceptions (SCs) per 1,000 women aged 18–39 years according to their pre-conception Covid vaccination status. The blue-shaded areas in Figure 2B show the intervals between the lower and upper estimates of the true SC rates for women vaccinated (dark blue) and unvaccinated (light blue) before conception.

Figure 2 reveals several interesting patterns that I list here in order of importance:

  1. Vaccinated women conceived about a third fewer children than would be expected from their share of the population. Unvaccinated women conceived at about the same rate as all women before the pandemic. Thus, a strong association between Covid vaccination status and successful conceptions has been established.
  2. In the second half of 2021, there was a peak in the rate of conceptions of the unvaccinated (and a corresponding trough in the vaccinated). This points to rather intelligent behavior of Czech women, who – contrary to the official advice – probably avoided vaccination if they wanted to get pregnant. This concentrated the pregnancies in the unvaccinated group and produced the peak.
  3. In the first half of 2021, there was significant uncertainty in the estimates of the conception rates. The lower estimate of the conception rate in the vaccinated was produced by assuming that all women vaccinated (with at least one dose) during pregnancy were unvaccinated before conception. This was almost certainly true in the first half of 2021, because the vaccines were not available prior to 2021. The upper estimate was produced by assuming that all women vaccinated (with at least one dose) during pregnancy had also received at least one dose before conception. This was probably closer to the truth in the second half of 2021. Thus, we think that the true conception rates for the vaccinated start close to the lower bound in early 2021 and end close to the upper bound in early 2022. Once again, we would like to be much more precise, but we have to work with what we have got. (A minimal sketch of this bound construction follows below.)
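Here is a minimal sketch, with invented numbers, of the estimation logic in points 1-3: births in month m are attributed to conceptions in month m − 9, normalized per 1,000 women by pre-conception vaccination status, and the women vaccinated during pregnancy are allocated entirely to one group or the other to produce the bounds. This illustrates the logic only; it is not the paper’s actual code.

```python
# Invented numbers illustrating the estimation logic above: births in
# month m count as successful conceptions in month m-9, normalized per
# 1,000 women in each pre-conception vaccination group. Women vaccinated
# during pregnancy are pushed entirely into one group or the other to
# get the lower and upper bounds. Not the paper's actual code.
def shift_back(month: str, k: int = 9) -> str:
    """Return the month k months before `month` (format 'YYYY-MM')."""
    y, m = map(int, month.split("-"))
    y, m = divmod(y * 12 + (m - 1) - k, 12)
    return f"{y:04d}-{m + 1:02d}"

birth_month = "2022-01"
conc_month = shift_back(birth_month)  # -> "2021-04", the conception month

births_vax_at_delivery = 3_000    # mother had >= 1 dose by delivery
births_unvax_at_delivery = 5_500  # mother had no dose by delivery
vax_during_pregnancy = 1_200      # subset of the first group, dosed mid-pregnancy

women_vax = {"2021-04": 700_000}    # women 18-39 with >= 1 dose by conception
women_unvax = {"2021-04": 500_000}  # women 18-39 with no dose by conception

# Lower bound for the vaccinated: assume none of the pregnancy-vaccinated
# women had a dose before conception; upper bound: assume all of them did.
# (The unvaccinated rate has mirror-image bounds, omitted for brevity.)
low = 1_000 * (births_vax_at_delivery - vax_during_pregnancy) / women_vax[conc_month]
high = 1_000 * births_vax_at_delivery / women_vax[conc_month]
unvax = 1_000 * births_unvax_at_delivery / women_unvax[conc_month]

print(f"{conc_month}: vaccinated {low:.2f}-{high:.2f}, unvaccinated {unvax:.2f} per 1,000")
```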

Now that the association between Covid-19 vaccination and lower rates of conception has been established, the one important question looms: Is this association causal? In other words, did the Covid-19 vaccines really prevent women from getting pregnant?

The guardians of the official narrative brush off our findings and say that the difference is easily explained by confounding: The vaccinated tend to be older, more educated, city-dwelling, more climate change aware…you name it. That all may well be true, but in early 2022, the TFR of the whole population dropped sharply and has been decreasing ever since.

So, something must have happened in the spring of 2021. Had the population of women just spontaneously separated into two groups – rednecks who wanted kids and didn’t want the jab, and city slickers who didn’t want kids and wanted the jab – the fertility rate of the unvaccinated would indeed be much higher than that of the vaccinated. In that respect, such a selection bias could explain the observed pattern. However, had this been true, the total TFR of the whole population would have remained constant.

But this is not what happened. For some reason, the TFR of the whole population jumped down in January 2022 and has been decreasing ever since. And we have just shown that, for some reason, this decrease in fertility affected only the vaccinated. So, if you want to argue that a mysterious factor X is responsible for the drop in fertility, you will have to explain (1) why the factor affected only the vaccinated, and (2) why it started affecting them at about the time of vaccination. That is a tall order. Mr. Occam and I both think that X = the vaccine is the simplest explanation.

What really puzzles me is the continuation of the trend. If the vaccines really prevented conception, shouldn’t the effect have been transient? It’s been more than three years since the mass vaccination event, but fertility rates still keep falling. If this trend continues for another five years, we may as well stop arguing about pensions, defense spending, healthcare reform, and education – because we are done. 

We are in the middle of what may be the biggest fertility crisis in the history of mankind. The reason for the collapse in fertility is not known. The governments of many European countries have the data that would unlock the mystery. Yet, it seems that no one wants to know.


Author

Tomas Fürst

Tomas Fürst teaches applied mathematics at Palacky University, Czech Republic. His background is in mathematical modelling and data science. He is a co-founder of the Association of Microbiologists, Immunologists, and Statisticians (SMIS), which has been providing the Czech public with honest, data-based information about the coronavirus epidemic. He is also a co-founder of the “samizdat” journal dZurnal, which focuses on uncovering scientific misconduct in Czech science.


Brownstone Institute

FDA Exposed: Hundreds of Drugs Approved without Proof They Work


From the Brownstone Institute

By Maryanne Demasi

The US Food and Drug Administration (FDA) has approved hundreds of drugs without proof that they work—and in some cases, despite evidence that they cause harm.

That’s the finding of a blistering two-year investigation by medical journalists Jeanne Lenzer and Shannon Brownlee, published by The Lever.

Reviewing more than 400 drug approvals between 2013 and 2022, the authors found the agency repeatedly ignored its own scientific standards.

One expert put it bluntly—the FDA’s threshold for evidence “can’t go any lower because it’s already in the dirt.”

A System Built on Weak Evidence

The findings were damning—73% of drugs approved by the FDA during the study period failed to meet all four basic criteria for demonstrating “substantial evidence” of effectiveness.

Those four criteria—presence of a control group, replication in two well-conducted trials, blinding of participants and investigators, and the use of clinical endpoints like symptom relief or extended survival—are supposed to be the bedrock of drug evaluation.

Yet only 28% of drugs met all four criteria—40 drugs met none.

These aren’t obscure technicalities—they are the most basic safeguards to protect patients from ineffective or dangerous treatments.

But under political and industry pressure, the FDA has increasingly abandoned them in favour of speed and so-called “regulatory flexibility.”

Since the early 1990s, the agency has relied heavily on expedited pathways that fast-track drugs to market.

In theory, this balances urgency with scientific rigour. In practice, it has flipped the process. Companies can now get drugs approved before proving that they work, with the promise of follow-up trials later.

But, as Lenzer and Brownlee revealed, “Nearly half of the required follow-up studies are never completed—and those that are often fail to show the drugs work, even while they remain on the market.”

“This represents a seismic shift in FDA regulation that has been quietly accomplished with virtually no awareness by doctors or the public,” they added.

More than half the approvals examined relied on preliminary data—not solid evidence that patients lived longer, felt better, or functioned more effectively.

And even when follow-up studies are conducted, many rely on the same flawed surrogate measures rather than hard clinical outcomes.

The result: a regulatory system where the FDA no longer acts as a gatekeeper—but as a passive observer.

Cancer Drugs: High Stakes, Low Standards

Nowhere is this failure more visible than in oncology.

Only 3 out of 123 cancer drugs approved between 2013 and 2022 met all four of the FDA’s basic scientific standards.

Most—81%—were approved based on surrogate endpoints like tumour shrinkage, without any evidence that they improved survival or quality of life.

Take Copiktra, for example—a drug approved in 2018 for blood cancers. The FDA gave it the green light based on improved “progression-free survival,” a measure of how long a tumour stays stable.

But a review of post-marketing data showed that patients taking Copiktra died 11 months earlier than those on a comparator drug.

It took six years after those studies showed the drug reduced patients’ survival for the FDA to warn the public that Copiktra should not be used as a first- or second-line treatment for certain types of leukaemia and lymphoma, citing “an increased risk of treatment-related mortality.”

Elmiron: Ineffective, Dangerous—And Still on the Market

Another striking case is Elmiron, approved in 1996 for interstitial cystitis—a painful bladder condition.

The FDA authorized it based on “close to zero data,” on the condition that the company conduct a follow-up study to determine whether it actually worked.

That study wasn’t completed for 18 years—and when it was, it showed Elmiron was no better than placebo.

In the meantime, hundreds of patients suffered vision loss or blindness. Others were hospitalized with colitis. Some died.

Yet Elmiron is still on the market today. Doctors continue to prescribe it.

“Hundreds of thousands of patients have been exposed to the drug, and the American Urological Association lists it as the only FDA-approved medication for interstitial cystitis,” Lenzer and Brownlee reported.

“Dangling Approvals” and Regulatory Paralysis

The FDA even has a term—”dangling approvals”—for drugs that remain on the market despite failed or missing follow-up trials.

One notorious case is Avastin, approved in 2008 for metastatic breast cancer.

It was fast-tracked, again, based on “progression-free survival.” But after five clinical trials showed no improvement in overall survival—and raised serious safety concerns—the FDA moved to revoke its approval for metastatic breast cancer.

The backlash was intense.

Drug companies and patient advocacy groups launched a campaign to keep Avastin on the market. FDA staff received violent threats. Police were posted outside the agency’s building.

The fallout was so severe that for more than two decades afterwards, the FDA did not initiate another involuntary drug withdrawal in the face of industry opposition.

Billions Wasted, Thousands Harmed

Between 2018 and 2021, US taxpayers—through Medicare and Medicaid—paid $18 billion for drugs approved under the condition that follow-up studies would be conducted. Many never were.

The cost in lives is even higher.

A 2015 study found that 86% of cancer drugs approved between 2008 and 2012 based on surrogate outcomes showed no evidence that they helped patients live longer.

An estimated 128,000 Americans die each year from the effects of properly prescribed medications—excluding opioid overdoses. That’s more than all deaths from illegal drugs combined.

A 2024 analysis by Danish physician Peter Gøtzsche found that adverse effects from prescription medicines now rank among the top three causes of death globally.

Doctors Misled by the Drug Labels

Despite the scale of the problem, most patients—and most doctors—have no idea.

A 2016 survey published in JAMA asked practising physicians a simple question—what does FDA approval actually mean?

Only 6% got it right.

The rest assumed that it meant the drug had shown clear, clinically meaningful benefits—such as helping patients live longer or feel better—and that the data was statistically sound.

But the FDA requires none of that.

Drugs can be approved based on a single small study, a surrogate endpoint, or marginal statistical findings. Labels are often based on limited data, yet many doctors take them at face value.

Harvard researcher Aaron Kesselheim, who led the survey, said the results were “disappointing, but not entirely surprising,” noting that few doctors are taught about how the FDA’s regulatory process actually works.

Instead, physicians often rely on labels, marketing, or assumptions—believing that if the FDA has authorized a drug, it must be both safe and effective.

But as The Lever investigation shows, that is not a safe assumption.

And without that knowledge, even well-meaning physicians may prescribe drugs that do little good—and cause real harm.

Who Is the FDA Working for?

In interviews with more than 100 experts, patients, and former regulators, Lenzer and Brownlee found widespread concern that the FDA has lost its way.

Many pointed to the agency’s dependence on industry money. A BMJ investigation in 2022 found that user fees now fund two-thirds of the FDA’s drug review budget—raising serious questions about independence.

Yale physician and regulatory expert Reshma Ramachandran said the system is in urgent need of reform.

“We need an agency that’s independent from the industry it regulates and that uses high-quality science to assess the safety and efficacy of new drugs,” she told The Lever. “Without that, we might as well go back to the days of snake oil and patent medicines.”

For now, patients remain unwitting participants in a vast, unspoken experiment—taking drugs that may never have been properly tested, trusting a regulator that too often fails to protect them.

And as Lenzer and Brownlee conclude, that trust is increasingly misplaced.

Republished from the author’s Substack

 

Author

Maryanne Demasi, 2023 Brownstone Fellow, is an investigative medical reporter with a PhD in rheumatology, who writes for online media and top-tier medical journals. For over a decade, she produced TV documentaries for the Australian Broadcasting Corporation (ABC) and has worked as a speechwriter and political advisor for the South Australian Science Minister.

