Assisted suicide activists should not be running our MAID program

From the Macdonald-Laurier Institute

By Shawn Whatley

We should keep the right-to-die foxes out of the regulatory henhouse

The federal government chose a right-to-die advocacy group to help implement its medical assistance in dying legislation. It’s a classic case of regulatory capture, otherwise known as letting the foxes guard the henhouse.

In the “Fourth annual report on Medical Assistance in Dying in Canada 2022,” the federal government devoted several paragraphs of praise to the Canadian Association of MAID Assessors and Providers (CAMAP).

“Since its inception in 2017, (CAMAP) has been and continues to be an important venue for information sharing among health-care professionals and other stakeholders involved in MAID,” reads the report.

With $3.3 million in federal funding, “CAMAP has been integral in creating a MAID assessor/provider community of practice, hosts an annual conference to discuss emerging issues related to the delivery of MAID and has developed several guidance materials for health-care professionals.”

Six clinicians in British Columbia formed CAMAP, a national non-profit association, in October 2016. These six right-to-die advocates published clinical guidelines for MAID in 2017, without seriously consulting other physician organizations.

The guidelines educate clinicians on their “professional obligation to (bring) up MAID as a care option for patients, when it is medically relevant and they are likely eligible for MAID.” CAMAP’s guidelines apply to Canada’s 96,000 physicians, 312,000 nurses and the broader health-care workforce of two million Canadians, wherever patients are involved.

The rise of CAMAP overlaps with right-to-die advocacy work in Canada. According to Sandra Martin, writing in the Globe and Mail, CAMAP “follow(ed) in the steps of Dying with Dignity,” an advocacy organization started in the 1980s, and “became both a public voice and a de facto tutoring service for doctors, organizing information-swapping and self-help sessions for members.”

Prime Minister Justin Trudeau tapped this “tutoring service” to lead the MAID program. CAMAP appears to follow in the steps of Dying with Dignity because the same people lead both groups. For example, Shanaaz Gokool, a current director of CAMAP, served as CEO of Dying with Dignity from 2016 to 2019.

A founding member and current chair of the board of directors of CAMAP is also a member of Dying with Dignity’s clinician advisory council. One of the advisory council’s co-chairs is also a member of Dying with Dignity’s board of directors, as well as a moderator of the CAMAP MAID Providers Forum. The other advisory council co-chair served on both the boards of CAMAP and Dying with Dignity at the same time.

Overlap between CAMAP and Dying with Dignity includes CAMAP founders, board members (past and present), moderators, research directors and more, showing that a small right-to-die advocacy group birthed a tiny clinical group, which now leads the MAID agenda in Canada. This is a problem because it means that a small group of activists exert outsized control over a program that has serious implications for many Canadians.

George Stigler, a Nobel Prize-winning economist, described regulatory capture in the 1960s, showing how government agencies can be captured to serve special interests.

Instead of serving citizens, focused interests can shape governments to serve narrow and select ends. Pharmaceutical companies work hard to write the rules that regulate their industry. Doctors demand government regulations — couched in the name of patient safety — to decrease competition. The list is endless.

Debates about social issues can blind us to basic governance. Anyone who criticizes MAID governance is seen as being opposed to assisted death and is shut out of the debate. At the same time, the world is watching Canada and trying to figure out what is going on with MAID and why we are so different than other jurisdictions offering assisted suicide.

Canada went from outlawing physician-assisted suicide to becoming a world leader in organ donation after assisted death in the space of just six years.

In 2021, Quebec surpassed the Netherlands to lead the world in per capita deaths by assisted suicide, with 5.1 per cent of deaths due to MAID in Quebec, 4.8 per cent in the Netherlands and 2.3 per cent in Belgium. In 2022, Canada extended its lead: MAID now represents 4.1 per cent of all deaths in Canada.

How did this happen so fast? Some point to patients choosing MAID instead of facing Canada’s world-famous wait times for care. Others note a lack of social services. No doubt many factors fuel our passion for MAID, but none of these fully explain the phenomenon. In truth, Canada became world-famous for euthanasia and physician-assisted suicide because we put right-to-die advocates in charge of assisted death.

Regardless of one’s stance on MAID, regulatory capture is a well-known form of corruption. We should expect governments to avoid obvious conflicts of interest. Assuming Canadians want robust and ready access to MAID (which might itself assume too much), at least we should keep the right-to-die foxes out of the regulatory henhouse.

Shawn Whatley is a physician, a Munk senior fellow with the Macdonald-Laurier Institute and author of “When Politics Comes Before Patients: Why and How Canadian Medicare is Failing.”

The EU Insists Its X Fine Isn’t About Censorship. Here’s Why It Is.

Europe calls it transparency, but it looks a lot like teaching the internet who’s allowed to speak.

When the European Commission fined X €120 million on December 5, officials could not have been clearer. This, they said, was not about censorship. It was just about “transparency.”
They repeat it so often you start to wonder why.
The fine marks the first major enforcement of the Digital Services Act, Europe’s new censorship-driven internet rulebook.
It was sold as a consumer protection measure, designed to make online platforms safer and more accountable, and included a whole list of censorship requirements, fining platforms that don’t comply.
The Commission charged X with three violations: the paid blue checkmark system, the lack of advertising data, and restricted data access for researchers.
None of these directly involves content censorship. But all of them shape visibility, credibility, and surveillance, just in more polite language.
Musk’s decision to turn blue checks into a subscription feature ended the old system where establishment figures, journalists, politicians, and legacy celebrities got verification.
The EU called Musk’s decision “deceptive design.” The old version, apparently, was honesty itself. Before, a blue badge meant you were important. After, it meant you paid. Brussels prefers the former, where approved institutions get algorithmic priority, and the rest of the population stays in the queue.
The new system threatened that hierarchy. Now, anyone could buy verification, diluting the aura of authority once reserved for anointed voices.
However, that’s not the full story. Under the old Twitter system, verification was sold as a public service, but in reality it worked more like a back-room favor and a status purchase.
The main application process was shut down in 2010, so unless you were already famous, the only way to get a blue check was to spend enough money on advertising or to be important enough to trigger impersonation problems.
Ad Age reported that advertisers who spent at least fifteen thousand dollars over three months could get verified, and Twitter sales reps told clients the same thing. That meant verification was effectively a perk reserved for major media brands, public figures, and anyone willing to pay. It was a symbol of influence rationed through informal criteria and private deals, creating a hierarchy shaped by cronyism rather than transparency.
Under the new X rules, everyone is on a level playing field.
Government officials and agencies now sport gray badges, symbols of credibility that can’t be purchased. These are the state’s chosen voices, publicly marked as incorruptible. To the EU, that should be a safeguard.
The second and third violations show how “transparency” doubles as a surveillance mechanism. X was fined for limiting access to advertising data and for restricting researchers from scraping platform content. Regulators called that obstruction. Musk called it refusing to feed the censorship machine.
The EU’s preferred researchers aren’t neutral archivists. Many have been documented coordinating with governments, NGOs, and “fact-checking” networks that flagged political content for takedown during previous election cycles.
They call it “fighting disinformation.” Critics call it outsourcing censorship pressure to academics.
Under the DSA, these same groups now have the legal right to demand data from platforms like X to study “systemic risks,” a phrase broad enough to include whatever speech bureaucrats find undesirable this month.
The result is a permanent state of observation where every algorithmic change, viral post, or trending topic becomes a potential regulatory case.
The advertising issue completes the loop. Brussels says it wants ad libraries to be fully searchable so users can see who’s paying for what. In practice, it gives regulators and activists a live feed of messaging, ready for pressure campaigns.
The DSA doesn’t delete ads; it just makes it easier for someone else to demand they be deleted.
That’s how this form of censorship works: not through bans, but through endless exposure to scrutiny until platforms remove the risk voluntarily.
The Commission insists, again and again, that the fine has “nothing to do with content.”
That may be true on a direct level, but the rules shape content all the same. When governments decide who counts as authentic, who qualifies as a researcher, and how visibility gets distributed, speech control doesn’t need to be explicit. It’s baked into the system.
Brussels calls it user protection. Musk calls it punishment for disobedience. This particular DSA fine isn’t about what you can say; it’s about who’s allowed to be heard saying it.
TikTok escaped similar scrutiny by promising to comply. X didn’t, and that’s the difference. The EU prefers companies that surrender before the hearing. When they don’t, “transparency” becomes the pretext for a financial hammer.
The €120 million fine is small by tech standards, but symbolically it’s huge.
It tells every platform that “noncompliance” means questioning the structure of speech the EU has already defined as safe.
In the official language of Brussels, this is a regulation. But it’s managed discourse, control through design, moderation through paperwork, censorship through transparency.
And the louder they insist it isn’t, the clearer it becomes that it is.

US Condemns EU Censorship Pressure, Defends X

US Vice President JD Vance criticized the European Union this week after reports surfaced that Brussels may seek to punish X for refusing to remove certain online speech.

In a post on X, Vance wrote, “Rumors swirling that the EU commission will fine X hundreds of millions of dollars for not engaging in censorship. The EU should be supporting free speech not attacking American companies over garbage.”

His remarks reflect growing tension between the United States and the EU over the future of online speech and the expanding role of governments in dictating what can be said on global digital platforms.

Screenshot: Vance’s verified post on X, dated Dec 4, 2025, showing 1.1 million views.

Vance was likely referring to rumors that Brussels intends to impose massive penalties under the bloc’s Digital Services Act (DSA), a censorship framework that requires major platforms to delete what regulators define as “illegal” or “harmful” speech, with violations punishable by fines up to six percent of global annual revenue.

For Vance, this development fits a pattern he’s been warning about since the spring.

In a May 2025 interview, he cautioned that “The kind of social media censorship that we’ve seen in Western Europe, it will, and in some ways it already has, made its way to the United States. That was the story of the Biden administration silencing people on social media.”

He added, “We’re going to be very protective of American interests when it comes to things like social media regulation. We want to promote free speech. We don’t want our European friends telling social media companies that they have to silence Christians or silence conservatives.”

Yet while the Vice President points to Europe as the source of the problem, a similar agenda is also advancing in Washington under the banner of “protecting children online.”

This week’s congressional hearing on that subject opened in the usual way: familiar talking points, bipartisan outrage, and the recurring claim that online censorship is necessary for safety.

The House Subcommittee on Commerce, Manufacturing, and Trade convened to promote a bundle of bills collectively branded as the “Kids Online Safety Package.”

The session, titled “Legislative Solutions to Protect Children and Teens Online,” quickly turned into a competition over who could endorse broader surveillance and moderation powers with the most moral conviction.

Rep. Gus Bilirakis (R-FL) opened the hearing by pledging that the bills were “mindful of the Constitution’s protections for free speech,” before conceding that “laws with good intentions have been struck down for violating the First Amendment.”

Despite that admission, lawmakers from both parties pressed ahead with proposals requiring digital ID age verification systems, platform-level content filters, and expanded government authority to police online spaces, all similar to the EU’s DSA censorship law.

Vance has cautioned that these measures, however well-intentioned, mark a deeper ideological divide. “It’s not that we are not friends,” he said earlier this year, “but there’re gonna be some disagreements you didn’t see 10 years ago.”

That divide is now visible on both sides of the Atlantic: a shared willingness among policymakers to restrict speech for perceived social benefit, and a shrinking space for those who argue that freedom itself is the safeguard worth protecting.
