Artificial Intelligence
Save Taylor Swift. Stop deep-fake porn: Peter Menzies
From the Macdonald-Laurier Institute
By Peter Menzies
Tweak an existing law to ensure AI-generated porn that uses the images of real people is made illegal.
Hey there, Swifties.
Stop worrying about whether your girl can make it back from a tour performance in Tokyo in time to cheer on her boyfriend in Super Bowl LVIII.
Please shift your infatuation away from your treasured superstar’s romantic attachment to Kansas City Chiefs’ dreamy Travis Kelce and his pending battle with the San Francisco 49ers. We all know Taylor Swift’ll be in Vegas for kickoff on Feb. 11. She’ll get there. Billionaires always find a way. And, hey, what modern woman wouldn’t take a 27-hour round trip flight to hang out with a guy ranked #1 on People’s sexiest men in sports list?
But right now, Swifties, Canada needs you to concentrate on something more important than celebrity canoodling. Your attention needs to be on what the nation’s self-styled feminist government should be doing to protect Swift (and all women) from being “deep-faked” into online porn stars.
Because that’s exactly what happened to the multiple Grammy Award-winner last week when someone used artificial intelligence to post deep-fakes (manipulated images of bodies and faces) of her that spread like a coronavirus across the internet. Swift’s face was digitally grafted onto the body of someone engaged in sexual acts/poses in a way that was convincing enough to fool some into believing that it was Swift herself. Before they were contained, the deep-fakes were viewed by millions. The BBC reported that one single “photo” had accumulated 47 million views.
For context, a 2019 study by Deeptrace Labs identified almost 15,000 deep-fakes on streaming and porn sites — twice as many as the previous year — and concluded that 96 per cent were recreations of celebrity women. Fair to assume the fakes have continued to multiply like bunnies in springtime.
In response to the Swift images, the platform formerly known as Twitter — X — temporarily blocked searches for "Taylor Swift" as it battled to eliminate the offending depictions, which still found ways to show up elsewhere.
X said it was “actively removing” the deep-fakes while taking “appropriate actions” against those spreading them.
Meta said it has “strict policies that prohibit this kind of behavior” adding that it also takes “several steps to combat the spread of AI deepfakes.”
Google DeepMind launched an initiative last summer to improve detection of AI-generated images, but critics say it, too, struggles to keep up.
While the creation of images to humiliate women goes back to the puerile pre-internet writing of "for a good time call" phone numbers on the walls of men's washrooms, the use of technology to abuse women shows how difficult it is for governments to keep pace with change. The Americans are now pondering bipartisan legislation to stop this, the Brits are boasting that such outrageousness is already covered by their Online Safety Act, and Canada so far … appears to be doing nothing.
Maybe that's because it thinks that Section 162.1 of the Criminal Code, which bans the distribution or transmission of intimate images without the permission of the person or people involved, has it covered.
To wit, “Everyone who knowingly publishes, distributes, transmits, sells, makes available or advertises an intimate image of a person knowing that the person depicted in the image did not give their consent to that conduct, or being reckless as to whether or not that person gave their consent to that conduct, is guilty of an indictable offence and liable to imprisonment for a term of not more than five years.”
Maybe Crown prosecutors are confident they can talk judges into interpreting that legislation in a fashion that brings deep-fakes into scope. It's not like eminent justices haven't previously pondered legislation — or the Charter, for that matter — and then "read in" words that they think should be there.
Police in Winnipeg launched an investigation in December when AI-generated fake photos were spread. And a Quebec man was recently convicted for using AI to create child porn — a first.
But anytime technology outpaces the law, there's a risk that the former turns the latter into an ass.
Which means there’s a real easy win here for the Justin Trudeau government which, when it comes to issues involving the internet, has so far behaved like a band of bumbling hillbillies.
The Online Streaming Act, in two versions, was far more contentious than necessary because those crafting it clearly had difficulty grasping the simple fact that the internet is neither broadcasting nor a cable network. And the Online News Act, which betrayed a complete misunderstanding of how the internet, global web giants and digital advertising work, remains in the running for Worst Legislation Ever, having cost the industry it was supposed to assist at least $100 million and helped it double down on its reputation for grubbiness.
First promised in 2019 and now anticipated in the spring, the Online Harms Act has been rattling around Department of Heritage consultations ever since. Successive heritage ministers have failed to craft anything that'll pass muster with the Charter of Rights and Freedoms, so the whole bundle is now with Justice Minister Arif Virani, who replaced David Lametti last summer.
The last thing Canada needs right now is for the PMO to jump on the rescue-Taylor-Swift bandwagon and use deep-fakes as one more excuse to create, as it originally envisioned, a Digital Safety czar with invasive ready-fire-aim powers to order takedowns of anything they find harmful or hurtful. Given its recent legal defeats linked to what appears to be a chronic inability to understand the Constitution, that could only end in yet another humiliation.
So, here's the easy win. Amend Section 162.1 of the Criminal Code so that the use of deep-fakes to turn women into online porn stars against their will is clearly in scope. It'll take just a few words. It'll involve updating existing legislation that isn't the slightest bit contentious. Every party will support it. It'll make you look good. Swifties will love you.
And, best of all, it’ll actually be the right thing to do.
Peter Menzies is a senior fellow with the Macdonald-Laurier Institute, past vice-chair of the CRTC and a former newspaper publisher.
Artificial Intelligence
Canadian Court Upholds Ban on Clearview AI’s Unconsented Facial Data Collection
Clearview AI is said to be subjecting billions of people to facial data collection without their consent, with evident implications for privacy, free speech, and even data security.
Facial recognition company Clearview AI has suffered a legal setback in Canada, where the Supreme Court of British Columbia threw out the company's petition to cancel an Information and Privacy Commissioner's order.
The order aims to prevent Clearview AI from collecting facial biometric data for biometric comparison in the province without the targeted individuals’ consent.
The controversial company markets itself as “an investigative platform” that helps law enforcement identify suspects, witnesses, and victims.
Privacy advocates critical of Clearview AI’s activities, however, see it as a major component in the burgeoning facial surveillance industry, stressing in particular the need to obtain consent – via opt-ins – before people’s facial biometrics can be collected.
And Clearview AI is said to be subjecting billions of people to exactly that, without consent. From there, the implications for privacy, free speech, and even data security are evident.
The British Columbia Commissioner appears to have been thinking along the same lines when issuing the order, which bans Clearview from selling biometric facial arrays taken from non-consenting individuals to its clients.
In addition, the order instructs Clearview to "make best efforts" to stop the practices in place so far, which include the collection, use, and disclosure of personal data, and also to delete this type of information already in the company's possession.
Right now, there is no time limit to how long Clearview can retain the data, which it collects from the internet using an automated “image crawler.”
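For readers curious about what an automated "image crawler" of this sort actually does, here is a minimal illustrative sketch: a generic breadth-first page crawler that records image URLs. It is a hypothetical example for explanation only, not Clearview's software, and it assumes the standard third-party Python packages requests and beautifulsoup4 are installed.

```python
# Minimal, illustrative image crawler: fetch pages breadth-first,
# record the URLs of <img> tags, and queue linked pages.
# Hypothetical sketch only -- not Clearview's actual software.
from collections import deque
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup


def crawl_images(start_url: str, max_pages: int = 10) -> list[str]:
    """Return image URLs found within max_pages pages of start_url."""
    queue = deque([start_url])
    seen = {start_url}
    image_urls: list[str] = []
    pages_fetched = 0

    while queue and pages_fetched < max_pages:
        page_url = queue.popleft()
        pages_fetched += 1
        try:
            response = requests.get(page_url, timeout=10)
            response.raise_for_status()
        except requests.RequestException:
            continue  # skip pages that fail to load

        soup = BeautifulSoup(response.text, "html.parser")

        # Every <img src=...> becomes an absolute image URL.
        for img in soup.find_all("img", src=True):
            image_urls.append(urljoin(page_url, img["src"]))

        # Every unvisited <a href=...> is queued for a later fetch.
        for link in soup.find_all("a", href=True):
            next_url = urljoin(page_url, link["href"])
            if next_url not in seen:
                seen.add(next_url)
                queue.append(next_url)

    return image_urls


if __name__ == "__main__":
    for url in crawl_images("https://example.com"):
        print(url)
```

A crawler operating at the scale reported here would add politeness controls, deduplication, and face detection on each downloaded image, but the basic fetch, extract, and queue loop is the whole idea.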
Clearview moved to have the order dismissed as "unreasonable," arguing on the one hand that it is unable to tell whether an image of a person's face is that of a Canadian, while also claiming that no Canadian law is broken since this biometric information is publicly available online.
The legal battle, however, revealed that images of the faces of British Columbia residents, children included, are in Clearview's database, which holds more than three billion photos of Canadians out of a total of over 50 billion.
The court also found the Commissioner's order to be reasonable – including its rejection of "Clearview's bald assertion" that "it simply could not do" in British Columbia what it already does in the US state of Illinois to comply with that state's Biometric Information Privacy Act (BIPA).
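As for the "biometric facial arrays" the order refers to: facial recognition systems typically reduce each face image to a numeric embedding vector and call two faces a match when the vectors are close enough. The sketch below illustrates that comparison step with made-up vectors; it is an assumption-laden toy, not a description of Clearview's system.

```python
# Toy illustration of biometric comparison: faces are represented as
# embedding vectors, and a "match" is a pair of vectors whose cosine
# similarity clears a tuned threshold. The vectors here are random
# stand-ins; real embeddings come from a trained face-recognition model.
import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def is_match(probe: np.ndarray, candidate: np.ndarray,
             threshold: float = 0.8) -> bool:
    """Declare a match when similarity clears the threshold."""
    return cosine_similarity(probe, candidate) >= threshold


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    database = rng.normal(size=(5, 128))  # stand-in for stored "facial arrays"
    probe = database[2] + rng.normal(scale=0.05, size=128)  # noisy re-capture

    scores = [cosine_similarity(probe, row) for row in database]
    best = int(np.argmax(scores))
    print(f"best match: entry {best}, similarity {scores[best]:.3f}")
    print("match" if is_match(probe, database[best]) else "no match")
```

The privacy objection is precisely that building such a database requires harvesting the underlying face images first, which is where consent enters the picture.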
Artificial Intelligence
Death of an Open A.I. Whistleblower
By John Leake
Suchir Balaji was trying to warn the world of the dangers of Open A.I. when he was found dead in his apartment. His story suggests that San Francisco has become an open sewer of corruption.
According to Wikipedia:
Suchir Balaji (1998 – November 26, 2024) was an artificial intelligence researcher and former employee of OpenAI, where he worked from 2020 until 2024. He gained attention for his whistleblowing activities related to artificial intelligence ethics and the inner workings of OpenAI.
Balaji was found dead in his home on November 26, 2024. San Francisco authorities determined the death was a suicide, though Balaji’s parents have disputed the verdict.
Balaji's mother just gave an extraordinary interview to Tucker Carlson that is well worth watching.
If her narrative is indeed accurate, it indicates that someone has induced key decision makers within the San Francisco Police and Medical Examiner’s Office to turn a blind eye to the obvious indications that Balaji was murdered. Based on the story that his mother told Tucker Carlson, the key corrupt figure in the medical examiner’s office is David Serrano Sewell—Executive Director of the Office of the Chief Medical Examiner.
A quick Google search of Mr. Serrano Sewell turned up a Feb. 8, 2024 report in the San Francisco Standard headlined "San Francisco official likely tossed out human skull, lawsuit says." According to the report:
The disappearance of a human skull has spurred a lawsuit against the top administrator of San Francisco’s medical examiner’s office from an employee who alleges she faced retaliation for reporting the missing body part.
Sonia Kominek-Adachi alleges in a lawsuit filed Monday that she was terminated from her job as a death investigator after finding that the executive director of the office, David Serrano Sewell, may have “inexplicably” tossed the skull while rushing to clean up the office ahead of an inspection.
Kominek-Adachi made the discovery in January 2023 while doing an inventory of body parts held by the office, her lawsuit says. Her efforts to raise an alarm around the missing skull allegedly led up to her firing last October.
If the allegations of this lawsuit are true, they suggest that Mr. Serrano Sewell is an unscrupulous and vindictive man. According to the SF Gov website:
Serrano Sewell joined the OCME with over 16 years of experience developing management structures, building consensus, and achieving policy improvements in the public, nonprofit, and private sectors. He previously served as a Mayor’s aide, Deputy City Attorney, and a policy advocate for public and nonprofit hospitals.
In other words, he is an old denizen of the San Francisco city machine. If a mafia-like organization has penetrated the city administration, it would be well-served by having a key player run the medical examiner’s office.
According to Balaji’s mother, Poornima Ramarao, his death was an obvious murder that was crudely staged to look like a suicide. The responding police officers only spent forty minutes examining the scene, and then left the body in the apartment to be retrieved by medical examiner field agents the next day. If true, this was an act of breathtaking negligence.
I have written a book about two murders that were staged to look like suicides, and to me, Mrs. Ramarao's story sounds highly credible. Balaji kept a pistol in his apartment for self-defense because he felt his life might be in danger. He was found shot in the head with this pistol, which was purportedly found in his hand. If his death was indeed a murder staged to look like a suicide, it raises the suspicion that the assailant knew that Balaji possessed this pistol and where he kept it in his apartment.
Balaji was found with a gunshot wound to his head—fired from above, the bullet apparently traversing downward through his face and missing his brain. However, he had also sustained what—based on his mother’s testimony—sounds like a blunt force injury on the left side of the head, suggesting a right-handed assailant initially struck him with a blunt instrument that may have knocked him unconscious or stunned him. The gunshot was apparently inflicted after the attack with the blunt instrument.
A fragment of a bloodstained wig found in the apartment suggests the assailant wore a wig to disguise himself in the event he was caught on the surveillance camera at the building's main entrance. No surveillance camera was positioned over the entrance to Balaji's apartment.
How did the assailant enter Balaji’s apartment? Did Balaji know the assailant and let him in? Alternatively, did the assailant somehow—perhaps through a contact in the building’s management—obtain a key to the apartment?
All of these questions could probably be easily answered with a proper investigation, but it sounds like the responding officers hastily concluded it was a suicide, and the medical examiner’s office hastily confirmed their initial perception. If good crime scene photographs could be obtained, a decent bloodstain pattern analyst could probably reconstruct what happened to Balaji.
Vernon J. Geberth, a retired Lieutenant-Commander of the New York City Police Department, has written extensively about how homicides are often erroneously perceived as suicides by responding officers. The initial perception of suicide at a death scene often results in a lack of proper analysis. His essay "The Seven Major Mistakes in Suicide Investigation" should be required reading for every police officer whose job includes examining the scenes of unattended deaths.
However, judging by his mother’s testimony, Suchir Balaji’s death was obviously a murder staged to look like a suicide. Someone in a position of power decided it was best to perform only the most cursory investigation and to rule the manner of death suicide based on the mere fact that the pistol was purportedly found in the victim’s hand.
Readers who are interested in learning more about this kind of crime may wish to watch my documentary film, in which I examine two murders that were staged to look like suicides. Incidentally, the film is now showing in the Hollywood North International Film Festival.