Artificial Intelligence

Save Taylor Swift. Stop deep-fake porn: Peter Menzies

Photo by Michael Hicks, via Flickr

From the Macdonald-Laurier Institute

By Peter Menzies

Tweak an existing law to ensure AI-generated porn that uses the images of real people is made illegal.

Hey there, Swifties.

Stop worrying about whether your girl can make it back from a tour performance in Tokyo in time to cheer on her boyfriend in Super Bowl LVIII.

Please shift your infatuation away from your treasured superstar’s romantic attachment to the Kansas City Chiefs’ dreamy Travis Kelce and his pending battle with the San Francisco 49ers. We all know Taylor Swift’ll be in Vegas for kickoff on Feb. 11. She’ll get there. Billionaires always find a way. And, hey, what modern woman wouldn’t take a 27-hour round-trip flight to hang out with a guy ranked #1 on People’s sexiest men in sports list?

But right now, Swifties, Canada needs you to concentrate on something more important than celebrity canoodling. Your attention needs to be on what the nation’s self-styled feminist government should be doing to protect Swift (and all women) from being “deep-faked” into online porn stars.

Because that’s exactly what happened to the multiple Grammy Award-winner last week when someone used artificial intelligence to post deep-fakes (manipulated images of bodies and faces) of her that spread like a coronavirus across the internet. Swift’s face was digitally grafted onto the body of someone engaged in sexual acts and poses in a way convincing enough to fool some into believing it was Swift herself. Before they were contained, the deep-fakes were viewed by millions. The BBC reported that a single “photo” had accumulated 47 million views.

For context, a 2019 study by Deeptrace Labs identified almost 15,000 deep-fakes on streaming and porn sites — twice as many as the previous year — and concluded that 96 per cent were recreations of celebrity women. Fair to assume the fakes have continued to multiply like bunnies in springtime.

In response to the Swift images, the platform formerly known as Twitter — X — temporarily blocked searches for “Taylor Swift” as it battled to eliminate the offending depictions which still found ways to show up elsewhere.

X said it was “actively removing” the deep-fakes while taking “appropriate actions” against those spreading them.

Meta said it has “strict policies that prohibit this kind of behavior” adding that it also takes “several steps to combat the spread of AI deepfakes.”

Google DeepMind launched an initiative last summer to improve detection of AI-generated images, but critics say it, too, struggles to keep up.

While the creation of images to humiliate women goes back to the puerile pre-internet practice of scrawling “for a good time call” phone numbers on the walls of men’s washrooms, the use of technology to abuse women shows how difficult it is for governments to keep pace with change. The Americans are pondering bipartisan legislation to stop this, the Brits are boasting that such outrageousness is already covered by their Online Safety Act, and Canada, so far, appears to be doing nothing.

Maybe that’s because it thinks that Section 162.1 of the Criminal Code, which bans the distribution or transmission of intimate images without the permission of the person or people involved, has it covered.

To wit, “Everyone who knowingly publishes, distributes, transmits, sells, makes available or advertises an intimate image of a person knowing that the person depicted in the image did not give their consent to that conduct, or being reckless as to whether or not that person gave their consent to that conduct, is guilty of an indictable offence and liable to imprisonment for a term of not more than five years.”

Maybe Crown prosecutors are confident they can talk judges into interpreting that legislation in a fashion that brings deep-fakes into scope. It’s not like eminent justices haven’t previously pondered legislation — or the Charter, for that matter — and then “read in” words they think should be there.

Police in Winnipeg launched an investigation in December after AI-generated fake photos were spread. And a Quebec man was recently convicted of using AI to create child porn — a first.

But anytime technology outruns the law, there’s a risk that the former turns the latter into an ass.

Which means there’s a real easy win here for the Justin Trudeau government which, when it comes to issues involving the internet, has so far behaved like a band of bumbling hillbillies.

The Online Streaming Act, in two versions, was far more contentious than necessary because those crafting it clearly had difficulty grasping the simple fact that the internet is neither broadcasting nor a cable network. And the Online News Act, which betrayed a complete misunderstanding of how the internet, global web giants and digital advertising work, remains in the running for Worst Legislation Ever, having cost the industry it was supposed to assist at least $100 million and helped it double down on its reputation for grubbiness.

Anticipated now in the spring, the Online Harms Act has been rattling around Department of Heritage consultations since it was first promised in 2019. Successive heritage ministers have failed to craft anything that’ll pass muster with the Charter of Rights and Freedoms, so the whole bundle is now with Justice Minister Arif Virani, who replaced David Lametti last summer.

The last thing Canada needs right now is for the PMO to jump on the rescue Taylor Swift bandwagon and use deep-fakes as one more excuse to create, as it originally envisioned, a Digital Safety czar with invasive ready-fire-aim powers to order takedowns of anything they find harmful or hurtful. Given its recent legal defeats linked to what appears to be a chronic inability to understand the Constitution, that could only end in yet another humiliation.

So, here’s the easy win. Amend Section 162.1 of the Criminal Code so that the use of deep-fakes to turn women into online porn stars against their will is clearly in scope. It’ll take just a few words. It’ll involve updating existing legislation that isn’t the slightest bit contentious. Every party will support it. It’ll make you look good. Swifties will love you.

And, best of all, it’ll actually be the right thing to do.

Peter Menzies is a senior fellow with the Macdonald-Laurier Institute, past vice-chair of the CRTC and a former newspaper publisher.

Artificial Intelligence

UK Police Pilot AI System to Track “Suspicious” Driver Journeys

AI-driven surveillance is shifting from spotting suspects to mapping ordinary life, turning everyday travel into a stream of behavioral data

Police forces across Britain are experimenting with artificial intelligence that can automatically monitor and categorize drivers’ movements using the country’s extensive number plate recognition network.

Internal records obtained by Liberty Investigates and The Telegraph reveal that three of England and Wales’s nine regional organized crime units are piloting a Faculty AI-built program designed to learn from vehicle movement data and detect journeys that algorithms label “suspicious.”

For years, the automatic number plate recognition (ANPR) system has logged more than 100 million vehicle sightings each day, mostly to confirm whether a specific registration has appeared in a certain area. The new initiative changes that logic entirely. Instead of checking isolated plates, it teaches software to trace entire routes, looking for patterns of behavior that resemble the travel of criminal networks known for “county lines” drug trafficking.

The project, called Operation Ignition, represents a change in scale and ambition. Unlike traditional alerts that depend on officers manually flagging “vehicles of interest,” the machine learning model learns from past data to generate its own list of potential targets. Official papers admit that the process could involve “millions of [vehicle registrations],” and that the information gathered may guide future decisions about the ethical and operational use of such technologies.

What began as a Home Office-funded trial in the North West covering Merseyside, Greater Manchester, Cheshire, Cumbria, Lancashire, and North Wales has now expanded into three regional crime units. Authorities describe this as a technical experiment, but documents point to long-term plans for nationwide adoption.

Civil liberty groups warn that these kinds of systems rarely stay limited to their original purpose.

Jake Hurfurt of Big Brother Watch said: “The UK’s ANPR network is already one of the biggest surveillance networks on the planet, tracking millions of innocent people’s journeys every single day. Using AI to analyse the millions of number plates it picks up will only make the surveillance dragnet even more intrusive. Monitoring and analysing this many journeys will impact everybody’s privacy and has the potential to allow police to analyse how we all move around the country at the click of a button.”

He added that while tackling organized drug routes is a legitimate goal, “there is a real danger of mission creep – ANPR was introduced as a counter-terror measure, now it is used to enforce driving rules. The question is not whether police should try and stop gangs, but how could this next-generation use of number plate scans be used down the line?”

The find-and-profile app was built by Faculty AI, a British technology firm with deep ties to government projects. The company, which worked with Dominic Cummings during the Vote Leave campaign, has since developed data analysis tools for the NHS and Ministry of Defence. Faculty recently drew attention after it was contracted to create software that scans social media for “concerning” posts, later used to monitor online debate about asylum housing. Faculty declined to comment on its part in the ANPR initiative.

Chief constable Chris Todd, chair of the National Police Chiefs’ Council’s data and analytics board, described the system as “a small-scale, exploratory, operational proof of concept looking at the potential use of machine learning in conjunction with ANPR data.” He said the pilot used “a very small subset of ANPR data” and insisted that “data protection and security measures are in place, and an ethics panel has been established to oversee the work.”

William Webster, the Biometrics and Surveillance Camera Commissioner, said the Home Office was consulting on new legal rules for digital and biometric policing tools, including ANPR. “Oversight is a key part of this framework,” he said, adding that trials of this kind should take place within “a ‘safe space’” that ensures “transparency and accountability at the outset.”

A Home Office spokesperson said the app was “designed to support investigations into serious and organised crime” and was “currently being tested on a small scale” using “a small subset of data collected by the national ANPR network.”

From a privacy standpoint, the concern is not just the collection of travel data but what can be inferred from it. By linking millions of journeys into behavioral models, the system could eventually form a live map of how people move across the country. Once this analytical capacity becomes part of routine policing, the distinction between tracking suspects and tracking citizens may blur entirely.

Alberta

Schools should go back to basics to mitigate effects of AI

From the Fraser Institute

By Paige MacPherson

Odds are, you can’t tell whether this sentence was written by AI. Schools across Canada face the same problem. And happily, some are finding simple solutions.

Manitoba’s Division Scolaire Franco-Manitobaine recently issued new guidelines directing teachers to assign only optional homework and reading from kindergarten to Grade 6, and to limit homework in grades 7 to 12. The reason? The proliferation of generative artificial intelligence (AI) chatbots such as ChatGPT makes it very difficult for teachers, juggling a heavy workload, to discern genuine student work from AI-generated text. In fact, according to division superintendent Alain Laberge, “Most of the [after-school assignment] submissions, we find, are coming from AI, to be quite honest.”

This problem isn’t limited to Manitoba, of course.

Two provincial doors down, in Alberta, new data analysis revealed that high school report card grades are rising while scores on provincewide assessments are not—particularly since 2022, the year ChatGPT was released. Report cards account for take-home work, while standardized tests are written in person, in the presence of teaching staff.

Specifically, from 2016 to 2019, the average standardized test score in Alberta across a range of subjects was 64 while the average report card grade was 73.3—or 9.3 percentage points higher. From 2022 to 2024, the gap increased to 12.5 percentage points. (Data for 2020 and 2021 are unavailable due to COVID school closures.)

In lieu of take-home work, the Division Scolaire Franco-Manitobaine recommends nightly reading for students, which is a great idea. Having students read nightly doesn’t cost schools a dime, but it’s strongly associated with improved academic outcomes.

According to a Programme for International Student Assessment (PISA) analysis of 174,000 student scores across 32 countries, the connection between daily reading and literacy was “moderately strong and meaningful,” and reading engagement affects reading achievement more than the socioeconomic status, gender or family structure of students.

All of this points to an undeniable shift in education—that is, teachers are losing a once-valuable tool (homework) and shifting more work back into the classroom. And while new technologies will continue to change the education landscape in heretofore unknown ways, one time-tested winning strategy is to go back to basics.

And some of “the basics” have slipped rapidly away. Some students at elite universities arrive on campus never having read an entire book. Many university professors bemoan students’ newfound inability to write essays or deconstruct basic story components. Canada’s average PISA scores—a test of 15-year-olds in math, reading and science—have plummeted. In math, student test scores have dropped 35 points—the PISA equivalent of nearly two years of lost learning—over the last two decades. In reading, students have fallen about one year behind, while science scores dropped moderately.

The decline in Canadian student achievement predates widespread access to generative AI, but AI complicates the problem. Again, the solution needn’t be costly or complicated. There’s a reason why many tech CEOs famously send their children to screen-free schools. If technology is too tempting, in or outside of class, students should write with a pencil and paper. If ChatGPT is too hard to detect (and we know it is, because even AI often can’t accurately detect AI), in-class essays and assignments make sense.

And crucially, standardized tests provide the most reliable and equitable measure of student progress, and if properly monitored, they’re AI-proof. Yet standardized testing is on the wane in Canada, thanks to long-standing attacks from teacher unions and other opponents, and despite broad support from parents. Now more than ever, parents and educators require reliable data to assess the abilities of students. Standardized testing varies widely among the provinces, but parents in every province should demand a strong standardized testing regime.

AI may be here to stay and it may play a large role in the future of education. But if schools deprive students of the ability to read books, structure clear sentences, correspond organically with other humans and complete their own work, they will do students no favours. The best way to ensure kids are “future ready”—to borrow a phrase oft-used to justify seesawing educational tech trends—is to school them in the basics.

Paige MacPherson

Senior Fellow, Education Policy, Fraser Institute
