This illustration was created by Maclean’s art director Anna Minzhulina using the generative AI image program Imagine. Minzhulina spent weeks feeding prompts into the program, inspired by the essay.

Political deepfakes will spread confusion and misinformation

The more confusion these deepfakes sow, the more people will accuse each other of faking everything. You’re not going to believe what you see. You’re going to disbelieve what you don’t like. 
BY ROBERT W. GEHL

Robert W. Gehl is an associate professor of communication and media studies at York University and Ontario Research Chair in Digital Governance for Social Justice. 


For the past 10 to 15 years, social media bots have been the primary way misinformation is distributed. Often, state actors use these bots, either domestically or to shape international opinion. In the run-up to the 2016 American election, for example, Russia set up Twitter bots pretending to be Black Lives Matter activists—they told people to stay home from the polls, claiming voting would be pointless. Black Americans are a reliable Democratic voting bloc, and since Russia was backing Trump, the idea was to suppress African American votes. In 2022, when people were protesting the repression of the Uyghurs during the Beijing Olympics, China deployed bots to present a happy facade. Bots are a long-standing tactic—and artificial intelligence will only help the misinformation they carry spread.

Today, researchers are finding ChatGPT-powered bots on X (formerly Twitter), as well as deepfakes—artificially created media objects that feature real people doing fake things. Deepfakes can believably imitate people like Trudeau, Zelenskyy or Biden, manipulating their images to show them doing and saying things they never did. After the invasion of Ukraine, for example, a deepfake video of Volodymyr Zelenskyy circulated on Facebook in which he appeared to say that Russia should be running the country and that Ukraine should surrender. It wasn't really Zelenskyy, but some people—political scientists call them low-information voters—are susceptible to that kind of trickery.

Social media platforms sell advertising around viral messages, so they're designed to encourage liking, sharing and spreading—not double-checking. That's not a new problem, but AI will intensify it. If you're doing the social media hustle, trying to get more likes and views, that used to take time, labour and imagination. Now we have machines that produce this content for us. As the technology improves, the algorithms that create videos will need less computational power, and people will be able to make them faster. Bad actors can flood the internet with low-value content to get clicks and impressions, creating enough confusion and doubt that it's hard to tell what's real and what's fake. That can be very damaging during election periods, when a candidate can win by swinging small percentages of voters in particular ridings or districts. Working those margins sometimes means deluging social media with misinformation on a massive scale.

Meanwhile, deepfakes will get better, and more people will be fooled by them. Right now, deepfakes produce detectable visual or auditory artifacts, like weird reflections or something funny with a hairline. But if you're very skilled with these systems, you can go in and modify these artifacts. There's an arms race between the people developing algorithms to make deepfakes more realistic and the people developing algorithms to detect them. The more confusion these deepfakes sow, the more people will accuse each other of faking everything. You're not going to believe what you see. You're going to disbelieve what you don't like, because you can declare that it was fake to begin with.

This erosion of trust will spread beyond social media. Some corporations, for example, could leverage work-from-home technologies to create misleading messages. If we enter a metaverse-style world in which we're meeting in virtual environments, then everything, by definition, will be digitized and therefore easily manipulated. That environment would be more intense, immersive and artificial than algorithm-driven, corporate social media. Whoever constructs the environment controls it. Another variation we may see is augmented reality—what Google Glass was trying to do a few years back—where people wear glasses that overlay the world around them with projected computer images, like translations or maps. Because inserting messages into your field of view is precisely the point of such a system, we can easily imagine it carrying manipulative or politically charged ones.

We’re not looking at a radical break with the past. AI will be embedded in the flawed systems we already have, and it will intensify those flaws. If we decide that everything we don’t like is generated by AI, and everything we do like is authentic and real, that’s going to ramp up polarization. Who benefits from this? When we have chaos and a lack of trust, the real beneficiaries are people in power. We spend all our energy trying to suss out what’s fake and what’s real, and we don’t spend the time holding people in power to account. If we’re constantly arguing over whether climate change is real or the exact causes of wildfires, we’re not changing the systems that need changing, like the fossil fuel industry. Misinformation will become as much of a distraction as it is a danger. That’s the world we’re building now.


We reached out to Canada’s top AI thinkers in fields like ethics, health and computer science and asked them to predict where AI will take us in the coming years, for better or worse. The results may sound like science fiction—but they’re coming at you sooner than you think. To stay ahead of it all, read the other essays that make up our AI cover story, which was published in the November 2023 issue of Maclean’s.
