
The War on Canadian Voters Is Online
Last week, Canada’s intelligence agencies flagged a disinformation campaign on WeChat aimed at Mark Carney, with posts reaching up to three million views. Originating from an account tied to the Chinese Communist Party, the posts cast Carney as a “rock-star economist” and lauded his toughness on Trump—praise that officials say masked a calculated attempt to sway Chinese-Canadian voters and stir political blowback. Meanwhile, AI-generated videos of Carney endorsing cryptocurrency scams made the rounds on Facebook. A doctored image also went viral, depicting Carney on a beach alongside Tom Hanks and Ghislaine Maxwell, the convicted sex trafficker and Jeffrey Epstein’s former girlfriend, and peddling the conspiracy theory that he’s part of a shadowy global cabal.
As Canada’s federal election approaches, I’ve been closely monitoring the online landscape as part of my work as the head of trust and safety at Bluesky. Historically, Canada has had a relatively clean online political environment. For a long time, Canadians mostly engaged with credible news outlets and elected officials. But that has shifted. The 2022 trucker convoy protests marked a turning point—signalling a growing fragmentation in where people get their information.
Unlike in the U.S., where Russia has launched sweeping campaigns to sow division, Canada’s disinformation landscape is less centralized. Canadians’ trust in traditional institutions is waning, and fringe personalities and alt-media platforms have stepped into the vacuum, creating a society where many Canadians no longer encounter—or believe in—the same basic facts. Put simply, the disinformation now circulating across the country is changing how Canadians determine what’s true. With a federal election just weeks away, that shift could play a major role in shaping the outcome.
When I began working for the Canadian government in 2005, the field of digital “trust and safety”—the very thing I now dedicate my career to—didn’t even exist. While at Global Affairs Canada during the Arab Spring, I got a crash course in how digital platforms could be used to both promote democratic values and spread propaganda. I watched as social media empowered protest movements, only to see those same tools hijacked by the authoritarian regimes they opposed, as well as extremist groups like ISIS.
In 2019, I left Global Affairs Canada and joined Twitter as an Information Operations analyst in its Trust and Safety team. At the time, we were reeling from the Cambridge Analytica scandal and Russian interference in the 2016 U.S. election. My job was to investigate nation-states that were manipulating the platform to interfere in political processes or suppress opposition.
The general public tends to focus on disinformation from foreign actors—that is, when one country tries to mislead another’s citizens—but the vast majority of disinformation happens domestically. At Twitter, I often saw governments running coordinated bot networks, setting up fake websites or filing bogus copyright claims to silence critics. In one case, during the 2020 Tanzanian election, we uncovered a government-run operation that used thousands of fake Twitter accounts to suppress opposition voices.
First, the government copied images and posts from real domestic activists and critics. Then, they uploaded that stolen material to fake websites, which were made to appear legitimate and backdated to suggest the real content had originated there. Using the army of fake accounts the government had purchased—designed to look like everyday users—they flooded Twitter with copyright complaints, falsely claiming the activists were stealing content. Because the backdated websites appeared to support the claims, many of the takedown requests succeeded before we detected what was going on. The goal was to get rid of legitimate content critical of the regime—surgical suppression, timed right before voters went to the polls.
Authoritarian governments expend enormous resources shaping public opinion inside their own borders. Many don’t just own the media these days—they also fund armies of trolls to flood online forums, repost talking points and drown out dissent. In some countries, civil servants are literally paid to do one task: post online in favour of the government all day. It’s noisy, relentless and hard to combat.
It’s chilling how effective repetition is. Research shows it takes about six exposures for a message to sink in. So, if someone keeps saying “the election was stolen,” even without evidence, some people begin to absorb it as truth. Worse, if you try to debunk it too aggressively, it can have a backlash effect—people double down on their beliefs when they feel they’re being attacked.
Another challenge was figuring out who was knowingly spreading disinformation on behalf of a government or non-state actor, and who was simply being manipulated by those entities. In short, misinformation is information that’s false. Disinformation, on the other hand, is false information shared with the intent to deceive. One widely used framework for making that distinction is called ABC: Actor, Behaviour, Content. You have to analyze all three before labelling something as disinformation. If you find strong indicators that either the actor is inauthentic or the behaviour suggests coordinated manipulation, then you treat the associated content as disinformation.
I stayed at Twitter until early 2024, by which time the company—now under Elon Musk’s control—had changed dramatically from the one I originally joined. That’s when I made the move to Bluesky. In many ways, Bluesky feels like a spiritual successor to the early days of Twitter: decentralized, community-driven and focused on giving users more control over their online experience.
Lately, disinformation narratives have tended to fixate on whichever public figures are making headlines, like Mark Carney. They also show up in more subtle ways, such as through travel influencers who, sometimes unknowingly, participate in whitewashing campaigns—for example, promoting Chinese state-sponsored tours of Xinjiang meant to deny widespread human rights abuses—then share that content with Canadian audiences without context or fact-checking. Add to that the hyper-partisan content from influencers like Joe Rogan, who present unchecked claims under the guise of “just asking questions,” and you get a media environment where even casual browsing can expose people to a highly distorted version of reality.
As a consequence, we’ve also seen the rise of domestic fringe figures like Romana Didulo, the self-proclaimed “Queen of Canada,” who promotes QAnon conspiracies and anti-vaccine propaganda on Telegram. These aren’t just harmless cranks. Some Canadians who’ve followed alternative voices like Didulo have ended up losing their homes or racking up massive debts—often as a direct result of buying into the conspiracies they promote. Several of her followers, for instance, stopped paying their mortgages after she declared all debts “abolished,” and supporters have contributed large sums to her caravan even while facing foreclosure. That kind of vulnerability—where people feel so alienated they’ll believe anything—is what worries me most heading into this upcoming election.
At Bluesky, we use that same ABC model to determine whether something is disinformation. When we have clear evidence that someone is deliberately pushing falsehoods to manipulate the public, that’s when we step in.
During recent elections in other countries, for example, we’ve adopted a light-touch approach. We remove content that clearly misleads voters, such as false information about polling times or locations, as well as posts that threaten election workers. But we don’t take down content that questions election results or claims the process was rigged. Instead, we apply labels: “misinformation” when we can confidently verify something is false, and “rumour” when the facts are still unclear. The goal is to inform users without claiming the authority to decide what’s true.
We’ve seen what happens when people think digital platforms are overreaching. Heavy-handed moderation can backfire, fuelling conspiracy theories and deepening mistrust in institutions. When questionable content—especially on sensitive health topics—is removed or flagged with disclaimers, some people interpret it as censorship by platforms or governments, which only strengthens their belief in the original claim. Take the current measles outbreak in Canada and the U.S.—even when presented with clear, evidence-based information about vaccine efficacy, some parents are doubling down on anti-vaccine beliefs, including those who’ve lost children to the disease.
Our hope is that by empowering users with information—and letting them choose fact-checking services they trust—we can build a healthier online space. One example: an independent fact-checking outlet called Lead Stories recently received funding to launch its own fact-checking service on Bluesky. It isn’t affiliated with us, but users can choose to subscribe to its moderation tool. That’s the power of decentralization.
Beyond the efforts of platforms like Bluesky, what actually works when it comes to fighting disinformation? In my experience, the most effective solutions happen at the societal level. Take Finland, for example—they’ve made major investments in media literacy education, largely because they share a border with Russia and are constantly targeted by disinformation campaigns. Teaching people how to assess what they’re seeing—asking simple questions like “Who made this?” or “Why am I seeing it?”—goes much further than anything a moderation team can do, whether it’s a small platform like Bluesky or a tech behemoth like Meta or TikTok.
The tools are out there—we just need to make sure people know how to use them. Because in the end, the most powerful counter to disinformation isn’t a tech platform or a government program. It’s still the human brain.
—As told to Ali Amad
Aaron Rodericks is the head of trust and safety at Bluesky.