AI-Generated Content and Bot Networks Targeted Canada's 2025 Federal Election
Deepfakes, bot networks, and AI-generated fake news targeted Canada's 2025 federal election at documented scale.
Canada's 2025 federal election saw AI-generated content and automated amplification operating at documented scale across multiple simultaneous vectors — a qualitative shift from previous Canadian elections.
The Atlantic Council's DFRLab identified highly active bot-like accounts on X that amplified political content in a spam-like manner ahead of the April 28 election, frequently replying to posts from federal parties and their leaders (DFRLab, 2025). A Financial Times investigation separately identified a coordinated network of suspicious accounts favouring Poilievre and attacking Carney. DFRLab's analysis found that approximately 80% of the politically charged spam and misleading narratives from bot-like accounts were directed at the Liberal Party and its leadership (DFRLab, 2025). The pattern was consistent with coordinated inauthentic behaviour that could distort perceived political sentiment.
AI-generated fabricated images were created and circulated to manufacture false political associations. These included an AI-generated image depicting Carney with Jeffrey Epstein in a pool, which appeared on X on January 27, 2025, and was debunked by fact-checkers the following day (CTV News, 2025). A separate AI-generated image depicting Carney dining with Ghislaine Maxwell was documented after the election (earliest known appearance May 3, 2025) (CTV News, 2025). These fabrications were designed to seed conspiracy narratives about Carney's associations. A deepfake video manipulated authentic footage of a March 27 press conference to falsely show Carney announcing a ban on vehicles made before 2000, reaching millions of views on TikTok and X (CTV News, 2025). Separately, deepfake videos mimicking CBC news interviews were used to direct viewers to cryptocurrency scam websites, financial fraud documented in the related Carney deepfake record. These fabrications spread through both bot amplification and organic sharing, with conspiracy narratives gaining traction across multiple platforms (DFRLab, 2025).
A website called "Pierre Poilievre News" published AI-generated articles filled with unverified information presented as legitimate political journalism (CTV News, 2025). At the end of March 2025, a fabricated claim from this site — asserting that Poilievre's personal fortune exceeded $25 million — spread widely on social media (CTV News, 2025). The site produced content designed to appear as authentic political reporting while being generated by AI without editorial verification.
Canada's SITE Task Force took the significant step of publicly disclosing foreign interference during the active election period. The disclosure highlighted activity by a WeChat account (Youli-Youmian) linked to the Chinese Communist Party's Central Political and Legal Affairs Commission, involving coordinated inauthentic behaviour and manipulated amplification tactics targeting Canadian-Chinese communities. The disclosure indicated that state-linked actors were using AI-enhanced social engineering tools to target specific Canadian diaspora communities during elections.
The Canadian Centre for Cyber Security's 2025 update on cyber threats to the democratic process had assessed before the election that AI was improving the personalization and persuasiveness of social engineering attacks (Canadian Centre for Cyber Security, 2025). The election was consistent with this assessment: the combination of AI-generated images, AI-written articles, and automated bot amplification created a multi-layered disinformation environment that differed from previous Canadian elections in the simultaneous deployment of AI-generated content across multiple vectors (DFRLab, 2025; CTV News, 2025).
This record documents the broader AI-enabled election interference pattern. The specific Carney deepfake fraud campaign — which used AI to impersonate the Prime Minister for financial scams — is documented separately in a dedicated incident record.
Harms
- Coordinated bot networks on X amplified political content in a spam-like manner ahead of the federal election. DFRLab's analysis found that approximately 80% of the politically charged spam and misleading narratives from bot-like accounts were directed at the Liberal Party and its leadership. A separate Financial Times investigation identified a coordinated network of suspicious accounts favouring Poilievre and attacking Carney. The pattern was consistent with coordinated inauthentic behaviour that could distort perceived political sentiment.
- AI-generated fabricated images, including composites depicting Mark Carney with Jeffrey Epstein and Ghislaine Maxwell, were created and circulated to manufacture false associations, seeding conspiracy narratives that spread through both bot amplification and organic sharing.
- A website of AI-generated articles ('Pierre Poilievre News') published fabricated content, including a false claim that Poilievre's personal fortune exceeded $25 million, which spread widely on social media. The site presented unverified, AI-generated material as legitimate political journalism.
- Canada's SITE Task Force publicly disclosed foreign interference during the active election period, including a WeChat account linked to the Chinese Communist Party's Central Political and Legal Affairs Commission, with coordinated inauthentic behaviour and manipulated amplification tactics targeting Canadian-Chinese communities.
Evidence
6 reports
- Cyber Threats to Canada's Democratic Process: 2025 Update (primary source): Assessment that AI is improving the personalization and persuasiveness of social engineering attacks
- Bot-like activity targets Canadian political parties and their leaders ahead of election (primary source): Highly active bot-like accounts amplified political content targeting federal parties and leaders
- Surprises and old patterns: AI and misinformation in the 2025 federal election campaign (primary source): AI-generated fabricated images, including the Carney/Epstein and Carney/Maxwell composites; Pierre Poilievre News AI-generated articles
- How social media shaped the 2025 Canadian election (primary source): DFRLab analysis of how social media shaped the 2025 Canadian election; documented AI-generated content and platform dynamics
- Government guidance on resisting disinformation during elections; official Canadian response framework
- Analysis of the AI threat to Canadian democracy and digital sovereignty; policy context for election information integrity
Record details
Responses & Outcomes
- Published 2025 update on cyber threats to Canada's democratic process, assessing AI-enhanced threats
- Published public guidance on resisting disinformation during the election period
Policy Recommendations (assessed)
- Transparency reporting from platforms on bot network detection and removal during Canadian elections, as implied by DFRLab's documentation of gaps in platform disclosure of automated account activity (DFRLab (Atlantic Council), Apr 25, 2025)
- AI-generated content provenance standards (C2PA or equivalent) for political content, consistent with the Canadian Centre for Cyber Security's guidance on content provenance for organizations (Canadian Centre for Cyber Security, Mar 6, 2025)
- Public media literacy campaigns addressing AI-generated political content, consistent with the government's Digital Citizen Initiative and election-period awareness resources (Democratic Institutions Canada, Apr 1, 2025)
Editorial Assessment (assessed)
The 2025 federal election saw AI-generated content operating at documented scale across multiple vectors — fabricated images, generated articles, and bot amplification — simultaneously (CTV News, 2025; DFRLab (Atlantic Council), 2025). The Carney deepfake fraud campaign (documented separately) targeted financial exploitation, while this broader pattern involved manufacturing false political narratives, fabricating associations between politicians and disgraced figures (CTV News, 2025), and deploying automated amplification (DFRLab (Atlantic Council), 2025). The foreign interference dimension — confirmed by SITE Task Force public disclosure during the active election period — involved state-linked actors using AI tools to target specific Canadian communities (Canadian Centre for Cyber Security, 2025).
Entities Involved
Related Records
- AI Deepfake Videos of Prime Minister Carney Used to Defraud Canadians and Target 2025 Federal Election (related)
- PRC Spamouflage Campaigns Used AI-Generated Deepfakes to Target Canadian Politicians and Critics (related)
- Russia's Doppelganger Network Used AI-Generated Content to Target Canadian Political Discourse (related)
Taxonomy (assessed)
Changelog
| Version | Date | Change |
|---|---|---|
| v1 | Mar 8, 2026 | Initial publication |
| v2 | Mar 11, 2026 | Corrected $20M to $25M; clarified 80% attribution to DFRLab vs FT; fixed deepfake CBC interview description; corrected image timeline; softened 'first election' claim; reframed policy recommendations for attribution accuracy |
| v3 | Mar 11, 2026 | Verification upgraded from corroborated to confirmed: Canadian Centre for Cyber Security and Democratic Institutions Canada issued official assessments confirming the threat. |