AI-Generated Content and Bot Networks Targeted Canada's 2025 Federal Election at Scale
The 2025 federal election was the first Canadian national election where AI-generated content operated at documented scale across multiple vectors — fabricated images, generated articles, and bot amplification — simultaneously. The Carney deepfake fraud campaign (documented separately) pursued financial exploitation, while the broader pattern documented here involved manufacturing false political narratives, fabricating associations between politicians and disgraced figures, and deploying automated amplification. The foreign interference dimension — confirmed by SITE Task Force public disclosure during the active election period — involved state-linked actors using AI tools to target specific Canadian communities.
Narrative
Canada’s 2025 federal election was the first Canadian national election where AI-generated content and automated amplification operated at documented scale across multiple simultaneous vectors.
The Atlantic Council’s DFRLab identified highly active bot-like accounts on X that amplified political content in a spam-like manner ahead of the April 28 election, frequently replying to posts from federal parties and their leaders. A Financial Times analysis revealed a coordinated network of suspicious accounts, with approximately 80% of analyzed posts critical of Liberal leader Mark Carney and most posts favouring Conservative leader Pierre Poilievre. The pattern was consistent with coordinated inauthentic behaviour designed to distort perceived political sentiment.
AI-generated fabricated images were created and circulated to manufacture false political associations. These included deepfake composite images depicting Carney swimming in a pool with Jeffrey Epstein and dining with Ghislaine Maxwell — images designed to seed conspiracy narratives about Carney’s associations. A deepfake video manipulated authentic footage of a CBC interview to make it appear that Carney was making controversial statements. These fabrications spread through both bot amplification and organic sharing, with the conspiracy narratives gaining traction across multiple platforms.
A website called “Pierre Poilievre News” published AI-generated articles filled with unverified information presented as legitimate political journalism. At the end of March 2025, a fabricated claim from this site — asserting that Poilievre’s personal fortune exceeded $20 million — spread widely on social media. The site produced content designed to appear as authentic political reporting while being generated by AI without editorial verification.
Canada’s SITE Task Force took the significant step of publicly disclosing foreign interference during the active election period. The disclosure highlighted activity by a WeChat account (Youli-Youmian) linked to the Chinese Communist Party’s Central Political and Legal Affairs Commission, which used coordinated inauthentic behaviour and manipulated amplification tactics targeting Canadian-Chinese communities. The SITE observation confirmed that AI-enhanced social engineering tools are being used by state-linked actors to target specific Canadian diaspora communities during elections.
The Canadian Centre for Cyber Security’s 2025 update on cyber threats to the democratic process had assessed before the election that AI was improving the personalization and persuasiveness of social engineering attacks. The election confirmed this assessment: the combination of AI-generated images, AI-written articles, and automated bot amplification created a multi-layered disinformation environment that was qualitatively different from previous Canadian elections.
This record documents the broader AI-enabled election interference pattern. The specific Carney deepfake fraud campaign — which used AI to impersonate the Prime Minister for financial scams — is documented separately in a dedicated incident record.
Harms
- Coordinated bot networks on X amplified political content in a spam-like manner ahead of the federal election; a Financial Times analysis revealed a network of suspicious accounts exhibiting bot-like behaviour, with approximately 80% of analyzed posts critical of Mark Carney and most favouring Pierre Poilievre, distorting perceived political sentiment.
- AI-generated fabricated images, including deepfakes depicting Mark Carney with Jeffrey Epstein and Ghislaine Maxwell, were created and circulated to manufacture false associations, seeding conspiracy narratives that spread through both bot amplification and organic sharing.
- A website of AI-generated articles ("Pierre Poilievre News") presented unverified information as legitimate political journalism, including a false claim that Poilievre's personal fortune exceeded $20 million, which spread widely on social media.
- Canada's SITE Task Force publicly disclosed foreign interference during the active election period, including a WeChat account linked to the Chinese Communist Party's Central Political and Legal Affairs Commission that used coordinated inauthentic behaviour and manipulated amplification tactics targeting Canadian-Chinese communities.
Affected Populations
- Canadian voters across all parties
- Canadian-Chinese communities targeted by foreign interference
- Canadian politicians and public figures whose likenesses were fabricated
- Canadian media organizations whose credibility was exploited
Entities Involved
- Elections Canada: administered the 2025 federal election; directed voters to official information sources and published guidance on disinformation
- Canadian Centre for Cyber Security: published the 2025 update on cyber threats to the democratic process
- SITE Task Force: disclosed foreign interference during the active election period
- X: platform where bot networks amplified political content and AI-generated deepfakes circulated with limited moderation
- Meta (Facebook): platform where AI-generated conspiracy content circulated; Meta's Canadian news ban left an information vacuum exploited by fabricated content
Responses & Outcomes
- Canadian Centre for Cyber Security: published the 2025 update on cyber threats to Canada's democratic process, assessing AI-enhanced threats
- Elections Canada: published public guidance on resisting disinformation during the election period
AI System Context
Multiple AI systems involved: generative image tools created fabricated photographs (Carney/Epstein, Carney/Maxwell composites); AI text generation produced articles for fake news sites; automated bot accounts (which may use AI for content generation and engagement patterns) amplified political content at scale on X. The CCCS assessed that AI is improving the personalization and persuasiveness of social engineering attacks targeting Canadian democratic processes.
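The spam-like amplification described above is typically surfaced by coordination heuristics: many distinct accounts posting near-identical text within a short window. The following is a minimal, hypothetical sketch of such a heuristic (the function name, thresholds, and input format are illustrative assumptions, not the method used by DFRLab or the Financial Times):

```python
from collections import defaultdict

def flag_coordinated_posts(posts, window_secs=300, min_accounts=3):
    """Flag near-duplicate texts posted by several distinct accounts
    within a short time window -- a simple coordination heuristic.

    posts: list of (account_id, timestamp_secs, text) tuples.
    Returns the set of normalized texts flagged as coordinated.
    """
    by_text = defaultdict(list)
    for account, ts, text in posts:
        # Normalize case and whitespace so trivial edits do not evade the check.
        key = " ".join(text.lower().split())
        by_text[key].append((ts, account))

    flagged = set()
    for key, hits in by_text.items():
        hits.sort()
        # Slide a window over the timestamps; count distinct accounts inside it.
        for i in range(len(hits)):
            accounts = {acct for ts, acct in hits
                        if 0 <= ts - hits[i][0] <= window_secs}
            if len(accounts) >= min_accounts:
                flagged.add(key)
                break
    return flagged
```

Real platform-scale detection combines many more signals (account age, posting cadence, network structure); this sketch only illustrates the text-duplication dimension of coordinated inauthentic behaviour.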
Preventive Measures
- Require platforms to implement heightened AI-generated content detection and labeling during federally declared election periods
- Mandate transparency reporting from platforms on bot network detection and removal during Canadian elections
- Develop rapid-response mechanisms for addressing AI-generated election disinformation, potentially including expanded authority to compel platform action during election periods
- Require AI-generated content provenance standards (C2PA or equivalent) for political content distributed on platforms operating in Canada
- Fund public media literacy campaigns specifically addressing AI-generated political content ahead of election periods
Materialized From
Related Records
- carney-deepfake-election-scam related
Taxonomy
Sources
- Bot-like activity targets Canadian political parties and their leaders ahead of election
- How social media shaped the 2025 Canadian election
- Surprises and old patterns: AI and misinformation in the 2025 federal election campaign
- Cyber Threats to Canada's Democratic Process: 2025 Update
- The AI Threat to Canadian Democracy: Fighting for Digital Sovereignty
- Resisting disinformation during an election
Changelog
| Version | Date | Change |
|---|---|---|
| v1 | Mar 8, 2026 | Initial publication |