Pilot phase: CAIM is under construction. Records are provisional, based on public sources, and have not yet been peer-reviewed. Feedback welcome.
Confirmed Significant

Deepfakes, bot networks, and AI-generated fake news targeted Canada's 2025 federal election at documented scale.

Occurred: March 2025 to April 28, 2025 · Reported: April 25, 2025

Canada's 2025 federal election saw AI-generated content and automated amplification operating at documented scale across multiple simultaneous vectors — a qualitative shift from previous Canadian elections.

The Atlantic Council's DFRLab identified highly active bot-like accounts on X that amplified political content in a spam-like manner ahead of the April 28 election, frequently replying to posts from federal parties and their leaders (DFRLab, 2025). A Financial Times investigation separately identified a coordinated network of suspicious accounts favouring Poilievre and attacking Carney. DFRLab's analysis found that approximately 80% of the politically charged spam and misleading narratives from bot-like accounts were directed at the Liberal Party and its leadership (DFRLab, 2025). The pattern was consistent with coordinated inauthentic behaviour that could distort perceived political sentiment.

AI-generated fabricated images were created and circulated to manufacture false political associations. These included an AI-generated image depicting Carney with Jeffrey Epstein in a pool, which appeared on X on January 27, 2025 and was debunked by fact-checkers the following day (CTV News, 2025). A separate AI-generated image depicting Carney dining with Ghislaine Maxwell was documented after the election (earliest appearance May 3, 2025) (CTV News, 2025). These fabrications were designed to seed conspiracy narratives about Carney's associations. A deepfake video manipulated authentic footage of a March 27 press conference to falsely show Carney announcing a ban on vehicles made before 2000, reaching millions of views on TikTok and X (CTV News, 2025). Separately, deepfake videos mimicking CBC news interviews were used to direct viewers to cryptocurrency scam websites — financial fraud documented in the related Carney deepfake record. These fabrications spread through both bot amplification and organic sharing, with conspiracy narratives gaining traction across multiple platforms (DFRLab, 2025).

A website called "Pierre Poilievre News" published AI-generated articles filled with unverified information presented as legitimate political journalism (CTV News, 2025). At the end of March 2025, a fabricated claim from this site — asserting that Poilievre's personal fortune exceeded $25 million — spread widely on social media (CTV News, 2025). The site produced content designed to appear as authentic political reporting while being generated by AI without editorial verification.

Canada's SITE Task Force took the significant step of publicly disclosing foreign interference during the active election period. The disclosure highlighted activity by a WeChat account (Youli-Youmian) linked to the Chinese Communist Party's Central Political and Legal Affairs Commission, which used coordinated inauthentic behaviour and manipulated amplification tactics targeting Canadian-Chinese communities. SITE's disclosure indicated that state-linked actors were using AI-enhanced social engineering tools to target specific Canadian diaspora communities during elections.

The Canadian Centre for Cyber Security's 2025 update on cyber threats to the democratic process had assessed before the election that AI was improving the personalization and persuasiveness of social engineering attacks (Canadian Centre for Cyber Security, 2025). The election was consistent with this assessment: the combination of AI-generated images, AI-written articles, and automated bot amplification created a multi-layered disinformation environment that differed from previous Canadian elections in the simultaneous deployment of AI-generated content across multiple vectors (DFRLab, 2025; CTV News, 2025).

This record documents the broader AI-enabled election interference pattern. The specific Carney deepfake fraud campaign — which used AI to impersonate the Prime Minister for financial scams — is documented separately in a dedicated incident record.

Materialized From

Harms

Coordinated bot networks on X amplified political content in a spam-like manner ahead of the federal election. DFRLab's analysis found approximately 80% of the politically charged spam and misleading narratives from bot-like accounts were directed at the Liberal Party and its leadership. A separate Financial Times investigation identified a coordinated network of suspicious accounts favouring Poilievre and attacking Carney. The pattern was consistent with coordinated inauthentic behaviour that could distort perceived political sentiment.

Misinformation · Autonomy Undermined · Fraud & Impersonation · Significant · Population

AI-generated fabricated images — including deepfakes depicting Mark Carney with Jeffrey Epstein and Ghislaine Maxwell — were created and circulated to manufacture false associations, seeding conspiracy narratives that spread through both bot amplification and organic sharing.

Misinformation · Autonomy Undermined · Fraud & Impersonation · Significant · Population

An AI-generated articles website ('Pierre Poilievre News') published fabricated content including a false claim that Poilievre's personal fortune exceeded $25 million, which spread widely on social media. The site published AI-generated articles filled with unverified information presented as legitimate political journalism.

Misinformation · Autonomy Undermined · Fraud & Impersonation · Moderate · Population

Canada's SITE Task Force publicly disclosed foreign interference during the active election period, including a WeChat account linked to the Chinese Communist Party's Central Political and Legal Affairs Commission, with coordinated inauthentic behavior and manipulated amplification tactics targeting Canadian-Chinese communities.

Misinformation · Autonomy Undermined · Fraud & Impersonation · Significant · Group

Evidence

6 reports

  1. Official — Canadian Centre for Cyber Security (Mar 6, 2025)

    Assessment that AI is improving personalization and persuasiveness of social engineering attacks

  2. Academic — DFRLab (Atlantic Council) (Apr 25, 2025)

    Highly active bot-like accounts amplified political content targeting federal parties and leaders

  3. Media — CTV News (Apr 28, 2025)

    AI-generated fabricated images including Carney/Epstein and Carney/Maxwell composites; Pierre Poilievre News AI-generated articles

  4. Academic — DFRLab (Atlantic Council) (Apr 29, 2025)

    DFRLab analysis of how social media shaped the 2025 Canadian election; documented AI-generated content and platform dynamics

  5. Official — Democratic Institutions Canada (Apr 1, 2025)

    Government guidance on resisting disinformation during elections; official Canadian response framework

  6. Media — Open Canada (Sep 1, 2025)

    Analysis of AI threat to Canadian democracy and digital sovereignty; policy context for election information integrity

Record details

Responses & Outcomes

Communications Security Establishment · institutional action · Active

Published 2025 update on cyber threats to Canada's democratic process, assessing AI-enhanced threats

Elections Canada · guidance · Active

Published public guidance on resisting disinformation during the election period

Policy Recommendations · assessed

Transparency reporting from platforms on bot network detection and removal during Canadian elections, as implied by DFRLab's documentation of gaps in platform disclosure of automated account activity

DFRLab (Atlantic Council) (Apr 25, 2025)

AI-generated content provenance standards (C2PA or equivalent) for political content, consistent with the Canadian Centre for Cyber Security's guidance on content provenance for organizations

Canadian Centre for Cyber Security (Mar 6, 2025)

Public media literacy campaigns addressing AI-generated political content, consistent with the government's Digital Citizen Initiative and election-period awareness resources

Democratic Institutions Canada (Apr 1, 2025)

Editorial Assessment · assessed

The 2025 federal election saw AI-generated content operating at documented scale across multiple vectors — fabricated images, generated articles, and bot amplification — simultaneously (CTV News, 2025; DFRLab (Atlantic Council), 2025). The Carney deepfake fraud campaign (documented separately) targeted financial exploitation, while this broader pattern involved manufacturing false political narratives, fabricating associations between politicians and disgraced figures (CTV News, 2025), and deploying automated amplification (DFRLab (Atlantic Council), 2025). The foreign interference dimension — confirmed by SITE Task Force public disclosure during the active election period — involved state-linked actors using AI tools to target specific Canadian communities (Canadian Centre for Cyber Security, 2025).

Entities Involved

Related Records

Taxonomy · assessed

Domain
Elections & Info Integrity
Harm type
Misinformation · Autonomy Undermined · Fraud & Impersonation
AI pathway
Use Beyond Intended Scope
Lifecycle phase
Deployment

Changelog

Version · Date · Change
v1 · Mar 8, 2026 · Initial publication
v2 · Mar 11, 2026 · Corrected $20M to $25M; clarified 80% attribution to DFRLab vs FT; fixed deepfake CBC interview description; corrected image timeline; softened 'first election' claim; reframed policy recommendations for attribution accuracy
v3 · Mar 11, 2026 · Verification upgraded from corroborated to confirmed: Canadian Centre for Cyber Security and Democratic Institutions Canada issued official assessments confirming the threat.

Version 3