PRC Spamouflage Campaigns Used AI-Generated Deepfakes to Target Canadian Politicians and Critics
Canada's RRM detected multiple PRC-attributed Spamouflage campaigns (2023–2025) using AI-generated deepfake videos and, in a first for Canada, non-consensual intimate imagery to target Canadian MPs and Chinese-Canadian critics.
Between October 2023 and March 2025, Canada's Rapid Response Mechanism (RRM Canada) detected and publicly attributed multiple Spamouflage campaigns linked to the People's Republic of China that used AI-generated content to target individuals in Canada.
In October 2023, RRM Canada identified a Spamouflage bot network operating across Facebook and X that posted thousands of spam comments linking to likely deepfake videos — digitally modified by artificial intelligence — targeting Canadian Members of Parliament (Global Affairs Canada, 2023; CBC News, 2023). The Australian Strategic Policy Institute's prior research on Spamouflage informed RRM Canada's assessments.
Beginning August 31, 2024, RRM Canada detected a second campaign targeting ten Mandarin-speaking individuals in Canada — commentators, community leaders, and political figures critical of the PRC (Global Affairs Canada, 2024). The campaign generated 100 to 200 new posts per day across X, Facebook, TikTok, and YouTube (Global Affairs Canada, 2024). It used AI to produce deepfake videos posted to YouTube and TikTok, and produced sexually explicit AI-generated deepfake images of one targeted individual — the first documented use of AI-generated non-consensual intimate imagery in a Spamouflage campaign targeting individuals in Canada (Global Affairs Canada, 2024). A similar technique had been previously documented in Spamouflage operations targeting individuals in Australia. The campaign also published home addresses and phone numbers of targets (Global Affairs Canada, 2024). RRM Canada engaged directly with China's embassy regarding the activity.
In March 2025, RRM Canada detected continued Spamouflage activity again targeting Canada-based Chinese-language commentators and their families with AI-doctored videos (Global Affairs Canada, 2025).
RRM Canada attributed all three campaigns to the PRC.
Harms
Likely AI-modified deepfake videos fabricated to misrepresent Canadian Members of Parliament, distributed at scale across Facebook and X via a bot network.
Sexually explicit AI-generated deepfake images produced of a Canada-based PRC critic — the first documented use of AI-generated non-consensual intimate imagery in a state-attributed influence operation targeting individuals in Canada.
Doxing of targeted individuals — home addresses and phone numbers published alongside AI-generated harassment content — creating conditions that could suppress diaspora political expression in Canada.
Evidence
5 reports
- Wave 1: Spamouflage network posting AI-assisted comments and fabricated videos targeting Canadian MPs on X, Facebook, YouTube, and TikTok
- Wave 2: 100-200 posts/day targeting 10 individuals, AI-generated deepfakes including sexually explicit imagery, and doxing (primary source: RRM Canada Detects Spamouflage Campaign Targeting Canada-Based Chinese-Language Commentators)
- Wave 3: AI-doctored videos targeting commentators and their families (primary source: RRM Canada Detects Second Spamouflage Campaign Targeting Canada-Based Chinese-Language Commentators and Their Families)
- Spamouflage bot network targeting Canadian MPs confirmed by RRM Canada
- AI-generated deepfake videos targeting Canada-based China critic, with misaligned facial features
Responses & Outcomes
Canada's Rapid Response Mechanism detected and publicly attributed three waves of Spamouflage campaigns to the People's Republic of China (October 2023, October 2024, March 2025). RRM Canada engaged directly with China's embassy regarding the second wave.
Public attribution is a documented deterrence mechanism but did not prevent subsequent waves. The second wave (2024) escalated to include AI-generated non-consensual intimate imagery and doxing, suggesting attribution alone is insufficient to halt the campaign.
Editorial Assessment
This is the first documented case of a state actor using AI-generated non-consensual intimate imagery in a foreign interference campaign targeting individuals in Canada (Global Affairs Canada, 2024). The campaigns documented by RRM Canada show AI use in influence operations escalating over an 18-month period, from bot-network amplification with likely AI-modified videos (Global Affairs Canada, 2023) to targeted AI-generated deepfakes, including sexually explicit content (Global Affairs Canada, 2024).
Entities Involved
Related Records
- AI-Generated Content and Bot Networks Targeted Canada's 2025 Federal Election
- AI-Generated Non-Consensual Intimate Imagery
- Russia's Doppelganger Network Used AI-Generated Content to Target Canadian Political Discourse
- CSE Assesses PRC Likely Uses Machine Learning to Profile Targets Connected to Canadian Democratic Processes
Taxonomy
Changelog
| Version | Date | Change |
|---|---|---|
| v1 | Mar 11, 2026 | Initial publication |
| v2 | Mar 11, 2026 | Neutrality and factuality review: removed two unverifiable policy recommendation attributions (no tabled SECU report with the cited recommendation found; RRM/GAC attribution conflates operational practice with formal policy recommendation). Narrative facts verified against RRM Canada primary disclosures — no changes needed. |