AI Risks to Election and Information Integrity in Canada
AI-generated disinformation appeared at scale in the 2025 federal election. Canadian electoral law has no framework for synthetic media, and detection capacity is minimal.
Generative AI poses concrete threats to the integrity of Canadian elections at both the federal and provincial levels. During the 2025 federal election, AI-generated deepfake videos of Prime Minister Mark Carney reached millions of viewers on TikTok, Facebook, and X. Over 40 Facebook pages ran fraudulent investment scams using AI-generated likenesses of Carney and Dragon's Den personalities. Academic analysis has documented the prevalence and platform dynamics of election deepfakes.
Canada's intelligence agencies have assessed the threat as significant and growing. The Communications Security Establishment's 2023 update on cyber threats to Canada's democratic process identified generative AI as making it easier for state and non-state actors to produce convincing disinformation. The Hogue Commission's final report on foreign interference identified AI-enabled disinformation as part of the broader threat landscape. CSE noted that the barrier to creating high-quality synthetic content has dropped substantially.
The legislative and institutional response has not kept pace. The Canada Elections Act was drafted before generative AI existed. While it prohibits certain misleading communications, it does not address synthetic media. The Chief Electoral Officer proposed targeted amendments in November 2024 but no legislation has been introduced. Elections Canada lacks dedicated technical capacity for synthetic media detection.
At the provincial level, Quebec's Chief Electoral Officer (DGEQ) has publicly identified AI as a serious threat to the October 2026 provincial election while acknowledging his institution's limited capacity to respond. Bill 98, adopted in May 2025, created an offence for knowingly spreading false election information, with penalties of up to $60,000 — but the DGEQ concedes that prosecution under the criminal standard of proof is extremely difficult. Élections Québec received complaints from citizens who obtained incorrect election information from commercial AI chatbots during municipal elections.
The Commission de l'éthique en science et en technologie (CEST) has documented that AI-generated deepfakes disproportionately target women through non-consensual pornographic content, potentially discouraging their political participation — adding a gendered dimension to the election integrity hazard.
The pattern is consistent across jurisdictions: institutional threat assessments identify AI disinformation as significant, but the governance response — legislative frameworks, detection capacity, platform obligations — lags behind the capability that enables the threat.
Major platforms have implemented election integrity policies, including labeling requirements for AI-generated content, restrictions on political advertising, and partnerships with fact-checking organizations. Some AI-generated deepfakes during the 2025 election were identified and labeled by platforms and journalists relatively quickly. The debate centers on whether voluntary platform measures and existing election law provide adequate protection, or whether AI-specific electoral provisions are needed.
Materialized Incidents
- AI-Generated Content and Bot Networks Targeted Canada's 2025 Federal Election
- AI Deepfake Videos of Prime Minister Carney Used to Defraud Canadians and Target 2025 Federal Election
- White House Posted AI-Altered Video Making Ottawa Senators Captain Appear to Say Anti-Canadian Slurs
- AI Face-Swap Video Falsely Showing Ghislaine Maxwell Walking Free in Quebec City Went Viral with 7 Million Views
- PRC Spamouflage Campaigns Used AI-Generated Deepfakes to Target Canadian Politicians and Critics
- Russia's Doppelganger Network Used AI-Generated Content to Target Canadian Political Discourse
Harms
AI-generated deepfake videos of Canadian political figures reached millions of viewers during the 2025 federal election. CSE and CSIS assessed that foreign state actors — particularly Russia and China — have used or are likely to use AI-generated content to interfere with Canadian democratic processes.
Canadian electoral institutions and social media platforms lack technical capacity and legal authority to detect or counter AI-generated political disinformation at scale. The Canada Elections Act does not specifically address AI-generated content.
Evidence
5 reports
- Cyber Threats to Canada's Democratic Process: 2023 Update Primary source
CSE identifies AI deepfakes as significant threat to Canadian elections
- Final Report of the Public Inquiry into Foreign Interference in Federal Electoral Processes and Democratic Institutions Primary source
Hogue Commission identified AI-enabled disinformation as part of foreign interference threat
- Deepfakes in the 2025 Canadian Election: Prevalence, Partisanship, and Platform Dynamics Primary source
Academic analysis of deepfake prevalence during 2025 Canadian federal election
- Artificial intelligence: the Quebec electoral officer calls for better legislative oversight Primary source
DGEQ acknowledges AI threats and limited institutional capacity
- Deepfake video of PM Carney reached millions of viewers
Responses & Outcomes
Published updated cyber threats assessment identifying AI deepfakes as significant threat to Canadian democratic processes
Published report documenting AI risks to democratic participation including gendered deepfake harassment
Chief Electoral Officer proposed targeted amendments to the Canada Elections Act to address synthetic media
Supported adoption of Bill 98 creating an offence for knowingly spreading false election information
DGEQ publicly warned voters against relying on AI chatbots for election information
Policy Recommendations
- Amend the Canada Elections Act to explicitly address AI-generated synthetic media used to mislead voters (Elections Canada, Nov 1, 2024)
- Develop technical capacity within Elections Canada and Élections Québec for synthetic media detection (Communications Security Establishment, Dec 6, 2023)
- Require AI platform operators to label, restrict, or redirect election-related queries to official sources during election periods (Élections Québec, Mar 8, 2026)
- Establish cross-agency coordination between CSE, CSIS, and Elections Canada for real-time AI disinformation threat monitoring (Public Inquiry into Foreign Interference (Hogue Commission), Jan 28, 2025)
- Strengthen enforcement mechanisms for Quebec's Bill 98 beyond the criminal standard of proof (Élections Québec, Mar 8, 2026)
Editorial Assessment
AI-generated disinformation appeared at scale during the 2025 Canadian federal election. Canada's intelligence agencies assess the threat as significant and growing. Neither federal nor provincial electoral law was designed to address synthetic media, and electoral institutions lack technical detection capacity — creating a concrete and widening gap between the threat and institutional preparedness, with Quebec's October 2026 election as the next high-stakes test.
Entities Involved
Related Records
- AI Deepfake Videos of Prime Minister Carney Used to Defraud Canadians and Target 2025 Federal Election
- AI Content Moderation Systems Reported to Disproportionately Remove French, Indigenous, and Racialized Content
- AI-Generated Wildfire Images Spread Emergency Misinformation During British Columbia's 2025 Fire Season
Taxonomy
Changelog
| Version | Date | Change |
|---|---|---|
| v1 | Mar 8, 2026 | Initial publication consolidating federal and Quebec election integrity hazards |