AI Risks to Electoral and Informational Integrity in Canada
AI-generated disinformation appeared at scale during the 2025 federal election. Electoral law does not address synthetic media, and detection capacity is minimal.
Generative AI is creating concrete threats to the integrity of Canadian elections at both the federal and provincial levels. During the 2025 federal election, AI-generated deepfake videos of Prime Minister Mark Carney reached millions of viewers on TikTok, Facebook, and X. Over 40 Facebook pages ran fraudulent investment scams using AI-generated likenesses of Carney and Dragon's Den personalities. Academic analysis documented the prevalence and platform dynamics of election deepfakes.
Canada's intelligence agencies have assessed the threat as significant and growing. The Communications Security Establishment's 2023 update on cyber threats to Canada's democratic process identified generative AI as making it easier for state and non-state actors to produce convincing disinformation. The Hogue Commission's final report on foreign interference identified AI-enabled disinformation as part of the broader threat landscape. CSE noted that the barrier to creating high-quality synthetic content has dropped substantially.
The legislative and institutional response has not kept pace. The Canada Elections Act was drafted before generative AI existed. While it prohibits certain misleading communications, it does not address synthetic media. The Chief Electoral Officer proposed targeted amendments in November 2024 but no legislation has been introduced. Elections Canada lacks dedicated technical capacity for synthetic media detection.
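One building block of the detection capacity the section says is missing is matching re-uploads of media already flagged as deepfakes, since exact checksums break under re-encoding. The toy sketch below (an illustration under stated assumptions, not any agency's actual tooling) uses a simple average perceptual hash over a grayscale thumbnail, so near-identical copies hash alike:

```python
# Toy illustration of perceptual hashing for re-upload matching.
# The 4x4 "thumbnails" and threshold are invented for the example.

def average_hash(pixels: list[list[int]]) -> int:
    """One bit per pixel: set if the pixel is brighter than the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# A flagged frame and a slightly re-encoded copy of it.
flagged = [[200, 10, 10, 200], [10, 200, 200, 10],
           [10, 200, 200, 10], [200, 10, 10, 200]]
reupload = [[198, 12, 11, 201], [9, 199, 202, 12],
            [11, 203, 197, 10], [199, 8, 13, 198]]

distance = hamming(average_hash(flagged), average_hash(reupload))
print(distance)  # → 0: small distance means likely the same flagged media
```

Real pipelines operate on video keyframes and use robust hashes and classifier ensembles, but the design choice is the same: tolerate benign transformations while still matching known synthetic content.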
At the provincial level, Quebec's Chief Electoral Officer (DGEQ) has publicly identified AI as a serious threat to the October 2026 provincial election while acknowledging his institution's limited capacity to respond. Bill 98, adopted in May 2025, created an offense for knowingly spreading false election information with penalties up to $60,000 — but the DGEQ concedes that prosecution under the criminal standard of proof is extremely difficult. Élections Québec received complaints from citizens who obtained incorrect election information from commercial AI chatbots during municipal elections.
The Commission de l'éthique en science et en technologie (CEST) has documented that AI-generated deepfakes disproportionately target women through non-consensual pornographic content, potentially discouraging their political participation — adding a gendered dimension to the election integrity hazard.
The pattern is consistent across jurisdictions: institutional threat assessments identify AI disinformation as significant, but the governance response — legislative frameworks, detection capacity, platform obligations — lags behind the capability that enables the threat.
Major platforms have implemented election integrity policies, including labeling requirements for AI-generated content, restrictions on political advertising, and partnerships with fact-checking organizations. Some AI-generated deepfakes during the 2025 election were identified and labeled by platforms and journalists relatively quickly. The debate centers on whether voluntary platform measures and existing election law provide adequate protection, or whether AI-specific electoral provisions are needed.
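The "restrict or redirect election-related queries to official sources" idea recommended below can be sketched as a simple routing guardrail. This is a minimal sketch under loud assumptions: the trigger list, the jurisdiction mapping, and the routing function are invented for illustration (real deployments use trained classifiers, not keyword lists), though the two official sites named are the real election bodies:

```python
# Hypothetical guardrail sketch, not any platform's actual policy engine:
# route procedural election questions to official election bodies instead of
# letting a chatbot answer from possibly stale training data.

OFFICIAL_SOURCES = {  # illustrative mapping, not exhaustive
    "federal": "Elections Canada (elections.ca)",
    "quebec": "Élections Québec (electionsquebec.qc.ca)",
}

TRIGGERS = ("where do i vote", "voting hours", "voter registration",
            "polling station", "election date", "advance polls")

def route_query(query: str) -> str:
    """Return a redirect notice for election-procedure queries, else pass through."""
    q = query.lower()
    if any(t in q for t in TRIGGERS):
        juris = "quebec" if "quebec" in q else "federal"
        return f"For accurate election information, consult {OFFICIAL_SOURCES[juris]}."
    return "ANSWER_NORMALLY"

print(route_query("What are the voting hours in Quebec?"))
print(route_query("Explain how tariffs work"))  # → ANSWER_NORMALLY
```

The DGEQ complaints about chatbots giving incorrect election information are precisely the failure mode this pattern targets: the model never generates procedural answers it could get wrong.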
Materialized incidents
- AI-Generated Content and Bot Networks Targeted Canada's 2025 Federal Election
- AI Deepfake Videos of Prime Minister Carney Used to Defraud Canadians and Target 2025 Federal Election
- White House Posted AI-Altered Video Making Ottawa Senators Captain Appear to Say Anti-Canadian Slurs
- AI Face-Swap Video Falsely Showing Ghislaine Maxwell Walking Free in Quebec City Went Viral with 7 Million Views
- PRC Spamouflage Campaigns Used AI-Generated Deepfakes to Target Canadian Politicians and Critics
- Russia's Doppelganger Network Used AI-Generated Content to Target Canadian Political Discourse
Harms
AI-generated deepfake videos of Canadian political figures reached millions of viewers during the 2025 federal election. CSE and CSIS assessed that foreign state actors have used, or will likely use, AI-generated content to interfere with Canadian democratic processes.
Canadian electoral institutions and social media platforms lack the technical capacity and legal authority to detect or counter AI-generated political disinformation at scale. The Canada Elections Act does not specifically address AI-generated content.
Evidence
5 reports
- Cyber Threats to Canada's Democratic Process: 2023 Update (primary source)
CSE identifies AI deepfakes as a significant threat to Canadian elections
- Final Report of the Public Inquiry into Foreign Interference in Federal Electoral Processes and Democratic Institutions (primary source)
Hogue Commission identified AI-enabled disinformation as part of the foreign interference threat
- Deepfakes in the 2025 Canadian Election: Prevalence, Partisanship, and Platform Dynamics (primary source)
Academic analysis of deepfake prevalence during the 2025 Canadian federal election
- Artificial intelligence: the Quebec electoral officer calls for better legislative oversight (primary source)
DGEQ acknowledges AI threats and limited institutional capacity
- Deepfake video of PM Carney reached millions of viewers
Record details
Responses and outcomes
Published updated cyber threats assessment identifying AI deepfakes as significant threat to Canadian democratic processes
Published report documenting AI risks to democratic participation including gendered deepfake harassment
Chief Electoral Officer proposed targeted amendments to the Canada Elections Act to address synthetic media
Supported adoption of Bill 98 creating offense for knowingly spreading false election information
DGEQ publicly warned voters against relying on AI chatbots for election information
Policy recommendations
- Amend the Canada Elections Act to explicitly address AI-generated synthetic media used to mislead voters (Elections Canada, Nov. 1, 2024)
- Develop technical capacity within Elections Canada and Élections Québec for synthetic media detection (Communications Security Establishment, Dec. 6, 2023)
- Require AI platform operators to label, restrict, or redirect election-related queries to official sources during election periods (Élections Québec, Mar. 8, 2026)
- Establish cross-agency coordination between CSE, CSIS, and Elections Canada for real-time AI disinformation threat monitoring (Public Inquiry into Foreign Interference (Hogue Commission), Jan. 28, 2025)
- Strengthen enforcement mechanisms for Quebec's Bill 98 beyond the criminal standard of proof (Élections Québec, Mar. 8, 2026)
Editorial assessment
AI-generated disinformation appeared at scale during Canada's 2025 federal election. Canada's intelligence agencies assess the threat as significant and growing. Neither federal nor provincial electoral law was designed to address synthetic media, and electoral institutions lack technical detection capacity, creating a concrete and widening gap between the threat and institutional preparedness, with Quebec's October 2026 election as the next high-stakes test.
Entities involved
Related records
- AI Deepfake Videos of Prime Minister Carney Used to Defraud Canadians and Target 2025 Federal Election
- AI Content Moderation Systems Reported to Disproportionately Remove French, Indigenous, and Racialized Content
- AI-Generated Wildfire Images Spread Emergency Misinformation During British Columbia's 2025 Fire Season
Taxonomy
Change history
| Version | Date | Modification |
|---|---|---|
| v1 | Mar. 8, 2026 | Initial publication consolidating federal and Quebec election integrity hazards |