Pilot phase: CAIM is under construction. Entries are provisional, based on public sources, and have not yet been peer reviewed. Comments welcome.
Escalating · Severity: Severe · Confidence: Medium

AI-generated disinformation emerged at scale during the 2025 federal election. Election law does not regulate synthetic media, and detection capacity is minimal.

Identified: Dec. 6, 2023 · Last assessed: Mar. 8, 2026

Generative AI is creating concrete threats to the integrity of Canadian elections at both the federal and provincial levels. During the 2025 federal election, AI-generated deepfake videos of Prime Minister Mark Carney reached millions of viewers on TikTok, Facebook, and X. Over 40 Facebook pages ran fraudulent investment scams using AI-generated likenesses of Carney and Dragon's Den personalities. Academic analysis documented the prevalence and platform dynamics of election deepfakes.

Canada's intelligence agencies have assessed the threat as significant and growing. The Communications Security Establishment's 2023 update on cyber threats to Canada's democratic process identified generative AI as making it easier for state and non-state actors to produce convincing disinformation. The Hogue Commission's final report on foreign interference identified AI-enabled disinformation as part of the broader threat landscape. CSE noted that the barrier to creating high-quality synthetic content has dropped substantially.

The legislative and institutional response has not kept pace. The Canada Elections Act was drafted before generative AI existed. While it prohibits certain misleading communications, it does not address synthetic media. The Chief Electoral Officer proposed targeted amendments in November 2024 but no legislation has been introduced. Elections Canada lacks dedicated technical capacity for synthetic media detection.

At the provincial level, Quebec's Chief Electoral Officer (DGEQ) has publicly identified AI as a serious threat to the October 2026 provincial election while acknowledging his institution's limited capacity to respond. Bill 98, adopted in May 2025, created an offense for knowingly spreading false election information with penalties up to $60,000 — but the DGEQ concedes that prosecution under the criminal standard of proof is extremely difficult. Élections Québec received complaints from citizens who obtained incorrect election information from commercial AI chatbots during municipal elections.

The Commission de l'éthique en science et en technologie (CEST) has documented that AI-generated deepfakes disproportionately target women through non-consensual pornographic content, potentially discouraging their political participation — adding a gendered dimension to the election integrity hazard.

The pattern is consistent across jurisdictions: institutional threat assessments identify AI disinformation as significant, but the governance response — legislative frameworks, detection capacity, platform obligations — lags behind the capability that enables the threat.

Major platforms have implemented election integrity policies, including labeling requirements for AI-generated content, restrictions on political advertising, and partnerships with fact-checking organizations. Some AI-generated deepfakes during the 2025 election were identified and labeled by platforms and journalists relatively quickly. The debate centers on whether voluntary platform measures and existing election law provide adequate protection, or whether AI-specific electoral provisions are needed.

Materialized incidents

Harms

AI-generated deepfake videos of Canadian political figures reached millions of viewers during the 2025 federal election. CSE and CSIS assessed that foreign state actors have used, or will likely use, AI-generated content to interfere with Canadian democratic processes.

Disinformation · Severe · Population

Canadian electoral institutions and social media platforms lack both the technical capacity and the legal authority to detect or counter AI-generated political disinformation at scale. The Canada Elections Act does not specifically address AI-generated content.

Disinformation · Compromised autonomy · Significant · Population

Evidence

5 reports

  1. Official — Communications Security Establishment (Dec. 6, 2023)

    CSE identifies AI deepfakes as significant threat to Canadian elections

  2. Official — Public Inquiry into Foreign Interference (Hogue Commission) (Jan. 28, 2025)

    Hogue Commission identified AI-enabled disinformation as part of foreign interference threat

  3. Academic — arXiv (Dec. 18, 2025)

    Academic analysis of deepfake prevalence during 2025 Canadian federal election

  4. Media — CTV News (Canadian Press) (Mar. 8, 2026)

    DGEQ acknowledges AI threats and limited institutional capacity

  5. Academic — DFRLab (Atlantic Council) (Jun. 19, 2025)

    Deepfake video of PM Carney reached millions of viewers

Entry details

Responses and outcomes

Communications Security Establishment · institutional action · Active

Published updated cyber threats assessment identifying AI deepfakes as significant threat to Canadian democratic processes

Commission de l'éthique en science et en technologie · institutional action · Active

Published report documenting AI risks to democratic participation including gendered deepfake harassment

Elections Canada · legislation · Active

Chief Electoral Officer proposed targeted amendments to the Canada Elections Act to address synthetic media

Élections Québec · legislation · Active

Supported adoption of Bill 98 creating offense for knowingly spreading false election information

Élections Québec · institutional action · Active

DGEQ publicly warned voters against relying on AI chatbots for election information

Policy recommendations · assessed

Amend the Canada Elections Act to explicitly address AI-generated synthetic media used to mislead voters

Elections Canada (Nov. 1, 2024)

Develop technical capacity within Elections Canada and Élections Québec for synthetic media detection

Communications Security Establishment (Dec. 6, 2023)

Require AI platform operators to label, restrict, or redirect election-related queries to official sources during election periods

Élections Québec (Mar. 8, 2026)

Establish cross-agency coordination between CSE, CSIS, and Elections Canada for real-time AI disinformation threat monitoring

Public Inquiry into Foreign Interference (Hogue Commission) (Jan. 28, 2025)

Strengthen enforcement mechanisms for Quebec's Bill 98 beyond the criminal standard of proof

Élections Québec (Mar. 8, 2026)

Editorial assessment · assessed

AI-generated disinformation emerged at scale during the 2025 Canadian federal election. Canada's intelligence agencies assess the threat as significant and growing. Neither federal nor provincial election law was designed to address synthetic media, and electoral institutions lack technical detection capacity, creating a concrete and widening gap between the threat and institutional preparedness, with Quebec's October 2026 election as the next high-stakes test.

Entities involved

Related entries

Taxonomy · assessed

Domain
Elections and information integrity
Harm type
Disinformation · Compromised autonomy
AI contribution pathway
Use beyond intended scope · Absent oversight
Lifecycle phase
Deployment · Monitoring

Change history

Version | Date | Change
v1 | Mar. 8, 2026 | Initial publication consolidating federal and Quebec election integrity hazards
