AI-Generated Child Sexual Abuse Material in Canada
Generative AI is producing photorealistic child sexual abuse material at scale, overwhelming Canadian detection systems.
The proliferation of generative AI image models has created an emerging and documented concern for child safety: the ability to generate photorealistic child sexual abuse material (CSAM) using text-to-image AI tools. The Canadian Centre for Child Protection (C3P), which operates Cybertip.ca and Project Arachnid, has identified AI-generated CSAM as an escalating concern, with reports of synthetic abuse imagery increasing from 2023 onward (Canadian Centre for Child Protection, 2024).
Open-source image generation models can be fine-tuned or prompted to produce exploitative imagery of children. Unlike traditional CSAM, which documents actual abuse, AI-generated material can be produced at scale without requiring access to a victim — but child protection organizations warn it normalizes the sexualization of children, can be used to groom real victims, and threatens to overwhelm the detection infrastructure that organizations like C3P have built over decades (Canadian Centre for Child Protection, 2024). Hash-based detection systems like PhotoDNA, designed to match known CSAM images, are not designed to identify novel AI-generated content.
Canadian law addresses CSAM through Criminal Code provisions that cover visual representations depicting minors in sexual activity, which is widely interpreted as covering synthetic material, though definitive appellate-level interpretation remains limited. Prosecution of AI-generated CSAM cases is still in early stages — Steven Larouche of Sherbrooke, Quebec was sentenced in April 2023 to a total of eight years — approximately three and a half years for creating at least seven deepfake child pornography videos, and four and a half years for possessing over 545,000 files of child sexual abuse material (Canadian Centre for Child Protection, 2024). The presiding judge described it as the first case in Canada involving deepfakes of child sexual exploitation. The volume of synthetic material risks straining law enforcement resources, and distinguishing AI-generated from real imagery is increasingly difficult.
Canadian law enforcement, including the RCMP's National Child Exploitation Coordination Centre (NCECC), and child protection organizations are calling for coordinated action: stronger model-level safeguards from AI developers, updated legal frameworks, new detection technologies, and international cooperation to address a transnational problem that accessible generative AI tools make worse (CBC News, 2024; Public Safety Canada, 2004).
Harms
Generative AI tools have enabled the creation of photorealistic child sexual exploitation material at scale, which child protection organizations warn normalizes the sexualization of children, provides new vectors for grooming real victims, and challenges hash-based detection systems like PhotoDNA.
AI-generated CSAM blurs the line between real and synthetic abuse imagery, complicating criminal prosecution and threatening to divert law enforcement resources away from cases involving real victims.
Children depicted in or targeted by AI-generated exploitative material suffer psychological harm, including through the use of such material for grooming.
Evidence
3 reports
- Police and child protection agency say parents need to know about sexually explicit AI deepfakes (primary source)
C3P documented increasing AI-generated CSAM; close to 4,000 sexually explicit deepfake images and videos of children processed in one year; deepfakes used for sextortion of minors
- Canada's National Strategy for the Protection of Children from Sexual Exploitation on the Internet (policy framework context)
- Rise of AI deepfakes affecting students; experts urge education curriculum updates to address AI-generated sexual violence
Record details
Responses and outcomes
Issued public warning about AI-generated deepfakes of children, urging parents to be aware of the threat and calling for stronger protections
Policy recommendations (evaluated)
- AI model developers should implement safeguards against CSAM generation, including content classifiers and training data audits (Canadian Centre for Child Protection, public advocacy and Project Arachnid program)
- Investment in detection tools capable of identifying AI-generated CSAM, given that hash-matching systems like PhotoDNA cannot detect novel synthetic content (Canadian Centre for Child Protection, public advocacy and Project Arachnid program)
- Parents and educators should be informed about the risks of AI-generated deepfakes involving children, and about available reporting mechanisms (Canadian Centre for Child Protection, June 18, 2024)
Editorial assessment (evaluated)
AI-generated CSAM is overwhelming existing detection systems, complicating criminal prosecution by blurring the line between real and synthetic imagery, and creating new vectors for child exploitation (Canadian Centre for Child Protection, 2024; CBC News, 2024). Whether the Canadian Criminal Code's CSAM provisions apply to all AI-generated material remains to be settled by the courts.
Entities involved
AI systems involved
Generative AI tools used to produce child sexual abuse material; the Larouche case in Quebec involved AI-generated deepfake CSAM
Related records
- Calgary Teen Charged with Creating AI-Generated Child Sexual Abuse Material from Classmates' Photos (related)
Taxonomy (evaluated)
AIID: Incident #604
Change history
| Version | Date | Modification |
|---|---|---|
| v1 | March 7, 2026 | Initial publication |
| v2 | March 11, 2026 | Corrected Larouche sentence to include full 8-year total and possession charges; fixed RCMP unit name; replaced fabricated policy recommendation attributions; added Larouche case to FR narrative; softened editorial framing |