AI-Generated Child Sexual Abuse Material in Canada
AI-generated child sexual abuse material is overwhelming detection systems and creating legal ambiguity, with direct implications for law enforcement and child protection.
Generative AI is enabling the production of child sexual abuse material at a scale and speed that outpaces existing detection and enforcement infrastructure. The Canadian Centre for Child Protection has documented increasing volumes of AI-generated CSAM. Existing hash-based detection systems like PhotoDNA — designed to identify known images through digital fingerprints — cannot detect AI-generated content because each synthetic image is unique.
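The limitation described above can be sketched in a few lines. This is a simplification, not PhotoDNA itself: PhotoDNA uses a proprietary perceptual hash that tolerates resizing and re-encoding, whereas this sketch uses an exact cryptographic hash, and the byte strings are placeholders. The core failure mode is the same either way: a fingerprint database only matches material it has already seen, so every newly generated synthetic image falls through.

```python
import hashlib

def fingerprint(image_bytes: bytes) -> str:
    """Compute a fingerprint of an image's bytes.

    Simplified stand-in: real systems like PhotoDNA use perceptual
    hashes robust to minor edits, but the database-lookup limitation
    illustrated below applies to both approaches.
    """
    return hashlib.sha256(image_bytes).hexdigest()

# Hypothetical database of fingerprints of previously identified material.
known_hashes = {fingerprint(b"previously-identified-image-bytes")}

def is_known(image_bytes: bytes) -> bool:
    """Return True only if this exact content was seen before."""
    return fingerprint(image_bytes) in known_hashes

# A re-shared copy of already-identified material matches the database...
print(is_known(b"previously-identified-image-bytes"))    # True
# ...but a freshly generated synthetic image has never been seen,
# so its fingerprint is absent and the lookup fails.
print(is_known(b"freshly-generated-unique-image-bytes"))  # False
```

This is why hash matching scales well against recirculated known images but contributes nothing against generators that emit a unique image every time.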
The legal framework presents additional challenges. Canada's Criminal Code provisions on child pornography were drafted for human-produced material. While the provisions may apply to purely synthetic AI-generated CSAM depicting no identifiable real child, prosecution is still in early stages — Steven Larouche of Sherbrooke, Quebec was sentenced in 2023 to over three years for creating deepfake child pornography, in what the presiding judge described as the first such case in Canada. The gap between generation capability and detection capability is widening: producing realistic synthetic CSAM requires minimal technical expertise and no access to real children, while detecting it requires investments in new technology that law enforcement agencies have not yet made.
This hazard is the most acute current manifestation of a broader structural pattern: generative AI content production capability outpacing institutional detection and response capacity. The asymmetry between cheap, scalable generation and expensive, fragile detection applies across harm categories, but the consequences are most severe when the content involves child exploitation.
AI developers have implemented content policies prohibiting CSAM generation, and some platforms have added technical safeguards to prevent their models from producing such content. International efforts to develop detection tools for AI-generated imagery are underway. The challenge lies in the gap between platform-level controls and the availability of open-source models that lack equivalent safeguards.
Materialized incidents
- AI-Generated Child Sexual Abuse Material in Canada
- Calgary Teen Charged with Creating AI-Generated Child Sexual Abuse Material from Classmates' Photos
Harms
Generative AI enables the production of photorealistic child sexual abuse material at scale without access to real children. The Canadian Centre for Child Protection has documented increasing volumes of AI-generated CSAM, and hash-based detection systems such as PhotoDNA cannot identify synthetic images because each one is unique.
AI-generated CSAM can be used to lure real children by normalizing the sexualization of minors, creating new vectors of child exploitation that do not require an initial act of abuse to produce material.
Legal ambiguity persists around prosecuting purely synthetic AI-generated CSAM that depicts no identifiable real child. Although the Larouche case (Sherbrooke, 2023) resulted in a conviction, the gap between generation capability and detection capability is widening, threatening to overwhelm law enforcement capacity.
Evidence
6 reports
- Police and child protection agency say parents need to know about sexually explicit AI deepfakes (primary source)
C3P warning about AI-generated deepfakes of children
- Canadian Centre for Child Protection aims to strengthen schools' responses to image-based abuse in the AI era (primary source)
C3P documenting surge in AI-generated deepfakes and updating school guidance
- Canadian Centre for Child Protection warns of growing wave of online abuse material since the launch of public AI tools (primary source)
C3P reporting increasing volumes of AI-generated CSAM overwhelming detection systems
- Canada's National Strategy for the Protection of Children from Sexual Exploitation on the Internet; policy framework predating AI-specific challenges
- Criminal Code framework for child pornography offenses
- CBC reporting on rise of AI deepfakes affecting students; experts urge curriculum updates to address AI-generated sexual violence
Factsheet details
Responses and outcomes
Published reports documenting trends in AI-generated CSAM and calling for legislative action
Policy recommendations (assessed)
- Legal framework explicitly criminalizing AI-generated CSAM with penalties equivalent to human-produced material (Canadian Centre for Child Protection, March 15, 2024)
- Investment in synthetic content detection tools calibrated for AI-generated imagery (Canadian Centre for Child Protection, March 15, 2024)
- Reporting obligations for AI platform operators when their systems are used to generate CSAM (Canadian Centre for Child Protection, March 15, 2024)
- International coordination on synthetic CSAM detection, takedown, and cross-border enforcement (Canadian Centre for Child Protection, March 15, 2024)
Editorial assessment (assessed)
AI-generated CSAM represents a qualitative shift in the scale and nature of child exploitation material, overwhelming detection systems and creating legal ambiguity, with direct implications for Canadian law enforcement capacity and child protection.
Entities involved
Related factsheets
- Calgary Teen Charged with Creating AI-Generated Child Sexual Abuse Material from Classmates' Photos
- AI-Generated Non-Consensual Intimate Imagery
Taxonomy (assessed)
Change history
| Version | Date | Modification |
|---|---|---|
| v1 | March 8, 2026 | Initial publication |