AI-Generated Child Sexual Abuse Material in Canada
Generative AI is producing photorealistic child sexual abuse material at scale, outpacing Canadian detection systems.
The proliferation of generative AI image models has created a documented and growing threat to child safety: the ability to generate photorealistic child sexual abuse material (CSAM) using text-to-image AI tools. The Canadian Centre for Child Protection (C3P), which operates Cybertip.ca and Project Arachnid, has identified AI-generated CSAM as an escalating concern, with reports of synthetic abuse imagery increasing from 2023 onward (Canadian Centre for Child Protection, 2024).
Open-source image generation models can be fine-tuned or prompted to produce exploitative imagery of children. Unlike traditional CSAM, which documents actual abuse, AI-generated material can be produced at scale without requiring access to a victim. Child protection organizations warn that it normalizes the sexualization of children, can be used to groom real victims, and threatens to overwhelm the detection infrastructure that organizations like C3P have built over decades (Canadian Centre for Child Protection, 2024). Hash-based detection systems such as PhotoDNA match known CSAM images against curated hash lists and cannot identify novel AI-generated content.
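To make that limitation concrete, the minimal sketch below shows why hash-list matching cannot flag never-before-seen imagery. It is illustrative only: PhotoDNA's actual perceptual-hash algorithm is proprietary, so SHA-256 stands in for it here, and all function names and byte strings are hypothetical.

```python
import hashlib


def image_hash(image_bytes: bytes) -> str:
    # Stand-in for a real detection hash. Production systems such as
    # PhotoDNA use robust perceptual hashes that tolerate re-encoding
    # and resizing; a cryptographic hash keeps this sketch self-contained.
    # Both approaches share the same structural limit shown below.
    return hashlib.sha256(image_bytes).hexdigest()


# Hypothetical database of hashes of previously verified, catalogued images.
known_hashes = {image_hash(b"<bytes of a previously catalogued image>")}


def is_known_image(image_bytes: bytes) -> bool:
    """Return True only if this image was previously catalogued and hashed."""
    return image_hash(image_bytes) in known_hashes


# A catalogued image matches its stored hash...
assert is_known_image(b"<bytes of a previously catalogued image>")
# ...but novel content, such as a freshly generated synthetic image, has
# no entry to match against, regardless of what it depicts.
assert not is_known_image(b"<bytes of a never-before-seen image>")
```

The design point is that the database can only ever contain hashes of content someone has already found and verified, which is why the section's calls for new detection approaches target classifiers rather than larger hash lists.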
Canadian law addresses CSAM through Criminal Code provisions covering visual representations that depict minors in sexual activity; these provisions are widely interpreted to cover synthetic material, though definitive appellate-level interpretation remains limited. Prosecution of AI-generated CSAM cases is still in its early stages. In April 2023, Steven Larouche of Sherbrooke, Quebec was sentenced to a total of eight years: approximately three and a half years for creating at least seven deepfake child pornography videos, and four and a half years for possessing over 545,000 files of child sexual abuse material (Canadian Centre for Child Protection, 2024). The presiding judge described it as the first case in Canada involving deepfakes of child sexual exploitation. The volume of synthetic material risks straining law enforcement resources, and distinguishing AI-generated imagery from real imagery is increasingly difficult.
Canadian law enforcement, including the RCMP's National Child Exploitation Coordination Centre (NCECC), and child protection organizations are calling for coordinated action: stronger model-level safeguards from AI developers, updated legal frameworks, new detection technologies, and international cooperation to address a transnational problem that accessible generative AI tools make worse (CBC News, 2024; Public Safety Canada, 2004).
Materialized From
Harms
Generative AI tools have enabled the creation of photorealistic child sexual abuse material at scale, which child protection organizations warn normalizes the sexualization of children, provides new vectors for grooming real victims, and poses challenges for hash-based detection systems like PhotoDNA.
AI-generated CSAM blurs the line between real and synthetic abuse imagery, complicating criminal prosecution and threatening to divert law enforcement resources from cases involving real child victims.
Children depicted in or targeted through AI-generated exploitative material face psychological harm, including through the use of such material for grooming.
Evidence
3 reports
- Police and child protection agency say parents need to know about sexually explicit AI deepfakes (primary source)
C3P documented increasing AI-generated CSAM; close to 4,000 sexually explicit deepfake images and videos of children processed in one year; deepfakes used for sextortion of minors
- Canada's National Strategy for the Protection of Children from Sexual Exploitation on the Internet (policy framework context)
- Rise of AI deepfakes affecting students; experts urge education curriculum updates to address AI-generated sexual violence
Responses & Outcomes
- Canadian Centre for Child Protection: Issued public warning about AI-generated deepfakes of children, urging parents to be aware of the threat and calling for stronger protections
Policy Recommendations (assessed)
- Canadian Centre for Child Protection (public advocacy and Project Arachnid program): AI model developers should implement safeguards against CSAM generation, including content classifiers and training data audits
- Canadian Centre for Child Protection (public advocacy and Project Arachnid program): Investment in detection tools capable of identifying AI-generated CSAM, given that hash-matching systems like PhotoDNA cannot detect novel synthetic content
- Canadian Centre for Child Protection (Jun 18, 2024): Parents and educators should be informed about the risks of AI-generated deepfakes involving children, and about available reporting mechanisms
Editorial Assessment (assessed)
AI-generated CSAM threatens to overwhelm existing detection systems, complicates criminal prosecution by blurring the line between real and synthetic imagery, and creates new vectors for child exploitation (Canadian Centre for Child Protection, 2024). Whether Canada's Criminal Code provisions on CSAM apply to the full range of AI-generated material remains to be tested in court.
Entities Involved
AI Systems Involved
Generative AI tools used to produce child sexual abuse material; the Larouche case in Quebec involved AI-generated deepfake CSAM
Related Records
- Calgary Teen Charged with Creating AI-Generated Child Sexual Abuse Material from Classmates' Photos (related)
Taxonomy (assessed)
AIID: Incident #604
Changelog
| Version | Date | Change |
|---|---|---|
| v1 | Mar 7, 2026 | Initial publication |
| v2 | Mar 11, 2026 | Corrected Larouche sentence to include full 8-year total and possession charges; fixed RCMP unit name; replaced fabricated policy recommendation attributions; added Larouche case to FR narrative; softened editorial framing |