Pilot phase: CAIM is under construction. Records are provisional, based on public sources, and have not yet been peer-reviewed. Feedback welcome.
Corroborated · Severe

Generative AI is producing photorealistic child sexual abuse material at scale, outpacing Canadian detection systems.

Occurred: 2023 (precision: year)

The proliferation of generative AI image models has created an emerging and documented concern for child safety: the ability to generate photorealistic child sexual abuse material (CSAM) using text-to-image AI tools. The Canadian Centre for Child Protection (C3P), which operates Cybertip.ca and Project Arachnid, has identified AI-generated CSAM as an escalating concern, with reports of synthetic abuse imagery increasing from 2023 onward (Canadian Centre for Child Protection, 2024).

Open-source image generation models can be fine-tuned or prompted to produce exploitative imagery of children. Unlike traditional CSAM, which documents actual abuse, AI-generated material can be produced at scale without requiring access to a victim. Child protection organizations warn that it normalizes the sexualization of children, can be used to groom real victims, and threatens to overwhelm the detection infrastructure that organizations like C3P have built over decades (Canadian Centre for Child Protection, 2024). Hash-based detection systems like PhotoDNA, built to match known CSAM images, cannot identify novel AI-generated content.
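The detection gap described above can be illustrated with a minimal sketch: hash-matching systems flag content only when its fingerprint already appears in a database of previously catalogued material, so newly generated imagery has no entry to match against. The example below uses a cryptographic hash over synthetic byte strings purely as a stand-in; PhotoDNA actually uses a perceptual hash that tolerates re-encoding and cropping, which this sketch does not reproduce. All names and data here are illustrative, not drawn from any real system.

```python
import hashlib

def fingerprint(content: bytes) -> str:
    """Fingerprint a piece of content (simplified stand-in for a perceptual hash)."""
    return hashlib.sha256(content).hexdigest()

# Database of fingerprints of previously identified material (synthetic placeholders).
known_hashes = {
    fingerprint(b"previously-catalogued-item-1"),
    fingerprint(b"previously-catalogued-item-2"),
}

def is_known(content: bytes) -> bool:
    """Hash matching flags only content whose fingerprint is already catalogued."""
    return fingerprint(content) in known_hashes

print(is_known(b"previously-catalogued-item-1"))  # True: re-shared known item matches
print(is_known(b"novel-generated-item"))          # False: never catalogued, no match
```

The lookup is fast and reliable for re-circulated known material, but every freshly generated image falls into the second case, which is why classifier-based detection is being proposed alongside hash matching.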

Canadian law addresses CSAM through Criminal Code provisions covering visual representations that depict minors in sexual activity, wording widely interpreted as extending to synthetic material, though definitive appellate-level interpretation remains limited. Prosecution of AI-generated CSAM is still in its early stages. In April 2023, Steven Larouche of Sherbrooke, Quebec was sentenced to a total of eight years: approximately three and a half years for creating at least seven deepfake child pornography videos, and four and a half years for possessing more than 545,000 files of child sexual abuse material (Canadian Centre for Child Protection, 2024). The presiding judge described it as the first case in Canada involving deepfakes of child sexual exploitation. The volume of synthetic material risks straining law enforcement resources, and distinguishing AI-generated imagery from real imagery is increasingly difficult.

Canadian law enforcement, including the RCMP's National Child Exploitation Coordination Centre (NCECC), and child protection organizations are calling for coordinated action: stronger model-level safeguards from AI developers, updated legal frameworks, new detection technologies, and international cooperation to address a transnational problem that accessible generative AI tools make worse (CBC News, 2024; Public Safety Canada, 2004).

Materialized From

Harms

Generative AI tools enabled the creation of photorealistic child sexual abuse material at scale, which child protection organizations warn normalizes the sexualization of children, provides new vectors for grooming real victims, and poses challenges for hash-based detection systems like PhotoDNA.

Safety Incident · Misinformation · Psychological Harm · Severe · Population

AI-generated CSAM blurs the line between real and synthetic abuse imagery, complicating criminal prosecution and threatening to divert law enforcement resources from cases involving real child victims.

Safety Incident · Misinformation · Psychological Harm · Significant · Population

Children depicted in or targeted through AI-generated exploitative material face psychological harm, including through the use of such material for grooming.

Safety Incident · Misinformation · Psychological Harm · Severe · Group

Evidence

3 reports

  1. Official — Canadian Centre for Child Protection (Jun 18, 2024)

    C3P documented increasing AI-generated CSAM; close to 4,000 sexually explicit deepfake images and videos of children processed in one year; deepfakes used for sextortion of minors

  2. Official — Public Safety Canada (Apr 1, 2004)

    Canada's National Strategy for the Protection of Children from Sexual Exploitation on the Internet — policy framework context

  3. Media — CBC News (Jan 9, 2024)

    Rise of AI deepfakes affecting students; experts urge education curriculum updates to address AI-generated sexual violence

Record details

Responses & Outcomes

Canadian Centre for Child Protection · institutional action · Active

Issued public warning about AI-generated deepfakes of children, urging parents to be aware of the threat and calling for stronger protections

Policy Recommendations (assessed)

AI model developers should implement safeguards against CSAM generation, including content classifiers and training data audits

Canadian Centre for Child Protection (public advocacy and Project Arachnid program)

Investment in detection tools capable of identifying AI-generated CSAM, given that hash-matching systems like PhotoDNA cannot detect novel synthetic content

Canadian Centre for Child Protection (public advocacy and Project Arachnid program)

Parents and educators should be informed about the risks of AI-generated deepfakes involving children, and about available reporting mechanisms

Canadian Centre for Child Protection (Jun 18, 2024)

Editorial Assessment (assessed)

AI-generated CSAM overwhelms existing detection systems, complicates criminal prosecution by blurring the line between real and synthetic imagery, and creates new vectors for child exploitation (Canadian Centre for Child Protection, 2024). Whether Canada's Criminal Code provisions on CSAM apply to the full range of AI-generated material remains to be tested in court.

Entities Involved

AI Systems Involved

Generative image models (unspecified)

Generative AI tools used to produce child sexual abuse material; the Larouche case in Quebec involved AI-generated deepfake CSAM

Related Records

Taxonomy (assessed)

Domain
Justice · Law Enforcement
Harm type
Safety Incident · Misinformation · Psychological Harm
AI pathway
Use Beyond Intended Scope · Monitoring Absent
Lifecycle phase
Deployment · Monitoring

AIID: Incident #604

Changelog
v1 (Mar 7, 2026): Initial publication
v2 (Mar 11, 2026): Corrected Larouche sentence to include full 8-year total and possession charges; fixed RCMP unit name; replaced fabricated policy recommendation attributions; added Larouche case to FR narrative; softened editorial framing

Version 2