Pilot phase: CAIM is under construction. Records are provisional, based on public sources, and have not yet been peer-reviewed. Feedback welcome.
Escalating · Severe · Confidence: medium

AI-generated child sexual abuse material is outpacing detection systems and creating legal ambiguity, with direct implications for Canadian law enforcement and child protection.

Identified: June 1, 2023 · Last assessed: March 8, 2026

Generative AI is enabling the production of child sexual abuse material at a scale and speed that outpaces existing detection and enforcement infrastructure. The Canadian Centre for Child Protection has documented increasing volumes of AI-generated CSAM. Existing hash-based detection systems like PhotoDNA — designed to identify known images through digital fingerprints — cannot detect AI-generated content because each synthetic image is unique.
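The detection limitation described above can be illustrated with a toy sketch. A cryptographic hash stands in here for PhotoDNA's proprietary perceptual fingerprint (the real system tolerates minor edits such as resizing, which a cryptographic hash does not), but the core constraint is the same: matching only works against images already catalogued in a database. The function names and placeholder bytes are illustrative, not part of any real API.

```python
import hashlib

# Stand-in for a perceptual fingerprint. PhotoDNA's hash is robust
# to small alterations, but like this one it can only flag content
# whose fingerprint is already in the known-material database.
def fingerprint(image_bytes: bytes) -> str:
    return hashlib.sha256(image_bytes).hexdigest()

# Database of fingerprints for previously catalogued images
# (placeholder bytes in place of real image data).
known_db = {fingerprint(b"previously-catalogued-image")}

def detect(image_bytes: bytes) -> bool:
    return fingerprint(image_bytes) in known_db

# A recirculated copy of catalogued material matches the database...
assert detect(b"previously-catalogued-image")
# ...but each AI-generated image is novel, so its fingerprint has
# never been catalogued and it passes through undetected.
assert not detect(b"novel-synthetic-image")
```

This is why hash-based infrastructure scales well against recirculation of known material but contributes nothing against content that is unique by construction.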

The legal framework presents additional challenges. Canada's Criminal Code provisions on child pornography were drafted for human-produced material. While those provisions may extend to purely synthetic AI-generated CSAM depicting no identifiable real child, prosecution remains in its early stages: Steven Larouche of Sherbrooke, Quebec was sentenced in 2023 to over three years for creating deepfake child pornography, in what the presiding judge described as the first such case in Canada. The gap between generation capability and detection capability is widening. Producing realistic synthetic CSAM requires minimal technical expertise and no access to real children, while detecting it requires investments in new technology that law enforcement agencies have not yet made.

This hazard is the most acute current manifestation of a broader structural pattern: generative AI content production capability outpacing institutional detection and response capacity. The asymmetry between cheap, scalable generation and expensive, fragile detection applies across harm categories, but the consequences are most severe when the content involves child exploitation.

AI developers have implemented content policies prohibiting CSAM generation, and some platforms have added technical safeguards to prevent their models from producing such content. International efforts to develop detection tools for AI-generated imagery are underway. The challenge lies in the gap between platform-level controls and the availability of open-source models that lack equivalent safeguards.

Materialized Incidents

Harms

Generative AI enables production of photorealistic child sexual abuse material at scale without access to real children. The Canadian Centre for Child Protection has documented increasing volumes of AI-generated CSAM, and hash-based detection systems like PhotoDNA cannot identify synthetic images because each is unique.

Non-Consensual Imagery · Safety Incident · Severe · Population

AI-generated CSAM can be used to groom real children by normalizing the sexualization of minors, creating new vectors for child exploitation that do not require an initial act of abuse to produce material.

Safety Incident · Psychological Harm · Severe · Population

Legal ambiguity persists around prosecution of purely synthetic AI-generated CSAM depicting no identifiable real child. While the Larouche case (2023, Sherbrooke) resulted in conviction, the gap between generation capability and detection capability is widening, threatening to overwhelm law enforcement capacity.

Safety Incident · Significant · Sector

Evidence

6 reports

  1. Official — Canadian Centre for Child Protection (Jun 18, 2024)

    C3P warning about AI-generated deepfakes of children

  2. Official — Canadian Centre for Child Protection (Feb 10, 2026)

    C3P documenting surge in AI-generated deepfakes and updating school guidance

  3. Media — Future of Good (Feb 10, 2026)

    C3P reporting increasing volumes of AI-generated CSAM overwhelming detection systems

  4. Official — Public Safety Canada (Apr 1, 2004)

    Canada's National Strategy for the Protection of Children from Sexual Exploitation on the Internet; policy framework predating AI-specific challenges

  5. Official — Department of Justice Canada (Jun 1, 2013)

    Criminal Code framework for child pornography offenses

  6. Media — CBC News (Jan 9, 2024)

    CBC reporting on rise of AI deepfakes affecting students; experts urge curriculum updates to address AI-generated sexual violence

Record details

Responses & Outcomes

Canadian Centre for Child Protection · institutional action · Active

Published reports documenting AI-generated CSAM trends and calling for legislative action

Policy Recommendations · assessed

Legal framework explicitly criminalizing AI-generated CSAM with penalties equivalent to human-produced material

Canadian Centre for Child Protection (Mar 15, 2024)

Investment in synthetic content detection tools calibrated for AI-generated imagery

Canadian Centre for Child Protection (Mar 15, 2024)

Reporting obligations for AI platform operators when their systems are used to generate CSAM

Canadian Centre for Child Protection (Mar 15, 2024)

International coordination on synthetic CSAM detection, takedown, and cross-border enforcement

Canadian Centre for Child Protection (Mar 15, 2024)

Editorial Assessment · assessed

AI-generated CSAM represents a shift in the scale and nature of child exploitation material. Existing hash-based detection systems cannot identify AI-generated content because each image is unique. AI developers have implemented content policies, but open-source models present different enforcement challenges. The legal framework's application to fully synthetic imagery that depicts no real child raises unresolved questions with implications for Canadian law enforcement and child protection.

Entities Involved

Related Records

Taxonomy · assessed

Domain
Justice · Public Services
Harm type
Safety Incident · Psychological Harm · Non-Consensual Imagery
AI pathway
Use Beyond Intended Scope · Monitoring Absent
Lifecycle phase
Deployment · Monitoring

Changelog

Version  Date         Change
v1       Mar 8, 2026  Initial publication
