Status: Escalating | Confidence: Medium | Potential severity: Severe | Version 1

AI-generated CSAM represents a qualitative shift in the scale and nature of child exploitation material, overwhelming detection systems and creating legal ambiguity — with direct implications for Canadian law enforcement capacity and child protection.

Identified: June 1, 2023 | Last assessed: March 8, 2026

Description

Generative AI is enabling the production of child sexual abuse material at a scale and speed that overwhelms existing detection and enforcement infrastructure. The Canadian Centre for Child Protection has documented increasing volumes of AI-generated CSAM. Existing hash-based detection systems like PhotoDNA — designed to identify known images through digital fingerprints — cannot detect AI-generated content because each synthetic image is unique.
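The detection gap described above can be sketched with a toy lookup model. (This is a simplification under stated assumptions: real systems such as PhotoDNA use perceptual hashes that tolerate small edits, not the exact cryptographic hash used here, and the `known_hashes` database below is hypothetical. The structural point is the same: the system flags an image only if its fingerprint is already catalogued, so a never-before-seen synthetic image produces no match.)

```python
import hashlib

# Hypothetical database of fingerprints of previously catalogued images.
known_hashes = {
    hashlib.sha256(b"previously catalogued image bytes").hexdigest(),
}

def is_known(image_bytes: bytes) -> bool:
    """Exact-match lookup: flags an image only if its fingerprint
    already appears in the database of known material."""
    return hashlib.sha256(image_bytes).hexdigest() in known_hashes

# A re-shared copy of a catalogued image matches the database...
print(is_known(b"previously catalogued image bytes"))  # True
# ...but a novel image (e.g. freshly generated content) does not.
print(is_known(b"novel synthetic image bytes"))        # False
```

Because generative models emit a unique image on every run, each output falls into the second case: the database lookup misses by construction, regardless of how large the database grows.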

The legal framework presents additional challenges. Canada's Criminal Code provisions on child pornography were drafted with human-produced material in mind. They may apply to purely synthetic AI-generated CSAM depicting no identifiable real child, but prosecutorial practice has not been tested at this scale. Meanwhile, the gap between generation capability and detection capability is widening: producing realistic synthetic CSAM requires minimal technical expertise and no access to real children, while detecting it requires investment in new technology that law enforcement agencies have not yet made.

This hazard is the most acute current manifestation of a broader structural pattern: generative AI content production capability outpacing institutional detection and response capacity. The asymmetry between cheap, scalable generation and expensive, fragile detection applies across harm categories, but the consequences are most severe when the content involves child exploitation.

Risk Pathway

Generative AI lowers the cost of producing child sexual abuse material at scale, overwhelming detection systems designed for human-produced content. Existing hash-based detection tools such as PhotoDNA cannot identify AI-generated images because each synthetic image is unique. Canadian law enforcement and child protection agencies lack tools and legal frameworks calibrated for synthetic CSAM. The Criminal Code's child pornography provisions may apply to AI-generated content, but prosecutorial practice has not been tested at scale, and legal ambiguity persists around purely synthetic material depicting no identifiable real child. Open-source image generation models with safety filters removed remain accessible on unregulated platforms.

Assessment History

Status: Escalating | Confidence: Medium | Severity: Severe

Multiple law enforcement agencies and child protection organizations have documented the emergence of AI-generated CSAM. The Canadian Centre for Child Protection has reported increasing volumes. No comprehensive Canadian prevalence data exists, but international evidence and law enforcement reports indicate rapid growth. Existing hash-based detection systems cannot identify AI-generated images. The Criminal Code's applicability to purely synthetic CSAM has not been tested at scale. Open-source models with safety filters removed are accessible.

Initial assessment. Status set to escalating based on increasing volumes and inadequate detection/legal infrastructure.

Triggers

  • Open-source image generation models becoming more capable and accessible
  • Removal of safety filters from fine-tuned models
  • Growing online communities sharing techniques for generating CSAM
  • Declining cost and increasing realism of generated imagery

Mitigating Factors

  • Platform-level content moderation on major commercial generators
  • Ongoing development of synthetic content detection tools
  • International law enforcement cooperation on CSAM
  • Criminal Code provisions that may extend to synthetic material

Risk Controls

  • Legal framework explicitly criminalizing AI-generated CSAM with penalties equivalent to human-produced material
  • Investment in synthetic content detection tools calibrated for AI-generated imagery
  • Reporting obligations for AI platform operators when their systems are used to generate CSAM
  • International coordination on synthetic CSAM detection, takedown, and cross-border enforcement
  • Mandatory safety testing for image generation models before release, including CSAM generation capability assessment
  • Restrictions on distribution of open-source image generation models with safety filters removed

Materialized Incidents

Affected Populations

  • Children
  • Law enforcement and child protection agencies
  • Survivors of child sexual abuse

Entities Involved

Documented the emergence of AI-generated CSAM and called for coordinated response

Federal law enforcement responsible for investigating CSAM, facing capacity challenges with synthetic content

Responses

Canadian Centre for Child Protection

Published reports documenting AI-generated CSAM trends and calling for legislative action

Related Records

Taxonomy

Domain: Justice, Public Services
Harm type: Safety Failure, Psychological Harm
AI involvement: Misuse, Monitoring Gap
Lifecycle phase: Deployment, Monitoring

Sources

  1. Canadian Centre for Child Protection aims to strengthen schools' responses to image-based abuse in the AI era Official — Canadian Centre for Child Protection (Feb 10, 2026)
  2. Canadian Centre for Child Protection warns of growing wave of online abuse material since the launch of public AI tools Media — Future of Good
  3. Criminal Code Provisions on Child Pornography Official — Department of Justice Canada

Changelog

Version | Date | Change
v1 | Mar 8, 2026 | Initial publication