Pilot phase: CAIM is under construction. Records are provisional, based on public sources, and have not yet been peer-reviewed. Feedback welcome.
Status: Escalating · Severity: Severe · Confidence: High

AI platforms have generated millions of non-consensual sexualized images — including of minors. Canada's legal framework does not specifically address AI-generated intimate imagery.

Identified: July 28, 2025 · Last assessed: March 8, 2026

Generative AI has made it possible to create realistic non-consensual sexualized imagery of any person from a single clothed photograph. The largest documented case occurred when xAI's Grok chatbot generated approximately 6,700 "undressed" images per hour — over 3 million total — before the capability was restricted. Approximately 2% of those images depicted minors, constituting child sexual abuse material.

The Privacy Commissioner of Canada expanded its ongoing investigation into X Corp in January 2026 to specifically address AI-generated sexualized deepfakes. The Commissioner's testimony to the ETHI Committee highlighted AI-generated NCII as a priority concern.

The harm is gendered: research consistently shows that non-consensual intimate imagery disproportionately targets women and girls. Quebec's Commission de l'éthique en science et en technologie (CEST) documented in a 2024 report that deepfakes overwhelmingly target women, often in the form of non-consensual pornographic content. When AI makes this harm scalable and accessible, the impact on women's participation in public life — political, professional, social — becomes a structural equality concern.

Following the incidents described, xAI restricted the image generation capabilities that enabled mass NCII production. Several jurisdictions internationally have moved to address AI-generated NCII through legislation. AI developers have generally implemented content policies prohibiting NCII generation, though enforcement varies and open-source models present different challenges.

Materialized Incidents

Harms

xAI's Grok chatbot generated approximately 6,700 "undressed" images per hour — over 3 million total — before the capability was restricted. Approximately 2% depicted minors. The Privacy Commissioner expanded its X Corp investigation to address AI-generated sexualized deepfakes.

Non-Consensual Imagery · Privacy & Data Exposure · Severe · Population

Generative AI enables creation of realistic non-consensual sexualized imagery from a single clothed photo. Victims experience documented psychological harm including anxiety, social withdrawal, and professional consequences. Canadian law (the Intimate Images and Cyber-Protection Act and Criminal Code amendments) is untested against AI-generated imagery at this scale.

Non-Consensual Imagery · Psychological Harm · Severe · Population

Evidence

3 reports

  1. Official — Office of the Privacy Commissioner of Canada (Jan 15, 2026)

    OPC expanded investigation to cover AI-generated sexualized deepfakes on X

  2. Media — CBC News (Jan 15, 2026)

    Grok generated approximately 6,700 undressed images per hour, ~2% depicted minors

  3. Media — BetaKit (Jan 15, 2026)

    Gaps in Canadian law for addressing AI-generated NCII

Record details

Responses & Outcomes

xAI · institutional action · Active

Restricted Grok's ability to generate NCII after public backlash and regulatory scrutiny

Office of the Privacy Commissioner of Canada · investigation · Active

Expanded investigation into X Corp to include AI-generated sexualized deepfake images

Policy Recommendations (assessed)

Criminal Code amendments addressing AI-generated NCII with provisions adapted for synthetic content

Office of the Privacy Commissioner of Canada (Jan 15, 2026)

Platform liability for failing to prevent NCII generation at scale

Office of the Privacy Commissioner of Canada (Jan 15, 2026)

Recourse mechanisms for victims of AI-generated NCII including expedited takedown

Commission de l'éthique en science et en technologie (Jan 1, 2024)

Editorial Assessment (assessed)

A major AI platform generated over 3 million non-consensual sexualized images — including of minors — before safety controls were applied. The platform subsequently restricted these capabilities. Canada's Privacy Commissioner has expanded its investigation into X. Criminal Code section 162.1, drafted before AI generation existed, raises unresolved evidentiary questions when applied to synthetic imagery. Research documents disproportionate impact on women and girls.

Entities Involved

AI Systems Involved

Grok Imagine

Generated approximately 6,700 "undressed" images per hour, with over 3 million total, approximately 2% depicting minors

Related Records

Taxonomy (assessed)

Domain
Media & Entertainment · Justice
Harm type
Psychological Harm · Privacy & Data Exposure · Discrimination & Rights · Non-Consensual Imagery
AI pathway
Use Beyond Intended Scope · Monitoring Absent · Safety Mechanism Ineffective
Lifecycle phase
Deployment · Monitoring

Changelog

Version | Date | Change
v1 | Mar 8, 2026 | Initial publication
