Status: Escalating · Confidence: High · Potential severity: Severe · Version: 1

A major AI platform generated over 3 million non-consensual sexualized images, including of minors, before safety controls intervened. Canada's Privacy Commissioner has expanded its investigation into X, but the legal framework for AI-generated non-consensual intimate imagery (NCII) has significant gaps. This is targeted, personalized harm at industrial scale, with a disproportionate impact on women and girls.

Identified: July 28, 2025 · Last assessed: March 8, 2026

Description

Generative AI has made it possible to create realistic non-consensual sexualized imagery of any person from a single clothed photograph. The most dramatic demonstration occurred when xAI's Grok chatbot generated approximately 6,700 "undressed" images per hour, over 3 million in total, before the capability was restricted. Approximately 2% of those images depicted minors, crossing into child sexual abuse material (CSAM) territory.
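The reported figures imply both a sustained generation window and a large absolute number of images depicting minors. A back-of-envelope check, using only the numbers cited above (the derived values are illustrative, not independently verified):

```python
# Rough arithmetic on the publicly reported Grok figures.
# Inputs come from the reporting cited in Sources; derived values are estimates.
total_images = 3_000_000   # "over 3 million" NCII images generated
rate_per_hour = 6_700      # approximate generation rate
minor_fraction = 0.02      # ~2% of output reportedly depicted minors

hours = total_images / rate_per_hour   # implied generation window
days = hours / 24
minors_depicted = total_images * minor_fraction

print(f"~{hours:.0f} hours (~{days:.1f} days) of sustained generation")
print(f"~{minors_depicted:,.0f} images depicting minors")
```

Even at the stated "over 3 million" floor, the 2% figure implies on the order of 60,000 CSAM-adjacent images, which is why the record treats severity as Severe.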

The Privacy Commissioner of Canada expanded its ongoing investigation into X Corp in January 2026 to specifically address AI-generated sexualized deepfakes. The Commissioner’s testimony to the ETHI Committee highlighted AI-generated NCII as a priority concern.

The legal framework has significant gaps. Criminal Code section 162.1, which addresses non-consensual distribution of intimate images, was drafted before AI generation existed. Proving that an AI-generated image depicts an identifiable real person creates evidentiary challenges. No Canadian law requires AI platforms to prevent their systems from generating NCII, to test models against NCII generation capability before deployment, or to report when NCII generation occurs at scale.

The harm is gendered: research consistently shows that non-consensual intimate imagery disproportionately targets women and girls. A 2024 CEST report documented that deepfakes overwhelmingly target women, often in the form of non-consensual pornographic content. When AI makes this harm scalable and accessible, the impact on women's participation in public life, whether political, professional, or social, becomes a structural equality concern.

Risk Pathway

Generative AI enables the creation of non-consensual sexualized imagery of real individuals at unprecedented scale and accessibility. Unlike traditional image manipulation, current AI tools can generate realistic nude or sexualized images from a single clothed photo, and the Grok incident (approximately 6,700 "undressed" images per hour, over 3 million total, roughly 2% depicting minors) showed that generation can run at industrial scale before controls intervene. The legal gap compounds the technical one: Criminal Code s. 162.1 addresses some forms of non-consensual intimate images but predates AI generation, and its requirement to prove that an image depicts a real, identifiable person creates evidentiary challenges for synthetic content. No Canadian law requires AI platforms to prevent their systems from generating NCII, and no mandatory reporting obligation exists when such generation occurs at scale.

Assessment History

Status: Escalating · Confidence: High · Severity: Severe

Grok generated over 3 million non-consensual sexualized images at a rate of ~6,700 per hour, with approximately 2% depicting minors. The OPC expanded its investigation into X to cover AI-generated sexualized deepfakes. Multiple "undressing" tools remain available on unregulated platforms. Canadian law has significant gaps — Criminal Code s. 162.1 was not drafted for AI-generated content. The ETHI Committee received testimony from the Privacy Commissioner on AI-generated NCII. Status is escalating because NCII generation tools are proliferating while governance remains inadequate.

Initial assessment. Status escalating based on confirmed industrial-scale NCII generation and proliferating tools.

Triggers

  • AI image generation models with safety filters removable or absent
  • "Undressing" tools becoming more accessible on unregulated platforms
  • No legal requirement for pre-deployment safety testing against NCII generation
  • Social media platforms hosting AI-generated NCII without detection

Mitigating Factors

  • OPC investigation creating regulatory scrutiny
  • xAI restricting Grok's NCII generation capability after public backlash
  • ETHI Committee study on AI examining NCII
  • Growing international regulatory attention (EU, UK, Australia)

Risk Controls

  • Explicit prohibition on AI platforms generating non-consensual sexualized imagery of identifiable individuals
  • Mandatory safety testing for image generation models against NCII generation
  • Criminal Code amendments addressing AI-generated NCII with provisions adapted for synthetic content
  • Platform liability for failing to prevent NCII generation at scale
  • Reporting obligations when AI systems generate NCII
  • Recourse mechanisms for victims of AI-generated NCII including expedited takedown

Affected Populations

  • Women and girls disproportionately targeted
  • Minors (~2% of Grok NCII output depicted minors)
  • Public figures and celebrities
  • Any person whose photo is publicly available online

Entities Involved

xAI
developer

Developed Grok AI chatbot that generated over 3 million non-consensual sexualized images

X Corp
deployer

Platform through which Grok generated and distributed NCII

Office of the Privacy Commissioner of Canada
regulator

Expanded investigation into X Corp to include AI-generated sexualized deepfakes

AI Systems Involved

Grok Imagine

Generated approximately 6,700 "undressed" images per hour, with over 3 million total, approximately 2% depicting minors

Responses

Office of the Privacy Commissioner of Canada

Expanded investigation into X Corp to include AI-generated sexualized deepfake images

xAI

Restricted Grok's ability to generate NCII after public backlash and regulatory scrutiny

Taxonomy

Domain
Media & Entertainment · Justice
Harm type
Psychological Harm · Privacy & Data Exposure · Discrimination & Rights
AI involvement
Misuse · Monitoring Gap · Safety Control Subversion
Lifecycle phase
Deployment · Monitoring

Sources

  1. Privacy Commissioner of Canada expands investigation into social media platform X following reports of AI-generated sexualized deepfake images Official — Office of the Privacy Commissioner of Canada (Jan 15, 2026)
  2. Canada's privacy commissioner expands probe into X after backlash over Grok's sexual deepfakes Media — CBC News (Jan 15, 2026)
  3. Grok's non-consensual sexual images highlight gaps in Canada's deepfake laws Media — BetaKit (Jan 15, 2026)

Changelog

Version | Date | Change
v1 | Mar 8, 2026 | Initial publication