AI-Generated Non-Consensual Intimate Imagery
AI platforms have generated millions of non-consensual sexualized images — including of minors. Canada's legal framework does not specifically address AI-generated intimate imagery.
Generative AI has made it possible to create realistic non-consensual sexualized imagery of any person from a single clothed photograph. The largest documented case occurred when xAI's Grok chatbot generated approximately 6,700 "undressed" images per hour — over 3 million total — before the capability was restricted. Approximately 2% of those images depicted minors, crossing into child sexual abuse material territory.
The Privacy Commissioner of Canada expanded its ongoing investigation into X Corp in January 2026 to specifically address AI-generated sexualized deepfakes. The Commissioner's testimony to the ETHI Committee highlighted AI-generated NCII as a priority concern.
The harm is gendered: research consistently shows that non-consensual intimate imagery disproportionately targets women and girls. Quebec's Commission de l'éthique en science et en technologie (CEST) documented in a 2024 report that deepfakes overwhelmingly target women, often in the form of non-consensual pornographic content. When AI makes this harm scalable and accessible, the impact on women's participation in public life — political, professional, social — becomes a structural equality concern.
Following the incidents described, xAI restricted the image generation capabilities that enabled mass NCII production. Several jurisdictions internationally have moved to address AI-generated NCII through legislation. AI developers have generally implemented content policies prohibiting NCII generation, though enforcement varies and open-source models present different challenges.
Materialized Incidents
Harms
xAI's Grok chatbot generated approximately 6,700 "undressed" images per hour — over 3 million total — before the capability was restricted. Approximately 2% depicted minors. The Privacy Commissioner expanded its X Corp investigation to address AI-generated sexualized deepfakes.
Generative AI enables creation of realistic non-consensual sexualized imagery from a single clothed photo. Victims experience documented psychological harm including anxiety, social withdrawal, and professional consequences. Canadian law (the Intimate Images and Cyber-Protection Act and Criminal Code amendments) is untested against AI-generated imagery at this scale.
Evidence
3 reports
- OPC expanded investigation to cover AI-generated sexualized deepfakes on X ("Canada's privacy commissioner expands probe into X after backlash over Grok's sexual deepfakes", primary source)
- Grok generated approximately 6,700 undressed images per hour; ~2% depicted minors
- Gaps in Canadian law for addressing AI-generated NCII
Responses & Outcomes
- xAI restricted Grok's ability to generate NCII after public backlash and regulatory scrutiny
- The Office of the Privacy Commissioner of Canada expanded its investigation into X Corp to include AI-generated sexualized deepfake images
Policy Recommendations
- Criminal Code amendments addressing AI-generated NCII with provisions adapted for synthetic content (Office of the Privacy Commissioner of Canada, Jan 15, 2026)
- Platform liability for failing to prevent NCII generation at scale (Office of the Privacy Commissioner of Canada, Jan 15, 2026)
- Recourse mechanisms for victims of AI-generated NCII, including expedited takedown (Commission de l'éthique en science et en technologie, Jan 1, 2024)
Editorial Assessment
A major AI platform generated over 3 million non-consensual sexualized images — including of minors — before safety controls were applied. The platform subsequently restricted these capabilities. Canada's Privacy Commissioner has expanded its investigation into X. Criminal Code section 162.1, drafted before AI generation existed, raises unresolved evidentiary questions when applied to synthetic imagery. Research documents disproportionate impact on women and girls.
Entities Involved
AI Systems Involved
Grok (xAI): Generated approximately 6,700 "undressed" images per hour, over 3 million total, approximately 2% depicting minors
Related Records
- Canada Investigates X and xAI After Grok Generates Millions of Non-Consensual Sexualized Deepfakes
- AI-Generated Child Sexual Abuse Material in Canada
- PRC Spamouflage Campaigns Used AI-Generated Deepfakes to Target Canadian Politicians and Critics
Changelog
| Version | Date | Change |
|---|---|---|
| v1 | Mar 8, 2026 | Initial publication |