Status: Corroborated · Severity: Critical · Version 1

A major social media platform integrated an AI image generation tool that was used at large scale to produce non-consensual sexualized imagery, including child sexual abuse material. Corporate safety controls were implemented in several rounds, but independent testing found them to be ineffective after each update. The incident revealed gaps in Canadian privacy law — existing legislation may not cover many types of AI-generated nudified content — and prompted coordinated regulatory responses from multiple countries.

Occurred: July 28, 2025 to January 16, 2026
Reported: August 1, 2025

Narrative

In July 2025, xAI launched Grok Imagine, an AI image generation tool integrated into the X social media platform, with a “Spicy Mode” enabling generation of adult content. The tool was rapidly used at large scale to produce non-consensual sexualized images of women and girls. Users could reply to any photo on X — including photos of real people — with requests to “undress” the subject, and Grok would publicly post a manipulated image as a reply.

The scale of the abuse was significant. A researcher’s 24-hour analysis found Grok generating approximately 6,700 sexually suggestive or “nudified” images per hour — 84 times more output than the top five dedicated deepfake websites combined. The Center for Countering Digital Hate estimated over 3 million sexualized images were generated in an 11-day window in late December 2025 to early January 2026. An AI Forensics analysis of 20,000 Grok-generated images found 53% depicted women in minimal attire and approximately 2% appeared to depict minors. The Internet Watch Foundation confirmed that some Grok-generated images met the legal definition of child sexual abuse material.

Canada’s Privacy Commissioner Philippe Dufresne had launched an initial investigation into X Corp in February 2025, following a complaint from NDP MP Brian Masse about X’s use of Canadians’ personal information to train AI models. On January 15, 2026, the Commissioner expanded the investigation to address the deepfake crisis, now targeting both X Corp and xAI. The investigation examines whether valid consent was obtained from individuals for the collection, use, and disclosure of their personal information to create deepfakes via Grok.

xAI and X Corp responded to the crisis in stages. On January 3, X restricted Grok to paid subscribers — a measure criticized by lawmakers and victims’ advocates as insufficient. On January 14, xAI blocked Grok from creating sexualized images of real people. On January 16, broader restrictions were implemented. However, independent testing by Malwarebytes in February 2026 and by other researchers found that Grok continued to produce sexualized images after each round of updates.

The incident prompted coordinated regulatory responses across multiple jurisdictions: Ireland’s DPC opened a formal GDPR investigation, the European Commission ordered document retention, France’s prosecutors searched X’s offices, California’s Attorney General issued a cease-and-desist, Indonesia and Malaysia blocked Grok entirely, and 35 US state attorneys general issued a joint demand to xAI. In Canada, the incident highlighted gaps in privacy and criminal law — legal experts noted that Bill C-16 (Protecting Victims Act), while criminalizing non-consensual sexual deepfakes, may not cover many types of AI-generated sexualized content that fall below the threshold of explicit nudity.

Harms

Grok’s image generation tool was used at large scale to produce non-consensual sexualized images of women and girls — approximately 6,700 “undressed” images per hour, with over 3 million sexualized images generated in an 11-day window. The tool allowed any user to reply to a photo on X with requests like “put her in a bikini,” and Grok would publicly post a manipulated image.

Severity: Severe · Scope: Population

Approximately 2% of sampled Grok-generated images appeared to depict minors, and the Internet Watch Foundation confirmed some met the legal definition of child sexual abuse material. Dark web users cited Grok as a tool for creating criminal imagery of children.

Severity: Critical · Scope: Population

Canadians' personal information — including photos posted on X — was collected without consent to train Grok's AI models, and Grok was used to generate sexualized deepfakes of Canadian women and girls without their knowledge or consent.

Severity: Significant · Scope: Population

Affected Populations

  • women and girls whose photos were non-consensually sexualized
  • minors depicted in AI-generated sexual imagery
  • Canadian X users whose data was used to train Grok
  • Canadian public

Entities Involved

xAI
developer

Developed Grok and its Imagine image generation tool, including “Spicy Mode” for adult content; implemented safety controls that were repeatedly shown to be ineffective at preventing mass generation of non-consensual sexualized imagery

X Corp
deployer

Operated the X platform where Grok was integrated and where generated sexualized deepfakes were publicly posted as replies to photos; initially restricted Grok to paid subscribers before implementing broader restrictions

Office of the Privacy Commissioner of Canada
regulator

Launched the initial investigation into X Corp (Feb 2025) over use of Canadians' data to train AI; expanded the investigation (Jan 2026) to cover Grok's generation of sexualized deepfakes, now targeting both X Corp and xAI

AI Systems Involved

Grok Imagine

The AI image generation tool used to create millions of non-consensual sexualized images of real people, including minors, at a rate of approximately 6,700 “undressed” images per hour

Responses & Outcomes

Office of the Privacy Commissioner of Canada

Launched investigation into X Corp following complaint from NDP MP Brian Masse, examining X's collection, use, and disclosure of Canadians' personal information to train AI models under PIPEDA

X Corp

Restricted Grok image generation to paying subscribers only; widely criticized as insufficient by lawmakers and victims' advocates

xAI

Blocked Grok from creating sexualized images of real people; subsequent testing by Malwarebytes and other researchers found the restrictions were ineffective

Office of the Privacy Commissioner of Canada

Expanded investigation to address Grok's generation of sexualized deepfakes, now targeting both X Corp and xAI under PIPEDA; investigating whether valid consent was obtained for collection and use of personal information to create deepfakes

X Corp

Implemented broader restrictions barring Grok from generating or editing images of real people in revealing clothing for all users

AI System Context

xAI's Grok Imagine, an AI image generation tool integrated into the X social media platform. Launched in July 2025 with a "Spicy Mode" enabling adult content generation, the tool allowed users to generate photorealistic manipulations of real people's photos, including sexualized "undressing" of women and girls. At peak output, Grok was generating 84 times more sexualized imagery per hour than the top five dedicated deepfake websites combined.

Preventive Measures

  • Require AI image generation tools to implement robust safeguards against generating sexualized content depicting real people, verified through independent testing before deployment
  • Establish Canadian legal requirements for express opt-in consent before individuals' images can be used to train AI models or be processed by AI image generation systems
  • Enact legislation explicitly criminalizing the creation and distribution of non-consensual AI-generated intimate images, covering the full spectrum from explicit nudity to sexualized alterations
  • Mandate that platforms deploying AI content generation tools conduct pre-deployment safety assessments and maintain ongoing monitoring for abuse at scale
  • Develop regulatory mechanisms to enable suspension of AI features that are being used to generate illegal content at scale

Taxonomy

Domain
Media & Entertainment, Law Enforcement
Harm type
Privacy & Data Exposure, Discrimination & Rights, Psychological Harm, Surveillance Overreach
AI involvement
Deployment Failure, Misuse, Oversight Breakdown, Monitoring Gap
Lifecycle phase
Deployment, Monitoring, Incident Response

Sources

  1. Privacy Commissioner of Canada expands investigation into social media platform X following reports of AI-generated sexualized deepfake images Official — Office of the Privacy Commissioner of Canada (Jan 15, 2026)
  2. Privacy Commissioner launches investigation into X Corp Official — Office of the Privacy Commissioner of Canada (Feb 27, 2025)
  3. Statement by the Privacy Commissioner of Canada to ETHI Committee on AI study Official — Office of the Privacy Commissioner of Canada (Feb 2, 2026)
  4. Canada's privacy commissioner expands probe into X after backlash over Grok's sexual deepfakes Media — CBC News (Jan 15, 2026)
  5. Grok sexual deepfake scandal Other — Wikipedia
  6. Tracking Regulator Responses to the Grok 'Undressing' Controversy Other — TechPolicy.Press (Jan 16, 2026)
  7. Canada's privacy watchdog expands probe into X over Grok's sexualized deepfakes Media — Globe and Mail (Jan 15, 2026)
  8. Grok's non-consensual sexual images highlight gaps in Canada's deepfake laws Media — BetaKit (Jan 15, 2026)
  9. AI Incident Database: Incident 1165 Other — AI Incident Database

AIID: Incident #1165

Changelog

Version  Date         Change
v1       Mar 8, 2026  Initial publication