Pilot phase: CAIM is under construction. Records are provisional, based on public sources, and have not yet been peer-reviewed. Feedback welcome.
Confirmed · Contested · Critical

Grok generated 6,700 non-consensual sexualized images per hour, including images of minors, prompting a Canadian probe.

Occurred: July 28, 2025 to January 16, 2026 · Reported: August 1, 2025

In July 2025, xAI launched Grok Imagine, an AI image generation tool integrated into the X social media platform, which later added a "Spicy Mode" enabling generation of adult content. The tool was rapidly used at large scale to produce non-consensual sexualized images of women and girls (AI Incident Database, 2025). Users could reply to any photo on X — including photos of real people — with requests to "undress" the subject, and Grok would publicly post a manipulated image as a reply (CBC News, 2026; Globe and Mail, 2026).

The scale of the abuse was significant. According to AI Forensics, a 24-hour analysis found Grok generating approximately 6,700 sexually suggestive or "nudified" images per hour — 84 times more output than the top five dedicated deepfake websites combined (AI Incident Database, 2025; Wikipedia, 2026). The Center for Countering Digital Hate estimated over 3 million sexualized images were generated in an 11-day window in late December 2025 to early January 2026 (Wikipedia, 2026). AI Forensics' analysis of 20,000 Grok-generated images found 53% depicted women in minimal attire and approximately 2% appeared to depict minors (Wikipedia, 2026). The Internet Watch Foundation confirmed that some Grok-generated images met the legal definition of child sexual abuse material (Wikipedia, 2026).

Canada's Privacy Commissioner Philippe Dufresne had launched an initial investigation into X Corp in February 2025, following a complaint from NDP MP Brian Masse about X's use of Canadians' personal information to train AI models (OPC, 2025). On January 15, 2026, the Commissioner expanded the investigation to address the deepfake crisis, now targeting both X Corp and xAI (OPC, 2026; CBC News, 2026; Globe and Mail, 2026). The investigation examines whether valid consent was obtained from individuals for the collection, use, and disclosure of their personal information to create deepfakes via Grok (OPC, 2026).

xAI responded to the crisis in several stages. On January 8, X restricted Grok's image generation to paying subscribers, a measure criticized by lawmakers and victims' advocates as insufficient (Wikipedia, 2026). On January 14, xAI blocked Grok from creating sexualized images of real people (TechPolicy.Press, 2026). On January 16, broader restrictions barred Grok from generating or editing images of real people in revealing clothing for all users (TechPolicy.Press, 2026). However, independent testing by Malwarebytes in February 2026 and by other researchers found that Grok continued to produce sexualized images after each round of updates (Wikipedia, 2026).

The incident prompted coordinated regulatory responses across multiple jurisdictions: Ireland's DPC opened a formal GDPR investigation, the European Commission ordered document retention, France's prosecutors searched X's offices, California's Attorney General issued a cease-and-desist, Indonesia and Malaysia blocked Grok entirely, and 35 US state attorneys general issued a joint demand to xAI (TechPolicy.Press, 2026; Wikipedia, 2026). In Canada, the incident highlighted gaps in privacy and criminal law — legal experts noted that federal Criminal Code provisions criminalizing non-consensual intimate images may not cover many types of AI-generated sexualized content that fall below the threshold of explicit nudity (BetaKit, 2026; OPC, 2026).

Materialized From

Harms

Grok's image generation tool was used at large scale to produce non-consensual sexualized images of women and girls — approximately 6,700 'undressed' images per hour, with over 3 million sexualized images generated in an 11-day window. The tool allowed any user to reply to a photo on X with requests like 'put her in a bikini' and Grok would publicly post a manipulated image.

Privacy & Data Exposure · Discrimination & Rights · Psychological Harm · Disproportionate Surveillance · Severe · Population

Approximately 2% of sampled Grok-generated images appeared to depict minors, and the Internet Watch Foundation confirmed some met the legal definition of child sexual abuse material.

Privacy & Data Exposure · Discrimination & Rights · Psychological Harm · Disproportionate Surveillance · Critical · Population

Canadians' personal information — including photos posted on X — was collected without consent to train Grok's AI models, and Grok was used to generate sexualized deepfakes of Canadian women and girls without their knowledge or consent.

Privacy & Data Exposure · Discrimination & Rights · Psychological Harm · Disproportionate Surveillance · Significant · Population

Evidence

9 reports

  1. Official — Office of the Privacy Commissioner of Canada (Feb 27, 2025)

    OPC's original complaint investigation into X social media platform; precursor to expanded Grok investigation

  2. Official — Office of the Privacy Commissioner of Canada (Jan 15, 2026)

    OPC expanded investigation into X Corp to address AI-generated sexualized deepfakes; Privacy Commissioner's formal action in January 2026

  3. Media — CBC News (Jan 15, 2026)

    CBC reporting: privacy commissioner expands probe into X after backlash over Grok's sexualized deepfake generation capability

  4. Other — AI Incident Database (Aug 5, 2025)

    AIID cross-reference: Incident 1165 documenting Grok deepfake generation at scale

  5. Media — Globe and Mail (Jan 15, 2026)

    Globe and Mail reporting: privacy watchdog expands probe into X over Grok's sexualized imagery generation; Canadian regulatory response

  6. Media — BetaKit (Jan 15, 2026)

    Canadian legal gaps in coverage of AI-generated sexualized content

  7. Other — TechPolicy.Press (Jan 16, 2026)

    TechPolicy.Press tracker of global regulator responses to Grok 'undressing' controversy; comparative regulatory analysis

  8. Official — Office of the Privacy Commissioner of Canada (Feb 2, 2026)

    Privacy Commissioner's statement to ETHI Committee on Grok investigation; testimony on AI-generated non-consensual imagery

  9. Other — Wikipedia (Feb 9, 2026)

    Wikipedia documentation of Grok sexual deepfake scandal; comprehensive timeline and response tracking

Record details

Responses & Outcomes

Office of the Privacy Commissioner of Canada · investigation · Active

Launched investigation into X Corp following complaint from NDP MP Brian Masse, examining X's collection, use, and disclosure of Canadians' personal information to train AI models under PIPEDA

X Corp · institutional action · Active

Restricted Grok image generation to paying subscribers only; criticized by multiple lawmakers and advocacy groups as insufficient

xAI · institutional action · Active

Blocked Grok from creating sexualized images of real people; subsequent testing by Malwarebytes and other researchers found the restrictions were ineffective

Office of the Privacy Commissioner of Canada · investigation · Active

Expanded investigation to address Grok's generation of sexualized deepfakes, now targeting both X Corp and xAI under PIPEDA; investigating whether valid consent was obtained for collection and use of personal information to create deepfakes

X Corp · institutional action · Active

Implemented broader restrictions barring Grok from generating or editing images of real people in revealing clothing for all users

Editorial Assessment

A major social media platform integrated an AI image generation tool that was used at large scale to produce non-consensual sexualized imagery, including child sexual abuse material (AI Incident Database, 2025). Corporate safety controls were implemented in several rounds, but independent testing found them to be ineffective after each update (TechPolicy.Press, 2026). The incident revealed gaps in Canadian privacy law — existing legislation may not cover many types of AI-generated nudified content (BetaKit, 2026) — and prompted coordinated regulatory responses from multiple countries (TechPolicy.Press, 2026; OPC, 2026).

Entities Involved

AI Systems Involved

Grok Imagine

The AI image generation tool used to create millions of non-consensual sexualized images of real people, including minors, at a rate of approximately 6,700 'undressed' images per hour

Related Records

Taxonomy

Domain
Media & Entertainment · Law Enforcement
Harm type
Privacy & Data Exposure · Discrimination & Rights · Psychological Harm · Disproportionate Surveillance
AI pathway
Deployment Context · Use Beyond Intended Scope · Oversight Absent · Monitoring Absent
Lifecycle phase
Deployment · Monitoring · Incident Response

AIID: Incident #1165

Changelog

Version | Date | Change
v1 | Mar 8, 2026 | Initial publication
v2 | Mar 11, 2026 | Verification upgraded from corroborated to confirmed: OPC officially expanded investigation and issued statements to ETHI Committee.
v2 | Mar 11, 2026 | Neutrality and factuality review: corrected attribution of the 6,700 images/hour statistic from CCDH to AI Forensics; corrected the paid-subscriber restriction date from January 3 to January 8; softened Spicy Mode timing (added after initial launch, not simultaneous); removed three policy recommendation attributions (editorial paraphrases of OPC investigation scope and ETHI testimony, not direct OPC recommendations).

Version 2