Canada Investigates X and xAI After Grok Generates Millions of Non-Consensual Sexualized Deepfakes
Grok generated 6,700 non-consensual sexualized images per hour, including images of minors, prompting a Canadian probe.
In July 2025, xAI launched Grok Imagine, an AI image generation tool integrated into the X social media platform, which later added a "Spicy Mode" enabling generation of adult content. The tool was rapidly used at large scale to produce non-consensual sexualized images of women and girls (AI Incident Database, 2025). Users could reply to any photo on X — including photos of real people — with requests to "undress" the subject, and Grok would publicly post a manipulated image as a reply (CBC News, 2026; Globe and Mail, 2026).
The scale of the abuse was significant. According to AI Forensics, a 24-hour analysis found Grok generating approximately 6,700 sexually suggestive or "nudified" images per hour — 84 times more output than the top five dedicated deepfake websites combined (AI Incident Database, 2025; Wikipedia, 2026). The Center for Countering Digital Hate estimated over 3 million sexualized images were generated in an 11-day window in late December 2025 to early January 2026 (Wikipedia, 2026). AI Forensics' analysis of 20,000 Grok-generated images found 53% depicted women in minimal attire and approximately 2% appeared to depict minors (Wikipedia, 2026). The Internet Watch Foundation confirmed that some Grok-generated images met the legal definition of child sexual abuse material (Wikipedia, 2026).
Canada's Privacy Commissioner Philippe Dufresne had launched an initial investigation into X Corp in February 2025, following a complaint from NDP MP Brian Masse about X's use of Canadians' personal information to train AI models (OPC, 2025). On January 15, 2026, the Commissioner expanded the investigation to address the deepfake crisis, now targeting both X Corp and xAI (OPC, 2026; CBC News, 2026; Globe and Mail, 2026). The investigation examines whether valid consent was obtained from individuals for the collection, use, and disclosure of their personal information to create deepfakes via Grok (OPC, 2026).
xAI responded to the crisis in several stages. On January 8, X restricted Grok's image generation to paid subscribers — a measure criticized by lawmakers and victims' advocates as insufficient (Wikipedia, 2026). On January 14, xAI blocked Grok from creating sexualized images of real people (TechPolicy.Press, 2026). On January 16, broader restrictions were implemented (TechPolicy.Press, 2026). However, independent testing by Malwarebytes in February 2026 and by other researchers found that Grok continued to produce sexualized images after each round of updates (Wikipedia, 2026).
The incident prompted coordinated regulatory responses across multiple jurisdictions: Ireland's DPC opened a formal GDPR investigation, the European Commission ordered document retention, France's prosecutors searched X's offices, California's Attorney General issued a cease-and-desist, Indonesia and Malaysia blocked Grok entirely, and 35 US state attorneys general issued a joint demand to xAI (TechPolicy.Press, 2026; Wikipedia, 2026). In Canada, the incident highlighted gaps in privacy and criminal law — legal experts noted that federal Criminal Code provisions criminalizing non-consensual intimate images may not cover many types of AI-generated sexualized content that fall below the threshold of explicit nudity (BetaKit, 2026; OPC, 2026).
Materialized From
Harms
Grok's image generation tool was used at large scale to produce non-consensual sexualized images of women and girls — approximately 6,700 'undressed' images per hour, with over 3 million sexualized images generated in an 11-day window. The tool allowed any user to reply to a photo on X with requests like 'put her in a bikini' and Grok would publicly post a manipulated image.
Approximately 2% of sampled Grok-generated images appeared to depict minors, and the Internet Watch Foundation confirmed some met the legal definition of child sexual abuse material.
Canadians' personal information — including photos posted on X — was collected without consent to train Grok's AI models, and Grok was used to generate sexualized deepfakes of Canadian women and girls without their knowledge or consent.
Evidence
9 reports
- OPC's original complaint investigation into X social media platform; precursor to expanded Grok investigation
- OPC expanded investigation into X Corp to address AI-generated sexualized deepfakes; Privacy Commissioner's formal action in January 2026
- "Canada's privacy commissioner expands probe into X after backlash over Grok's sexual deepfakes" (primary source): CBC reporting on the privacy commissioner expanding the probe into X after backlash over Grok's sexualized deepfake generation capability
- AIID cross-reference: Incident 1165 documenting Grok deepfake generation at scale
- Globe and Mail reporting: privacy watchdog expands probe into X over Grok's sexualized imagery generation; Canadian regulatory response
- Canadian legal gaps in coverage of AI-generated sexualized content
- TechPolicy.Press tracker of global regulator responses to Grok 'undressing' controversy; comparative regulatory analysis
- Privacy Commissioner's statement to ETHI Committee on Grok investigation; testimony on AI-generated non-consensual imagery
- Wikipedia documentation of Grok sexual deepfake scandal; comprehensive timeline and response tracking
Responses & Outcomes
- February 2025: OPC launched an investigation into X Corp following a complaint from NDP MP Brian Masse, examining X's collection, use, and disclosure of Canadians' personal information to train AI models under PIPEDA
- January 8, 2026: X restricted Grok image generation to paying subscribers only; criticized by multiple lawmakers and advocacy groups as insufficient
- January 14, 2026: xAI blocked Grok from creating sexualized images of real people; subsequent testing by Malwarebytes and other researchers found the restrictions were ineffective
- January 15, 2026: OPC expanded its investigation to address Grok's generation of sexualized deepfakes, now targeting both X Corp and xAI under PIPEDA; investigating whether valid consent was obtained for the collection and use of personal information to create deepfakes
- January 16, 2026: xAI implemented broader restrictions barring Grok from generating or editing images of real people in revealing clothing for all users
Editorial Assessment
A major social media platform integrated an AI image generation tool that was used at large scale to produce non-consensual sexualized imagery, including child sexual abuse material (AI Incident Database, 2025). Corporate safety controls were implemented in several rounds, but independent testing found them to be ineffective after each update (TechPolicy.Press, 2026). The incident revealed gaps in Canadian privacy law — existing legislation may not cover many types of AI-generated nudified content (BetaKit, 2026) — and prompted coordinated regulatory responses from multiple countries (TechPolicy.Press, 2026; OPC, 2026).
Entities Involved
AI Systems Involved
Grok Imagine: the AI image generation tool used to create millions of non-consensual sexualized images of real people, including minors, at a rate of approximately 6,700 'undressed' images per hour
Related Records
Taxonomy
AIID: Incident #1165
Changelog
| Version | Date | Change |
|---|---|---|
| v1 | Mar 8, 2026 | Initial publication |
| v2 | Mar 11, 2026 | Verification upgraded from corroborated to confirmed: OPC officially expanded investigation and issued statements to ETHI Committee. |
| v2 | Mar 11, 2026 | Neutrality and factuality review: corrected attribution of 6,700 images/hour statistic from CCDH to AI Forensics; corrected paid-subscriber restriction date from January 3 to January 8; softened Spicy Mode timing (added after initial launch, not simultaneous); removed three policy recommendation attributions (editorial paraphrases of OPC investigation scope and ETHI testimony, not direct OPC recommendations). |