Canada Investigates X and xAI After Grok Generates Millions of Non-Consensual Sexualized Deepfakes
A major social media platform integrated an AI image generation tool that was used at large scale to produce non-consensual sexualized imagery, including child sexual abuse material. xAI implemented safety controls in several rounds, but independent testing found them ineffective after each update. The incident revealed gaps in Canadian privacy law — existing legislation may not cover many types of AI-generated nudified content — and prompted coordinated regulatory responses from multiple countries.
Narrative
In July 2025, xAI launched Grok Imagine, an AI image generation tool integrated into the X social media platform, with a “Spicy Mode” enabling generation of adult content. The tool was rapidly used at large scale to produce non-consensual sexualized images of women and girls. Users could reply to any photo on X — including photos of real people — with requests to “undress” the subject, and Grok would publicly post a manipulated image as a reply.
The scale of the abuse was significant. A researcher’s 24-hour analysis found Grok generating approximately 6,700 sexually suggestive or “nudified” images per hour — 84 times more output than the top five dedicated deepfake websites combined. The Center for Countering Digital Hate estimated that over 3 million sexualized images were generated over an 11-day window from late December 2025 to early January 2026. An AI Forensics analysis of 20,000 Grok-generated images found that 53% depicted women in minimal attire and approximately 2% appeared to depict minors. The Internet Watch Foundation confirmed that some Grok-generated images met the legal definition of child sexual abuse material.
Canada’s Privacy Commissioner Philippe Dufresne had launched an initial investigation into X Corp in February 2025, following a complaint from NDP MP Brian Masse about X’s use of Canadians’ personal information to train AI models. On January 15, 2026, the Commissioner expanded the investigation to address the deepfake crisis, now targeting both X Corp and xAI. The investigation examines whether valid consent was obtained from individuals for the collection, use, and disclosure of their personal information to create deepfakes via Grok.
xAI responded to the crisis in several stages. On January 3, X restricted Grok to paid subscribers — a measure criticized by lawmakers and victims’ advocates as insufficient. On January 14, xAI blocked Grok from creating sexualized images of real people. On January 16, broader restrictions were implemented. However, independent testing by Malwarebytes in February 2026 and by other researchers found that Grok continued to produce sexualized images after each round of updates.
The incident prompted coordinated regulatory responses across multiple jurisdictions: Ireland’s DPC opened a formal GDPR investigation, the European Commission ordered document retention, France’s prosecutors searched X’s offices, California’s Attorney General issued a cease-and-desist, Indonesia and Malaysia blocked Grok entirely, and 35 US state attorneys general issued a joint demand to xAI. In Canada, the incident highlighted gaps in privacy and criminal law — legal experts noted that Bill C-16 (Protecting Victims Act), while criminalizing non-consensual sexual deepfakes, may not cover many types of AI-generated sexualized content that fall below the threshold of explicit nudity.
Harms
Grok's image generation tool was used at large scale to produce non-consensual sexualized images of women and girls — approximately 6,700 "nudified" images per hour, with over 3 million sexualized images generated in an 11-day window. The tool allowed any user to reply to a photo on X with a request like "put her in a bikini," and Grok would publicly post the manipulated image as a reply.
Approximately 2% of sampled Grok-generated images appeared to depict minors, and the Internet Watch Foundation confirmed some met the legal definition of child sexual abuse material. Dark web users cited Grok as a tool for creating criminal imagery of children.
Canadians' personal information — including photos posted on X — was collected without consent to train Grok's AI models, and Grok was used to generate sexualized deepfakes of Canadian women and girls without their knowledge or consent.
Affected Populations
- women and girls whose photos were non-consensually sexualized
- minors depicted in AI-generated sexual imagery
- Canadian X users whose data was used to train Grok
- Canadian public
Entities Involved
- xAI: Developed Grok and its Imagine image generation tool, including "Spicy Mode" for adult content; implemented safety controls that were repeatedly shown to be ineffective at preventing mass generation of non-consensual sexualized imagery
- X Corp: Operated the X platform where Grok was integrated and where generated sexualized deepfakes were publicly posted as replies to photos; initially restricted Grok to paid subscribers before implementing broader restrictions
- Privacy Commissioner of Canada: Launched initial investigation into X Corp (Feb 2025) over use of Canadians' data to train AI; expanded investigation (Jan 2026) to cover Grok's generation of sexualized deepfakes, now targeting both X Corp and xAI
AI Systems Involved
- Grok Imagine: The AI image generation tool used to create millions of non-consensual sexualized images of real people, including minors, at a rate of approximately 6,700 "nudified" images per hour
Responses & Outcomes
- Privacy Commissioner of Canada (Feb 2025): Launched investigation into X Corp following a complaint from NDP MP Brian Masse, examining X's collection, use, and disclosure of Canadians' personal information to train AI models under PIPEDA
- X Corp (Jan 3, 2026): Restricted Grok image generation to paying subscribers only; widely criticized as insufficient by lawmakers and victims' advocates
- xAI (Jan 14, 2026): Blocked Grok from creating sexualized images of real people; subsequent testing by Malwarebytes and other researchers found the restrictions ineffective
- Privacy Commissioner of Canada (Jan 15, 2026): Expanded investigation to address Grok's generation of sexualized deepfakes, now targeting both X Corp and xAI under PIPEDA; examining whether valid consent was obtained for the collection and use of personal information to create deepfakes
- xAI (Jan 16, 2026): Implemented broader restrictions barring Grok from generating or editing images of real people in revealing clothing for all users
AI System Context
xAI's Grok Imagine, an AI image generation tool integrated into the X social media platform. Launched in July 2025 with a "Spicy Mode" enabling adult content generation, the tool allowed users to generate photorealistic manipulations of real people's photos, including sexualized "undressing" of women and girls. At peak output, Grok was generating 84 times more sexualized imagery per hour than the top five dedicated deepfake websites combined.
Preventive Measures
- Require AI image generation tools to implement robust safeguards against generating sexualized content depicting real people, verified through independent testing before deployment
- Establish Canadian legal requirements for express opt-in consent before individuals' images can be used to train AI models or be processed by AI image generation systems
- Enact legislation explicitly criminalizing the creation and distribution of non-consensual AI-generated intimate images, covering the full spectrum from explicit nudity to sexualized alterations
- Mandate that platforms deploying AI content generation tools conduct pre-deployment safety assessments and maintain ongoing monitoring for abuse at scale
- Develop regulatory mechanisms to enable suspension of AI features that are being used to generate illegal content at scale
Sources
- Privacy Commissioner of Canada expands investigation into social media platform X following reports of AI-generated sexualized deepfake images
- Privacy Commissioner launches investigation into X Corp
- Statement by the Privacy Commissioner of Canada to ETHI Committee on AI study
- Canada's privacy commissioner expands probe into X after backlash over Grok's sexual deepfakes
- Grok sexual deepfake scandal
- Tracking Regulator Responses to the Grok 'Undressing' Controversy
- Canada's privacy watchdog expands probe into X over Grok's sexualized deepfakes
- Grok's non-consensual sexual images highlight gaps in Canada's deepfake laws
- AI Incident Database: Incident 1165
AIID: Incident #1165
Changelog
| Version | Date | Change |
|---|---|---|
| v1 | Mar 8, 2026 | Initial publication |