Corroborated · Severity: Critical · Version 1

A major social media platform integrated an AI image generation tool that was used at large scale to produce non-consensual sexualized imagery, including child sexual abuse material. Corporate safety controls were implemented in several rounds, but independent testing found them to be ineffective after each update. The incident revealed gaps in Canadian privacy law — existing legislation may not cover many types of AI-generated nudified content — and prompted coordinated regulatory responses from multiple countries.

Occurred: July 28, 2025 to January 16, 2026 · Reported: August 1, 2025

Narrative

In July 2025, xAI launched Grok Imagine, an AI image generation tool integrated into the X social media platform, with a “Spicy Mode” enabling generation of adult content. The tool was rapidly used at large scale to produce non-consensual sexualized images of women and girls. Users could reply to any photo on X — including photos of real people — with requests to “undress” the subject, and Grok would publicly post a manipulated image as a reply.

The scale of the abuse was significant. A researcher’s 24-hour analysis found Grok generating approximately 6,700 sexually suggestive or “nudified” images per hour — 84 times more output than the top five dedicated deepfake websites combined. The Center for Countering Digital Hate estimated over 3 million sexualized images were generated in an 11-day window in late December 2025 to early January 2026. An AI Forensics analysis of 20,000 Grok-generated images found 53% depicted women in minimal attire and approximately 2% appeared to depict minors. The Internet Watch Foundation confirmed that some Grok-generated images met the legal definition of child sexual abuse material.

Canada’s Privacy Commissioner Philippe Dufresne had launched an initial investigation into X Corp in February 2025, following a complaint from NDP MP Brian Masse about X’s use of Canadians’ personal information to train AI models. On January 15, 2026, the Commissioner expanded the investigation to address the deepfake crisis, now targeting both X Corp and xAI. The investigation examines whether valid consent was obtained from individuals for the collection, use, and disclosure of their personal information to create deepfakes via Grok.

xAI responded to the crisis in several stages. On January 3, X restricted Grok to paid subscribers — a measure criticized by lawmakers and victims’ advocates as insufficient. On January 14, xAI blocked Grok from creating sexualized images of real people. On January 16, broader restrictions were implemented. However, independent testing by Malwarebytes in February 2026 and by other researchers found that Grok continued to produce sexualized images after each round of updates.

The incident prompted coordinated regulatory responses across multiple jurisdictions: Ireland’s DPC opened a formal GDPR investigation, the European Commission ordered document retention, France’s prosecutors searched X’s offices, California’s Attorney General issued a cease-and-desist, Indonesia and Malaysia blocked Grok entirely, and 35 US state attorneys general issued a joint demand to xAI. In Canada, the incident highlighted gaps in privacy and criminal law — legal experts noted that Bill C-16 (Protecting Victims Act), while criminalizing non-consensual sexual deepfakes, may not cover many types of AI-generated sexualized content that fall below the threshold of explicit nudity.

Harms

Grok's image generation tool was used at large scale to produce non-consensual sexualized images of women and girls — approximately 6,700 "undressed" images per hour, with over 3 million sexualized images generated in an 11-day window. The tool allowed any user to reply to a photo on X with requests like "put her in a bikini," and Grok would publicly post a manipulated image.

Severity: Severe · Scope: Population

Approximately 2% of sampled Grok-generated images appeared to depict minors, and the Internet Watch Foundation confirmed some met the legal definition of child sexual abuse material. Dark web users cited Grok as a tool for creating criminal imagery of children.

Severity: Critical · Scope: Population

Canadians' personal information — including photos posted on X — was collected without consent to train Grok's AI models, and Grok was used to generate sexualized deepfakes of Canadian women and girls without their knowledge or consent.

Severity: Significant · Scope: Population

Affected populations

  • Women and girls whose photos were non-consensually sexualized
  • Minors depicted in AI-generated sexual imagery
  • Canadian X users whose data was used to train Grok
  • The Canadian public

Entities involved

xAI
developer

Developed Grok and its Imagine image generation tool, including "Spicy Mode" for adult content; implemented safety controls that were repeatedly shown to be ineffective at preventing mass generation of non-consensual sexualized imagery

X Corp
deployer

Operated the X platform where Grok was integrated and where generated sexualized deepfakes were publicly posted as replies to photos; initially restricted Grok to paid subscribers before implementing broader restrictions

Office of the Privacy Commissioner of Canada
regulator

Launched initial investigation into X Corp (Feb 2025) over use of Canadians' data to train AI; expanded investigation (Jan 2026) to cover Grok's generation of sexualized deepfakes, now targeting both X Corp and xAI

AI systems involved

Grok Imagine

The AI image generation tool used to create millions of non-consensual sexualized images of real people, including minors, at a rate of approximately 6,700 "undressed" images per hour

Responses and outcomes

Office of the Privacy Commissioner of Canada

Launched investigation into X Corp following complaint from NDP MP Brian Masse, examining X's collection, use, and disclosure of Canadians' personal information to train AI models under PIPEDA

X Corp

Restricted Grok image generation to paying subscribers only; widely criticized as insufficient by lawmakers and victims

xAI

Blocked Grok from creating sexualized images of real people; subsequent testing by Malwarebytes and other researchers found the restrictions were ineffective

Office of the Privacy Commissioner of Canada

Expanded investigation to address Grok's generation of sexualized deepfakes, now targeting both X Corp and xAI under PIPEDA; investigating whether valid consent was obtained for collection and use of personal information to create deepfakes

X Corp

Implemented broader restrictions barring Grok from generating or editing images of real people in revealing clothing for all users

AI system context

xAI's Grok Imagine, an AI image generation tool integrated into the X social media platform. Launched in July 2025 with a "Spicy Mode" enabling adult content generation, the tool allowed users to generate photorealistic manipulations of real people's photos, including sexualized "undressing" of women and girls. At peak output, Grok was generating 84 times more sexualized imagery per hour than the top five dedicated deepfake websites combined.

Preventive measures

  • Require AI image generation tools to implement robust safeguards against generating sexualized content depicting real people, verified through independent testing before deployment
  • Establish Canadian legal requirements for express opt-in consent before individuals' images can be used to train AI models or be processed by AI image generation systems
  • Enact legislation explicitly criminalizing the creation and distribution of non-consensual AI-generated intimate images, covering the full spectrum from explicit nudity to sexualized alterations
  • Mandate that platforms deploying AI content generation tools conduct pre-deployment safety assessments and maintain ongoing monitoring for abuse at scale
  • Develop regulatory mechanisms to enable suspension of AI features that are being used to generate illegal content at scale

Materialized from

Related entries

Taxonomy

Domain
Media · Law enforcement
Harm type
Privacy and data · Discrimination and rights · Psychological harm · Surveillance overreach
AI involvement
Deployment failure · Misuse · Oversight failure · Monitoring gap
Lifecycle phase
Deployment · Monitoring · Incident response

Sources

  1. Privacy Commissioner of Canada expands investigation into social media platform X following reports of AI-generated sexualized deepfake images. Official, Office of the Privacy Commissioner of Canada (Jan. 15, 2026)
  2. Privacy Commissioner launches investigation into X Corp. Official, Office of the Privacy Commissioner of Canada (Feb. 27, 2025)
  3. Statement by the Privacy Commissioner of Canada to ETHI Committee on AI study. Official, Office of the Privacy Commissioner of Canada (Feb. 2, 2026)
  4. Canada's privacy commissioner expands probe into X after backlash over Grok's sexual deepfakes. Media, CBC News (Jan. 15, 2026)
  5. Grok sexual deepfake scandal. Other, Wikipedia
  6. Tracking Regulator Responses to the Grok 'Undressing' Controversy. Other, TechPolicy.Press (Jan. 16, 2026)
  7. Canada's privacy watchdog expands probe into X over Grok's sexualized deepfakes. Media, Globe and Mail (Jan. 15, 2026)
  8. Grok's non-consensual sexual images highlight gaps in Canada's deepfake laws. Media, BetaKit (Jan. 15, 2026)
  9. AI Incident Database: Incident 1165. Other, AI Incident Database
  9. AI Incident Database: Incident 1165 Autre — AI Incident Database

AIID: Incident #1165

Revision history

Version | Date | Change
v1 | March 8, 2026 | Initial publication