Canada investigates X and xAI after Grok generated millions of non-consensual sexualized deepfakes
Grok generated 6,700 non-consensual sexualized images per hour, including images of minors, triggering a Canadian investigation.
In July 2025, xAI launched Grok Imagine, an AI image generation tool integrated into the X social media platform; a "Spicy Mode" enabling the generation of adult content was added after the initial launch. The tool was rapidly used at large scale to produce non-consensual sexualized images of women and girls (AI Incident Database, 2025). Users could reply to any photo on X, including photos of real people, with requests to "undress" the subject, and Grok would publicly post a manipulated image as a reply (CBC News, 2026; Globe and Mail, 2026).
The scale of the abuse was significant. According to AI Forensics, a 24-hour analysis found Grok generating approximately 6,700 sexually suggestive or "nudified" images per hour — 84 times more output than the top five dedicated deepfake websites combined (AI Incident Database, 2025; Wikipedia, 2026). The Center for Countering Digital Hate estimated over 3 million sexualized images were generated in an 11-day window in late December 2025 to early January 2026 (Wikipedia, 2026). AI Forensics' analysis of 20,000 Grok-generated images found 53% depicted women in minimal attire and approximately 2% appeared to depict minors (Wikipedia, 2026). The Internet Watch Foundation confirmed that some Grok-generated images met the legal definition of child sexual abuse material (Wikipedia, 2026).
Canada's Privacy Commissioner Philippe Dufresne had launched an initial investigation into X Corp in February 2025, following a complaint from NDP MP Brian Masse about X's use of Canadians' personal information to train AI models (OPC, 2025). On January 15, 2026, the Commissioner expanded the investigation to address the deepfake crisis, now targeting both X Corp and xAI (OPC, 2026; CBC News, 2026; Globe and Mail, 2026). The investigation examines whether valid consent was obtained from individuals for the collection, use, and disclosure of their personal information to create deepfakes via Grok (OPC, 2026).
xAI responded to the crisis in several stages. On January 8, X restricted Grok's image generation to paying subscribers, a measure criticized by lawmakers and victims' advocates as insufficient (Wikipedia, 2026). On January 14, xAI blocked Grok from creating sexualized images of real people (TechPolicy.Press, 2026). On January 16, broader restrictions barred Grok from generating or editing images of real people in revealing clothing for all users (TechPolicy.Press, 2026). However, independent testing by Malwarebytes in February 2026 and by other researchers found that Grok continued to produce sexualized images after each round of updates (Wikipedia, 2026).
The incident prompted coordinated regulatory responses across multiple jurisdictions: Ireland's DPC opened a formal GDPR investigation, the European Commission ordered document retention, France's prosecutors searched X's offices, California's Attorney General issued a cease-and-desist, Indonesia and Malaysia blocked Grok entirely, and 35 US state attorneys general issued a joint demand to xAI (TechPolicy.Press, 2026; Wikipedia, 2026). In Canada, the incident highlighted gaps in privacy and criminal law — legal experts noted that federal Criminal Code provisions criminalizing non-consensual intimate images may not cover many types of AI-generated sexualized content that fall below the threshold of explicit nudity (BetaKit, 2026; OPC, 2026).
Materialized from
Harms
Grok's image generation tool was used at large scale to produce non-consensual sexualized images of women and girls: approximately 6,700 "undressed" images per hour, with over 3 million sexualized images generated in 11 days. The tool allowed any user to reply to a photo on X with requests such as "put her in a bikini," and Grok would post a manipulated image in reply.
Approximately 2% of sampled Grok-generated images appeared to depict minors, and the Internet Watch Foundation confirmed that some met the legal definition of child sexual abuse material.
The personal information of Canadians, including photos posted on X, was collected without consent to train Grok's AI models, and Grok was used to generate sexualized deepfakes of Canadian women and girls without their knowledge or consent.
Evidence
9 reports
- OPC's original complaint investigation into X social media platform; precursor to expanded Grok investigation
- Privacy Commissioner of Canada expands investigation into social media platform X following reports of AI-generated sexualized deepfake images (primary source)
OPC expanded investigation into X Corp to address AI-generated sexualized deepfakes; Privacy Commissioner's formal action in January 2026
- Canada's privacy commissioner expands probe into X after backlash over Grok's sexual deepfakes (primary source)
CBC reporting: privacy commissioner expands probe into X after backlash over Grok's sexualized deepfake generation capability
- AIID cross-reference: Incident 1165 documenting Grok deepfake generation at scale
- Globe and Mail reporting: privacy watchdog expands probe into X over Grok's sexualized imagery generation; Canadian regulatory response
- Canadian legal gaps in coverage of AI-generated sexualized content
- TechPolicy.Press tracker of global regulator responses to Grok 'undressing' controversy; comparative regulatory analysis
- Privacy Commissioner's statement to ETHI Committee on Grok investigation; testimony on AI-generated non-consensual imagery
- Wikipedia documentation of Grok sexual deepfake scandal; comprehensive timeline and response tracking
Card details
Responses and outcomes
Launched investigation into X Corp following complaint from NDP MP Brian Masse, examining X's collection, use, and disclosure of Canadians' personal information to train AI models under PIPEDA
Restricted Grok image generation to paying subscribers only; criticized by multiple lawmakers and advocacy groups as insufficient
Blocked Grok from creating sexualized images of real people; subsequent testing by Malwarebytes and other researchers found the restrictions were ineffective
Expanded investigation to address Grok's generation of sexualized deepfakes, now targeting both X Corp and xAI under PIPEDA; investigating whether valid consent was obtained for collection and use of personal information to create deepfakes
Implemented broader restrictions barring Grok from generating or editing images of real people in revealing clothing for all users
Editorial assessment (reviewed)
A major social media platform integrated an AI image generation tool that was used at large scale to produce non-consensual sexualized images, including child sexual abuse material (AI Incident Database, 2025; Wikipedia, 2026). Safety controls were rolled out in several stages, but independent testing found them ineffective after each update (Wikipedia, 2026). The incident exposed gaps in Canadian privacy law, as existing legislation may not cover many types of AI-generated sexualized content (BetaKit, 2026), and prompted coordinated regulatory responses from multiple countries (TechPolicy.Press, 2026; OPC, 2026).
Entities involved
AI systems involved
The AI image generation tool used to create millions of non-consensual sexualized images of real people, including minors, at a rate of approximately 6,700 'undressed' images per hour
Related cards
Taxonomy (reviewed)
AIID: Incident #1165
Change history
| Version | Date | Change |
|---|---|---|
| v1 | March 8, 2026 | Initial publication |
| v2 | March 11, 2026 | Verification upgraded from corroborated to confirmed: OPC officially expanded investigation and issued statements to ETHI Committee. |
| v2 | March 11, 2026 | Neutrality and factuality review: corrected attribution of the 6,700 images/hour statistic from CCDH to AI Forensics; corrected paid-subscriber restriction date from January 3 to January 8; softened Spicy Mode timing (added after initial launch, not simultaneously); removed three policy recommendation attributions (editorial paraphrases of OPC investigation scope and ETHI testimony, not direct OPC recommendations). |