OpenAI did not alert authorities after flagging the Tumbler Ridge shooter's ChatGPT account
OpenAI flagged a user's violent content months before a mass shooting, but did not alert authorities.
In June 2025, OpenAI's content safety systems flagged and subsequently banned a ChatGPT user account for what the company described as "misuses of our models in furtherance of violent activities," detected through "automated tools and human investigations" (CBC News, 2026). OpenAI determined internally that the account activity did not meet its threshold for reporting to law enforcement — specifically, an "imminent and credible risk of serious physical harm" (OpenAI, 2026). The user subsequently created a second ChatGPT account (CBC News, 2026).
On February 10, 2026, Jesse Van Rootselaar, 18, carried out a mass shooting in Tumbler Ridge, British Columbia. She first killed her mother and half-brother at their home, then travelled to Tumbler Ridge Secondary School, where she killed five children aged 12–13 and one education assistant before fatally shooting herself. More than two dozen others were injured. The following day, OpenAI representatives met with the British Columbia government in a meeting that had been scheduled weeks in advance regarding the company's interest in opening a Canadian office (The Globe and Mail, 2026). OpenAI did not disclose during this meeting that it had previously flagged and banned the shooter's account (The Globe and Mail, 2026). On February 12, OpenAI requested help connecting with the RCMP through its provincial contact; the company stated it also reached out to the FBI to relay information to the RCMP (The Globe and Mail, 2026). The company's prior knowledge of the shooter's account became public through subsequent media reporting.
British Columbia Premier David Eby stated that "from the outside, it looks like OpenAI had the opportunity to prevent this tragedy," while adding he was "trying hard not to rush to judgment" (CBC News, 2026). No formal investigation has assessed whether the information would have prevented the attack. Canada's Minister of Artificial Intelligence and Digital Innovation, Evan Solomon, said he was "deeply disturbed" and raised formal concerns about OpenAI's safety protocols, stating that "Canadians expect online platforms, including OpenAI, to have robust safety protocols and escalation practices in place to protect online safety and ensure law enforcement are warned about potential violence" (CBC News, 2026).
The incident highlighted a gap in Canadian AI governance: no legal framework requires AI platforms to report safety threats to Canadian authorities, and no standards exist for how AI companies should assess and act on potentially dangerous user behavior. OpenAI VP of global policy Ann O'Leary wrote to Minister Solomon, disclosing that a second account belonging to Van Rootselaar was discovered after her identity became public, and stating that under safety policies the company began developing "several months ago," the June 2025 account "would be referred to law enforcement if it were discovered today" (OpenAI, 2026). In early March 2026, OpenAI CEO Sam Altman met with Minister Solomon and Premier Eby, agreeing to establish direct contacts with Canadian law enforcement, include Canadian experts in OpenAI's safety office, and strengthen detection of repeat policy violators (CBC News, 2026). The company stated it now employs mental health and behavioral experts to assess high-risk cases (CBC News, 2026).
On March 9, 2026, the family of a 12-year-old survivor filed a civil lawsuit in BC Supreme Court against OpenAI. The lawsuit alleges that approximately 12 OpenAI employees identified the shooter's account content as indicating an imminent risk of serious harm and recommended notifying police, but that the recommendation was rebuffed by leadership (CBC News, 2026; Courthouse News Service). The lawsuit further alleges that ChatGPT provided "information, guidance and assistance" to the shooter in planning the attack, and that the company had "specific knowledge of the shooter's long-range planning of a mass casualty event" (CBC News, 2026; Courthouse News Service). None of these allegations have been proven in court, and OpenAI has not publicly responded to them as of this writing.
Materialized from
Harms
OpenAI's content safety systems flagged and banned a ChatGPT account for violent content months before the account holder carried out a mass shooting in Tumbler Ridge, B.C., on February 10, 2026. OpenAI did not report the account to law enforcement, revealing a gap in threat reporting between AI platforms and Canadian authorities.
OpenAI determined internally that the flagged account did not meet its reporting threshold, did not alert law enforcement, and did not disclose its prior knowledge to B.C. officials at a meeting the day after the shooting; the disclosure came only through subsequent media reporting.
Evidence
9 reports
- CBC investigation: OpenAI had banned the account of the Tumbler Ridge shooter months before the shooting; documents the initial flagging and ban
- Federal AI minister raises concerns over OpenAI safety protocols after Tumbler Ridge mass shooting (primary source): CBC reporting on the federal government's response
- OpenAI did not mention Tumbler Ridge shooter's posts in meeting with B.C. officials day after mass shooting: province (primary source): the Feb 11 meeting was pre-scheduled for an unrelated purpose; OpenAI requested RCMP contact on Feb 12
- CBC reporting: Tumbler Ridge shooter had a second ChatGPT account despite being banned; documents the account recreation and continued use
- OpenAI letter to Minister Solomon (primary source): O'Leary's statements on updated policies, the discovery of the second account, and the law enforcement referral commitment
- Family of Tumbler Ridge shooting victim suing OpenAI (primary source): lawsuit allegations that roughly 12 employees recommended police notification and that ChatGPT allegedly provided attack-planning assistance
- Details of the lawsuit allegations, including employee warnings and the leadership response
- Premier Eby's direct quotes on preventability
- The Altman-Solomon meeting and commitments to include Canadian experts and establish an RCMP contact
Incident card details
Responses and outcomes
OpenAI representatives met with the British Columbia government in a meeting scheduled weeks in advance regarding the company's interest in opening a Canadian office. OpenAI did not disclose during this meeting that it had previously flagged and banned the shooter's account.
OpenAI requested contact information for the RCMP through its provincial contact and stated it also reached out to the FBI to relay information to the RCMP.
Minister of Artificial Intelligence and Digital Innovation Evan Solomon stated he was "deeply disturbed" by reports that concerning online activity was not reported to law enforcement in a timely manner, and raised formal concerns about OpenAI's safety protocols.
VP of global policy Ann O'Leary wrote to Minister Solomon disclosing the discovery of a second account, committing to establish direct RCMP contacts, strengthen detection of repeat policy violators, and employ mental health and behavioral experts. Stated the June 2025 account would now be referred to law enforcement under enhanced protocols.
OpenAI CEO Sam Altman met with Minister Solomon and Premier Eby, agreeing to include Canadian experts in OpenAI's safety office, establish direct reporting to RCMP, and provide a full report on new systems to identify high-risk offenders.
Family of a 12-year-old survivor filed civil lawsuit in BC Supreme Court alleging OpenAI had specific knowledge of the shooter's attack planning, that ~12 employees recommended notifying police but were rebuffed, and that ChatGPT provided information and assistance for the attack. Claims have not been proven in court.
Policy recommendations (assessed)
- Establish a Canadian legal framework requiring AI companies to report credible safety threats identified on their platforms to law enforcement
  (Minister of Artificial Intelligence and Digital Innovation, Evan Solomon; Feb 12, 2026)
- Require AI platforms operating in Canada to implement effective account-ban enforcement that prevents banned users from creating new accounts
  (OpenAI, which committed to strengthening detection of repeat policy violators; Feb 14, 2026)
- Require AI companies to disclose relevant safety information to investigators following violent incidents
  (Minister of Artificial Intelligence and Digital Innovation, Evan Solomon; Feb 12, 2026)
Editorial assessment (assessed)
No Canadian framework requires AI companies to report safety threats they detect to law enforcement. OpenAI made an internal risk assessment without regulatory guidance, a decision that preceded a mass shooting (CBC News, 2026-02-11) and exposed a gap in Canadian AI governance around mandatory reporting obligations (CBC News, 2026-02-12).
Entities involved
AI systems involved
The shooter used ChatGPT to engage with content involving gun violence scenarios; OpenAI's content safety systems flagged and banned the account, and the shooter subsequently created a second account
Related incident cards
Taxonomy (assessed)
AIID: Incident #1375
Edit history
| Version | Date | Change |
|---|---|---|
| v1 | March 8, 2026 | Initial publication |
| v2 | March 10, 2026 | Verification review: corrected source titles to match actual headlines; added context that the Feb 11 B.C. meeting was pre-scheduled for an unrelated purpose; replaced unsourced 'gun violence' characterization with OpenAI's own language; fixed Premier Eby quote to actual wording; separated response timeline into accurately dated entries; added primary sources (OpenAI letter to Solomon, The Globe and Mail, CBC on Eby statement, CBC on Altman meeting); added Gebala v. OpenAI lawsuit (Mar 10) with allegations clearly marked as unproven; added French translations for policy recommendations. |
| v3 | March 11, 2026 | Verification upgraded from corroborated to confirmed: OpenAI issued an official letter to Minister Solomon acknowledging the situation. |