Corroborated · Severity: Critical · Version 1

No Canadian framework requires AI companies to report flagged safety threats to law enforcement. OpenAI assessed internally that a concerning account did not meet its threshold for reporting — a decision that preceded a mass shooting and exposed a gap in Canadian AI governance around mandatory reporting obligations.

Occurred: June 2025 (month precision) · Reported: February 11, 2026

Narrative

In approximately June 2025, OpenAI’s automated content safety systems flagged and subsequently banned a ChatGPT user account for content involving scenarios of gun violence. OpenAI determined internally that the account activity did not meet its threshold for reporting to law enforcement as an “imminent and credible risk.” The user subsequently created a second ChatGPT account.

On February 10, 2026, Jesse Van Rootselaar, 18, carried out a mass shooting in Tumbler Ridge, British Columbia. She first killed her mother and half-brother at their home, then travelled to Tumbler Ridge Secondary School, where she killed five children aged 12–13 and one education assistant before fatally shooting herself. More than two dozen others were injured. OpenAI met with the British Columbia government the day after the shooting but did not disclose that it had previously flagged and banned the shooter’s account. This information became public only through subsequent media reporting.

British Columbia’s Premier stated that the shooting “could have potentially been prevented” had OpenAI shared the information it had. Canada’s federal AI minister raised formal concerns about the company’s conduct. The incident highlighted a gap in Canadian AI governance: no legal framework requires AI platforms to report safety threats to Canadian authorities, and no standards exist for how AI companies should assess and act on potentially dangerous user behavior identified through their systems.

The case raises difficult questions about the appropriate role of AI companies in public safety. OpenAI made an internal risk assessment with no regulatory guidance on when or how to report threats to Canadian authorities. The absence of a Canadian reporting framework meant a private company was making consequential safety decisions without external oversight or obligation to cooperate with authorities. OpenAI CEO Sam Altman subsequently met with Canadian officials, agreed to establish direct contacts with Canadian law enforcement, and committed to strengthening detection of repeat policy violators. OpenAI VP of global policy Ann O’Leary disclosed that a second account belonging to Van Rootselaar was discovered after her identity became public, and stated that under updated safety policies developed “several months ago,” the June 2025 account “would be referred to law enforcement if it were discovered today.” The company now employs mental health and behavioral experts to assess high-risk cases.

Harms

OpenAI flagged and banned a ChatGPT account for content involving gun violence scenarios months before the user carried out a mass shooting at Tumbler Ridge Secondary School in BC on February 10, 2026, killing eight people — the shooter's mother and half-brother at home, five children aged 12–13 and one education assistant at the school — before fatally shooting herself.

Critical · Group

OpenAI determined internally that the flagged account did not meet its reporting threshold, failed to alert law enforcement, and did not disclose its prior knowledge to BC officials until media reporting forced disclosure.

Significant · Population

Affected populations

  • Victims of the Tumbler Ridge school shooting
  • Families of the victims
  • The Tumbler Ridge community
  • The Canadian public

Entities involved

OpenAI
Developer · Deployer

Developed and operated ChatGPT; its automated safety systems flagged and banned the shooter's account months before the attack, but OpenAI decided the activity did not meet its threshold for reporting to law enforcement and did not disclose its prior knowledge to BC officials until forced by media reporting

AI systems involved

ChatGPT

The shooter used ChatGPT to engage with content involving gun violence scenarios; OpenAI's content safety systems flagged and banned the account, and the shooter subsequently created a second account

Responses and outcomes

OpenAI

Met with the British Columbia government the day after the shooting but did not disclose prior knowledge of the shooter's account; subsequently, CEO Sam Altman met with Canadian officials, agreed to establish direct contacts with Canadian law enforcement, and committed to strengthening detection of repeat policy violators. VP of global policy Ann O'Leary stated that updated safety policies would now trigger a law enforcement referral for the June 2025 account.

Government of Canada

AI and Digital Innovation Minister Evan Solomon stated he was "deeply disturbed" and that Canadians expect platforms to have robust safety protocols to protect public safety and to warn law enforcement about potential violence.

AI system context

OpenAI's ChatGPT platform and its internal automated content safety screening systems, which flagged and banned a user account for content involving scenarios of gun violence.

Preventive measures

  • Establish a Canadian legal framework requiring AI companies to report credible safety threats identified through their platforms to law enforcement, analogous to mandatory reporting obligations in other sectors
  • Require AI platforms operating in Canada to implement effective account ban enforcement that prevents banned users from creating new accounts
  • Create clear guidelines for AI companies on threat assessment thresholds and reporting obligations, developed in consultation with law enforcement and public safety experts
  • Mandate that AI companies disclose relevant safety information to investigators in the aftermath of violent incidents, rather than withholding it until public disclosure

Taxonomy

Domain
Public services · Education
Harm type
Safety failure · Operational failure
AI involvement
Oversight gap · Supervision failure
Lifecycle phase
Monitoring · Incident response

Sources

  1. OpenAI banned Tumbler Ridge shooter's ChatGPT account months before attack — Media, CBC News (Feb. 11, 2026)
  2. Federal AI minister raises concerns over OpenAI and Tumbler Ridge shooting — Media, CBC News (Feb. 12, 2026)
  3. Tumbler Ridge shooter created second ChatGPT account after ban — Media, CBC News (Feb. 14, 2026)

AIID: Incident #1375

Change history

Version | Date | Modification
v1 | March 8, 2026 | Initial publication