Pilot phase: CAIM is under construction. Records are provisional, based on public sources, and have not yet been peer-reviewed. Feedback welcome.
Confirmed · Contested · Critical

OpenAI's safety systems flagged and banned a ChatGPT account for violent content in June 2025. The account holder carried out a mass shooting in Tumbler Ridge, BC in February 2026. OpenAI had not reported the flagged account to law enforcement. The incident prompted federal calls for mandatory AI safety reporting requirements.

Occurred: February 10, 2026 · Reported: February 11, 2026

In June 2025, OpenAI's content safety systems flagged and subsequently banned a ChatGPT user account for what the company described as "misuses of our models in furtherance of violent activities," detected through "automated tools and human investigations" (CBC News, 2026). OpenAI determined internally that the account activity did not meet its threshold for reporting to law enforcement — specifically, an "imminent and credible risk of serious physical harm" (OpenAI, 2026). The user subsequently created a second ChatGPT account (CBC News, 2026).

On February 10, 2026, Jesse Van Rootselaar, 18, carried out a mass shooting in Tumbler Ridge, British Columbia. She first killed her mother and half-brother at their home, then travelled to Tumbler Ridge Secondary School, where she killed five children aged 12–13 and one education assistant before fatally shooting herself. More than two dozen others were injured. The following day, OpenAI representatives met with the British Columbia government in a meeting that had been scheduled weeks in advance regarding the company's interest in opening a Canadian office (The Globe and Mail, 2026). OpenAI did not disclose during this meeting that it had previously flagged and banned the shooter's account (The Globe and Mail, 2026). On February 12, OpenAI requested help connecting with the RCMP through its provincial contact; the company stated it also reached out to the FBI to relay information to the RCMP (The Globe and Mail, 2026). The company's prior knowledge of the shooter's account became public through subsequent media reporting.

British Columbia Premier David Eby stated that "from the outside, it looks like OpenAI had the opportunity to prevent this tragedy," while adding he was "trying hard not to rush to judgment" (CBC News, 2026). No formal investigation has assessed whether the information would have prevented the attack. Canada's Minister of Artificial Intelligence and Digital Innovation, Evan Solomon, said he was "deeply disturbed" and raised formal concerns about OpenAI's safety protocols, stating that "Canadians expect online platforms, including OpenAI, to have robust safety protocols and escalation practices in place to protect online safety and ensure law enforcement are warned about potential violence" (CBC News, 2026).

The incident highlighted a gap in Canadian AI governance: no legal framework requires AI platforms to report safety threats to Canadian authorities, and no standards exist for how AI companies should assess and act on potentially dangerous user behavior. OpenAI VP of global policy Ann O'Leary wrote to Minister Solomon, disclosing that a second account belonging to Van Rootselaar was discovered after her identity became public, and stating that under safety policies the company began developing "several months ago," the June 2025 account "would be referred to law enforcement if it were discovered today" (OpenAI, 2026). In early March 2026, OpenAI CEO Sam Altman met with Minister Solomon and Premier Eby, agreeing to establish direct contacts with Canadian law enforcement, include Canadian experts in OpenAI's safety office, and strengthen detection of repeat policy violators (CBC News, 2026). The company stated it now employs mental health and behavioral experts to assess high-risk cases (CBC News, 2026).

On March 9, 2026, the family of a 12-year-old survivor filed a civil lawsuit in BC Supreme Court against OpenAI. The lawsuit alleges that approximately 12 OpenAI employees identified the shooter's account content as indicating an imminent risk of serious harm and recommended notifying police, but that the recommendation was rebuffed by leadership (CBC News, 2026; Courthouse News Service). The lawsuit further alleges that ChatGPT provided "information, guidance and assistance" to the shooter in planning the attack, and that the company had "specific knowledge of the shooter's long-range planning of a mass casualty event" (CBC News, 2026; Courthouse News Service). None of these allegations have been proven in court, and OpenAI has not publicly responded to them as of this writing.

Materialized From

Harms

OpenAI's content safety systems flagged and banned a ChatGPT account for violent content months before the account holder carried out a mass shooting in Tumbler Ridge, BC on February 10, 2026. OpenAI did not report the flagged account to law enforcement, exposing a gap in threat reporting between AI platforms and Canadian authorities.

Safety Incident · Critical · Group

OpenAI determined internally that the flagged account did not meet its reporting threshold, did not alert law enforcement, and did not disclose its prior knowledge to BC officials during a meeting the day after the shooting — disclosure came only through subsequent media reporting.

Safety Incident · Service Disruption · Significant · Population

Evidence

9 reports

  1. Media — CBC News (Feb 11, 2026)

    CBC investigation: OpenAI had banned account of Tumbler Ridge shooter months before shooting; documents the initial flagging and ban

  2. Media — CBC News (Feb 12, 2026)

    CBC reporting: federal AI minister raises concerns over OpenAI safety protocols after Tumbler Ridge shooting; documents government response

  3. Media — The Globe and Mail (Feb 13, 2026)

    Globe and Mail reporting: the Feb 11 BC meeting was pre-scheduled for an unrelated purpose; OpenAI requested RCMP contact on Feb 12

  4. Media — CBC News (Feb 14, 2026)

    CBC reporting: Tumbler Ridge shooter had second ChatGPT account despite being banned; documents the account recreation and continued use

  5. Official — OpenAI (Feb 14, 2026)

    O'Leary statements on updated policies, second account discovery, and law enforcement referral commitment

  6. Media — CBC News (Mar 10, 2026)

    Lawsuit allegations: ~12 employees recommended police notification, ChatGPT allegedly provided attack planning assistance

  7. Media — Courthouse News Service

    Details of lawsuit allegations including employee warnings and leadership response

  8. Media — CBC News (Feb 13, 2026)

    Premier Eby direct quotes on preventability

  9. Media — CBC News (Mar 5, 2026)

    Altman-Solomon meeting, commitments to include Canadian experts and establish RCMP contact

Record details

Responses & Outcomes

OpenAI · institutional action · Completed

OpenAI representatives met with the British Columbia government in a meeting scheduled weeks in advance regarding the company's interest in opening a Canadian office. OpenAI did not disclose during this meeting that it had previously flagged and banned the shooter's account.

OpenAI · institutional action · Completed

OpenAI requested contact information for the RCMP through its provincial contact and stated it also reached out to the FBI to relay information to the RCMP.

Government of Canada · institutional action · Active

Minister of Artificial Intelligence and Digital Innovation Evan Solomon stated he was "deeply disturbed" by reports that concerning online activity was not reported to law enforcement in a timely manner, and raised formal concerns about OpenAI's safety protocols.

OpenAI · institutional action · Active

VP of global policy Ann O'Leary wrote to Minister Solomon disclosing the discovery of a second account, committing to establish direct RCMP contacts, strengthen detection of repeat policy violators, and employ mental health and behavioral experts. Stated the June 2025 account would now be referred to law enforcement under enhanced protocols.

OpenAI · institutional action · Active

OpenAI CEO Sam Altman met with Minister Solomon and Premier Eby, agreeing to include Canadian experts in OpenAI's safety office, establish direct reporting to RCMP, and provide a full report on new systems to identify high-risk offenders.

OpenAI · court decision · Active

The family of a 12-year-old survivor filed a civil lawsuit in BC Supreme Court alleging that OpenAI had specific knowledge of the shooter's attack planning, that approximately 12 employees recommended notifying police but were rebuffed, and that ChatGPT provided information and assistance for the attack. The claims have not been proven in court.

Policy Recommendations · assessed

Establish a Canadian legal framework requiring AI companies to report credible safety threats identified through their platforms to law enforcement

Minister of Artificial Intelligence and Digital Innovation (Evan Solomon) (Feb 12, 2026)

Require AI platforms operating in Canada to implement effective account ban enforcement that prevents banned users from creating new accounts

OpenAI (committed to strengthening detection of repeat policy violators) (Feb 14, 2026)

Mandate that AI companies disclose relevant safety information to investigators in the aftermath of violent incidents

Minister of Artificial Intelligence and Digital Innovation (Evan Solomon) (Feb 12, 2026)

Editorial Assessment · assessed

As of 2026, Canadian law does not require AI companies to report flagged safety threats to law enforcement. OpenAI internally flagged and banned a ChatGPT account for violent content but assessed it did not meet its threshold for external reporting (CBC News, 2026-02-11). The account holder later carried out a mass shooting in Tumbler Ridge, BC. The federal AI minister publicly raised concerns about the absence of a mandatory reporting framework (CBC News, 2026-02-12).

Entities Involved

OpenAI
developer · deployer

AI Systems Involved

ChatGPT

The account holder used ChatGPT in ways OpenAI described as "misuses of our models in furtherance of violent activities"; OpenAI's content safety systems flagged and banned the account, and the shooter subsequently created a second account

Related Records

Taxonomy · assessed

Domain: Public Services · Education
Harm type: Safety Incident · Service Disruption
AI pathway: Monitoring Absent · Oversight Absent
Lifecycle phase: Monitoring · Incident Response

AIID: Incident #1375

Changelog

Version | Date | Change
v1 | Mar 8, 2026 | Initial publication
v2 | Mar 10, 2026 | Verification review: corrected source titles to match actual headlines; added context that Feb 11 BC meeting was pre-scheduled for unrelated purpose; replaced unsourced 'gun violence' characterization with OpenAI's own language; fixed Premier Eby quote to actual wording; separated response timeline into accurately dated entries; added primary sources (OpenAI letter to Solomon, Globe and Mail, CBC on Eby statement, CBC on Altman meeting); added Gebala v. OpenAI lawsuit (Mar 10) with allegations clearly marked as unproven; added French translations for policy recommendations.
v3 | Mar 11, 2026 | Verification upgraded from corroborated to confirmed: OpenAI issued official letter to Minister Solomon acknowledging the situation.

Version 2