Pilot phase: CAIM is under construction. Records are provisional, based on public sources, and have not yet been peer-reviewed. Feedback welcome.

Canadian AI Incident Monitor

A public-interest observatory documenting AI incidents and hazards in Canada. Structured evidence for prevention and accountability.

35 incidents · 36 hazards · 85 entities · 20 systems

Recent incidents

Confirmed · Critical

Tumbler Ridge Shooter's ChatGPT Account Had Been Flagged and Banned Months Before Attack

OpenAI's safety systems flagged and banned a ChatGPT account for violent content in June 2025. The account holder carried out a mass shooting in Tumbler Ridge, BC, in February 2026. OpenAI had not reported the flagged account to law enforcement. The incident prompted federal calls for mandatory AI safety reporting requirements.

Public Services · Education
Confirmed · Significant

Edmonton Police First to Deploy Facial Recognition Body Cameras; Privacy Commissioner Says Approval Not Obtained

Edmonton Police launched the world's first facial recognition body camera pilot in December 2025, scanning faces against a watch list of 6,341 people in silent mode, without real-time field alerts. EPS stated that regulation requires submitting a privacy assessment but not obtaining prior approval; Alberta's Privacy Commissioner rejected that interpretation.

Law Enforcement

Hazards

Escalating · Potential: Critical

AI Governance Gap in Canada

Canada's only AI bill, the Artificial Intelligence and Data Act (AIDA), lapsed when Parliament was prorogued in January 2025. No replacement has been tabled. The government has adopted a 'light, tight, right' approach. 85% of Canadians support AI regulation; 92% are unaware of any existing AI laws.

Public Services · Defence & Security · Law Enforcement · Finance & Banking · Healthcare · Education · Employment
Escalating · Potential: Critical

Frontier AI Models Demonstrating Deceptive and Self-Preserving Behavior

Multiple frontier AI models have demonstrated deceptive and self-preserving behavior in controlled evaluations, and Mila co-authored foundational research documenting it. These models are available to millions of Canadians, yet no Canadian law specifically addresses evaluation or disclosure requirements for AI systems exhibiting deceptive behavior.

Defence & Security · Public Services
Escalating · Potential: Critical

AI-Enhanced Cyberattacks Against Canadian Critical Infrastructure

Canada's signals intelligence agency assesses that AI is 'almost certainly' enhancing cyberattacks against Canadian targets. State actors and criminal groups are operationally using AI in cyber operations. Canadian critical infrastructure has already been breached by hacktivists who reached safety-critical industrial control systems.

Critical Infrastructure · Defence & Security · Telecommunications
Escalating · Potential: Critical

AI Psychological Manipulation and Influence

AI chatbots are causing documented psychological harm — reinforcing delusions, providing self-harm methods — with no duty of care or safety monitoring in Canadian law.

Healthcare · Social Services
Active · Potential: Critical

AI-Enabled Biological and Chemical Weapon Development Risk

Frontier AI models are demonstrating capabilities relevant to biological and chemical weapon development, and multiple developers cannot confidently rule out that these capabilities provide meaningful uplift. Canada hosts BSL-4 infrastructure with a proven insider-threat history, chairs the international assessment identifying this risk, and has signed commitments recognizing it, yet it has no dedicated AI-biosecurity assessment or evaluation mandate.

Healthcare · Defence & Security

How it works

CAIM separates incidents (discrete events where AI caused harm) from hazards (persistent conditions creating risk). Each record carries structured harms, entity involvement with role primitives, governance responses, and a verification status:

Reported · Corroborated · Confirmed
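
To make the record structure concrete, here is a minimal sketch of what such a schema could look like, written in TypeScript. Every type and field name below is an illustrative assumption, not CAIM's actual data model; the values are drawn from the statuses, severities, and sectors shown on this page.

```ts
// Hypothetical sketch of a CAIM record schema. All names are illustrative
// assumptions; only the enumerated values mirror what the page displays.

type VerificationStatus = "Reported" | "Corroborated" | "Confirmed";

// Assumed role primitives describing how an entity relates to a record.
type EntityRole = "developer" | "deployer" | "regulator" | "affected";

interface EntityInvolvement {
  entity: string;      // e.g. "OpenAI", "Edmonton Police Service"
  roles: EntityRole[]; // role primitives for this entity's involvement
}

interface BaseRecord {
  title: string;
  summary: string;
  sectors: string[];             // e.g. ["Law Enforcement"]
  harms: string[];               // structured harm descriptors
  entities: EntityInvolvement[];
  governanceResponses: string[]; // e.g. calls for mandatory reporting
  verification: VerificationStatus;
}

// An incident: a discrete event where AI caused harm.
interface IncidentRecord extends BaseRecord {
  kind: "incident";
  occurredOn: string; // ISO date of the event
  severity: "Significant" | "Critical";
}

// A hazard: a persistent condition creating risk.
interface HazardRecord extends BaseRecord {
  kind: "hazard";
  status: "Active" | "Escalating";
  potential: "Significant" | "Critical";
}

type CaimRecord = IncidentRecord | HazardRecord;
```

Modelling the two record types as a discriminated union on `kind` keeps the shared structure (harms, entities, governance responses, verification) in one place while letting incidents carry event-level severity and hazards carry status and potential.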