This site is a work-in-progress prototype.

Canadian AI Incident Monitor

A public-interest observatory documenting AI incidents and hazards in Canada. Structured evidence for prevention and accountability.

25 incidents · 14 hazards · 38 entities · 12 systems

Hazards

Escalating · Potential: Critical

AI Psychological Manipulation and Influence

AI chatbots are causing documented psychological harm, including reinforcing delusions and providing self-harm methods, while Canadian law imposes no duty of care or safety-monitoring obligation on their operators.

Healthcare · Social Services
Active · Potential: Critical

AI Safety Reporting and Disclosure Failures

No Canadian law requires AI companies to report safety-relevant findings to authorities, a gap linked to a mass shooting in which OpenAI detected but did not report a threat.

Public Services · Defence & Security
Escalating · Potential: Severe

AI-Generated Child Sexual Abuse Material in Canada

AI-generated child sexual abuse material is overwhelming detection systems and creating legal ambiguity, with direct implications for Canadian law enforcement and child protection.

Justice · Public Services

How it works

CAIM separates incidents (discrete events where AI caused harm) from hazards (persistent conditions creating risk). Each record carries structured harms, entity involvement with role primitives, governance responses, and a verification status:

Reported → Corroborated → Confirmed
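The data model above can be sketched in code. This is a minimal illustration, not CAIM's actual schema: the field names, role strings, and class shapes are assumptions chosen to mirror the description (incidents vs. hazards, structured harms, entity roles, governance responses, and the three-tier verification status).

```python
from dataclasses import dataclass, field
from enum import Enum

class RecordType(Enum):
    INCIDENT = "incident"  # discrete event where AI caused harm
    HAZARD = "hazard"      # persistent condition creating risk

class VerificationStatus(Enum):
    # Three-tier verification ladder, lowest to highest.
    REPORTED = "reported"
    CORROBORATED = "corroborated"
    CONFIRMED = "confirmed"

@dataclass
class EntityInvolvement:
    # "Role primitive" is assumed here to be a short string
    # such as "developer", "deployer", or "affected party".
    name: str
    role: str

@dataclass
class Record:
    title: str
    record_type: RecordType
    harms: list[str] = field(default_factory=list)
    entities: list[EntityInvolvement] = field(default_factory=list)
    governance_responses: list[str] = field(default_factory=list)
    status: VerificationStatus = VerificationStatus.REPORTED

# Example: a hazard record entering the system at the lowest tier.
hazard = Record(
    title="AI Psychological Manipulation and Influence",
    record_type=RecordType.HAZARD,
    harms=["psychological harm"],
    entities=[EntityInvolvement(name="(chatbot vendor)", role="developer")],
)
```

A new record defaults to Reported and would move up the ladder only as corroborating evidence is attached, which keeps the verification status explicit rather than implied by the record's contents.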