Google AI Overview Falsely Accused Canadian Musician Ashley MacIsaac of Sex Offenses, Leading to Concert Cancellation
Google's AI fabricated criminal accusations against a Canadian musician, causing a concert cancellation.
A public-interest observatory documenting AI incidents and hazards in Canada. Structured evidence for prevention and accountability.
AI-generated deepfake nudes of classmates led to the first Canadian criminal charges against a minor for AI CSAM.
Fake AI wildfire images went viral during BC's fire season, risking distorted evacuation decisions.
Grok generated 6,700 non-consensual sexualized images per hour, including images of minors, prompting a Canadian probe.
OpenAI flagged a user's violent content months before a mass shooting but did not alert authorities.
Deloitte's $1.6M health workforce plan cited nonexistent studies, with real researchers denying authorship.
AI systems in child welfare, healthcare, and crisis intervention are deployed without safety monitoring or incident reporting — and have contributed to deaths.
AI chatbots are causing documented psychological harm — reinforcing delusions, providing self-harm methods — with no duty of care or safety monitoring in Canadian law.
No Canadian law requires AI companies to report safety-relevant findings to authorities — a gap linked to a mass shooting where OpenAI detected but did not report a threat.
AI-generated child sexual abuse material is overwhelming detection systems and creating legal ambiguity, with direct implications for Canadian law enforcement and child protection.
Multiple biometric surveillance systems deployed across Canada — in malls, police forces, and public venues — without legal authority or public disclosure.
AI-generated disinformation appeared at scale in the 2025 federal election. Canadian electoral law has no framework for synthetic media, and detection capacity is minimal.
CAIM separates incidents (discrete events in which an AI system caused harm) from hazards (persistent conditions that create risk). Each record carries structured harm classifications, entity involvement expressed through role primitives, governance responses, and a verification status.
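The record structure described above could be sketched roughly as follows. This is a minimal illustration only: the field names, role primitives, and status values here are assumptions for the sketch, not CAIM's actual data model.

```python
from dataclasses import dataclass
from enum import Enum

class RecordKind(Enum):
    INCIDENT = "incident"  # discrete event in which an AI system caused harm
    HAZARD = "hazard"      # persistent condition that creates risk

class Role(Enum):
    # Hypothetical role primitives for entity involvement
    DEVELOPER = "developer"
    DEPLOYER = "deployer"
    AFFECTED = "affected"

@dataclass
class Entity:
    name: str
    role: Role

@dataclass
class Record:
    title: str
    kind: RecordKind
    harms: list[str]                # structured harm classifications
    entities: list[Entity]          # entity involvement with role primitives
    governance_responses: list[str]
    verification_status: str        # e.g. "verified" (illustrative value)

# Example record, assembled from the first incident above
record = Record(
    title="Google AI Overview falsely accused Canadian musician",
    kind=RecordKind.INCIDENT,
    harms=["reputational harm"],
    entities=[Entity("Google", Role.DEVELOPER)],
    governance_responses=[],
    verification_status="verified",
)
print(record.kind.value)
```

Separating the record kind from the harm and entity fields lets the same schema hold both one-off incidents and ongoing hazards without duplicating structure.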