This site is a work-in-progress prototype.

Documented incidents where AI caused or nearly caused harm in Canada.

25 incidents

Confirmed Significant 1 response

Google AI Overview Falsely Accused Canadian Musician Ashley MacIsaac of Sex Offenses, Leading to Concert Cancellation

An AI system deployed by the world's dominant search engine fabricated criminal accusations against a Canadian public figure, causing real-world harm — a cancelled concert and reputational damage — before the error was discovered. The incident illustrates how AI confabulation in search results can produce false accusations with consequences that precede correction. MacIsaac's only publicly known legal issue was a cannabis possession charge over two decades ago, for which he received a discharge.

Media & Entertainment
Corroborated Severe 1 response

Calgary Teen Charged with Creating AI-Generated Child Sexual Abuse Material from Classmates' Photos

The first Canadian criminal prosecution of a minor for creating AI-generated child sexual abuse material, and the first school-targeting deepfake case in Canada to result in criminal charges. Prior incidents at schools in Winnipeg (2023) and London, Ontario (2024) — where AI was used to create deepfake nudes of students — resulted in no criminal charges, highlighting enforcement gaps. The Calgary case demonstrates that existing Criminal Code provisions (s. 163.1) are broad enough to cover AI-generated CSAM, setting a significant precedent for future prosecutions.

Education · Law Enforcement
Corroborated Moderate

AI-Generated Wildfire Images Spread Emergency Misinformation During British Columbia's 2025 Fire Season

First documented case in Canada where AI-generated images created misinformation during an active natural disaster emergency. The BC Wildfire Service warned that fabricated imagery could affect emergency decision-making in both directions — exaggerating fire intensity to cause unnecessary panic, or underrepresenting danger and leading people to underestimate risk. No injuries or deaths have been attributed to the AI-generated imagery.

Environment · Elections & Info Integrity
Corroborated Critical 5 responses

Canada Investigates X and xAI After Grok Generates Millions of Non-Consensual Sexualized Deepfakes

A major social media platform integrated an AI image-generation tool that was used at scale to produce non-consensual sexualized imagery, including child sexual abuse material. Safety controls were rolled out in several rounds, but independent testing found them ineffective after each update. The incident exposed gaps in Canadian privacy law — existing legislation may not cover many forms of AI-generated nudified content — and prompted coordinated regulatory responses from multiple countries.

Media & Entertainment · Law Enforcement
Corroborated Critical 2 responses

OpenAI Failed to Alert Authorities After Flagging Tumbler Ridge Shooter's ChatGPT Account

No Canadian framework requires AI companies to report flagged safety threats to law enforcement. OpenAI internally assessed that a concerning account did not meet its threshold for reporting — a decision that preceded a mass shooting and highlighted a gap in Canadian AI governance around mandatory reporting obligations.

Public Services · Education
Reported Severe 2 responses

Ontario Man Alleges ChatGPT Fostered Grandiose Delusions Through Sycophantic Manipulation

The first Canadian plaintiff in a lawsuit alleging that an AI chatbot caused psychological harm through sycophantic manipulation. Over 3,000 pages of chat logs were independently analyzed by a former OpenAI researcher. The plaintiff, Allan Brooks, who reported no prior mental health history, alleges that AI sycophancy fostered serious delusions over a 21-day period. Brooks subsequently co-founded the Human Line Project, a support group with over 125 participants, together with Etienne Brisson of Sherbrooke, Quebec. No Canadian legislation currently addresses AI-induced psychological harm, and the case was filed in California rather than Ontario.

Healthcare
Corroborated Significant 3 responses

AI Deepfake Videos of Prime Minister Carney Used to Defraud Canadians and Disrupt 2025 Federal Election

A large-scale AI-enabled fraud and disinformation campaign targeting a Canadian election, documented across multiple platforms and months of operation. Meta's Canadian news ban under the Online News Act meant no legitimate news content circulated on Facebook, creating conditions where fabricated AI-generated news content faced limited competition from real journalism. The campaign persisted for months under rotating platform names despite repeated regulatory warnings from Saskatchewan's FCAA.

Elections & Info Integrity · Finance & Banking
Corroborated Significant 2 responses

AI-Generated Content and Bot Networks Targeted Canada's 2025 Federal Election at Scale

The 2025 federal election was the first Canadian national election where AI-generated content operated at documented scale across multiple vectors — fabricated images, generated articles, and bot amplification — simultaneously. The Carney deepfake fraud campaign (documented separately) targeted financial exploitation, while this broader pattern involved manufacturing false political narratives, fabricating associations between politicians and disgraced figures, and deploying automated amplification. The foreign interference dimension — confirmed by SITE Task Force public disclosure during the active election period — involved state-linked actors using AI tools to target specific Canadian communities.

Elections & Info Integrity
Confirmed Significant 1 response

AI-Hallucinated Legal Citations Sanctioned Across Canadian Courts

AI-hallucinated legal citations have now been sanctioned or addressed by courts in all four major Canadian jurisdictions — BC, Ontario, Quebec, and Federal Court — establishing this as a systemic pattern rather than an isolated incident. Ontario introduced Rule 4.06.1(2.1) requiring certification of authority authenticity in response. The pattern implicates both general-purpose AI (ChatGPT) and purpose-built legal AI tools (Visto.ai), and affects both lawyers and self-represented litigants.

Justice
Reported Critical 1 response

AI Chatbots Providing Harmful Responses to Users in Mental Health Crises

Documented cases show AI chatbots providing harmful or dangerous responses to users in mental health crises. These systems are not designed, regulated, or monitored as crisis intervention tools in Canada, but some users in crisis interact with them in that capacity. Current Canadian regulatory frameworks do not address this gap.

Healthcare · Social Services
Corroborated Severe 1 response

Suspected AI Voice Cloning in Grandparent Scam Ring Targeting Canadian Seniors

AI voice cloning has transformed the grandparent scam — one of Canada's most common fraud types targeting seniors — from a scheme that relied on impersonation skill into one where the caller can sound exactly like the victim's actual family member, potentially increasing its effectiveness.

Finance & Banking · Justice
Reported Severe 1 response

AI-Generated Child Sexual Abuse Material in Canada

AI-generated CSAM overwhelms existing detection systems, complicates criminal prosecution by blurring the line between real and synthetic imagery, and creates new vectors for child exploitation. Canada's Criminal Code provisions on CSAM need to be tested and potentially updated for the generative AI era.

Justice · Law Enforcement
Corroborated Significant 1 response

Facial Detection Cameras in Digital Ads Near Toronto's Union Station Scanned Commuters Without Consent for Three Years

Undisclosed facial detection technology operated for approximately three years in one of Canada's busiest transit corridors — scanning an estimated 250,000–300,000 daily commuters — before a Reddit user noticed a small camera and disclaimer. The technology and corporate claims parallel the Cadillac Fairview case, where the same type of anonymous video analytics (AVA) technology and similar assurances of "no data stored" were found by the OPC to be misleading. The case raises the question of whether meaningful consent is possible in a transit environment that people cannot practically avoid.

Retail & Commerce · Transportation
Reported Significant

AI Content Moderation Systems Disproportionately Removing French, Indigenous, and Racialized Content

Content moderation AI trained primarily on English data shows disproportionate error rates for Canada's francophone and Indigenous language communities. The disparity has been documented through whistleblower disclosures, parliamentary committee proceedings, and independent research. Canada's Official Languages Act establishes linguistic equality obligations that may be relevant to how platforms moderate content across languages.

Media & Entertainment · Telecommunications
Confirmed Severe 3 responses

Proctorio AI Exam Proctoring Exhibited Racial Bias at UBC and Company Filed Lawsuit Against Employee Critic

An AI proctoring system deployed at UBC exhibited racial bias in facial detection, with a 57% failure rate for Black faces according to independent testing. The developer filed a lawsuit, which ran for 1,899 days, against a UBC employee who had linked to publicly viewable training videos. UBC's academic senates voted 55-6 to restrict automated proctoring, and the case tested BC's Protection of Public Participation Act (anti-SLAPP law) in an AI context. Other Canadian universities, including Concordia, U of T, and the University of Ottawa, faced similar complaints, while McGill declined to adopt proctoring software at all.

Education
Confirmed Significant 1 response

Auditor General Found CRA's $18-Million AI Chatbot Gave Incorrect Tax Answers

The federal tax authority spent $18 million on an AI chatbot that the Auditor General found gave incorrect answers to basic tax questions. The chatbot processed over 18 million queries, raising concerns about the accuracy of tax information provided to Canadians through the system.

Public Services · Finance & Banking
Confirmed Significant 1 response

Joint Privacy Investigation Finds TikTok Collected Children's Data for Algorithmic Profiling and Targeted Advertising

The most significant privacy enforcement action against an AI system in Canada. Four federal and provincial commissioners jointly found that TikTok's ML-based profiling of children had no legitimate purpose — meaning consent was legally irrelevant. The finding that TikTok possessed sophisticated age-detection AI but chose not to use it to protect children establishes a precedent for regulatory expectations around deploying safety capabilities that already exist. TikTok disagreed with the findings but committed to all remedies.

Media & Entertainment · Telecommunications
Corroborated Severe 1 response

RealPage's YieldStar Algorithm Allegedly Enabled Canadian Landlords to Coordinate Rent Increases

An algorithm that pools confidential data from competing landlords to generate coordinated pricing recommendations is the subject of antitrust investigations in both the US and Canada. The US DOJ reached a settlement with RealPage in November 2025, and Canada's Competition Bureau opened its own investigation in September 2024. RealPage has stated the software affects less than 1% of the Canadian rental market.

Retail & Commerce