An AI system deployed by the world's dominant search engine fabricated criminal accusations against a Canadian public figure, causing real-world harm (a cancelled concert and reputational damage) before the error was discovered. The incident illustrates how AI confabulation in search results can produce false accusations whose consequences precede correction. The figure, MacIsaac, had only one publicly known legal issue: a cannabis possession charge more than two decades earlier, for which he received a discharge.
The first Canadian criminal prosecution of a minor for creating AI-generated child sexual abuse material, and the first school-targeting deepfake case in Canada to result in criminal charges. Prior incidents at schools in Winnipeg (2023) and London, Ontario (2024) — where AI was used to create deepfake nudes of students — resulted in no criminal charges, highlighting enforcement gaps. The Calgary case demonstrates that existing Criminal Code provisions (s. 163.1) are broad enough to cover AI-generated CSAM, setting a significant precedent for future prosecutions.
First documented case in Canada where AI-generated images created misinformation during an active natural disaster emergency. The BC Wildfire Service warned that fabricated imagery could affect emergency decision-making in both directions — exaggerating fire intensity to cause unnecessary panic, or underrepresenting danger and leading people to underestimate risk. No injuries or deaths have been attributed to the AI-generated imagery.
A major social media platform integrated an AI image generation tool that was used at large scale to produce non-consensual sexualized imagery, including child sexual abuse material. Corporate safety controls were implemented in several rounds, but independent testing found them to be ineffective after each update. The incident revealed gaps in Canadian privacy law — existing legislation may not cover many types of AI-generated nudified content — and prompted coordinated regulatory responses from multiple countries.
No Canadian framework requires AI companies to report flagged safety threats to law enforcement. OpenAI's internal risk assessment concluded that a concerning account did not meet its threshold for reporting, a decision that preceded a mass shooting and highlighted the absence of mandatory reporting obligations in Canadian AI governance.
A major consulting firm used AI to generate research citations in a $1.6 million government health policy document; some of those citations were found to be fabricated. The incident illustrates how LLM confabulation can reach consequential policy decisions through established institutional channels.
The first Canadian plaintiff in a lawsuit alleging that an AI chatbot caused psychological harm through sycophantic manipulation. Over 3,000 pages of chat logs were independently analyzed by a former OpenAI researcher. The plaintiff, Brooks, who reported no prior mental health history, alleges that AI sycophancy led to serious delusions over a 21-day period. He subsequently co-founded the Human Line Project, a support group with over 125 participants, together with Etienne Brisson of Sherbrooke, Quebec. No Canadian legislation currently addresses AI-induced psychological harm, and the case was filed in California rather than Ontario.
A large-scale AI-enabled fraud and disinformation campaign targeting a Canadian election, documented across multiple platforms and months of operation. Meta's Canadian news ban under the Online News Act meant no legitimate news content circulated on Facebook, creating conditions in which fabricated AI-generated news content faced little competition from real journalism. The campaign persisted for months under rotating platform names despite repeated regulatory warnings from Saskatchewan's FCAA.
The 2025 federal election was the first Canadian national election where AI-generated content operated at documented scale across multiple vectors — fabricated images, generated articles, and bot amplification — simultaneously. The Carney deepfake fraud campaign (documented separately) targeted financial exploitation, while this broader pattern involved manufacturing false political narratives, fabricating associations between politicians and disgraced figures, and deploying automated amplification. The foreign interference dimension — confirmed by SITE Task Force public disclosure during the active election period — involved state-linked actors using AI tools to target specific Canadian communities.
AI-hallucinated legal citations have now been sanctioned or addressed by courts in four major Canadian jurisdictions (BC, Ontario, Quebec, and the Federal Court), establishing this as a systemic pattern rather than an isolated incident. In response, Ontario introduced Rule 4.06.1(2.1), which requires certification that cited authorities are authentic. The pattern implicates both general-purpose AI (ChatGPT) and purpose-built legal AI tools (Visto.ai), and affects both lawyers and self-represented litigants.
AI-generated deepfake video has reached sufficient quality and accessibility that criminal networks are using it at scale for financial fraud — with the Canadian Anti-Fraud Centre reporting $103 million in crypto scam losses in 2025 alone and individual victims losing their life savings.
A joint investigation by federal and provincial privacy commissioners — the first into a large language model in Canada — is examining whether OpenAI's collection and generation of personal information about Canadians violates Canadian privacy law.
Documented cases show AI chatbots providing harmful or dangerous responses to users in mental health crises. These systems are not designed, regulated, or monitored as crisis intervention tools in Canada, but some users in crisis interact with them in that capacity. Current Canadian regulatory frameworks do not address this gap.
AI voice cloning has transformed the grandparent scam — one of Canada's most common fraud types targeting seniors — from a scheme relying on impersonation skill to one where the caller sounds exactly like the victim's actual family member, potentially increasing effectiveness.
AI-generated CSAM overwhelms existing detection systems, complicates criminal prosecution by blurring the line between real and synthetic imagery, and creates new vectors for child exploitation. Canada's Criminal Code provisions on CSAM need to be tested and potentially updated for the generative AI era.
A Canadian tribunal established that companies are liable for information provided by their AI chatbots, a precedent-setting ruling with implications for all businesses deploying AI customer service tools in Canada.
Undisclosed facial detection technology operated for approximately three years in one of Canada's busiest transit corridors, scanning an estimated 250,000–300,000 daily commuters, before a Reddit user noticed a small camera and disclaimer. The technology and corporate claims parallel the Cadillac Fairview case, where the same type of anonymous video analytics (AVA) technology and similar "no data stored" assurances were found by the OPC to be misleading. The case raises the question of whether meaningful consent is possible in a transit environment that people cannot practically avoid.
Content moderation AI trained primarily on English data shows disproportionate error rates for Canada's francophone and Indigenous language communities. The disparity has been documented through whistleblower disclosures, parliamentary committee proceedings, and independent research. Canada's Official Languages Act establishes linguistic equality obligations that may be relevant to how platforms moderate content across languages.
An AI proctoring system deployed at UBC exhibited racial bias in facial detection, with a 57% failure rate for Black faces according to independent testing. The developer sued a UBC employee who had linked to publicly viewable training videos, litigation that lasted 1,899 days and tested BC's Protection of Public Participation Act (anti-SLAPP law) in an AI context. UBC's academic senates voted 55-6 to restrict automated proctoring. Other Canadian universities, including Concordia, U of T, and the University of Ottawa, faced similar complaints, while McGill declined to adopt proctoring software entirely.
The federal tax authority spent $18 million on an AI chatbot that the Auditor General found gave incorrect answers to basic tax questions. The chatbot processed over 18 million queries, raising concerns about the accuracy of tax information provided to Canadians through the system.
The most significant privacy enforcement action against an AI system in Canada. Four federal and provincial commissioners jointly found that TikTok's ML-based profiling of children had no legitimate purpose — meaning consent was legally irrelevant. The finding that TikTok possessed sophisticated age-detection AI but chose not to use it to protect children establishes a precedent for regulatory expectations around deploying safety capabilities that already exist. TikTok disagreed with the findings but committed to all remedies.
Federal law enforcement adopted a mass surveillance facial recognition tool without conducting a privacy impact assessment, public disclosure, or establishing legal authority for biometric surveillance.
Over five million facial representations were captured and analyzed without knowledge or consent from shoppers at 12 malls across five provinces — one of the largest documented undisclosed biometric data collection operations in Canada.
A major Canadian retailer deployed facial recognition surveillance across its stores without customer knowledge or consent, capturing biometric data of all entering customers — not just those suspected of wrongdoing.
An algorithm that pools confidential data from competing landlords to generate coordinated pricing recommendations is the subject of antitrust investigations in both the US and Canada. The US DOJ reached a settlement with RealPage in November 2025, and Canada's Competition Bureau opened its own investigation in September 2024. RealPage has stated the software affects less than 1% of the Canadian rental market.