Persistent conditions creating credible pathways to AI-related harm in Canada.
14 hazards
AI systems present fabricated information as fact in tax advice, court proceedings, and health queries — Canadians following AI health advice are five times more likely to experience harm.
AI-generated child sexual abuse material is overwhelming detection systems and creating legal ambiguity, with direct implications for Canadian law enforcement and child protection.
AI systems systematically disadvantage francophone and Indigenous language communities — over-removing French content, producing disparate outcomes, and providing inferior service.
AI systems in child welfare, healthcare, and crisis intervention are deployed without safety monitoring or incident reporting — and have contributed to deaths.
Foundation models trained on scraped Canadian data create permanent, uncorrectable records and generate false claims about real people — beyond the reach of current privacy law.
Montreal police acquired AI video surveillance with built-in ethnicity and emotion detection, capabilities that can be activated through configuration alone, without public disclosure or impact assessment.
Multiple biometric surveillance systems have been deployed across Canada, in malls, by police forces, and at public venues, without legal authority or public disclosure.
AI-generated disinformation appeared at scale in the 2025 federal election. Canadian electoral law has no framework for synthetic media, and detection capacity is minimal.
AI voice cloning and deepfake video have defrauded Canadians of millions of dollars. Convincing impersonation now requires only consumer-grade tools, and legal protections have not kept pace.
AI platforms have generated millions of non-consensual sexualized images — including of minors. Canada's legal framework has significant gaps for AI-generated intimate imagery.
Canadian governments use AI in immigration, tax, benefits, and child welfare decisions, but the governance framework is inconsistently enforced and does not cover provincial deployments.
AI chatbots are causing documented psychological harm — reinforcing delusions, providing self-harm methods — with no duty of care or safety monitoring in Canadian law.
No Canadian law requires AI companies to report safety-relevant findings to authorities, a gap linked to a mass shooting in which OpenAI detected but did not report a threat.
An AI pricing algorithm allegedly enabled Canadian landlords to coordinate rent increases of 7–54% — functionally price-fixing, but outside traditional competition law.