This site is a work-in-progress prototype.

Persistent conditions creating credible pathways to AI-related harm in Canada.

14 hazards

Escalating · Confidence: high · Potential: Significant · 2 materialized

AI Confabulation in Consequential Canadian Contexts

AI systems present fabricated information as fact in tax advice, court proceedings, and health queries — Canadians following AI health advice are five times more likely to experience harm.

Public Services · Retail & Commerce · Justice · Healthcare

Escalating · Confidence: medium · Potential: Severe · 1 materialized

AI-Generated Child Sexual Abuse Material in Canada

AI-generated child sexual abuse material is overwhelming detection systems and creating legal ambiguity, with direct implications for Canadian law enforcement and child protection.

Justice · Public Services

Escalating · Confidence: medium · Potential: Significant

Systematic AI Bias Against Canadian Linguistic and Cultural Minorities

AI systems systematically disadvantage francophone and Indigenous language communities — over-removing French content, producing disparate outcomes, and providing inferior service.

Media & Entertainment · Immigration · Public Services

Active · Confidence: medium · Potential: Significant

Large Language Model Training Data and Canadian Privacy Rights

Foundation models trained on scraped Canadian data create permanent, uncorrectable records and generate false claims about real people — beyond the reach of current privacy law.

Telecommunications · Public Services

Escalating · Confidence: high · Potential: Severe · 2 materialized

Unregulated Biometric Surveillance Technology Deployment in Canada

Multiple biometric surveillance systems have been deployed across Canada, in malls, by police forces, and at public venues, without legal authority or public disclosure.

Law Enforcement · Retail & Commerce

Escalating · Confidence: medium · Potential: Severe · 2 materialized

AI Threats to Election and Information Integrity in Canada

AI-generated disinformation appeared at scale in the 2025 federal election. Canadian electoral law has no framework for synthetic media, and detection capacity is minimal.

Elections & Info Integrity

Escalating · Confidence: high · Potential: Severe

AI-Enabled Fraud and Impersonation

AI voice cloning and deepfake video have defrauded Canadians of millions of dollars. Convincing impersonation now requires only consumer-grade tools, and protections have not adapted.

Finance & Banking · Retail & Commerce

Escalating · Confidence: high · Potential: Severe

AI-Generated Non-Consensual Intimate Imagery

AI platforms have generated millions of non-consensual sexualized images, including images of minors. Canada's legal framework has significant gaps for AI-generated intimate imagery.

Media & Entertainment · Justice

Active · Confidence: medium · Potential: Significant

AI in Government Automated Decision-Making Without Transparency

Canadian governments use AI in immigration, tax, benefits, and child welfare decisions, but the federal governance framework is inconsistently enforced and does not cover provincial deployments.

Public Services · Immigration · Social Services

Escalating · Confidence: high · Potential: Critical

AI Psychological Manipulation and Influence

AI chatbots are causing documented psychological harm — reinforcing delusions, providing self-harm methods — with no duty of care or safety monitoring in Canadian law.

Healthcare · Social Services

Active · Confidence: high · Potential: Critical

AI Safety Reporting and Disclosure Failures

No Canadian law requires AI companies to report safety-relevant findings to authorities — a gap linked to a mass shooting where OpenAI detected but did not report a threat.

Public Services · Defence & Security

Active · Confidence: medium · Potential: Significant

Algorithmic Coordination Undermining Market Competition

An AI pricing algorithm allegedly enabled Canadian landlords to coordinate rent increases of 7–54% — functionally price-fixing, but outside traditional competition law.

Retail & Commerce · Finance & Banking