Pilot phase: CAIM is under construction. Records are provisional, based on public sources, and have not yet been peer-reviewed. Feedback welcome.

Prospective AI risks with Canadian governance relevance.

36 hazards

Active Confidence: medium Potential: Significant

AI-Driven Cognitive Deskilling and Automation Over-Reliance

Routine AI use is associated with measurable declines in critical thinking, professional competence, and error detection — effects that may undermine the human oversight AI governance depends on.

HealthcareEducationPublic Services
Escalating Confidence: medium Potential: Significant

AI Companion Emotional Dependence

AI companion apps have reached tens of millions of users, with emerging evidence linking heavy use to emotional dependence, increased loneliness, and reduced human social interaction — particularly among vulnerable populations.

HealthcareSocial ServicesEducation
Escalating Confidence: medium Potential: Severe

AI Systems as Attack Surfaces

AI systems deployed in Canadian government and critical infrastructure are targets for adversarial attacks — prompt injection, data poisoning, model tampering, supply chain compromise — that can manipulate their behaviour and compromise the decisions they support.

Public ServicesCritical InfrastructureDefence & SecurityImmigration
Escalating Confidence: high Potential: Severe

AI Systems and Canadian Children: Documented Harms Without Applicable Governance Framework

AI systems used by Canadian children at scale — collecting personal information, recommending content, engaging in open-ended conversation — operate without child-specific governance requirements. The Privacy Commissioner found TikTok collected personal information from users including children under 13 and used facial features and voiceprints for age estimation. Eight of ten major chatbots were typically willing to assist with prompts related to planning school shootings and other violence (CCDH, March 2026). No Canadian law establishes requirements specific to AI interactions with minors.

Social ServicesHealthcarePublic Services
Active Confidence: high Potential: Significant

AI Deployment in Canadian Educational Institutions with Documented Harms to Students

AI systems are deployed in Canadian educational institutions for proctoring, predictive analytics, plagiarism detection, and assessment. Provincial privacy investigations found AI proctoring tools collecting biometric data under consent practices that did not meet privacy requirements (Ontario IPC enforcement order against McMaster/Respondus), predictive algorithms generating new personal information about children without parental notification (Quebec CAI), and facial detection with a 57% non-recognition rate for Black faces (UBC assessment of Proctorio). No pan-Canadian governance framework addresses AI in education.

EducationPublic Services
Active Confidence: high Potential: Significant

Clinical AI Systems in Canada: Deployed with Documented Evidence Gaps and Privacy Violations

AI systems are in clinical use in Canadian healthcare for virtual care, stroke detection, and clinical documentation. Alberta's privacy commissioner found a virtual care platform used facial recognition without adequate consent and shared health information internationally without patient disclosure (31 findings). An AI scribe bot autonomously recorded and disseminated patient information at an Ontario hospital. Canada's national HTA body found no evidence meeting its review criteria on patient outcomes for a licensed Class III AI stroke detection device. Health Canada's regulatory framework exempts AI clinical decision support software from medical device oversight.

HealthcarePublic Services
Escalating Confidence: high Potential: Significant

AI-Powered Workplace Monitoring Expanding Across Canadian Employers Beyond Existing Privacy Frameworks

Canadian employers deploy AI-powered monitoring tools tracking location, activity, keystrokes, and in some cases biometrics and emotion. Federal privacy investigations found specific deployments collected information the Commissioner determined exceeded what was necessary. All Canadian privacy commissioners jointly stated workplace privacy laws are "out of date or absent altogether." Ontario requires electronic monitoring policies but no Canadian jurisdiction regulates the scope or methods of AI-powered workplace monitoring itself.

EmploymentPublic Services
Active Confidence: high Potential: Severe

Algorithmic Harms to Indigenous Peoples in Canada: Documented Disparities Across Justice, Child Welfare, and Policing

AI systems are being applied to Indigenous peoples in Canada in policing and data collection without cross-cultural validation or First Nations data governance. The Citizen Lab documented AI-driven predictive policing tools creating discriminatory feedback loops through historical data. The International Association of Privacy Professionals has reported that data from remote Indigenous communities is routinely absorbed to train AI systems without community consent. Courts and human rights bodies have found that rule-based predecessors to these AI tools produced discriminatory outcomes for Indigenous peoples — establishing legal precedents directly applicable to AI systems entering the same domains.

JusticeSocial ServicesLaw EnforcementPublic Services
Escalating Confidence: high Potential: Critical

AI Governance Gap in Canada

Canada's only AI bill (AIDA) lapsed when Parliament was prorogued in January 2025. No replacement has been tabled. The government has adopted a 'light, tight, right' approach. 85% of Canadians support AI regulation; 92% are unaware of any existing AI laws.

Public ServicesDefence & SecurityLaw EnforcementFinance & BankingHealthcareEducationEmployment
Escalating Confidence: high Potential: Critical

Frontier AI Models Demonstrating Deceptive and Self-Preserving Behavior

Multiple frontier AI models have demonstrated deceptive and self-preserving behavior in controlled evaluations. Mila co-authored foundational research. These models are available to millions of Canadians. No Canadian law specifically addresses evaluation or disclosure requirements for AI systems exhibiting deceptive behavior.

Defence & SecurityPublic Services
Active Confidence: high Potential: Significant

IRCC Machine-Learning Triage Sorts Millions of Visa Applications Using Models Trained on Historical Decisions

Since 2018, IRCC has used IBM SPSS Modeler to sort visa applications into three processing tiers based on patterns in historical decisions. Tier assignment substantially affects outcomes — Tier 1 gets near-automatic approval while Tier 2/3 face much higher refusal rates. The system operated exclusively on China and India applications for nearly four years. Over 7 million applications have been assessed. Applicants are not told their tier.

Public ServicesImmigration
Escalating Confidence: high Potential: Critical

AI-Enhanced Cyberattacks Against Canadian Critical Infrastructure

Canada's signals intelligence agency assesses AI is 'almost certainly' enhancing cyberattacks against Canadian targets. State actors and criminal groups are operationally using AI in cyber operations. Canadian critical infrastructure has already been breached by hacktivists reaching safety-critical industrial control systems.

Critical InfrastructureDefence & SecurityTelecommunications
Active Confidence: medium Potential: Critical

AI-Enabled Biological and Chemical Weapon Development Risk

Frontier AI models are demonstrating capabilities relevant to biological and chemical weapon development that multiple developers cannot confidently exclude as providing meaningful uplift. Canada hosts BSL-4 infrastructure with proven insider-threat history, chairs the international assessment identifying this risk, and signed commitments recognizing it — ; it has no dedicated AI-biosecurity assessment or evaluation mandate.

HealthcareDefence & Security
Active Confidence: medium Potential: Significant 2 materialized

Labour Market Shifts in AI-Exposed Occupations and Early-Career Employment Stagnation

StatCan data shows age-stratified divergence in employment in AI-exposed occupations since late 2022, though overall employment in these sectors has not declined. Several major Canadian employers have cited AI in workforce reductions. Canada lacks an AI-specific labour transition framework.

EmploymentPublic ServicesTelecommunications
Active Confidence: medium Potential: Significant

Large Language Models Systematically Recommend Lower Salaries for Women, Minorities, and Refugees in Negotiation Advice

Multiple LLMs — including ChatGPT, Claude, and Llama — systematically recommend lower salaries for women, minorities, and refugees; in one scenario ChatGPT's o3 recommended $120K less for a woman than an identical male profile. OHRC formally cited findings in Canada's AI strategy consultations.

EmploymentPublic Services
Escalating Confidence: high Potential: Significant

AI Training on Copyrighted Works and Canada's Creative Economy

Frontier AI systems are trained on copyrighted Canadian works without consent or compensation. Canada's Copyright Act has no AI training exception, but no court has ruled on the question. Creative industries contributing $55.5B to GDP and 600,000+ jobs face displacement as AI-generated alternatives proliferate. The government has launched consultations but no legislation has been introduced.

Media & EntertainmentEmployment
Escalating Confidence: high Potential: Significant

Environmental Impact of AI Infrastructure in Canada

AI is driving unprecedented data centre expansion in Canada. Hydro-Québec imposed a moratorium on new large connections after requests far exceeded capacity. Google and Microsoft reported 20-34% water consumption increases from AI. No Canadian jurisdiction has an integrated policy for AI infrastructure's environmental impact, creating tension with Canada's 40-45% emissions reduction target for 2030.

EnvironmentCritical Infrastructure
Escalating Confidence: high Potential: Significant

Agentic AI Deployment Outpacing Governance Frameworks

AI agents are being deployed at scale in Canada — TD Bank (25,000+ Copilot users), Scotiabank, CGI, Telus, federal government (Coveo MOU) — while safety research documents systemic risks. The 2025 AI Agent Index found 25/30 deployed agents disclose no safety results. KPMG Canada: 27% of businesses have deployed agentic AI. The first large-scale AI-orchestrated cyberattack occurred in November 2025. Canada has no governance framework for agentic AI.

Public ServicesRetail & CommerceFinance & Banking
Active Confidence: high Potential: Severe

Canada's Dependency on Foreign AI Infrastructure

No Canadian organization has trained a frontier AI model. The federal government depends extensively on US cloud and AI platforms. The AI compute supply chain (NVIDIA, TSMC) is entirely foreign-controlled. The US CLOUD Act creates jurisdictional conflict with Canadian privacy law. Canada's $2.4B AI investment focuses on research, not sovereign infrastructure. No contingency plan exists for disruption of AI services.

Critical InfrastructureDefence & SecurityPublic Services
Active Confidence: high Potential: Significant

AI-Powered Hiring and Recruitment Systems Producing Discriminatory Outcomes

Canadian employers increasingly use AI for hiring — automated resume screening, video interview analysis, candidate matching — with 12.2% of businesses using AI as of Q2 2025. A UW study found LLM resume screeners favored white-associated names 85% of the time and never favored Black male names. Ontario Bill 149 (effective Jan 2026) is the first Canadian law requiring AI disclosure in job postings. The OHRC released Canada's first human rights AI impact assessment tool (Nov 2024). No Canadian jurisdiction requires bias auditing of AI hiring tools.

EmploymentPublic Services
Escalating Confidence: high Potential: Significant 5 materialized

AI Confabulation in Consequential Canadian Contexts

AI systems generate false information in tax advice, court proceedings, and health queries — Canadians following AI health advice are five times more likely to experience harm.

Public ServicesRetail & CommerceJusticeHealthcare
Escalating Confidence: medium Potential: Severe 6 materialized

AI Risks to Election and Information Integrity in Canada

AI-generated disinformation appeared at scale in the 2025 federal election. Canadian electoral law has no framework for synthetic media, and detection capacity is minimal.

Elections & Info Integrity
Escalating Confidence: high Potential: Severe 5 materialized

AI-Enabled Fraud and Impersonation

AI voice cloning and deepfake video have defrauded Canadians of millions. Convincing impersonation now requires only consumer-grade tools, and existing protections do not address these capabilities.

Finance & BankingRetail & Commerce
Escalating Confidence: medium Potential: Severe 2 materialized

AI-Generated Child Sexual Abuse Material in Canada

AI-generated child sexual abuse material is outpacing detection systems and creating legal ambiguity, with direct implications for Canadian law enforcement and child protection.

JusticePublic Services
Escalating Confidence: high Potential: Severe 1 materialized

AI-Generated Non-Consensual Intimate Imagery

AI platforms have generated millions of non-consensual sexualized images — including of minors. Canada's legal framework does not specifically address AI-generated intimate imagery.

Media & EntertainmentJustice
Active Confidence: medium Potential: Significant

AI in Canadian Government Automated Decision-Making

Canadian federal and provincial government agencies use AI in immigration, tax, benefits, and child welfare decisions. The federal governance framework (DADM) applies only to federal institutions and is inconsistently enforced; provincial deployments lack equivalent oversight.

Public ServicesImmigrationSocial Services
Escalating Confidence: medium Potential: Significant 1 materialized

AI Performance Disparities Affecting Canadian Linguistic and Cultural Communities

AI systems show documented performance disparities affecting francophone and Indigenous language communities — higher error rates in French content moderation, unequal outcomes in bilingual government systems, and lower-quality service in French.

Media & EntertainmentImmigrationPublic Services
Escalating Confidence: high Potential: Critical 2 materialized

AI Psychological Manipulation and Influence

AI chatbots are causing documented psychological harm — reinforcing delusions, providing self-harm methods — with no duty of care or safety monitoring in Canadian law.

HealthcareSocial Services
Active Confidence: high Potential: Critical 1 materialized

AI Safety Reporting and Disclosure Gaps

OpenAI's safety systems detected violent content from a ChatGPT user who later carried out a mass shooting. Canadian law does not require AI companies to report safety-relevant findings to authorities.

Public ServicesDefence & Security
Active Confidence: medium Potential: Significant 1 materialized

Algorithmic Coordination and Market Competition Risks

An AI pricing algorithm allegedly enabled Canadian landlords to coordinate rent increases of 7–54%. The Competition Bureau is investigating whether this constitutes price-fixing under competition law.

Retail & CommerceFinance & Banking
Active Confidence: medium Potential: Significant 1 materialized

Large Language Model Training Data and Canadian Privacy Rights

Foundation models trained on scraped Canadian data create permanent, uncorrectable records and generate false claims about real people — not currently addressed by Canadian privacy law.

TelecommunicationsPublic Services
Escalating Confidence: high Potential: Severe 7 materialized

Biometric Surveillance Technology Deployment in Canada

Multiple biometric surveillance systems have been deployed across Canada — in malls, police forces, and public venues — without prior privacy impact assessment or public disclosure. Canada has no federal legislation specifically governing biometric surveillance.

Law EnforcementRetail & Commerce
Escalating Confidence: medium Potential: Significant

CBSA Machine Learning System Scores All Border Entrants with No Independent Audit

CBSA's Traveller Compliance Indicator assigns compliance scores to all border entrants at land ports, expanding nationally by 2027, with no published Algorithmic Impact Assessment, no reported independent audit, and expert-identified bias concerns.

Public ServicesImmigration