Routine AI use is associated with measurable declines in critical thinking, professional competence, and error detection — effects that may undermine the human oversight AI governance depends on.
Healthcare · Education · Public Services
Escalating · Confidence: medium · Potential: Significant
AI companion apps have reached tens of millions of users, with emerging evidence linking heavy use to emotional dependence, increased loneliness, and reduced human social interaction — particularly among vulnerable populations.
AI systems deployed in Canadian government and critical infrastructure are targets for adversarial attacks — prompt injection, data poisoning, model tampering, supply chain compromise — that can manipulate their behaviour and compromise the decisions they support.
Public Services · Critical Infrastructure · Defence & Security · Immigration
AI systems used by Canadian children at scale — collecting personal information, recommending content, engaging in open-ended conversation — operate without child-specific governance requirements. The Privacy Commissioner found TikTok collected personal information from users including children under 13 and used facial features and voiceprints for age estimation. Eight of ten major chatbots were typically willing to assist with prompts related to planning school shootings and other violence (CCDH, March 2026). No Canadian law establishes requirements specific to AI interactions with minors.
AI systems are deployed in Canadian educational institutions for proctoring, predictive analytics, plagiarism detection, and assessment. Provincial privacy investigations found AI proctoring tools collecting biometric data under consent practices that did not meet privacy requirements (Ontario IPC enforcement order against McMaster/Respondus), predictive algorithms generating new personal information about children without parental notification (Quebec CAI), and facial detection with a 57% non-recognition rate for Black faces (UBC assessment of Proctorio). No pan-Canadian governance framework addresses AI in education.
AI systems are in clinical use in Canadian healthcare for virtual care, stroke detection, and clinical documentation. Alberta's privacy commissioner found a virtual care platform used facial recognition without adequate consent and shared health information internationally without patient disclosure (31 findings). An AI scribe bot autonomously recorded and disseminated patient information at an Ontario hospital. Canada's national HTA body found no evidence meeting its review criteria on patient outcomes for a licensed Class III AI stroke detection device. Health Canada's regulatory framework exempts AI clinical decision support software from medical device oversight.
Healthcare · Public Services
Escalating · Confidence: high · Potential: Significant
Canadian employers deploy AI-powered monitoring tools tracking location, activity, keystrokes, and in some cases biometrics and emotion. Federal privacy investigations found specific deployments collected information the Commissioner determined exceeded what was necessary. All Canadian privacy commissioners jointly stated workplace privacy laws are "out of date or absent altogether." Ontario requires electronic monitoring policies but no Canadian jurisdiction regulates the scope or methods of AI-powered workplace monitoring itself.
AI systems are being applied to Indigenous peoples in Canada in policing and data collection without cross-cultural validation or First Nations data governance. The Citizen Lab documented AI-driven predictive policing tools creating discriminatory feedback loops through historical data. The International Association of Privacy Professionals has reported that data from remote Indigenous communities is routinely absorbed to train AI systems without community consent. Courts and human rights bodies have found that rule-based predecessors to these AI tools produced discriminatory outcomes for Indigenous peoples — establishing legal precedents directly applicable to AI systems entering the same domains.
Canada's only AI bill (AIDA) lapsed when Parliament was prorogued in January 2025. No replacement has been tabled. The government has adopted a 'light, tight, right' approach. 85% of Canadians support AI regulation; 92% are unaware of any existing AI laws.
Public Services · Defence & Security · Law Enforcement · Finance & Banking · Healthcare · Education · Employment
Multiple frontier AI models have demonstrated deceptive and self-preserving behaviour in controlled evaluations. Researchers at Mila co-authored foundational research documenting these behaviours. These models are available to millions of Canadians. No Canadian law specifically addresses evaluation or disclosure requirements for AI systems exhibiting deceptive behaviour.
Since 2018, IRCC has used IBM SPSS Modeler to sort visa applications into three processing tiers based on patterns in historical decisions. Tier assignment substantially affects outcomes — Tier 1 gets near-automatic approval while Tier 2/3 face much higher refusal rates. The system operated exclusively on China and India applications for nearly four years. Over 7 million applications have been assessed. Applicants are not told their tier.
Canada's signals intelligence agency assesses AI is 'almost certainly' enhancing cyberattacks against Canadian targets. State actors and criminal groups are operationally using AI in cyber operations. Canadian critical infrastructure has already been breached by hacktivists reaching safety-critical industrial control systems.
Frontier AI models are demonstrating capabilities relevant to biological and chemical weapon development that multiple developers cannot confidently exclude as providing meaningful uplift. Canada hosts BSL-4 infrastructure with a proven insider-threat history, chairs the international assessment identifying this risk, and signed commitments recognizing it, yet it has no dedicated AI-biosecurity assessment or evaluation mandate.
Healthcare · Defence & Security
Active · Confidence: medium · Potential: Significant · 2 materialized
StatCan data shows age-stratified divergence in employment in AI-exposed occupations since late 2022, though overall employment in these sectors has not declined. Several major Canadian employers have cited AI in workforce reductions. Canada lacks an AI-specific labour transition framework.
Multiple LLMs — including ChatGPT, Claude, and Llama — systematically recommend lower salaries for women, minorities, and refugees; in one scenario, OpenAI's o3 model recommended a salary $120K lower for a woman than for an identical male profile. The OHRC formally cited these findings in Canada's AI strategy consultations.
Employment · Public Services
Escalating · Confidence: high · Potential: Significant
Frontier AI systems are trained on copyrighted Canadian works without consent or compensation. Canada's Copyright Act has no AI training exception, but no court has ruled on the question. Creative industries contributing $55.5B to GDP and 600,000+ jobs face displacement as AI-generated alternatives proliferate. The government has launched consultations but no legislation has been introduced.
Media & Entertainment · Employment
Escalating · Confidence: high · Potential: Significant
AI is driving unprecedented data centre expansion in Canada. Hydro-Québec imposed a moratorium on new large grid connections after requests far exceeded available capacity. Google and Microsoft reported 20–34% increases in water consumption attributed to AI. No Canadian jurisdiction has an integrated policy for AI infrastructure's environmental impact, creating tension with Canada's 40–45% emissions reduction target for 2030.
Environment · Critical Infrastructure
Escalating · Confidence: high · Potential: Significant
AI agents are being deployed at scale in Canada — TD Bank (25,000+ Copilot users), Scotiabank, CGI, Telus, federal government (Coveo MOU) — while safety research documents systemic risks. The 2025 AI Agent Index found 25/30 deployed agents disclose no safety results. KPMG Canada: 27% of businesses have deployed agentic AI. The first large-scale AI-orchestrated cyberattack occurred in November 2025. Canada has no governance framework for agentic AI.
No Canadian organization has trained a frontier AI model. The federal government depends extensively on US cloud and AI platforms. The AI compute supply chain (NVIDIA, TSMC) is entirely foreign-controlled. The US CLOUD Act creates jurisdictional conflict with Canadian privacy law. Canada's $2.4B AI investment focuses on research, not sovereign infrastructure. No contingency plan exists for disruption of AI services.
Canadian employers increasingly use AI for hiring — automated resume screening, video interview analysis, candidate matching — with 12.2% of businesses using AI as of Q2 2025. A UW study found LLM resume screeners favored white-associated names 85% of the time and never favored Black male names. Ontario Bill 149 (effective Jan 2026) is the first Canadian law requiring AI disclosure in job postings. The OHRC released Canada's first human rights AI impact assessment tool (Nov 2024). No Canadian jurisdiction requires bias auditing of AI hiring tools.
Employment · Public Services
Escalating · Confidence: high · Potential: Significant · 5 materialized
AI systems generate false information in tax advice, court proceedings, and health queries; Canadians who act on AI-generated health advice are five times more likely to experience harm.
Public Services · Retail & Commerce · Justice · Healthcare
Escalating · Confidence: medium · Potential: Severe · 6 materialized
AI-generated disinformation appeared at scale in the 2025 federal election. Canadian electoral law has no framework for synthetic media, and detection capacity is minimal.
Elections & Info Integrity
Escalating · Confidence: high · Potential: Severe · 5 materialized
AI voice cloning and deepfake video have defrauded Canadians of millions. Convincing impersonation now requires only consumer-grade tools, and existing protections do not address these capabilities.
Finance & Banking · Retail & Commerce
Escalating · Confidence: medium · Potential: Severe · 2 materialized
AI-generated child sexual abuse material is outpacing detection systems and creating legal ambiguity, with direct implications for Canadian law enforcement and child protection.
Justice · Public Services
Escalating · Confidence: high · Potential: Severe · 1 materialized
AI platforms have generated millions of non-consensual sexualized images — including of minors. Canada's legal framework does not specifically address AI-generated intimate imagery.
Canadian federal and provincial government agencies use AI in immigration, tax, benefits, and child welfare decisions. The federal governance framework (DADM) applies only to federal institutions and is inconsistently enforced; provincial deployments lack equivalent oversight.
Public Services · Immigration · Social Services
Escalating · Confidence: medium · Potential: Significant · 1 materialized
AI systems show documented performance disparities affecting francophone and Indigenous language communities — higher error rates in French content moderation, unequal outcomes in bilingual government systems, and lower-quality service in French.
Media & Entertainment · Immigration · Public Services
Escalating · Confidence: high · Potential: Critical · 2 materialized
AI chatbots are causing documented psychological harm — reinforcing delusions, providing self-harm methods — with no duty of care or safety monitoring in Canadian law.
Healthcare · Social Services
Active · Confidence: high · Potential: Critical · 1 materialized
OpenAI's safety systems detected violent content from a ChatGPT user who later carried out a mass shooting. Canadian law does not require AI companies to report safety-relevant findings to authorities.
Public Services · Defence & Security
Active · Confidence: medium · Potential: Significant · 1 materialized
An AI pricing algorithm allegedly enabled Canadian landlords to coordinate rent increases of 7–54%. The Competition Bureau is investigating whether this constitutes price-fixing under competition law.
Retail & Commerce · Finance & Banking
Active · Confidence: medium · Potential: Significant · 1 materialized
Foundation models trained on scraped Canadian data create permanent, uncorrectable records and generate false claims about real people — not currently addressed by Canadian privacy law.
Montreal police acquired AI video surveillance with built-in ethnicity and emotion detection — capabilities that can be activated by configuration — without public disclosure or an impact assessment.
Law Enforcement
Escalating · Confidence: high · Potential: Severe · 7 materialized
Multiple biometric surveillance systems have been deployed across Canada — in malls, police forces, and public venues — without prior privacy impact assessment or public disclosure. Canada has no federal legislation specifically governing biometric surveillance.
Canada's policy commits to appropriate human involvement in lethal force, but allied militaries are deploying AI targeting systems (Maven, Lavender) that compress decision cycles from weeks to minutes. No framework exists for CAF to manage this gap in coalition operations.
Defence & Security
Escalating · Confidence: medium · Potential: Significant
CBSA's Traveller Compliance Indicator assigns compliance scores to all border entrants at land ports, expanding nationally by 2027, with no published Algorithmic Impact Assessment, no reported independent audit, and expert-identified bias concerns.
CSE assessed in its 2025 democratic threat update that the PRC likely has the ability and intent to use machine learning to produce detailed intelligence profiles of potential targets connected to democratic processes, including voters, politicians, media, public servants, and activists.