Pilot phase: CAIM is under construction. Records are provisional, based on public sources, and have not yet been peer-reviewed. Feedback welcome.
Active · Significant · Confidence: medium

Canadian federal and provincial government agencies use AI in immigration, tax, benefits, and child welfare decisions. The federal governance framework (DADM) applies only to federal institutions and is inconsistently enforced; provincial deployments lack equivalent oversight.

Identified: January 1, 2018 · Last assessed: March 8, 2026

Canadian federal and provincial government agencies are deploying AI and algorithmic tools in decisions about immigration, taxes, benefits, child welfare, and law enforcement — with inconsistent governance, limited transparency, and inadequate recourse for affected individuals.

The federal Directive on Automated Decision-Making (DADM), issued by the Treasury Board in 2019, provides a governance framework: it requires algorithmic impact assessments for automated decisions that affect the rights or interests of Canadians, establishes transparency requirements, and mandates human review mechanisms. However, compliance has been inconsistent. A 2022 Citizen Lab study documented gaps in DADM implementation across federal institutions — departments deploying AI tools without completing impact assessments, or categorizing systems in ways that minimized governance requirements.

Immigration, Refugees and Citizenship Canada (IRCC) has used data analytics to triage immigration applications since 2013, beginning with temporary resident visa backlogs; machine-learning-based triage was formally deployed from 2017–2018. By 2024, IRCC's advanced analytics tools — including the "Chinook" case processing system and the "Automated Decision Assistant" — were processing millions of applications annually, sorting them into risk tiers that determine processing speed and scrutiny level. IRCC states that AI does not refuse applications, but the risk tiers materially shape outcomes: low-risk applications may receive streamlined processing (in some categories, positive decisions may be generated without officer review), while high-risk files receive additional scrutiny. Officers are not told how the tiering system works, and applicants have no way to know whether AI triage affected their case. The core concern is training data: if historical decisions contain patterns of refusal correlated with nationality, gender, age, or marital status, the AI reproduces those patterns at scale. Immigration lawyers have reported anecdotal patterns suggesting gender-based bias, including cases where single women were refused with reasons noting they were "young, single, and mobile."

Provincial and municipal government deployments have no equivalent framework. Quebec's Direction de la protection de la jeunesse used a risk assessment tool (SSP) that contributed to a child's death — a provincial deployment with no algorithmic impact assessment requirement. The CRA deployed an $18 million AI chatbot that the Auditor General found answered only 2 of 6 test questions correctly, while processing 18 million taxpayer queries.

Government AI deployment continues to outpace governance capacity. AI now shapes decisions about who gets a visa, who gets benefits, and which children are flagged as at risk — areas where transparency, assessment, and recourse are established governance expectations.

Harms

IRCC's machine-learning triage system has processed over 7 million visa applications since 2018, sorting applicants into risk tiers that materially shape processing outcomes. Tier assignments are invisible to applicants and officers, with limited recourse against algorithmic determinations.

Discrimination & Rights · Autonomy Undermined · Significant · Population

Citizen Lab documented gaps in DADM implementation across federal institutions — departments deploying AI tools without completing impact assessments, or categorizing systems in ways that minimized governance requirements.

Autonomy Undermined · Moderate · Sector

Quebec's DPJ child welfare risk assessment tool contributed to a child's death, illustrating the consequences when algorithmic tools are deployed in life-safety contexts without adequate oversight.

Safety Incident · Critical · Individual

Evidence

8 reports

  1. Official — Treasury Board of Canada Secretariat (Apr 1, 2023)

    Federal governance framework for automated decision-making; existing systems must comply by June 2026

  2. Academic — International Bar Association (Jan 1, 2024)

    Comprehensive analysis of IRCC AI use, bias risks, anecdotal reports of gender-based refusal patterns

  3. Official — Immigration, Refugees and Citizenship Canada (Jan 1, 2024)

    IRCC's official description of AI use in immigration processing

  4. Regulatory — Office of the Auditor General of Canada (Mar 19, 2024)

    CRA chatbot accuracy failures documented by Auditor General

  5. Media — Heron Law Offices (Jun 1, 2024)

    Details of IRCC's AI triage system, officer opacity, and positive decisions without officer review

  6. Academic — Citizen Lab (University of Toronto) (Oct 1, 2022)

    Documentation of DADM compliance gaps and AI deployment patterns across federal government

  7. Media — Chaudhary Law Office (Jun 1, 2024)

    Anecdotal reports of single women refused with 'young, single, and mobile' reasoning

  8. Media — Green and Spiegel LLP (May 27, 2025)

    Legal analysis of AI use in Canadian immigration processing; context on algorithmic decision-making in immigration

Record details

Responses & Outcomes

Treasury Board of Canada Secretariat · legislation · Active

Issued Directive on Automated Decision-Making establishing algorithmic impact assessment requirements for federal institutions

Immigration, Refugees and Citizenship Canada · institutional action · Active

Published Artificial Intelligence Strategy describing use of AI in immigration processing and committing to responsible AI principles

Policy Recommendations · assessed

Consistent enforcement of the federal Directive on Automated Decision-Making

Citizen Lab, University of Toronto (Oct 1, 2022)

Provincial equivalents to the DADM for provincial and municipal AI deployments

Citizen Lab, University of Toronto (Oct 1, 2022)

Mandatory algorithmic impact assessment before deploying AI in consequential government decisions

Treasury Board of Canada Secretariat (Apr 1, 2023)

Independent bias audit of IRCC's AI triage systems for demographic bias, with results published

International Bar Association (Jan 1, 2024)

Require IRCC to disclose to applicants when AI triage was used and provide meaningful explanation of risk tier assigned

International Bar Association (Jan 1, 2024)

Require IRCC to demonstrate that AI triage systems do not reproduce historical patterns of discriminatory refusal

International Bar Association (Jan 1, 2024)

Editorial Assessment · assessed

Canadian federal and provincial government agencies deploy AI in decisions about immigration, tax, benefits, and child welfare. The federal Directive on Automated Decision-Making provides a governance framework but applies only to federal institutions and is inconsistently enforced. Provincial and municipal deployments operate without equivalent oversight. IRCC's AI triage system processes millions of applications annually. Affected individuals — particularly non-citizens — may have limited capacity to identify or challenge algorithmic influence on their outcomes.

Entities Involved

AI Systems Involved

CRA AI Chatbot

CRA's AI chatbot (Charlie) processed 18 million taxpayer queries; the Auditor General found it answered only 2 of 6 test questions correctly

Related Records

Taxonomy · assessed

Domain
Public Services · Immigration · Social Services
Harm type
Discrimination & Rights · Service Disruption · Privacy & Data Exposure
AI pathway
Deployment Context · Oversight Absent · Monitoring Absent · Training Data Origin
Lifecycle phase
Training · Deployment · Monitoring · Procurement

Changelog

Version · Date · Change
v1 · Mar 8, 2026 · Initial publication
v2 · Mar 9, 2026 · Absorbed ircc-immigration-ai-triage-bias hazard — added IRCC entity, Chinook/ADA details, immigration-specific sources (IBA, Heron Law, IRCC AI Strategy), bias evidence, affected populations, June 2026 DADM deadline
v2 · Mar 10, 2026 · Added cross-reference to hazard/ircc-algorithmic-visa-triage. Deduplicated inline IRCC narrative — full case detail is now in the dedicated IRCC hazard record. Added link to IRCC hazard.

Version 2