AI in Canadian Government Automated Decision-Making
Canadian federal and provincial government agencies are deploying AI and algorithmic tools in decisions about immigration, taxes, benefits, child welfare, and law enforcement, with inconsistent governance, limited transparency, and inadequate recourse for affected individuals. The federal governance framework (the DADM) applies only to federal institutions and is inconsistently enforced; provincial deployments lack equivalent oversight.
The federal Directive on Automated Decision-Making (DADM), issued by the Treasury Board in 2019, provides a governance framework: it requires algorithmic impact assessments for automated decisions that affect the rights or interests of Canadians, establishes transparency requirements, and mandates human review mechanisms. However, compliance has been inconsistent. A 2022 Citizen Lab study documented gaps in DADM implementation across federal institutions — departments deploying AI tools without completing impact assessments, or categorizing systems in ways that minimized governance requirements.
Immigration, Refugees and Citizenship Canada (IRCC) has used data analytics to triage immigration applications since 2013, beginning with temporary resident visa backlogs, with machine learning-based triage formally deployed from 2017-2018. By 2024, IRCC's advanced analytics tools — including the "Chinook" case processing system and the "Automated Decision Assistant" — were processing millions of applications annually, sorting them into risk tiers that determine processing speed and scrutiny level. IRCC states that AI does not refuse applications, but the risk tiers materially shape outcomes: low-risk applications may receive streamlined processing (in some categories, positive decisions may be generated without officer review), while high-risk files receive additional scrutiny. Officers are not told how the tiering system works. Applicants have no way to know whether AI triage affected their case. The core concern is training data: if historical decisions contain patterns of refusal correlated with nationality, gender, age, or marital status, the AI reproduces those patterns at scale. Immigration lawyers have reported anecdotal patterns suggesting gender-based bias, including cases where single women were refused with reasons noting they were "young, single, and mobile."
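The bias-reproduction mechanism described above can be illustrated with a deliberately simplified sketch. This is a toy model on hypothetical data, not IRCC's actual system: a "model" that simply learns per-group refusal rates from historical decisions will route the historically disfavoured group into the high-scrutiny tier.

```python
# Illustrative sketch only (hypothetical data, toy model -- not IRCC's system):
# a triage model fit to historical decisions inherits any group-level
# refusal disparity present in those decisions.
from collections import defaultdict

# Synthetic "historical decisions" as (group, refused) pairs, in which
# group "B" was historically refused more often than group "A".
history = ([("A", False)] * 80 + [("A", True)] * 20
           + [("B", False)] * 50 + [("B", True)] * 50)

def fit_refusal_rates(records):
    """Learn per-group refusal rates -- a stand-in for a trained model's
    learned association between group membership and refusal."""
    totals, refusals = defaultdict(int), defaultdict(int)
    for group, refused in records:
        totals[group] += 1
        refusals[group] += int(refused)
    return {g: refusals[g] / totals[g] for g in totals}

def triage(group, rates, threshold=0.3):
    """Assign a risk tier from the learned rate: the model reproduces the
    historical pattern, so group B lands in the high-scrutiny tier."""
    return "high-scrutiny" if rates[group] > threshold else "streamlined"

rates = fit_refusal_rates(history)
print(rates)               # {'A': 0.2, 'B': 0.5}
print(triage("A", rates))  # streamlined
print(triage("B", rates))  # high-scrutiny
```

A real triage model would use many features rather than group labels directly, but the same effect arises through proxies (e.g., features correlated with nationality or marital status), which is why independent bias audits on outcomes, not just inputs, are recommended.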
Provincial and municipal government deployments have no equivalent framework. Quebec's Direction de la protection de la jeunesse used a risk assessment tool (SSP) that contributed to a child's death, a provincial deployment with no algorithmic impact assessment requirement. The Canada Revenue Agency (CRA) deployed an $18 million AI chatbot that has handled 18 million taxpayer queries; the Auditor General found it answered only 2 of 6 test questions correctly.
Government AI deployment continues to outpace governance capacity. AI now shapes decisions about who gets a visa, who gets benefits, and which children are flagged as at risk — areas where transparency, assessment, and recourse are established governance expectations.
Harms
IRCC's machine-learning triage system has processed over 7 million visa applications since 2018, sorting applicants into risk tiers that materially shape processing outcomes. Tier assignments are invisible to applicants and officers, with limited recourse against algorithmic determinations.
Citizen Lab documented gaps in DADM implementation across federal institutions — departments deploying AI tools without completing impact assessments, or categorizing systems in ways that minimized governance requirements.
Quebec's DPJ child welfare risk assessment tool contributed to a child's death, illustrating the consequences when algorithmic tools are deployed in life-safety contexts without adequate oversight.
Evidence
8 reports
- Directive on Automated Decision-Making (primary source)
Federal governance framework for automated decision-making; existing systems must comply by June 2026
- Artificial intelligence and Canada's immigration system (primary source)
Comprehensive analysis of IRCC AI use, bias risks, and anecdotal reports of gender-based refusal patterns
- Artificial Intelligence Strategy - IRCC (primary source)
IRCC's official description of AI use in immigration processing
- CRA chatbot accuracy failures documented by the Auditor General
- Details of IRCC's AI triage system, officer opacity, and positive decisions without officer review
- Documentation of DADM compliance gaps and AI deployment patterns across the federal government
- Anecdotal reports of single women refused with "young, single, and mobile" reasoning
- Legal analysis of AI use in Canadian immigration processing; context on algorithmic decision-making in immigration
Record details
Responses & Outcomes
- Treasury Board of Canada Secretariat: issued the Directive on Automated Decision-Making, establishing algorithmic impact assessment requirements for federal institutions
- IRCC: published an Artificial Intelligence Strategy describing its use of AI in immigration processing and committing to responsible AI principles
Policy Recommendations
- Consistent enforcement of the federal Directive on Automated Decision-Making
- Provincial equivalents to the DADM for provincial and municipal AI deployments
- Mandatory algorithmic impact assessments before deploying AI in consequential government decisions
- Independent bias audits of IRCC's AI triage systems for demographic bias, with results published
- Require IRCC to disclose to applicants when AI triage was used and to provide a meaningful explanation of the risk tier assigned
- Require IRCC to demonstrate that its AI triage systems do not reproduce historical patterns of discriminatory refusal
Editorial Assessment
Canadian federal and provincial government agencies deploy AI in decisions about immigration, tax, benefits, and child welfare. The federal Directive on Automated Decision-Making provides a governance framework but applies only to federal institutions and is inconsistently enforced. Provincial and municipal deployments operate without equivalent oversight. IRCC's AI triage system processes millions of applications annually. Affected individuals — particularly non-citizens — may have limited capacity to identify or challenge algorithmic influence on their outcomes.
Entities Involved
AI Systems Involved
CRA's AI chatbot (Charlie) processed 18 million taxpayer queries; the Auditor General found it answered only 2 of 6 test questions correctly
Related Records
- Auditor General Found CRA's $18-Million AI Chatbot Gave Incorrect Tax Answers
- AI Confabulation in Consequential Canadian Contexts
- IRCC Machine-Learning Triage Sorts Millions of Visa Applications Using Models Trained on Historical Decisions
- AI Governance Gap in Canada
- Canada's Dependency on Foreign AI Infrastructure
- AI-Powered Hiring and Recruitment Systems Producing Discriminatory Outcomes
- CBSA Machine Learning System Scores All Border Entrants with No Independent Audit
- Algorithmic Harms to Indigenous Peoples in Canada: Documented Disparities Across Justice, Child Welfare, and Policing
- AI Systems as Attack Surfaces
Taxonomy
Changelog
| Version | Date | Change |
|---|---|---|
| v1 | Mar 8, 2026 | Initial publication |
| v2 | Mar 9, 2026 | Absorbed ircc-immigration-ai-triage-bias hazard — added IRCC entity, Chinook/ADA details, immigration-specific sources (IBA, Heron Law, IRCC AI Strategy), bias evidence, affected populations, June 2026 DADM deadline |
| v3 | Mar 10, 2026 | Added cross-reference to hazard/ircc-algorithmic-visa-triage. Deduplicated inline IRCC narrative — full case detail is now in the dedicated IRCC hazard record. Added link to IRCC hazard. |