AI in Automated Government Decision-Making in Canada
Canadian federal and provincial government agencies are deploying AI and algorithmic tools in decisions about immigration, taxes, benefits, child welfare, and law enforcement, with inconsistent governance, limited transparency, and inadequate recourse for affected individuals. The federal governance framework does not cover provincial deployments.
The federal Directive on Automated Decision-Making (DADM), issued by the Treasury Board in 2019, provides a governance framework: it requires algorithmic impact assessments for automated decisions that affect the rights or interests of Canadians, establishes transparency requirements, and mandates human review mechanisms. However, compliance has been inconsistent. A 2022 Citizen Lab study documented gaps in DADM implementation across federal institutions — departments deploying AI tools without completing impact assessments, or categorizing systems in ways that minimized governance requirements.
Immigration, Refugees and Citizenship Canada (IRCC) has used data analytics to triage immigration applications since 2013, beginning with temporary resident visa backlogs, with machine learning-based triage formally deployed from 2017-2018. By 2024, IRCC's advanced analytics tools — including the "Chinook" case processing system and the "Automated Decision Assistant" — were processing millions of applications annually, sorting them into risk tiers that determine processing speed and scrutiny level. IRCC states that AI does not refuse applications, but the risk tiers materially shape outcomes: low-risk applications may receive streamlined processing (in some categories, positive decisions may be generated without officer review), while high-risk files receive additional scrutiny. Officers are not told how the tiering system works. Applicants have no way to know whether AI triage affected their case. The core concern is training data: if historical decisions contain patterns of refusal correlated with nationality, gender, age, or marital status, the AI reproduces those patterns at scale. Immigration lawyers have reported anecdotal patterns suggesting gender-based bias, including cases where single women were refused with reasons noting they were "young, single, and mobile."
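The tiering mechanism described above can be illustrated with a minimal sketch. Everything here is a hypothetical assumption for illustration: the function names, thresholds, and tier labels are invented, since IRCC has not published how its triage models map scores to tiers.

```python
# Hypothetical sketch of a risk-tier triage step. All names, thresholds,
# and tier labels are illustrative assumptions, NOT IRCC's actual system
# (which is not publicly documented).

from dataclasses import dataclass


@dataclass
class Application:
    app_id: str
    model_score: float  # risk score from a model trained on historical decisions


def assign_tier(app: Application) -> str:
    """Map a model risk score to a processing tier.

    Low-risk files may be streamlined (in some categories, possibly
    approved without officer review); high-risk files receive
    additional scrutiny.
    """
    if app.model_score < 0.2:
        return "tier-1-streamlined"
    elif app.model_score < 0.6:
        return "tier-2-standard"
    return "tier-3-enhanced-scrutiny"


# The governance concern: if the training data encoded refusal patterns
# correlated with nationality, gender, or marital status, those patterns
# reappear here as systematically higher scores for the same groups.
for app in [Application("A-001", 0.05), Application("A-002", 0.85)]:
    print(app.app_id, assign_tier(app))
```

The point of the sketch is that the thresholds themselves look neutral; any bias lives upstream, in the score produced by a model trained on historical decisions, which is why an audit of the model and its training data matters more than an audit of the tiering logic.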
Provincial and municipal government deployments have no equivalent framework. Quebec's Direction de la protection de la jeunesse used a risk assessment tool (SSP) that contributed to a child's death — a provincial deployment with no algorithmic impact assessment requirement. The CRA deployed an $18 million AI chatbot that the Auditor General found answered only 2 of 6 test questions correctly, while processing 18 million taxpayer queries.
Government AI deployment continues to outpace governance capacity. AI now shapes decisions about who gets a visa, who gets benefits, and which children are flagged as at risk — areas where transparency, assessment, and recourse are established governance expectations.
Harms
IRCC's machine-learning triage system has processed more than 7 million visa applications since 2018, sorting applicants into risk tiers that materially influence processing outcomes. Tier assignments are invisible to both applicants and officers, with limited recourse against algorithmic determinations.
Citizen Lab documented gaps in DADM implementation across federal institutions: departments deploying AI tools without completing impact assessments, or categorizing systems in ways that minimized governance requirements.
The child-welfare risk assessment tool used by Quebec's Direction de la protection de la jeunesse contributed to a child's death, illustrating the consequences of deploying algorithmic tools in safety-critical contexts without adequate oversight.
Evidence
8 reports
- Directive on Automated Decision-Making (primary source): Federal governance framework for automated decision-making; existing systems must comply by June 2026
- Artificial intelligence and Canada's immigration system (primary source): Comprehensive analysis of IRCC AI use, bias risks, and anecdotal reports of gender-based refusal patterns
- Artificial Intelligence Strategy - IRCC (primary source): IRCC's official description of AI use in immigration processing
- CRA chatbot accuracy failures documented by the Auditor General
- IRCC Lifts the Lid (a Bit) on their AI-based TRV Triaging Process (primary source): Details of IRCC's AI triage system, officer opacity, and positive decisions without officer review
- Documentation of DADM compliance gaps and AI deployment patterns across the federal government
- Anecdotal reports of single women refused with "young, single, and mobile" reasoning
- Legal analysis of AI use in Canadian immigration processing; context on algorithmic decision-making in immigration
Record details
Responses and outcomes
Issued the Directive on Automated Decision-Making, establishing algorithmic impact assessment requirements for federal institutions
Published the Artificial Intelligence Strategy describing AI use in immigration processing and committing to responsible AI principles
Policy recommendations (assessed)
- Consistent enforcement of the federal Directive on Automated Decision-Making (Citizen Lab, University of Toronto, Oct. 1, 2022)
- Provincial equivalents to the DADM for provincial and municipal AI deployments (Citizen Lab, University of Toronto, Oct. 1, 2022)
- Mandatory algorithmic impact assessment before deploying AI in consequential government decisions (Treasury Board of Canada Secretariat, Apr. 1, 2023)
- Independent bias audit of IRCC's AI triage systems for demographic bias, with results published (International Bar Association, Jan. 1, 2024)
- Require IRCC to disclose to applicants when AI triage was used and provide a meaningful explanation of the assigned risk tier (International Bar Association, Jan. 1, 2024)
- Require IRCC to demonstrate that AI triage systems do not reproduce historical patterns of discriminatory refusal (International Bar Association, Jan. 1, 2024)
Editorial assessment (assessed)
Canadian government agencies are deploying AI in decisions about immigration, taxes, benefits, and child welfare, but the governance framework (the Directive) is applied inconsistently, applies only to federal institutions, and does not cover provincial or municipal deployments. IRCC's AI triage affects millions of applications annually, submitted by people who lack Canadian legal status with which to challenge the process.
Entities involved
AI systems involved
CRA's AI chatbot (Charlie) processed 18 million taxpayer queries; the Auditor General found it answered only 2 of 6 test questions correctly
Related records
- Auditor General Found CRA's $18-Million AI Chatbot Gave Incorrect Tax Answers (related)
- AI Confabulation in Consequential Canadian Contexts (related)
- IRCC Machine-Learning Triage Sorts Millions of Visa Applications Using Models Trained on Historical Decisions (related)
- AI Governance Gap in Canada (related)
- Canada's Dependency on Foreign AI Infrastructure (related)
- AI-Powered Hiring and Recruitment Systems Producing Discriminatory Outcomes (related)
- CBSA Machine Learning System Scores All Border Entrants with No Independent Audit (related)
- Algorithmic Harms to Indigenous Peoples in Canada: Documented Disparities Across Justice, Child Welfare, and Policing (related)
- AI Systems as Attack Surfaces (related)
Taxonomy (assessed)
Change history
| Version | Date | Modification |
|---|---|---|
| v1 | Mar. 8, 2026 | Initial publication |
| v2 | Mar. 9, 2026 | Absorbed ircc-immigration-ai-triage-bias hazard: added IRCC entity, Chinook/ADA details, immigration-specific sources (IBA, Heron Law, IRCC AI Strategy), bias evidence, affected populations, June 2026 DADM deadline |
| v2 | Mar. 10, 2026 | Added cross-reference to hazard/ircc-algorithmic-visa-triage. Deduplicated inline IRCC narrative; full case detail is now in the dedicated IRCC hazard record. Added link to IRCC hazard. |