Pilot phase
Active Confidence: medium Potential severity: Important Version 2

Canadian government agencies are deploying AI in decisions about immigration, taxes, benefits, and child welfare, but the governance framework (the Directive) is applied inconsistently: it covers only federal institutions and does not extend to provincial or municipal deployments. IRCC's AI triage system is one of the highest-stakes automated decision-making systems in Canada, affecting millions of applications annually.

Identified: January 1, 2018 Last assessed: March 8, 2026

Description

Canadian federal and provincial government agencies are deploying AI and algorithmic tools in decisions about immigration, taxes, benefits, child welfare, and law enforcement — with inconsistent governance, limited transparency, and inadequate recourse for affected individuals.

The federal Directive on Automated Decision-Making (DADM), issued by the Treasury Board in 2019, provides a governance framework: it requires algorithmic impact assessments for automated decisions that affect the rights or interests of Canadians, establishes transparency requirements, and mandates human review mechanisms. However, compliance has been inconsistent. A 2022 Citizen Lab study documented gaps in DADM implementation across federal institutions — departments deploying AI tools without completing impact assessments, or categorizing systems in ways that minimized governance requirements.

Immigration, Refugees and Citizenship Canada (IRCC) has used machine learning systems to triage immigration applications since 2013, beginning with temporary resident visa backlogs. By 2024, IRCC’s advanced analytics tools — including the “Chinook” case processing system and the “Automated Decision Assistant” — were processing millions of applications annually, sorting them into risk tiers that determine processing speed and scrutiny level. IRCC states that AI does not refuse applications, but the risk tiers materially shape outcomes: low-risk applications may receive streamlined processing (in some categories, positive decisions may be generated without officer review), while high-risk files receive additional scrutiny. Officers are not told how the tiering system works. Applicants have no way to know whether AI triage affected their case. The core concern is training data: if historical decisions contain patterns of refusal correlated with nationality, gender, age, or marital status, the AI reproduces those patterns at scale. Immigration lawyers have reported anecdotal patterns suggesting gender-based bias, including cases where single women were refused with reasons noting they were “young, single, and mobile.”
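
The training-data concern above can be made concrete with a small sketch. The code below is purely illustrative and uses invented data: nothing in it reflects IRCC's actual systems, features, tiers, or thresholds. It shows the general mechanism by which a naive triage rule learned from historical decisions reproduces group-correlated refusal patterns at scale.

```python
# Illustrative only: synthetic data showing how a naive triage rule trained on
# historical decisions reproduces group-correlated refusal patterns.
# Groups, thresholds, and data are all hypothetical.
from collections import defaultdict

# Hypothetical historical decisions: (applicant_group, was_refused)
history = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", False), ("group_b", False), ("group_b", True), ("group_b", False),
]

def historical_refusal_rate(records):
    """Per-group refusal rate learned from past decisions."""
    counts = defaultdict(lambda: [0, 0])  # group -> [refusals, total]
    for group, refused in records:
        counts[group][0] += int(refused)
        counts[group][1] += 1
    return {g: r / n for g, (r, n) in counts.items()}

def triage(group, rates, high_risk_threshold=0.5):
    """Naive tiering: groups with high past refusal rates get 'high' risk."""
    return "high" if rates.get(group, 0.0) >= high_risk_threshold else "low"

rates = historical_refusal_rate(history)
# group_a was refused 75% of the time historically, so every new group_a
# application is routed to the high-scrutiny tier regardless of its merits.
print(triage("group_a", rates))  # high
print(triage("group_b", rates))  # low
```

The point of the sketch is that the rule never sees an individual application's merits: the historical correlation alone determines the tier, which is exactly the feedback loop the bias concern describes.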

Provincial and municipal government deployments have no equivalent framework. Quebec’s Direction de la protection de la jeunesse used a risk assessment tool (SSP) that contributed to a child’s death — a provincial deployment with no algorithmic impact assessment requirement. The CRA deployed an $18 million AI chatbot that the Auditor General found answered only 2 of 6 test questions correctly, while processing 18 million taxpayer queries.

The structural condition is the gap between governance framework design and governance framework implementation. Canada was an early adopter of algorithmic impact assessment requirements at the federal level. But the directive’s inconsistent enforcement, combined with the absence of provincial and municipal equivalents, means that government AI deployment continues to outpace governance capacity. Existing systems developed before June 2025 have until June 2026 to comply with updated DADM requirements — a deadline that may reveal the extent of non-compliance. When AI shapes decisions about fundamental rights — who gets a visa, who gets benefits, which children are flagged as at risk — without adequate transparency, assessment, or recourse, the accountability infrastructure that democratic governance depends on is eroded.

Risk pathway

Canadian federal and provincial government agencies are deploying AI and algorithmic tools in consequential decision-making contexts (immigration processing, benefits eligibility, risk assessment, child welfare) without adequate transparency, algorithmic impact assessment, or meaningful recourse for affected individuals. IRCC has used machine learning triage systems since 2013 to sort immigration applications into risk tiers, trained on historical decision data that may encode discriminatory patterns. The Directive on Automated Decision-Making requires algorithmic impact assessments, but compliance is inconsistent and the directive applies only to federal institutions.

Assessment history

Active Confidence: medium Important

The federal Directive exists but compliance with it is documented as inconsistent. The CRA deployed a chatbot with documented failures. IRCC has used AI/ML for immigration triage since 2013, processing millions of applications; multiple sources document training on historical data with potential discriminatory patterns, system opacity, and anecdotal reports of bias. Existing systems have until June 2026 to comply.

Initial assessment. Status active — governance framework exists at federal level but implementation is inconsistent, and provincial/municipal gaps are established. Includes absorbed IRCC immigration AI triage evidence. IRCC compliance deadline (June 2026) may trigger status change.

Triggers

  • Cost and efficiency pressure driving AI adoption in government services
  • Growing processing volumes making meaningful human review structurally difficult
  • AI companies marketing government solutions without safety evaluation requirements
  • Provincial and municipal adoption without DADM-equivalent frameworks
  • Rising immigration application volumes increasing reliance on automated triage
  • Training on historical immigration decision data that encodes past discriminatory patterns
  • Officer deference to AI risk tier assignments (automation bias)
  • June 2026 DADM compliance deadline approaching for existing systems

Mitigating factors

  • DADM providing a governance framework at the federal level
  • Auditor General scrutiny of government AI deployments
  • Citizen Lab and academic research documenting compliance gaps
  • Parliamentary interest in government AI use
  • IRCC policy that AI does not make negative decisions (officer review required for refusals)
  • Treasury Board requirement for compliance of existing systems by June 2026
  • Growing scrutiny from legal community and immigration law researchers

Risk controls

  • Consistent enforcement of the federal Directive on Automated Decision-Making
  • Provincial equivalents to the DADM for provincial and municipal AI deployments
  • Mandatory algorithmic impact assessment before deploying AI in consequential government decisions
  • Transparency requirements including public disclosure of AI systems used in government decision-making
  • Meaningful recourse mechanisms for individuals affected by algorithmic government decisions
  • Auditing and revision requirements for AI tools in government decision-making
  • Independent bias audit of IRCC's AI triage systems for demographic bias, with results published
  • Require IRCC to disclose to applicants when AI triage was used and provide meaningful explanation of risk tier assigned
  • Ensure AI triage systems are tested for bias across protected grounds before deployment and at regular intervals
  • Require IRCC to demonstrate that AI triage systems do not reproduce historical patterns of discriminatory refusal
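
A minimal sketch of what one step of the bias testing listed above could look like. It uses the approval-rate ratio ("four-fifths rule" heuristic, borrowed from US employment law) as one common screening metric; this is an assumption for illustration only — the DADM does not prescribe a specific metric, all group names and data are invented, and a real audit across protected grounds would examine far more than a single ratio.

```python
# Hypothetical bias screen: flag groups whose approval rate falls below a
# threshold fraction of the best-off group's rate. A screening signal only,
# not proof of discrimination; metric choice and threshold are assumptions.

def approval_rates(decisions):
    """decisions: list of (group, approved: bool) -> per-group approval rate."""
    totals, approvals = {}, {}
    for group, approved in decisions:
        totals[group] = totals.get(group, 0) + 1
        approvals[group] = approvals.get(group, 0) + int(approved)
    return {g: approvals[g] / totals[g] for g in totals}

def disparate_impact_flags(decisions, threshold=0.8):
    """Flag each group whose approval rate is below `threshold` times the
    highest group's rate (the classic four-fifths heuristic)."""
    rates = approval_rates(decisions)
    best = max(rates.values())
    return {g: rate / best < threshold for g, rate in rates.items()}

# Invented audit sample: group "x" approved 90%, group "y" approved 60%.
sample = ([("x", True)] * 90 + [("x", False)] * 10
          + [("y", True)] * 60 + [("y", False)] * 40)
print(disparate_impact_flags(sample))  # {'x': False, 'y': True}
```

Here group "y"'s rate (0.60) is only two-thirds of group "x"'s (0.90), below the 0.8 threshold, so it is flagged for closer review — the kind of signal an independent audit of a triage system would publish and investigate.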

Affected populations

  • Immigration applicants subject to algorithmic processing (millions annually)
  • Temporary resident visa applicants from countries with higher historical refusal rates
  • Single women immigration applicants subject to potential gender-correlated bias
  • Applicants from Global South countries
  • Refugee and asylum claimants processed with AI assistance
  • Benefits claimants whose eligibility is determined with AI assistance
  • Individuals subject to government risk assessments
  • All Canadians interacting with government AI systems

Entities involved

Issued the Directive on Automated Decision-Making; responsible for the federal AI governance framework

Deployed an $18 million AI chatbot, processing 18 million queries, with documented accuracy failures

Has deployed AI/ML triage systems for immigration application processing since 2013; uses advanced analytics and the Chinook tool to sort applications by risk tier; officers are not told how the triage works

The Office of the Privacy Commissioner has jurisdiction over IRCC's collection and use of personal information in AI systems

Responses

Treasury Board of Canada Secretariat

Issued the Directive on Automated Decision-Making, establishing algorithmic impact assessment requirements for federal institutions

Immigration, Refugees and Citizenship Canada

Published the Artificial Intelligence Strategy, describing AI use in immigration processing and committing to responsible AI principles

Related cards

Taxonomy

Domain
Public services · Immigration · Social services
Harm type
Discrimination and rights · Operational failure · Privacy and data
AI involvement
Deployment failure · Oversight failure · Monitoring gap · Training data
Lifecycle phase
Training · Deployment · Monitoring · Procurement

Sources

  1. Directive on Automated Decision-Making Official — Treasury Board of Canada Secretariat (April 1, 2023)
  2. Report 3 — Processing of Benefit and Credit Applications — Canada Revenue Agency Regulatory — Office of the Auditor General of Canada (March 19, 2024)
  3. Automated Decision-Making in the Canadian Federal Government Academic — Citizen Lab (University of Toronto) (October 1, 2022)
  4. Artificial intelligence and Canada's immigration system Academic — International Bar Association (January 1, 2024)
  5. IRCC Lifts the Lid (a Bit) on their AI-based TRV Triaging Process Media — Heron Law Offices (June 1, 2024)
  6. Artificial Intelligence Strategy - IRCC Official — Immigration, Refugees and Citizenship Canada (January 1, 2024)
  7. Use of AI in Canadian Immigration Media — Green and Spiegel LLP (May 27, 2025)
  8. IRCC AI in Canadian Immigration: Efficiency, Privacy, and Bias Media — Chaudhary Law Office (June 1, 2024)

Change history

Version  Date  Modification
v1  March 8, 2026  Initial publication
v2  March 9, 2026  Absorbed ircc-immigration-ai-triage-bias hazard — added IRCC entity, Chinook/ADA details, immigration-specific sources (IBA, Heron Law, IRCC AI Strategy), bias evidence, affected populations, June 2026 DADM deadline