IRCC's machine-learning triage classifies millions of visa applications using models trained on historical decisions
Since 2018, IRCC has used IBM SPSS Modeler to sort visa applications into three tiers based on patterns in historical decisions. Tier assignment substantially affects outcomes — Tier 1 receives near-automatic approval while Tiers 2/3 face much higher refusal rates. For nearly four years the system operated exclusively on applications from China and India. More than 7 million applications have been assessed.
Since April 2018, Immigration, Refugees and Citizenship Canada (IRCC) has used a machine-learning system to triage Temporary Resident Visa (TRV) applications. The system uses IBM SPSS Modeler to generate predictive decision-tree rules from historical immigration decision data, sorting applications into three tiers that determine their processing pathway and materially influence outcomes.
The system has two layers. Layer 1 ("Officer Rules") consists of manually created triage rules developed by IRCC's Beijing visa office using statistical information and historical data. Layer 2 ("Model Rules") is generated by IBM SPSS Modeler, which tests millions of applicant characteristic combinations against historical approval/refusal outcomes to find reliable correlations, then formulates them as decision-tree rules with confidence thresholds.
Applications are sorted into three tiers. Tier 1 applications are classified as "routine" and receive automated eligibility approval with no human review of the eligibility determination — officers only check admissibility (security and criminality). Tier 2 and Tier 3 applications are sent to officers for full review, with Tier 3 carrying the highest refusal rates. The tier designation substantially affects outcomes: Tier 1 applications have near-100% approval rates, while Tier 2 approval rates drop to 63% for online India applications and 37% for India VAC applications. Will Tao, an immigration lawyer who has obtained internal IRCC documents through access-to-information requests, has noted that "Tier 1 Applications are decided with no human in the loop but the computer system will approve them." IRCC maintains that officers always make the final decision and that the system "never refuses or recommends refusing applications." Tao and other immigration lawyers argue the tier assignment effectively predetermines outcomes even if officers nominally decide.
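As a rough illustration only, the two-layer triage described above can be sketched as a rule pipeline: manually authored Officer Rules run first, then machine-generated Model Rules with confidence thresholds. Every field name, rule condition, and threshold below is invented for illustration; IRCC's actual rules are not public, and this sketch does not reproduce them.

```python
# Hypothetical sketch of a two-layer, three-tier triage pipeline.
# All fields, conditions, and confidence thresholds are invented examples.
from dataclasses import dataclass

@dataclass
class Application:
    country: str
    prior_refusal: bool
    travel_history: int  # number of prior trips (hypothetical feature)

# Layer 1 ("Officer Rules"): manually created conditions that flag a file.
def officer_rules(app: Application) -> bool:
    return app.prior_refusal  # example condition only

# Layer 2 ("Model Rules"): decision-tree-style rules derived from historical
# outcomes, each carrying a confidence score.
MODEL_RULES = [
    # (predicate, predicted_tier, confidence) — invented examples
    (lambda a: a.travel_history >= 3, 1, 0.97),
    (lambda a: a.travel_history == 0, 3, 0.85),
]

def triage(app: Application) -> int:
    """Return a tier: 1 = routine (automated eligibility), 2/3 = officer review."""
    if officer_rules(app):  # Layer 1 flag: route to full review
        return 3
    for predicate, tier, confidence in MODEL_RULES:
        if predicate(app) and confidence >= 0.9:  # apply high-confidence rules only
            return tier
    return 2  # default: human review when no confident rule fires

print(triage(Application("X", prior_refusal=False, travel_history=5)))  # → 1
```

In this sketch the Layer 1 flag always dominates, and a Model Rule whose confidence falls below the threshold is simply skipped, so the file falls through to human review — one plausible way a triage system might fail safe, not a claim about how IRCC's system actually resolves conflicts.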
From April 2018 to January 2022, the system operated exclusively on applications from China and India. This nearly four-year period of nationality-specific ML triage has been identified by researchers and immigration lawyers as the primary discrimination concern. Applicants from these two countries were processed by a machine-learning system trained on historical decisions from those same countries, while applicants from other countries were not subject to algorithmic triage. The model was trained on past officer decisions that may have reflected conscious or unconscious biases. Will Tao's research, based on documents obtained through access-to-information requests, found that historical training guides in Chinese visa offices "assigned character traits and misrepresentation risks based on province of origin." In January 2022, the system was expanded to all overseas TRV applications, and subsequently to Visitor Records and Family Class Spousal applications. IRCC reports that the Advanced Analytics Solutions Centre has assessed more than 7 million applications.
The system was assessed at Level 2 (Moderate) under the Treasury Board's Directive on Automated Decision-Making. Multiple observers have questioned whether this assessment understates the system's impact given its scale and consequences. IRCC published its Algorithmic Impact Assessment on the Open Government Portal in January 2022. A peer review by the National Research Council was conducted in 2018 but was not published until Will Tao obtained it through an ATIP request and published it himself. The Directive's Section 6.3.5 requires that peer reviews be published before a system enters production; compliance with this requirement has been incomplete.
Applicants are not told which tier they are assigned to. The tier designation is not recorded in GCMS (Global Case Management System) notes. Officers downstream of the triage are reportedly not informed of the rules governing the system. This opacity makes it practically difficult for applicants to challenge a tier assignment they cannot see — though judicial review of the final decision remains available — and officers may not understand what pre-processing shaped the file they are reviewing.
The Canadian Immigration Lawyers Association stated in August 2025 that "the introduction of automated and analytic tools...is directly linked to increase in decisions that are neither meaningful nor well-reasoned." Immigration lawyers have documented patterns of generic refusals, missing document citations for documents that were submitted, and processing timestamps suggesting decisions made in minutes. The AI Monitor for Immigration in Canada and Internationally (AIMICI), founded in October 2025 by Will Tao and three co-founders, was created specifically to investigate and monitor these concerns.
No Federal Court decision has directly addressed the Advanced Analytics triage system. Most litigation has focused on Chinook, a separate data-display tool. In Luk v. Canada (2024 FC 623), the Court held that "the use of algorithms or artificial intelligence to process applications is not in and of itself a breach of procedural fairness." However, in Mehrara v. Canada (2024 FC 1554), Justice Battista noted this "may not be the case in other judicial reviews of applications processed using processing technology, particularly in applications where risk indicators are present" — the first judicial signal that the triage system's impact on high-risk-flagged applications may warrant closer scrutiny.
IRCC describes the system as a triage tool that does not make final decisions — officers retain discretion at every stage, and no application is automatically refused based on tier assignment alone. The department states that the system was designed to improve processing efficiency and reduce wait times. The expansion from two nationalities to global coverage in 2022 addressed the most prominent equity concern about nationality-specific application. The system has been assessed under the federal Directive on Automated Decision-Making, though critics argue the Moderate (Level 2) classification underestimates the system's impact.
Harms
IRCC's machine-learning triage system, trained on historical immigration decisions, sorts applications into risk tiers that materially influence outcomes. Tier assignments are invisible to applicants and officers, with applications flagged as high risk receiving heightened scrutiny and substantially lower approval rates.
The system reproduces the nationality- and demographic-based biases embedded in historical decisions. Applicants cannot challenge, or even learn, their tier assignment, creating a structural accountability gap in one of Canada's largest algorithmic decision-making systems.
Evidence
13 reports
- Advanced Analytics for Processing Temporary Resident Visa Applications (primary source): IRCC official documentation of advanced analytics for TRV processing; describes the system's design and stated purpose
- Citizen Lab/IHRP report: human rights analysis of automated decision-making in Canadian immigration; documents transparency gaps and rights implications
- Academic working paper: analysis of machine-learning triage in Canada's TRV system; documents bias risks and procedural fairness concerns
- Published algorithmic impact assessment for IRCC's triage tool; the government's own risk assessment of the system
- Detailed analysis of IRCC's officer and model rules; documents how Layer 1 and Layer 2 triage interact
- All final decisions to refuse an application are made by an officer; none of IRCC's automated systems can refuse an application or recommend a refusal
- Parliamentary committee report on technology and automation in the immigration system; documents political oversight of IRCC's AI use
- Use of algorithms or AI to process applications is not in itself a breach of procedural fairness
- Analysis of missing peer reviews in IRCC's published algorithmic impact assessment; documents governance gaps in the assessment process
- Justice Battista noted this may not be the case for applications where risk indicators are present
- CBC reporting: immigration lawyers concerned IRCC's processing technology biases against certain nationalities; practitioner perspective on disparate impact
- Canadian Lawyer Magazine: lack of clarity on how immigration officials use automated tools; documents transparency concerns
- IRCC's published AI strategy; documents the department's plans for expanded algorithmic decision-making
Record details
Responses and outcomes
Conducted peer review of the Advanced Analytics triage system
Review completed but not published by IRCC; it became public only after Will Tao obtained it through an ATIP request
Directive on Automated Decision-Making came into effect, establishing AIA requirements and impact levels for federal automated systems
System assessed at Level 2 (Moderate); compliance with peer review publication requirements has been incomplete
Published Algorithmic Impact Assessment on Open Government Portal
AIA available publicly; assessed at Level 2 (Moderate); questions raised about whether impact level is understated
CIMM Report 12 recommended independent assessment and oversight of IRCC technology tools including AI expansion
Recommendations published; no independent audit has been conducted as of March 2026
Policy recommendations
- Conduct an independent bias audit of the Advanced Analytics triage system, testing for nationality, gender, age, regional, and socioeconomic disparities in tier assignment and downstream outcomes (CIMM Report 12; AIMICI; academic researchers)
- Record tier classifications in GCMS notes so that applicants and courts can assess whether algorithmic pre-processing influenced the outcome (Will Tao; immigration law practitioners)
- Notify applicants when machine-learning triage was used in processing their application, consistent with the notification requirements of the Directive on Automated Decision-Making (Treasury Board Directive on Automated Decision-Making)
Editorial assessment
This is one of the largest deployments of machine learning in Canadian government decision-making. The system processes millions of applications that determine whether people can enter Canada. The nearly four-year period of nationality-specific triage (China and India only) raises serious questions about algorithmic discrimination in a federal system. The opacity of the tier classifications — invisible to applicants and officers alike — removes any possibility of meaningful challenge.
Entities involved
AI systems involved
IBM SPSS Modeler-based system that generates predictive decision-tree rules from historical immigration decisions, sorting visa applications into three processing tiers with substantially different approval rates
Related records
- AI Governance Gap in Canada
- AI in Canadian Government Automated Decision-Making
- CBSA Machine Learning System Scores All Border Entrants with No Independent Audit
- AI Systems as Attack Surfaces
Taxonomy
Change history
| Version | Date | Change |
|---|---|---|
| v1 | March 10, 2026 | Record created from public sources including IRCC official disclosures, the Open Government Portal AIA, academic research, parliamentary testimony, and immigration law practitioner analysis. Agent draft; requires editorial review before publication. |