IRCC Machine-Learning Triage Sorts Millions of Visa Applications Using Models Trained on Historical Decisions
Since 2018, IRCC has used IBM SPSS Modeler to sort visa applications into three processing tiers based on patterns in historical decisions. Tier assignment substantially affects outcomes — Tier 1 gets near-automatic approval while Tier 2/3 face much higher refusal rates. The system operated exclusively on China and India applications for nearly four years. Over 7 million applications have been assessed. Applicants are not told their tier.
Since April 2018, Immigration, Refugees and Citizenship Canada (IRCC) has used a machine-learning system to triage Temporary Resident Visa (TRV) applications. The system uses IBM SPSS Modeler to generate predictive decision-tree rules from historical immigration decision data, sorting applications into three tiers that determine their processing pathway and materially influence outcomes.
The system has two layers. Layer 1 ("Officer Rules") consists of manually created triage rules developed by IRCC's Beijing visa office using statistical information and historical data. Layer 2 ("Model Rules") is generated by IBM SPSS Modeler, which tests millions of applicant characteristic combinations against historical approval/refusal outcomes to find reliable correlations, then formulates them as decision-tree rules with confidence thresholds.
Applications are sorted into three tiers. Tier 1 applications are classified as "routine" and receive automated eligibility approval with no human review of the eligibility determination — officers only check admissibility (security and criminality). Tier 2 and Tier 3 applications are sent to officers for full review, with Tier 3 carrying the highest refusal rates. The tier designation substantially affects outcomes: Tier 1 applications have near-100% approval rates, while Tier 2 approval rates drop to 63% for online India applications and 37% for India VAC applications. Will Tao, an immigration lawyer who has obtained internal IRCC documents through access-to-information requests, has noted that "Tier 1 Applications are decided with no human in the loop but the computer system will approve them". IRCC maintains that officers always make the final decision and that the system "never refuses or recommends refusing applications." Tao and other immigration lawyers argue the tier assignment effectively predetermines outcomes even if officers nominally decide.
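IRCC has not published its rule sets, so the two-layer design described above can only be illustrated schematically. The sketch below shows the general shape of such a pipeline: hand-authored officer rules checked first, then model-derived decision-tree rules gated by a confidence threshold. Every rule, field name, and threshold here is invented for illustration and is not IRCC's actual criteria.

```python
# Hypothetical sketch of a two-layer triage pipeline of the kind described
# above. Layer 1 applies manually written "Officer Rules"; Layer 2 applies
# model-derived rules with confidence thresholds. All fields, rules, and
# numbers below are invented for illustration only.
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Application:
    # Illustrative fields only; not IRCC's actual data model.
    purpose: str
    prior_travel: bool
    employer_verified: bool

# Layer 1: each officer rule maps an application to a tier, or None if it
# does not fire.
OfficerRule = Callable[[Application], Optional[int]]

def rule_unverified_employer(app: Application) -> Optional[int]:
    return 3 if not app.employer_verified else None

OFFICER_RULES: list[OfficerRule] = [rule_unverified_employer]

# Layer 2: rules exported from a trained decision tree. Each leaf carries a
# predicted tier and the model's confidence; a rule only fires if its
# confidence clears the threshold.
@dataclass
class ModelRule:
    condition: Callable[[Application], bool]
    tier: int
    confidence: float

MODEL_RULES = [
    ModelRule(lambda a: a.prior_travel and a.purpose == "tourism",
              tier=1, confidence=0.97),
    ModelRule(lambda a: not a.prior_travel, tier=2, confidence=0.81),
]
CONFIDENCE_THRESHOLD = 0.90

def triage(app: Application, default_tier: int = 2) -> int:
    """Return a processing tier: officer rules first, then model rules."""
    for rule in OFFICER_RULES:
        tier = rule(app)
        if tier is not None:
            return tier
    for mr in MODEL_RULES:
        if mr.confidence >= CONFIDENCE_THRESHOLD and mr.condition(app):
            return mr.tier
    return default_tier  # no confident rule fired: route to full review
```

Under these invented rules, `triage(Application("tourism", True, True))` returns tier 1, while an unverified employer forces tier 3 regardless of the model. The sketch also shows why the design draws scrutiny: a tier 1 result leads straight to automated eligibility approval, so the rule set, not an officer, effectively decides those cases.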
From April 2018 to January 2022, the system operated exclusively on applications from China and India. This nearly four-year period of nationality-specific ML triage has been identified by researchers and immigration lawyers as the primary discrimination concern. Applicants from these two countries were processed by a machine-learning system trained on historical decisions from those same countries, while applicants from other countries were not subject to algorithmic triage. The model was trained on past officer decisions that may have reflected conscious or unconscious biases. Will Tao's research, based on documents obtained through access-to-information requests, found that historical training guides in Chinese visa offices "assigned character traits and misrepresentation risks based on province of origin." In January 2022, the system was expanded to all overseas TRV applications, and subsequently to Visitor Records and Family Class Spousal applications. IRCC reports that the Advanced Analytics Solutions Centre has assessed more than 7 million applications.
The system was assessed at Level 2 (Moderate) under the Treasury Board's Directive on Automated Decision-Making. Multiple observers have questioned whether this assessment understates the system's impact given its scale and consequences. IRCC published its Algorithmic Impact Assessment on the Open Government Portal in January 2022. A peer review by the National Research Council was conducted in 2018 but was not published until Will Tao obtained it through an ATIP request and published it himself. Section 6.3.5 of the Directive requires that peer reviews be published before a system enters production; compliance with this requirement has been incomplete.
Applicants are not told which tier they are assigned to. The tier designation is not recorded in GCMS (Global Case Management System) notes. Officers downstream of the triage are reportedly not informed of the rules governing the system. This opacity makes it practically difficult for applicants to challenge a tier assignment they cannot see — though judicial review of the final decision remains available — and officers may not understand what pre-processing shaped the file they are reviewing.
The Canadian Immigration Lawyers Association stated in August 2025 that "the introduction of automated and analytic tools...is directly linked to increase in decisions that are neither meaningful nor well-reasoned." Immigration lawyers have documented patterns of generic refusals, missing document citations for documents that were submitted, and processing timestamps suggesting decisions made in minutes. The AI Monitor for Immigration in Canada and Internationally (AIMICI), founded in October 2025 by Will Tao and three co-founders, was created specifically to investigate and monitor these concerns.
No Federal Court decision has directly addressed the Advanced Analytics triage system. Most litigation has focused on Chinook, a separate data-display tool. In Luk v. Canada (2024 FC 623), the Court held that "the use of algorithms or artificial intelligence to process applications is not in and of itself a breach of procedural fairness." However, in Mehrara v. Canada (2024 FC 1554), Justice Battista noted this "may not be the case in other judicial reviews of applications processed using processing technology, particularly in applications where risk indicators are present" — the first judicial signal that the triage system's impact on high-risk-flagged applications may warrant closer scrutiny.
IRCC describes the system as a triage tool that does not make final decisions — officers retain discretion at every stage, and no application is automatically refused based on tier assignment alone. The department states that the system was designed to improve processing efficiency and reduce wait times. The expansion from two nationalities to global coverage in 2022 addressed the most prominent equity concern about nationality-specific application. The system has been assessed under the federal Directive on Automated Decision-Making, though critics argue the Moderate (Level 2) classification underestimates the system's impact.
Harms
IRCC's ML triage system, trained on historical immigration decisions, sorts applications into risk tiers that materially influence outcomes. Tier assignments are invisible to applicants and officers, with applications flagged as high-risk receiving enhanced scrutiny and dramatically lower approval rates.
The system reproduces nationality-based and demographic biases embedded in historical decisions. Applicants cannot challenge or even know their tier assignment, creating a structural accountability gap in one of Canada's largest algorithmic decision systems.
Evidence
13 reports
- IRCC official documentation of advanced analytics for TRV processing; describes the system's design and stated purpose
- Citizen Lab/IHRP report: human rights analysis of automated decision-making in Canadian immigration; documents transparency gaps and rights implications
- Academic working paper: analysis of machine-learning triage in Canada's TRV system; documents bias risks and procedural fairness concerns
- Published algorithmic impact assessment for IRCC's triage tool; the government's own risk assessment of the system
- Detailed analysis of IRCC's officer and model rules; documents how Layer 1 and Layer 2 triage interact
- IRCC's stated position: all final decisions to refuse an application are made by an officer; none of IRCC's automated systems can refuse an application or recommend a refusal
- Parliamentary committee report on technology and automation in the immigration system; documents political oversight of IRCC's AI use
- Luk v. Canada holding: use of algorithms or AI to process applications is not in itself a breach of procedural fairness
- Analysis of missing peer reviews in IRCC's published algorithmic impact assessment; documents governance gaps in the assessment process
- Mehrara v. Canada: Justice Battista noted this may not be the case for applications where risk indicators are present
- CBC reporting: immigration lawyers concerned that IRCC's processing technology biases against certain nationalities; practitioner perspective on disparate impact
- Canadian Lawyer Magazine: lack of clarity on how immigration officials use automated tools; documents transparency concerns
- IRCC's published AI strategy; documents the department's plans for expanded algorithmic decision-making
Record details
Responses & Outcomes
- Conducted peer review of the Advanced Analytics triage system. Outcome: review completed but not publicly published by IRCC until obtained through ATIP by Will Tao.
- Directive on Automated Decision-Making came into effect, establishing AIA requirements and impact levels for federal automated systems. Outcome: system assessed at Level 2 (Moderate); compliance with peer review publication requirements has been incomplete.
- Published Algorithmic Impact Assessment on Open Government Portal. Outcome: AIA available publicly; assessed at Level 2 (Moderate); questions raised about whether the impact level is understated.
- CIMM Report 12 recommended independent assessment and oversight of IRCC technology tools including AI expansion. Outcome: recommendations published; no independent audit has been conducted as of March 2026.
Policy Recommendations
- Conduct an independent bias audit of the Advanced Analytics triage system, testing for nationality, gender, age, regional, and socioeconomic disparities in tier assignment and downstream outcomes (Sources: CIMM Report 12; AIMICI; academic researchers)
- Record tier assignments in GCMS notes so that applicants and reviewing courts can assess whether algorithmic pre-processing influenced the outcome (Sources: Will Tao; immigration law practitioners)
- Notify applicants when ML-based triage has been used in the processing of their application, consistent with the Directive on Automated Decision-Making's notice requirements (Source: Treasury Board Directive on Automated Decision-Making)
Editorial Assessment
This is one of the largest deployments of machine learning in Canadian government decision-making, processing over 7 million applications. IRCC states that officers retain discretion at every stage and no application is automatically refused based on tier alone. However, tier assignment substantially influences processing pathways and outcomes: Tier 1 applications receive near-automatic approval while Tier 2/3 face higher refusal rates. The system operated exclusively on China and India applications for nearly four years before expanding globally. Tier assignments are not visible to applicants or recorded in case notes, limiting the possibility of external review. Immigration lawyers and civil society organizations have documented concerns about increasingly generic refusals linked to the automation pipeline.
Entities Involved
AI Systems Involved
IBM SPSS Modeler-based ML system that generates predictive decision-tree rules from historical immigration decisions, sorting visa applications into three processing tiers with substantially different approval rates
Related Records
- AI Governance Gap in Canada
- AI in Canadian Government Automated Decision-Making
- CBSA Machine Learning System Scores All Border Entrants with No Independent Audit
- AI Systems as Attack Surfaces
Taxonomy
Changelog
| Version | Date | Change |
|---|---|---|
| v1 | Mar 10, 2026 | Record created from public sources including IRCC official disclosures, Open Government Portal AIA, academic research, parliamentary testimony, and immigration law practitioner analysis. Agent-draft — requires editorial review before publication. |