Pilot phase: CAIM is under construction. Records are provisional, based on public sources, and have not yet been peer-reviewed. Feedback welcome.
Active · Significant · Confidence: high

Since 2018, IRCC has used IBM SPSS Modeler to sort visa applications into three processing tiers based on patterns in historical decisions. Tier assignment substantially affects outcomes — Tier 1 gets near-automatic approval while Tier 2/3 face much higher refusal rates. The system operated exclusively on China and India applications for nearly four years. Over 7 million applications have been assessed. Applicants are not told their tier.

Identified: April 1, 2018 · Last assessed: March 10, 2026

Since April 2018, Immigration, Refugees and Citizenship Canada (IRCC) has used a machine-learning system to triage Temporary Resident Visa (TRV) applications. The system uses IBM SPSS Modeler to generate predictive decision-tree rules from historical immigration decision data, sorting applications into three tiers that determine their processing pathway and materially influence outcomes.

The system has two layers. Layer 1 ("Officer Rules") consists of manually created triage rules developed by IRCC's Beijing visa office using statistical information and historical data. Layer 2 ("Model Rules") is generated by IBM SPSS Modeler, which tests millions of applicant characteristic combinations against historical approval/refusal outcomes to find reliable correlations, then formulates them as decision-tree rules with confidence thresholds.
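The two-layer triage described above can be sketched in minimal Python. This is purely illustrative: IRCC's actual rule conditions, confidence thresholds, tier defaults, and rule format are not public, so the `Rule` structure, the field names, and all values below are invented assumptions showing only the general shape of confidence-thresholded, two-layer decision-tree triage.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    """A triage rule: a predicate over applicant data plus an output tier."""
    condition: Callable[[dict], bool]
    tier: int
    confidence: float = 1.0  # historical correlation strength (model rules only)

def triage(application: dict, officer_rules: list[Rule],
           model_rules: list[Rule], confidence_threshold: float = 0.95) -> int:
    """Assign a processing tier (1, 2, or 3) from two rule layers.

    Layer 1 (manually authored officer rules) is checked first; Layer 2
    (ML-derived model rules) applies only if no officer rule fires, and a
    model rule fires only if its confidence meets the threshold. The default
    tier and rule precedence here are assumptions, not documented behaviour.
    """
    for rule in officer_rules:        # Layer 1: manually created triage rules
        if rule.condition(application):
            return rule.tier
    for rule in model_rules:          # Layer 2: rules mined from past decisions
        if rule.confidence >= confidence_threshold and rule.condition(application):
            return rule.tier
    return 2                          # assumed fallback: full officer review
```

Under this sketch, an application matching a high-confidence model rule would bypass officer review of eligibility (Tier 1), while any officer-rule match takes precedence over the model layer.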

Applications are sorted into three tiers. Tier 1 applications are classified as "routine" and receive automated eligibility approval with no human review of the eligibility determination; officers check only admissibility (security and criminality). Tier 2 and Tier 3 applications are sent to officers for full review, with Tier 3 carrying the highest refusal rates. The tier designation substantially affects outcomes: Tier 1 applications have near-100% approval rates, while Tier 2 approval rates drop to 63% for online India applications and 37% for India VAC applications. Will Tao, an immigration lawyer who has obtained internal IRCC documents through access-to-information requests, has noted that "Tier 1 Applications are decided with no human in the loop but the computer system will approve them". IRCC maintains that officers always make the final decision and that the system "never refuses or recommends refusing applications." Tao and other immigration lawyers argue that tier assignment effectively predetermines outcomes even if officers nominally decide.

From April 2018 to January 2022, the system operated exclusively on applications from China and India. This nearly four-year period of nationality-specific ML triage has been identified by researchers and immigration lawyers as the primary discrimination concern. Applicants from these two countries were processed by a machine-learning system trained on historical decisions from those same countries, while applicants from other countries were not subject to algorithmic triage. The model was trained on past officer decisions that may have reflected conscious or unconscious biases. Will Tao's research, based on documents obtained through access-to-information requests, found that historical training guides in Chinese visa offices "assigned character traits and misrepresentation risks based on province of origin." In January 2022, the system was expanded to all overseas TRV applications, and subsequently to Visitor Records and Family Class Spousal applications. IRCC reports that the Advanced Analytics Solutions Centre has assessed more than 7 million applications.

The system was assessed at Level 2 (Moderate) under the Treasury Board's Directive on Automated Decision-Making. Multiple observers have questioned whether this assessment understates the system's impact given its scale and consequences. IRCC published its Algorithmic Impact Assessment on the Open Government Portal in January 2022. A peer review by the National Research Council was conducted in 2018 but was not published until Will Tao obtained it through an ATIP request and published it himself. Section 6.3.5 of the Directive requires that peer reviews be published before a system enters production; compliance with this requirement has been incomplete.

Applicants are not told which tier they are assigned to. The tier designation is not recorded in GCMS (Global Case Management System) notes. Officers downstream of the triage are reportedly not informed of the rules governing the system. This opacity makes it practically difficult for applicants to challenge a tier assignment they cannot see — though judicial review of the final decision remains available — and officers may not understand what pre-processing shaped the file they are reviewing.

The Canadian Immigration Lawyers Association stated in August 2025 that "the introduction of automated and analytic tools...is directly linked to increase in decisions that are neither meaningful nor well-reasoned." Immigration lawyers have documented patterns of generic refusals, missing document citations for documents that were submitted, and processing timestamps suggesting decisions made in minutes. The AI Monitor for Immigration in Canada and Internationally (AIMICI), founded in October 2025 by Will Tao and three co-founders, was created specifically to investigate and monitor these concerns.

No Federal Court decision has directly addressed the Advanced Analytics triage system. Most litigation has focused on Chinook, a separate data-display tool. In Luk v. Canada (2024 FC 623), the Court held that "the use of algorithms or artificial intelligence to process applications is not in and of itself a breach of procedural fairness." However, in Mehrara v. Canada (2024 FC 1554), Justice Battista noted this "may not be the case in other judicial reviews of applications processed using processing technology, particularly in applications where risk indicators are present" — the first judicial signal that the triage system's impact on high-risk-flagged applications may warrant closer scrutiny.

IRCC describes the system as a triage tool that does not make final decisions — officers retain discretion at every stage, and no application is automatically refused based on tier assignment alone. The department states that the system was designed to improve processing efficiency and reduce wait times. The expansion from two nationalities to global coverage in 2022 addressed the most prominent equity concern about nationality-specific application. The system has been assessed under the federal Directive on Automated Decision-Making, though critics argue the Moderate (Level 2) classification underestimates the system's impact.

Harms

IRCC's ML triage system, trained on historical immigration decisions, sorts applications into risk tiers that materially influence outcomes. Tier assignments are invisible to applicants and officers, with applications flagged as high-risk receiving enhanced scrutiny and dramatically lower approval rates.

Discrimination & Rights · Autonomy Undermined · Significant · Population

The system reproduces nationality-based and demographic biases embedded in historical decisions. Applicants cannot challenge or even know their tier assignment, creating a structural accountability gap in one of Canada's largest algorithmic decision systems.

Discrimination & Rights · Significant · Population

Evidence

13 reports

  1. Official — Immigration, Refugees and Citizenship Canada (May 12, 2022)

    IRCC official documentation of advanced analytics for TRV processing; describes the system's design and stated purpose

  2. Academic — University of Toronto International Human Rights Program + Citizen Lab (Sep 1, 2018)

    Citizen Lab/IHRP report: human rights analysis of automated decision-making in Canadian immigration; documents transparency gaps and rights implications

  3. Academic — Toronto Metropolitan University — TMCIS Working Paper (Jan 1, 2021)

    Academic working paper: analysis of machine-learning triage in Canada's TRV system; documents bias risks and procedural fairness concerns

  4. Official — Government of Canada — Open Government Portal (Jan 21, 2022)

    Published algorithmic impact assessment for IRCC's triage tool; government's own risk assessment of the system

  5. Media — Vancouver Immigration Blog (Will Tao) (Mar 29, 2022)

    Detailed analysis of IRCC's officer and model rules; documents how Layer 1 and Layer 2 triage interact

  6. Official — Immigration, Refugees and Citizenship Canada (Nov 29, 2022)

    All final decisions to refuse an application are made by an officer; none of IRCC's automated systems can refuse an application or recommend a refusal

  7. Official — House of Commons Standing Committee on Citizenship and Immigration (Jun 1, 2023)

    Parliamentary committee report on technology and automation in the immigration system; documents political oversight of IRCC's AI use

  8. Official — Federal Court of Canada (Apr 22, 2024)

    Use of algorithms or AI to process applications is not in itself a breach of procedural fairness

  9. Media — Vancouver Immigration Blog (Will Tao) (May 1, 2024)

    Analysis of missing peer reviews in IRCC's published algorithmic impact assessment; documents governance gaps in the assessment process

  10. Other — Canadian Immigration Lawyers Association (Oct 1, 2024)

    Justice Battista noted this may not be the case for applications where risk indicators are present

  11. Media — CBC News (Aug 15, 2025)

    CBC reporting: immigration lawyers concerned IRCC's processing technology biases against certain nationalities; practitioner perspective on disparate impact

  12. Media — Canadian Lawyer Magazine (Oct 1, 2025)

    Canadian Lawyer Magazine: lack of clarity on how immigration officials use automated tools; documents transparency concerns

  13. Official — Immigration, Refugees and Citizenship Canada (Mar 4, 2026)

    IRCC's published AI strategy; documents the department's plans for expanded algorithmic decision-making

Record details

Responses & Outcomes

National Research Council Canada · institutional action · Completed · Unknown

Conducted peer review of the Advanced Analytics triage system

Review completed but not publicly published by IRCC until obtained through ATIP by Will Tao

Treasury Board of Canada Secretariat · guidance · Completed · Unknown

Directive on Automated Decision-Making came into effect, establishing AIA requirements and impact levels for federal automated systems

System assessed at Level 2 (Moderate); compliance with peer review publication requirements has been incomplete

Immigration, Refugees and Citizenship Canada · institutional action · Completed · Unknown

Published Algorithmic Impact Assessment on Open Government Portal

AIA available publicly; assessed at Level 2 (Moderate); questions raised about whether impact level is understated

House of Commons Standing Committee on Citizenship and Immigration · institutional action · Completed · Unknown

CIMM Report 12 recommended independent assessment and oversight of IRCC technology tools including AI expansion

Recommendations published; no independent audit has been conducted as of March 2026

Policy Recommendations · assessed

Conduct an independent bias audit of the Advanced Analytics triage system, testing for nationality, gender, age, regional, and socioeconomic disparities in tier assignment and downstream outcomes

CIMM Report 12; AIMICI; academic researchers

Record tier assignments in GCMS notes so that applicants and reviewing courts can assess whether algorithmic pre-processing influenced the outcome

Will Tao; immigration law practitioners

Notify applicants when ML-based triage has been used in the processing of their application, consistent with the Directive on Automated Decision-Making's notice requirements

Treasury Board Directive on Automated Decision-Making

Editorial Assessment · assessed

This is one of the largest deployments of machine learning in Canadian government decision-making, processing over 7 million applications. IRCC states that officers retain discretion at every stage and no application is automatically refused based on tier alone. However, tier assignment substantially influences processing pathways and outcomes: Tier 1 applications receive near-automatic approval while Tier 2/3 face higher refusal rates. The system operated exclusively on China and India applications for nearly four years before expanding globally. Tier assignments are not visible to applicants or recorded in case notes, limiting the possibility of external review. Immigration lawyers and civil society organizations have documented concerns about increasingly generic refusals linked to the automation pipeline.

Entities Involved

AI Systems Involved

IRCC Advanced Analytics Triage System

IBM SPSS Modeler-based ML system that generates predictive decision-tree rules from historical immigration decisions, sorting visa applications into three processing tiers with substantially different approval rates

Related Records

Taxonomy · assessed

Domain
Public Services · Immigration
Harm type
Discrimination & Rights · Privacy & Data Exposure
AI pathway
Deployment Context · Oversight Absent · Monitoring Absent
Lifecycle phase
Deployment · Monitoring · Training

Changelog

Version · Date · Change
v1 · Mar 10, 2026 · Record created from public sources including IRCC official disclosures, Open Government Portal AIA, academic research, parliamentary testimony, and immigration law practitioner analysis. Agent-draft — requires editorial review before publication.
