Pilot phase: CAIM is under construction. Records are provisional, based on public sources, and have not yet been peer-reviewed. Feedback is welcome.
Active · Important · Confidence: medium

Canadian governments use AI in decisions about immigration, taxes, benefits, and child welfare, but the governance framework is applied inconsistently and does not cover provincial deployments.

Identified: January 1, 2018 · Last assessed: March 8, 2026

Canadian federal and provincial government agencies are deploying AI and algorithmic tools in decisions about immigration, taxes, benefits, child welfare, and law enforcement — with inconsistent governance, limited transparency, and inadequate recourse for affected individuals.

The federal Directive on Automated Decision-Making (DADM), issued by the Treasury Board in 2019, provides a governance framework: it requires algorithmic impact assessments for automated decisions that affect the rights or interests of Canadians, establishes transparency requirements, and mandates human review mechanisms. However, compliance has been inconsistent. A 2022 Citizen Lab study documented gaps in DADM implementation across federal institutions — departments deploying AI tools without completing impact assessments, or categorizing systems in ways that minimized governance requirements.

Immigration, Refugees and Citizenship Canada (IRCC) has used data analytics to triage immigration applications since 2013, beginning with temporary resident visa backlogs, with machine learning-based triage formally deployed from 2017-2018. By 2024, IRCC's advanced analytics tools — including the "Chinook" case processing system and the "Automated Decision Assistant" — were processing millions of applications annually, sorting them into risk tiers that determine processing speed and scrutiny level. IRCC states that AI does not refuse applications, but the risk tiers materially shape outcomes: low-risk applications may receive streamlined processing (in some categories, positive decisions may be generated without officer review), while high-risk files receive additional scrutiny. Officers are not told how the tiering system works. Applicants have no way to know whether AI triage affected their case. The core concern is training data: if historical decisions contain patterns of refusal correlated with nationality, gender, age, or marital status, the AI reproduces those patterns at scale. Immigration lawyers have reported anecdotal patterns suggesting gender-based bias, including cases where single women were refused with reasons noting they were "young, single, and mobile."

Provincial and municipal government deployments have no equivalent framework. Quebec's Direction de la protection de la jeunesse used a risk assessment tool (SSP) that contributed to a child's death — a provincial deployment with no algorithmic impact assessment requirement. The CRA deployed an $18 million AI chatbot that the Auditor General found answered only 2 of 6 test questions correctly, while processing 18 million taxpayer queries.

Government AI deployment continues to outpace governance capacity. AI now shapes decisions about who gets a visa, who gets benefits, and which children are flagged as at risk — areas where transparency, assessment, and recourse are established governance expectations.

Harms

IRCC's machine-learning triage system has processed more than 7 million visa applications since 2018, sorting applicants into risk tiers that materially influence processing outcomes. Tier assignments are invisible to applicants and officers, and recourse against algorithmic determinations is limited.

Discrimination and rights · Compromised autonomy · Important · Population

Citizen Lab documented gaps in DADM implementation across federal institutions: departments deploying AI tools without completing impact assessments, or categorizing systems in ways that minimized governance requirements.

Compromised autonomy · Moderate · Sector

L'outil d'évaluation des risques en protection de la jeunesse du DPJ au Québec a contribué au décès d'un enfant, illustrant les conséquences du déploiement d'outils algorithmiques dans des contextes de sécurité des personnes sans surveillance adéquate.

Safety incident · Critical · Individual

Evidence

8 reports

  1. Official — Treasury Board of Canada Secretariat (April 1, 2023)

    Federal governance framework for automated decision-making; existing systems must comply by June 2026

  2. Academic — International Bar Association (January 1, 2024)

    Comprehensive analysis of IRCC AI use, bias risks, anecdotal reports of gender-based refusal patterns

  3. Official — Immigration, Refugees and Citizenship Canada (January 1, 2024)

    IRCC's official description of AI use in immigration processing

  4. Regulatory — Office of the Auditor General of Canada (March 19, 2024)

    CRA chatbot accuracy failures documented by Auditor General

  5. Media — Heron Law Offices (June 1, 2024)

    Details of IRCC's AI triage system, officer opacity, and positive decisions without officer review

  6. Academic — Citizen Lab (University of Toronto) (October 1, 2022)

    Documentation of DADM compliance gaps and AI deployment patterns across federal government

  7. Media — Chaudhary Law Office (June 1, 2024)

    Anecdotal reports of single women refused with 'young, single, and mobile' reasoning

  8. Media — Green and Spiegel LLP (May 27, 2025)

    Legal analysis of AI use in Canadian immigration processing; context on algorithmic decision-making in immigration

Record details

Responses and outcomes

Treasury Board of Canada Secretariat · legislation · Active

Issued the Directive on Automated Decision-Making, establishing algorithmic impact assessment requirements for federal institutions

Immigration, Refugees and Citizenship Canada · institutional action · Active

Published an Artificial Intelligence Strategy describing AI use in immigration processing and committing to responsible-AI principles

Policy recommendations · assessed

Consistent enforcement of the federal Directive on Automated Decision-Making

Citizen Lab, University of Toronto (October 1, 2022)

Provincial equivalents to the DADM for provincial and municipal AI deployments

Citizen Lab, University of Toronto (October 1, 2022)

Mandatory algorithmic impact assessment before deploying AI in consequential government decisions

Treasury Board of Canada Secretariat (April 1, 2023)

Independent bias audit of IRCC's AI triage systems for demographic bias, with results published

International Bar Association (January 1, 2024)

Require IRCC to disclose to applicants when AI triage was used and provide meaningful explanation of risk tier assigned

International Bar Association (January 1, 2024)

Require IRCC to demonstrate that AI triage systems do not reproduce historical patterns of discriminatory refusal

International Bar Association (January 1, 2024)

Editorial assessment · assessed

Canadian government agencies are deploying AI in decisions about immigration, taxes, benefits, and child welfare, but the governance framework (the Directive) is applied inconsistently, applies only to federal institutions, and does not cover provincial or municipal deployments. IRCC's AI triage system affects millions of applications annually, filed largely by people who lack Canadian legal status with which to challenge the process.

Entities involved

AI systems involved

CRA AI Chatbot

CRA's AI chatbot (Charlie) processed 18 million taxpayer queries; the Auditor General found it answered only 2 of 6 test questions correctly

Related records

Taxonomy · assessed

Domain
Public services · Immigration · Social services
Harm type
Discrimination and rights · Service disruption · Privacy and data
AI contribution pathway
Deployment context · Absent oversight · Absent monitoring · Training data provenance
Lifecycle phase
Training · Deployment · Monitoring · Procurement

Change history

Version | Date | Change
v1 | March 8, 2026 | Initial publication
v2 | March 9, 2026 | Absorbed the ircc-immigration-ai-triage-bias hazard — added IRCC entity, Chinook/ADA details, immigration-specific sources (IBA, Heron Law, IRCC AI Strategy), bias evidence, affected populations, and the June 2026 DADM deadline
v2 | March 10, 2026 | Added cross-reference to hazard/ircc-algorithmic-visa-triage. Deduplicated inline IRCC narrative — full case detail is now in the dedicated IRCC hazard record. Added link to IRCC hazard.

Version 2