Pilot phase: CAIM is under construction. Entries are provisional, based on public sources, and have not yet been peer reviewed. Feedback welcome.
Active · Important · Confidence: high

Canadian employers are increasingly using AI for hiring. A University of Washington study found that AI screening systems favoured white-associated names 85% of the time and never favoured Black men's names. Ontario's Bill 149 (in force January 2026) is the first Canadian law requiring AI disclosure in job postings. The Ontario Human Rights Commission published the first human-rights-based AI impact assessment tool (November 2024).

Identified: January 1, 2024 · Last assessed: March 10, 2026

Canadian employers are increasingly using AI-powered tools for hiring and recruitment — automated resume screening, video interview analysis, candidate matching algorithms, and predictive workforce analytics — with limited transparency about how these systems evaluate candidates and growing evidence that they can produce discriminatory outcomes along protected grounds.

The adoption is substantial and accelerating. Statistics Canada reported that 12.2% of Canadian businesses used AI as of Q2 2025, more than double the rate from the previous year, with human resources and recruitment among the most common applications. LinkedIn's AI-powered hiring tools are used by thousands of Canadian employers. Major Canadian organizations use platforms like Workday, iCIMS, Greenhouse, and HireVue that incorporate AI for candidate screening and ranking. The Canadian government itself uses AI-assisted tools in some hiring processes.

The evidence of bias in AI hiring tools is well-documented internationally and directly relevant to Canadian deployments. Amazon's internal AI recruitment tool, developed to screen resumes, was found to systematically discriminate against women — penalizing resumes that included the word "women's" (as in "women's chess club captain") and downgrading graduates of all-women's colleges. Amazon abandoned the tool by early 2017 after failing to eliminate the bias. Reuters publicly reported the project in October 2018. The root cause — training on historical hiring data that reflected past discriminatory patterns — is present in virtually all AI hiring systems trained on employer data.

Video interview analysis tools raise particular concerns. HireVue and similar platforms assess candidates based on facial expressions, tone of voice, and word choice, generating scores that influence hiring decisions. Research has demonstrated that these tools can discriminate against candidates with disabilities (different facial expressions, speech patterns), candidates of different racial or ethnic backgrounds, and candidates whose first language is not English. HireVue discontinued its facial analysis feature in 2021 following criticism, but other companies continue to offer similar capabilities.

Canadian human rights law prohibits employment discrimination on grounds including race, national or ethnic origin, colour, religion, age, sex, sexual orientation, gender identity, marital status, family status, disability, and genetic characteristics. The Canadian Human Rights Act and provincial human rights legislation apply to hiring processes regardless of whether a human or an algorithm makes the decision. However, the mechanisms for detecting and proving algorithmic discrimination are underdeveloped. An applicant rejected by an AI screening tool typically receives no explanation and has no visibility into the criteria that were applied.

The Canadian Human Rights Commission has recognized the risk, stating that "algorithms that are trained on historical data can perpetuate and amplify existing patterns of discrimination." However, no specific enforcement action has been taken against discriminatory AI hiring practices in Canada.

The structural concern is that AI hiring tools create a high-throughput discrimination machine. When a biased algorithm screens thousands of applications, the number of people affected is far larger than traditional human bias — and the discrimination is invisible because it occurs inside a black box that neither the employer nor the applicant can inspect. Canadian employers may be unknowingly violating human rights law by deploying tools they cannot audit, evaluate, or explain.
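The bias audits referenced below (as required by NYC Local Law 144) rest on a simple, computable metric: compare each demographic group's selection rate against the most-favoured group's rate. A minimal sketch, using hypothetical screening data and the conventional "four-fifths rule" threshold for flagging potential adverse impact:

```python
# Hypothetical data: quantifying disparate impact in an AI screener's
# pass-through decisions, in the spirit of the impact-ratio metric used
# in NYC Local Law 144 bias audits.
from collections import Counter

# Each record: (demographic_category, advanced_by_screener)
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals = Counter(group for group, _ in decisions)
passes = Counter(group for group, advanced in decisions if advanced)

# Selection rate per group, then impact ratio vs. the highest-rate group.
rates = {g: passes[g] / totals[g] for g in totals}
best_rate = max(rates.values())
impact_ratios = {g: rates[g] / best_rate for g in rates}

for g, ratio in impact_ratios.items():
    # The "four-fifths rule" flags ratios below 0.8 as potential adverse impact.
    flag = "POTENTIAL ADVERSE IMPACT" if ratio < 0.8 else "ok"
    print(f"{g}: selection rate {rates[g]:.2f}, impact ratio {ratio:.2f} ({flag})")
```

The group names and decision data are illustrative only; a real audit would use actual applicant demographics and screener outputs, and the 0.8 threshold is a screening heuristic, not a legal test of discrimination.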

Harms

AI recruitment systems trained on historical data reproduce and amplify existing patterns of discrimination. Research documents that AI resume screeners disadvantage candidates with disabilities, non-Western names, and career gaps.

Discrimination and rights · Important · Population

12.2% of Canadian businesses used AI as of Q2 2025, with human resources among the most common applications. Candidates subject to AI screening typically have no visibility into the evaluation criteria and no meaningful recourse.

Discrimination and rights · Compromised autonomy · Important · Population

Evidence

10 reports

  1. Media — Reuters (October 10, 2018)

    Amazon's AI recruitment tool systematically discriminated against women; abandoned by early 2017, publicly reported October 2018

  2. Official — Canadian Human Rights Commission (January 1, 2020)

    CHRC recognized that AI algorithms trained on historical data can perpetuate and amplify discrimination

  3. Official — Ontario Legislative Assembly (March 21, 2024)

    Ontario Bill 149 requires AI disclosure in job postings, effective January 1, 2026 — first in Canada

  4. Academic — University of Washington (October 1, 2024)

    3M+ comparisons across LLMs: white-associated names favored 85% of the time; female names favored only 11%; Black male names never favored over white male names

  5. Official — Statistics Canada (November 1, 2024)

    12.2% of Canadian businesses used AI as of Q2 2025, more than double from previous year; HR/recruitment among top applications

  6. Official — Ontario Human Rights Commission / Law Commission of Ontario (November 1, 2024)

    First Canadian AI impact assessment tool grounded in human rights law (voluntary)

  7. Official — New York City Council (July 5, 2023)

    NYC requires annual bias audits of AI hiring tools and candidate notice

  8. Official — Government of Canada (January 1, 2024)

    Prohibits employment discrimination on protected grounds regardless of whether the decision is made by a human or an algorithm

  9. Official — Public Service Commission of Canada (January 1, 2024)

    PSC guidance on AI in federal hiring processes

  10. Media — Fisher Phillips (May 1, 2025)

    Mobley v. Workday preliminary collective action certification; AI vendor potentially liable as employer 'agent' for discrimination

Entry details

Policy recommendations (assessed)

Require bias auditing of AI tools used in hiring and recruitment decisions, modelled on NYC Local Law 144

NYC Council / EU AI Act

Mandate transparency notices to candidates when AI tools are used in hiring evaluation

NYC Local Law 144 / EU AI Act

CHRC to develop enforcement guidance for algorithmic discrimination in employment

Canadian Human Rights Commission

Editorial assessment (assessed)

AI tools create high-throughput discrimination: a biased algorithm screening thousands of applications affects far more people than individual human bias. 12.2% of Canadian businesses use AI. The CHRC has recognized the risk but has taken no enforcement action. NYC and the EU have regulatory frameworks; Canada does not.

Entities involved

Related entries

Taxonomy (assessed)

Domain
Employment · Public services
Harm type
Discrimination and rights · Economic harm
AI contribution pathway
Training data origin · Deployment context · Absent oversight
Lifecycle phase
Training · Deployment · Monitoring

Change history

Version: v1 · Date: March 10, 2026 · Change: Initial publication