Recruitment and Hiring AI Systems Producing Discriminatory Outcomes
Canadian employers are increasingly using AI for hiring. A University of Washington study found that AI screening systems favoured white-associated names 85% of the time and never favoured Black men's names. Ontario's Bill 149 (in force January 2026) is the first Canadian law requiring AI disclosure in job postings. The OHRC published the first human-rights-based AI impact assessment tool (November 2024).
Canadian employers are increasingly using AI-powered tools for hiring and recruitment — automated resume screening, video interview analysis, candidate matching algorithms, and predictive workforce analytics — with limited transparency about how these systems evaluate candidates and growing evidence that they can produce discriminatory outcomes along protected grounds.
The adoption is substantial and accelerating. Statistics Canada reported that 12.2% of Canadian businesses used AI as of Q2 2025, more than double the rate from the previous year, with human resources and recruitment among the most common applications. LinkedIn's AI-powered hiring tools are used by thousands of Canadian employers. Major Canadian organizations use platforms like Workday, iCIMS, Greenhouse, and HireVue that incorporate AI for candidate screening and ranking. The Canadian government itself uses AI-assisted tools in some hiring processes.
The evidence of bias in AI hiring tools is well-documented internationally and directly relevant to Canadian deployments. Amazon's internal AI recruitment tool, developed to screen resumes, was found to systematically discriminate against women — penalizing resumes that included the word "women's" (as in "women's chess club captain") and downgrading graduates of all-women's colleges. Amazon abandoned the tool by early 2017 after failing to eliminate the bias. Reuters publicly reported the project in October 2018. The root cause — training on historical hiring data that reflected past discriminatory patterns — is present in virtually all AI hiring systems trained on employer data.
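The feedback loop described above — a model trained on past decisions faithfully reproducing past bias — can be illustrated with a deliberately toy sketch. All data and the scoring rule below are invented for illustration; this is not any vendor's system:

```python
# Toy resume screener: scoring tokens by historical hire rates.
# The "training data" encodes a past pattern in which resumes mentioning
# "womens" were always rejected; the model learns exactly that pattern.
from collections import Counter

# Hypothetical historical data: (resume tokens, past hire decision).
history = [
    (["python", "lead", "womens"], 0),
    (["python", "lead"], 1),
    (["java", "captain", "womens"], 0),
    (["java", "captain"], 1),
    (["sql", "manager", "womens"], 0),
    (["sql", "manager"], 1),
]

def token_scores(data):
    """Score each token by the hire rate of resumes containing it."""
    hired, seen = Counter(), Counter()
    for tokens, label in data:
        for t in set(tokens):
            seen[t] += 1
            hired[t] += label
    return {t: hired[t] / seen[t] for t in seen}

scores = token_scores(history)
print(scores["womens"])  # 0.0 — the proxy token is maximally penalized
print(scores["python"])  # 0.5 — neutral tokens score at the base rate
```

No protected attribute appears anywhere in the features; the discrimination enters entirely through the labels, which is why retraining on the same employer data cannot remove it.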
Video interview analysis tools raise particular concerns. HireVue and similar platforms assess candidates based on facial expressions, tone of voice, and word choice, generating scores that influence hiring decisions. Research has demonstrated that these tools can discriminate against candidates with disabilities (different facial expressions, speech patterns), candidates of different racial or ethnic backgrounds, and candidates whose first language is not English. HireVue discontinued its facial analysis feature in 2021 following criticism, but other companies continue to offer similar capabilities.
Canadian human rights law prohibits employment discrimination on grounds including race, national or ethnic origin, colour, religion, age, sex, sexual orientation, gender identity, marital status, family status, disability, and genetic characteristics. The Canadian Human Rights Act and provincial human rights legislation apply to hiring processes regardless of whether a human or an algorithm makes the decision. However, the mechanisms for detecting and proving algorithmic discrimination are underdeveloped. An applicant rejected by an AI screening tool typically receives no explanation and has no visibility into the criteria that were applied.
The Canadian Human Rights Commission has recognized the risk. The CHRC has stated that "algorithms that are trained on historical data can perpetuate and amplify existing patterns of discrimination." However, no specific enforcement action has been taken against discriminatory AI hiring practices in Canada.
The structural concern is that AI hiring tools create a high-throughput discrimination machine. When a biased algorithm screens thousands of applications, it affects far more people than any single biased human recruiter could — and the discrimination is invisible because it occurs inside a black box that neither the employer nor the applicant can inspect. Canadian employers may be unknowingly violating human rights law by deploying tools they cannot audit, evaluate, or explain.
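The auditing remedy contemplated in NYC Local Law 144 rests on a simple calculation: compare each group's selection rate to the most-selected group's rate. A minimal sketch, with invented counts (the 0.8 benchmark is the EEOC four-fifths rule commonly used as a reference point):

```python
# Hypothetical selection-rate audit: compute each group's "impact ratio"
# relative to the most-selected group, as NYC Local Law 144-style bias
# audits do. All counts here are invented for illustration.
def impact_ratios(outcomes):
    """outcomes: {group: (selected, total)} -> {group: impact ratio}."""
    rates = {g: sel / tot for g, (sel, tot) in outcomes.items()}
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

audit = impact_ratios({
    "group_a": (120, 400),  # 30% selection rate
    "group_b": (45, 300),   # 15% selection rate
})
print(audit["group_a"])  # 1.0 — reference group
print(audit["group_b"])  # 0.5 — well below the 0.8 (four-fifths) benchmark
```

The point of mandated audits is precisely that this arithmetic requires outcome data broken down by group — data that neither rejected applicants nor, often, the deploying employer currently sees.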
Harms
AI recruitment systems trained on historical data reproduce and amplify existing patterns of discrimination. Research documents that AI resume screeners disadvantage candidates with disabilities, non-Western names, and career gaps.
12.2% of Canadian businesses used AI as of Q2 2025, with human resources among the most common applications. Candidates subjected to AI screening generally have no visibility into the evaluation criteria and no meaningful recourse.
Evidence
10 reports
- Amazon scraps secret AI recruiting tool that showed bias against women — Primary source
Amazon's AI recruitment tool systematically discriminated against women; abandoned by early 2017, publicly reported October 2018
- Artificial Intelligence and Human Rights — Primary source
CHRC recognized that AI algorithms trained on historical data can perpetuate and amplify discrimination
- Bill 149: Working for Workers Four Act, 2024 — Primary source
Ontario Bill 149 requires AI disclosure in job postings, effective January 1, 2026 — first in Canada
- Gender, Race, and Intersectional Bias in Resume Screening via Language Model Retrievers — Primary source
3M+ comparisons across LLMs: white-associated names favored 85% of the time; female names favored only 11%; Black male names never favored over white male names
- Use of Artificial Intelligence by Canadian Businesses — Primary source
12.2% of Canadian businesses used AI as of Q2 2025, more than double from previous year; HR/recruitment among top applications
- Human Rights AI Impact Assessment — Primary source
First Canadian AI impact assessment tool grounded in human rights law (voluntary)
- NYC requires annual bias audits of AI hiring tools and candidate notice
- Prohibits employment discrimination on protected grounds regardless of whether the decision is made by a human or an algorithm
- PSC guidance on AI in federal hiring processes
- Mobley v. Workday: preliminary collective-action certification; AI vendor potentially liable as an employer's "agent" for discrimination
Record details
Policy recommendations
- Require bias auditing of AI tools used in hiring and recruitment decisions, modelled on NYC Local Law 144 (NYC Council / EU AI Act)
- Mandate transparency notices to candidates when AI tools are used in hiring evaluation (NYC Local Law 144 / EU AI Act)
- CHRC to develop enforcement guidance for algorithmic discrimination in employment (Canadian Human Rights Commission)
Editorial assessment
AI tools create high-throughput discrimination: a biased algorithm screening thousands of applications affects far more people than individual human bias. 12.2% of Canadian businesses use AI. The CHRC has recognized the risk but has taken no enforcement action. NYC and the EU have regulatory frameworks; Canada has none.
Entities involved
Related records
- Large Language Models Systematically Recommend Lower Salaries for Women, Minorities, and Refugees in Negotiation Advice
- AI Performance Disparities Affecting Canadian Linguistic and Cultural Communities
- AI in Canadian Government Automated Decision-Making
Taxonomy
Change history
| Version | Date | Modification |
|---|---|---|
| v1 | March 10, 2026 | Initial publication |