Pilot phase: CAIM is under construction. Records are provisional, based on public sources, and have not yet been peer-reviewed. Feedback welcome.
Status: Active · Severity: Significant · Confidence: high

Canadian employers increasingly use AI for hiring — automated resume screening, video interview analysis, candidate matching — with 12.2% of businesses using AI as of Q2 2025. A University of Washington study found LLM resume screeners favored white-associated names 85% of the time and never favored Black male names over white male names. Ontario Bill 149 (effective Jan 2026) is the first Canadian law requiring AI disclosure in job postings. The OHRC released Canada's first human rights AI impact assessment tool (Nov 2024). No Canadian jurisdiction requires bias auditing of AI hiring tools.

Identified: January 1, 2024 · Last assessed: March 10, 2026

Canadian employers are increasingly using AI-powered tools for hiring and recruitment — automated resume screening, video interview analysis, candidate matching algorithms, and predictive workforce analytics — with limited transparency about how these systems evaluate candidates and growing evidence that they can produce discriminatory outcomes along protected grounds.

The adoption is substantial and accelerating. Statistics Canada reported that 12.2% of Canadian businesses used AI as of Q2 2025, more than double the rate from the previous year, with human resources and recruitment among the most common applications. LinkedIn's AI-powered hiring tools are used by thousands of Canadian employers. Major Canadian organizations use platforms like Workday, iCIMS, Greenhouse, and HireVue that incorporate AI for candidate screening and ranking. The Canadian government itself uses AI-assisted tools in some hiring processes.

The evidence of bias in AI hiring tools is well-documented internationally and directly relevant to Canadian deployments. Amazon's internal AI recruitment tool, developed to screen resumes, was found to systematically discriminate against women — penalizing resumes that included the word "women's" (as in "women's chess club captain") and downgrading graduates of all-women's colleges. Amazon abandoned the tool by early 2017 after failing to eliminate the bias. Reuters publicly reported the project in October 2018. The root cause — training on historical hiring data that reflected past discriminatory patterns — is present in virtually all AI hiring systems trained on employer data.
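The mechanism is simple to reproduce. The sketch below — hypothetical, synthetic data with illustrative feature names, not drawn from any real hiring tool — trains a plain logistic-regression screener on historical labels in which past recruiters penalized a protected-group proxy (such as the word "women's" on a resume). The fitted model learns a negative weight on the proxy, reproducing the historical bias:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
skill = rng.normal(size=n)            # genuinely job-relevant signal
proxy = rng.integers(0, 2, size=n)    # protected-group proxy, e.g. a club name

# Historical labels: past recruiters rewarded skill but also penalized the proxy.
hired = (skill - 1.5 * proxy + rng.normal(scale=0.5, size=n) > 0).astype(float)

# Fit a logistic-regression "screener" on the biased history (gradient descent).
X = np.column_stack([np.ones(n), skill, proxy])
w = np.zeros(3)
for _ in range(2000):
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.1 * (X.T @ (p - hired)) / n

print(w)  # the proxy weight (w[2]) is negative: the bias has been learned
```

Nothing in the training procedure distinguishes "penalize weak candidates" from "penalize the proxy"; both patterns are present in the labels, so both are learned.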

Video interview analysis tools raise particular concerns. HireVue and similar platforms assess candidates based on facial expressions, tone of voice, and word choice, generating scores that influence hiring decisions. Research has demonstrated that these tools can discriminate against candidates with disabilities (different facial expressions, speech patterns), candidates of different racial or ethnic backgrounds, and candidates whose first language is not English. HireVue discontinued its facial analysis feature in 2021 following criticism, but other companies continue to offer similar capabilities.

Canadian human rights law prohibits employment discrimination on grounds including race, national or ethnic origin, colour, religion, age, sex, sexual orientation, gender identity, marital status, family status, disability, and genetic characteristics. The Canadian Human Rights Act and provincial human rights legislation apply to hiring processes regardless of whether a human or an algorithm makes the decision. However, the mechanisms for detecting and proving algorithmic discrimination are underdeveloped. An applicant rejected by an AI screening tool typically receives no explanation and has no visibility into the criteria that were applied.

The Canadian Human Rights Commission has recognized the risk. The CHRC has stated that "algorithms that are trained on historical data can perpetuate and amplify existing patterns of discrimination." However, no specific enforcement action has been taken against discriminatory AI hiring practices in Canada.

The structural concern is that AI hiring tools create a high-throughput discrimination machine. When a biased algorithm screens thousands of applications, the number of people affected is far larger than traditional human bias — and the discrimination is invisible because it occurs inside a black box that neither the employer nor the applicant can inspect. Canadian employers may be unknowingly violating human rights law by deploying tools they cannot audit, evaluate, or explain.
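Auditing for this kind of disparity is, at the aggregate level, computationally trivial. The sketch below (hypothetical counts) computes the selection-rate impact ratio that underpins disparate-impact audits such as those mandated by NYC Local Law 144 — each group's selection rate divided by the most-selected group's rate:

```python
def impact_ratio(selected_by_group: dict, applied_by_group: dict) -> dict:
    """Selection rate per group, divided by the highest group's selection rate."""
    rates = {g: selected_by_group[g] / applied_by_group[g] for g in applied_by_group}
    top = max(rates.values())
    return {g: rates[g] / top for g in rates}

# Hypothetical screening outcomes: group A advanced 120 of 400, group B 45 of 300.
ratios = impact_ratio({"A": 120, "B": 45}, {"A": 400, "B": 300})
print(ratios)  # {'A': 1.0, 'B': 0.5} — B is selected at half A's rate
```

Under the common four-fifths rule of thumb, a ratio below roughly 0.8 flags potential adverse impact. The obstacle in Canada is not the arithmetic but access: neither applicants nor, often, employers can obtain the group-level outcome data from vendors' black-box systems.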

Harms

AI hiring systems trained on historical data reproduce and amplify existing discrimination patterns. Research documents that AI resume screeners disadvantage candidates with disabilities, non-Western names, and employment gaps, while video interview analysis tools correlate non-job-relevant behavioral signals with hiring recommendations.

Discrimination & Rights · Significant · Population

12.2% of Canadian businesses used AI as of Q2 2025, with human resources among the most common applications. Candidates subjected to AI screening typically have no visibility into evaluation criteria and no meaningful recourse against algorithmic decisions.

Discrimination & Rights · Autonomy Undermined · Significant · Population

Evidence

10 reports

  1. Media — Reuters (Oct 10, 2018)

    Amazon's AI recruitment tool systematically discriminated against women; abandoned by early 2017, publicly reported October 2018

  2. Official — Canadian Human Rights Commission (Jan 1, 2020)

    CHRC recognized that AI algorithms trained on historical data can perpetuate and amplify discrimination

  3. Official — Ontario Legislative Assembly (Mar 21, 2024)

    Ontario Bill 149 requires AI disclosure in job postings, effective January 1, 2026 — first in Canada

  4. Academic — University of Washington (Oct 1, 2024)

    3M+ comparisons across LLMs: white-associated names favored 85% of the time; female names favored only 11%; Black male names never favored over white male names

  5. Official — Statistics Canada (Nov 1, 2024)

    12.2% of Canadian businesses used AI as of Q2 2025, more than double from previous year; HR/recruitment among top applications

  6. Official — Ontario Human Rights Commission / Law Commission of Ontario (Nov 1, 2024)

    First Canadian AI impact assessment tool grounded in human rights law (voluntary)

  7. Official — New York City Council (Jul 5, 2023)

    NYC requires annual bias audits of AI hiring tools and candidate notice

  8. Official — Government of Canada (Jan 1, 2024)

    Prohibits employment discrimination on protected grounds regardless of whether decision is by human or algorithm

  9. Official — Public Service Commission of Canada (Jan 1, 2024)

    PSC guidance on AI in federal hiring processes

  10. Media — Fisher Phillips (May 1, 2025)

    Mobley v. Workday preliminary collective action certification; AI vendor potentially liable as employer 'agent' for discrimination

Record details

Policy Recommendations (assessed)

Require bias auditing of AI tools used in hiring and recruitment decisions, modelled on NYC Local Law 144

NYC Council / EU AI Act

Mandate transparency notices to candidates when AI tools are used in hiring evaluation

NYC Local Law 144 / EU AI Act

CHRC to develop enforcement guidance for algorithmic discrimination in employment

Canadian Human Rights Commission

Editorial Assessment (assessed)

AI hiring tools create high-throughput discrimination: a biased algorithm screening thousands of applications affects far more people than traditional human bias, and the discrimination is invisible inside a black box. 12.2% of Canadian businesses use AI, with HR among top applications. The CHRC has recognized the risk but no enforcement action has been taken. NYC and the EU have established regulatory frameworks; Canada has none. This hazard is distinct from the existing salary-discrimination hazard (which concerns LLM advice) — it concerns access to employment itself.

Entities Involved

Related Records

Taxonomy (assessed)

Domain
Employment · Public Services
Harm type
Discrimination & Rights · Economic Harm
AI pathway
Training Data Origin · Deployment Context · Monitoring Absent
Lifecycle phase
Training · Deployment · Monitoring

Changelog

Version  Date          Change
v1       Mar 10, 2026  Initial publication
