Pilot phase: CAIM is under construction. Records are provisional, based on public sources, and have not yet been peer-reviewed. Feedback welcome.
Status: Active · Severity: Severe · Confidence: High

AI systems are being applied to Indigenous peoples in Canada in policing and data collection without cross-cultural validation or First Nations data governance. The Citizen Lab documented AI-driven predictive policing tools creating discriminatory feedback loops through historical data. The International Association of Privacy Professionals has reported that data from remote Indigenous communities is routinely absorbed to train AI systems without community consent. Courts and human rights bodies have found that rule-based predecessors to these AI tools produced discriminatory outcomes for Indigenous peoples — establishing legal precedents directly applicable to AI systems entering the same domains.

Identified: June 13, 2018 · Last assessed: March 11, 2026

AI systems are being applied to Indigenous peoples in Canada in policing, data collection, and service delivery, often without cross-cultural validation, community governance, or recognition of First Nations data sovereignty.

Legal and institutional context

Algorithmic and actuarial tools have a documented history of producing discriminatory outcomes for Indigenous peoples in Canada. In Ewert v. Canada (2018 SCC 30), the Supreme Court of Canada ruled 7-2, on the statutory claim, that the Correctional Service of Canada breached its obligation under s. 24(1) of the Corrections and Conditional Release Act to take all reasonable steps to ensure that any information about an offender that it uses is "as accurate, up to date and complete as possible." The tools at issue — the PCL-R, VRAG, SORAG, Static-99, and VRS-SO — were actuarial and psychological scoring instruments developed and validated on predominantly non-Indigenous populations. While these specific instruments are rule-based tools outside CAIM's AI system scope, the ruling establishes a legal precedent directly relevant to AI-based risk assessment: systems trained or validated without adequate representation of Indigenous populations may breach statutory obligations when applied to Indigenous peoples.
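To make concrete what evaluating cross-cultural validity could involve for an AI-based risk score, the sketch below compares discrimination (AUC) and calibration of a hypothetical score across population groups, the kind of subgroup check whose absence was at issue in Ewert. It is not drawn from the ruling or from any deployed tool; the data, function name, and thresholds are illustrative assumptions only.

```python
# Minimal sketch of a per-group validity check for a binary-outcome risk score.
# All data here is synthetic; a real audit would need representative,
# community-governed data for every group the tool is applied to.
import numpy as np
from sklearn.metrics import roc_auc_score

def subgroup_validity_report(y_true, y_score, group):
    """Per-group discrimination (AUC) and calibration gap for a risk score."""
    y_true, y_score, group = map(np.asarray, (y_true, y_score, group))
    report = {}
    for g in np.unique(group):
        mask = group == g
        report[str(g)] = {
            "n": int(mask.sum()),
            # How well the score separates outcomes within this group.
            "auc": float(roc_auc_score(y_true[mask], y_score[mask])),
            # Mean predicted risk minus observed rate: over- or under-prediction.
            "calibration_gap": float(y_score[mask].mean() - y_true[mask].mean()),
        }
    return report

# Illustrative synthetic example.
rng = np.random.default_rng(0)
n = 2000
group = rng.choice(["A", "B"], size=n)
y_true = rng.binomial(1, 0.2, size=n)
y_score = np.clip(0.4 * y_true + rng.normal(0.3, 0.2, size=n), 0.0, 1.0)
print(subgroup_validity_report(y_true, y_score, group))
```

A large AUC or calibration gap between groups would not by itself settle the legal question, but it is the kind of evidence the Ewert reasoning suggests an operator should be collecting before applying a tool across populations.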

The Ontario Human Rights Commission's 2018 report "Interrupted Childhoods" found Indigenous children overrepresented in admissions into care at 93% of agencies surveyed, with proportions 2.6 times higher than their share of the child population. The report identified risk assessment tools reflecting "White, Western, Christian notions of acceptable child rearing" as a contributing factor. A separate analysis (Fallon et al., 2016, CWRP Information Sheet #176E) found that Aboriginal children were more than 130% more likely to be investigated than White children and 168% more likely to be placed in out-of-home care. The specific tools involved are structured decision-making instruments rather than AI systems, but the documented pattern of cross-cultural bias in risk scoring has direct implications for AI-based tools now entering these domains.

AI-specific harms

The Citizen Lab at the University of Toronto and the International Human Rights Program published "To Surveil and Predict" (2020), a human rights analysis of algorithmic policing in Canada. The report documented AI-driven predictive policing tools and bail risk algorithms being used in ways that affect Indigenous peoples, including monitoring of Indigenous rights protesters. The report stated that historical policing data reflects patterns of systemic discrimination, and identified negative feedback loops: communities with higher rates of police contact generate more data, which machine learning models interpret as indicating higher risk — a dynamic the report characterized as reinforcing discriminatory patterns.
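The feedback-loop dynamic the report describes can be illustrated with a toy simulation. The sketch below is an assumption-laden illustration, not the report's methodology: two areas have identical true incident rates, incidents are only recorded where patrols are sent, and patrols are allocated in proportion to recorded incidents, so an initial disparity in police contact is reinforced rather than correcting itself.

```python
# Toy model of a policing feedback loop. The numbers are invented for
# illustration and do not come from the Citizen Lab report.
import numpy as np

rng = np.random.default_rng(42)
true_rate = np.array([0.05, 0.05])   # identical true incident rates in both areas
recorded = np.array([70.0, 30.0])    # historical records skewed by past patrol patterns

for _ in range(20):
    # Naive "predictive" allocation: patrol in proportion to recorded incidents.
    patrol_share = recorded / recorded.sum()
    # Incidents are only recorded where patrols are present, so the data the
    # model sees next round is shaped by its own previous allocation.
    observed = rng.binomial(1000, true_rate * patrol_share)
    recorded += observed

print("recorded-incident share:", np.round(recorded / recorded.sum(), 2))
# Despite equal true rates, the area with more historical police contact keeps
# roughly its inflated share of the records, so the disparity never washes out.
```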

The International Association of Privacy Professionals has reported that information from individuals using AI-driven services in remote Indigenous communities is "routinely absorbed to train and refine AI systems" without community governance. The First Nations Information Governance Centre's Data Sovereignty Research Collaborative addresses AI and big data within the context of OCAP principles (Ownership, Control, Access, Possession) — a First Nations data governance framework.
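OCAP is a governance framework rather than a technical specification, but one way its requirements could surface in an AI data pipeline is as an explicit gate that keeps community data out of training sets unless community-level consent and a data-sharing agreement are recorded. The sketch below is hypothetical: the field names and the gating rule are invented for illustration and do not represent FNIGC guidance.

```python
# Hypothetical consent gate for a training-data pipeline. Field names such as
# community_consent and data_sharing_agreement are invented for illustration.
from dataclasses import dataclass

@dataclass
class DatasetRecord:
    source_community: str
    community_consent: bool              # decision made by the community's own governance body
    data_sharing_agreement: str | None   # reference to a signed agreement, if one exists

def eligible_for_training(record: DatasetRecord) -> bool:
    """Exclude any record that lacks community-level consent or an agreement."""
    return record.community_consent and record.data_sharing_agreement is not None

batch = [
    DatasetRecord("Community A", True, "DSA-2024-017"),
    DatasetRecord("Community B", False, None),
]
training_set = [r for r in batch if eligible_for_training(r)]
print(len(training_set))  # 1: data without recorded community consent never reaches training
```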

First Nations governance responses

The Assembly of First Nations submitted a formal brief to the House of Commons Standing Committee on Industry and Technology (INDU) regarding Bill C-27 (AIDA), stating that "AI has the potential to destroy First Nations' cultures, threaten First Nations' security, and increase demand for our resources." The AFN stated that there had been no Nation-to-Nation consultation between Canada and First Nations on the legislation. Bill C-27 subsequently died on the Order Paper when Parliament was prorogued on January 6, 2025; the AFN's position on Nation-to-Nation consultation applies to any successor AI legislation.

The Chiefs of Ontario Research and Data Management Sector published a research paper in 2024 analyzing the effects of AI on First Nations in Ontario, describing AI as "a powerful and disruptive technology" that comes "paired with serious risks for First Nations."

The First Nations of Quebec and Labrador Health and Social Services Commission (CSSSPNQL) published a position paper on digital and AI ethics, establishing guidelines to "guide digital development in harmony with the values of First Nations."

Harms

AI-driven predictive policing tools and bail risk algorithms disproportionately affect marginalized communities, including Indigenous peoples, through reliance on historical data reflecting patterns of systemic discrimination. Machine learning models trained on biased historical policing data create negative feedback loops — a dynamic the Citizen Lab report characterized as reinforcing discriminatory patterns.

Disproportionate Surveillance · Discrimination & Rights · Significant · Population

Data from Indigenous communities using AI-driven services absorbed to train AI systems without community governance, OCAP recognition, or consent (IAPP reporting). First Nations data sovereignty principles (OCAP) are not reflected in the data practices of AI service providers operating in remote communities.

Privacy & Data Exposure · Moderate · Population

Risk that AI-based risk assessment tools entering justice, child welfare, and policing domains will reproduce the same cross-cultural validation failures documented in rule-based predecessors (Ewert v. Canada, OHRC findings), with machine learning amplifying bias through automated scale and feedback loops rather than static scoring.

Discrimination & Rights · Significant · Population

Evidence

9 reports

  1. Regulatory — Ontario Human Rights Commission (Apr 12, 2018)

    Indigenous children overrepresented in admissions into care at 93% of agencies (25 of 27), proportions 2.6 times higher than child population share; risk assessment tools reflecting White, Western, Christian notions of acceptable child rearing identified as contributing factor

  2. Court — Supreme Court of Canada (Jun 13, 2018)

    SCC declared (7-2) that CSC breached its obligation under s. 24(1) CCRA by using actuarial risk assessment tools (including Static-99) developed on non-Indigenous populations for Indigenous offenders without evaluating cross-cultural validity

  3. Academic — Citizen Lab & International Human Rights Program (IHRP), University of Toronto (Sep 1, 2020)

    Algorithmic tools used to monitor Indigenous rights protesters and assess bail risk using historical data reflecting systemic discrimination; identified negative feedback loops in policing

  4. Academic — Canadian Child Welfare Research Portal (Fallon, Black, Van Wert, King, Filippelli, Lee, & Moody) (Jan 1, 2016)

    Aboriginal children more than 130% more likely to be investigated than White children, 40% more likely to be transferred to ongoing services, 168% more likely to be placed in out-of-home care during investigation

  5. Media — CBC News (Jun 13, 2018)

    Media coverage of Ewert v. Canada SCC ruling on risk assessment tools and Indigenous offenders

  6. Submission — Assembly of First Nations (Oct 1, 2023)

    AI has the potential to destroy First Nations cultures; no Nation-to-Nation consultation on legislation

  7. Official — Chiefs of Ontario Research and Data Management Sector (Sep 26, 2024)

    AI described as a powerful and disruptive technology paired with serious risks for First Nations

  8. Official — CSSSPNQL (Jun 4, 2025)

    First Nations-authored AI ethics framework establishing guidelines to guide digital development in harmony with First Nations values

  9. Media — International Association of Privacy Professionals (Nov 5, 2025)

    Information from individuals using AI-driven services in remote Indigenous communities routinely absorbed to train AI systems without community governance

Record details

Policy Recommendations (assessed)

Moratorium on AI-based predictive policing tools using historical data, pending independent review of algorithmic bias and cross-cultural validation

Citizen Lab, To Surveil and Predict (2020) (Sep 1, 2020)

Nation-to-Nation consultation on AI legislation affecting First Nations

Assembly of First Nations (parliamentary brief on Bill C-27) (Oct 1, 2023)

Require OCAP-compliant data governance for AI systems processing First Nations data, with co-development of digital ethics guidelines by First Nations communities

CSSSPNQL position paper on digital and AI ethics (Jan 1, 2024)

Editorial Assessment (assessed)

Indigenous peoples in Canada hold distinct constitutional rights (s.35 of the Constitution Act, 1982) and governance structures, including First Nations data governance frameworks such as OCAP. Algorithmic systems applied in justice, child welfare, and policing do not incorporate these distinct legal and governance contexts. The Supreme Court's declaration in Ewert established that CSC breached its statutory obligation by using risk assessment tools on Indigenous offenders without evaluating their cross-cultural validity — but this finding applies to federal corrections and has not been extended to other domains where similar tools are in use. The OHRC's finding that child welfare risk tools contribute to Indigenous overrepresentation in care, and the Citizen Lab's documentation of algorithmic policing using data reflecting historical patterns of police contact, indicate that the same structural condition — algorithmic tools applied without accounting for the distinct circumstances of Indigenous peoples — is present across multiple domains.

Related Records

Taxonomy (assessed)

Domain
Justice · Social Services · Law Enforcement · Public Services
Harm type
Discrimination & Rights · Disproportionate Surveillance · Privacy & Data Exposure
AI pathway
Deployment Context · Training Data Origin · Oversight Absent · Monitoring Absent
Lifecycle phase
Deployment · Data Collection · Training

Version 1