Pilot phase: CAIM is under construction. Records are provisional, based on public sources, and have not yet been peer-reviewed. Feedback welcome.
Escalating · Critical · Confidence: high

Canada's only AI bill (AIDA) lapsed when Parliament was prorogued in January 2025. No replacement has been tabled. The government has adopted a 'light, tight, right' approach. 85% of Canadians support AI regulation; 92% are unaware of any existing AI laws.

Identified: January 6, 2025 · Last assessed: March 10, 2026

Canada's only attempt at comprehensive AI legislation — the Artificial Intelligence and Data Act (AIDA), Part 3 of Bill C-27 — died on the Order Paper when Parliament was prorogued on January 6, 2025. As of March 2026, no replacement has been tabled. The current government has explicitly adopted a "light, tight, right" approach to AI regulation, which it describes as balancing economic opportunity with governance.

AIDA was introduced on June 16, 2022 as part of the Digital Charter Implementation Act. It was widely criticized by 45 civil society organizations — including Amnesty International, the Assembly of First Nations, the Canadian Labour Congress, and the Writers Guild of Canada — for lacking independent oversight, excluding government use of AI, narrowing harm definitions to quantifiable individual damages, and having been developed through closed consultation with industry. The bill remained before the House of Commons Standing Committee on Industry and Technology (INDU) through 2024 without reaching a vote.

The prorogation also killed Bill C-63 (the Online Harms Act) and Bill C-26 (cybersecurity legislation), compounding the regulatory gap. Evan Solomon, appointed in May 2025 as Canada's first minister responsible for AI, confirmed that AIDA will not return in its original form. The government launched a 30-day "national sprint" in September 2025 and received over 11,300 public responses, but as of March 2026 no legislation has been tabled.

What Canada does have is partial and fragmented. The Directive on Automated Decision-Making (DADM), effective since April 2019, applies only to federal institutions using automated systems for administrative decisions — and the CRA is excluded by operation of its enabling legislation (Canada Revenue Agency Act, s. 30(2)). A Voluntary Code of Conduct on Generative AI, launched September 2023, has no enforcement mechanism. PIPEDA and provincial privacy laws (Quebec's Law 25, Alberta and BC's PIPA) provide some data protection but were not designed for AI. No federal law addresses AI safety, mandatory incident reporting, biometric surveillance, deepfakes, or autonomous systems.

The result is that every AI incident documented in CAIM occurred in a jurisdiction with no comprehensive AI governance framework. The RCMP deployed Clearview AI facial recognition without public disclosure. OpenAI detected concerning activity by a future mass shooter but did not report it. The CRA served incorrect information to millions. IRCC deployed opaque algorithmic screening. In each case, the harm occurred in the absence of any applicable AI-specific regulatory framework.

A Leger poll in August 2025 found that 85% of Canadians believe AI tools should be regulated. A separate KPMG and University of Melbourne global study (surveying 1,025 Canadians in late 2024) found that 92% are unaware of any existing laws governing AI in Canada. The gap between public expectation and institutional reality is the defining feature of this hazard.

The current government's approach reflects a deliberate policy choice. Proponents argue that premature comprehensive legislation could stifle innovation and economic competitiveness, and that existing legal frameworks — privacy law, competition law, criminal law, consumer protection — already apply to AI systems. The Canadian AI Safety Institute (CAISI) and the Voluntary Code of Conduct represent non-legislative governance mechanisms. Critics counter that voluntary frameworks lack enforcement mechanisms and existing laws were not designed for AI-specific risks.

Harms

Canada has no comprehensive AI legislation, no independent AI oversight body with enforcement power, and no mandatory incident reporting for AI companies. Every AI incident in CAIM's dataset occurred under these conditions.

Editorial note: This is a structural condition, not a discrete event. Its severity derives from the cumulative effect across all domains where AI is deployed without governance.

Autonomy Undermined · Significant · Population

AIDA (Bill C-27 Part 3) died on the Order Paper in January 2025 after being criticized by 45 civil society organizations. No replacement legislation has been tabled as of March 2026, and the government's 'light, tight, right' approach signals comprehensive AI legislation is not a near-term priority.

Autonomy Undermined · Significant · Population

The federal DADM applies only to federal institutions, exempts major agencies, and has inconsistent compliance. Provincial and municipal AI deployments operate with no equivalent framework, creating fragmented governance across jurisdictions.

Autonomy Undermined · Moderate · Population

Evidence

12 reports

  1. LEGISinfo — Bill C-27 (Primary source)
    Official — Parliament of Canada (Jun 16, 2022)

    AIDA was introduced as Part 3 of Bill C-27 on June 16, 2022

  2. Other — Fasken (Jan 6, 2025)

    Bill C-27 including AIDA died when Parliament was prorogued January 6, 2025

  3. Media — CPA Ontario (Jun 1, 2025)

    Minister Solomon advocates 'light, tight, right' regulatory approach

  4. Other — KPMG Canada / University of Melbourne (Jun 1, 2025)

    92% of Canadians unaware of any existing AI laws, regulations, or policies

  5. Views on AI — August 2025 (Primary source)
    Other — Leger (Aug 25, 2025)

    85% of Canadians believe AI tools should be regulated by governments

  6. Other — Canadian Centre for Policy Alternatives (Feb 12, 2026)

    As of 2026, Canada has no meaningful AI regulation

  7. Official — Treasury Board of Canada Secretariat (Apr 1, 2019)

    DADM applies only to federal institutions for administrative decisions

  8. Official — ISED (Sep 1, 2023)

    Voluntary code has no enforcement mechanism

  9. Official — Innovation, Science and Economic Development Canada (Nov 20, 2024)

    CAISI established with initial budget of $50 million over five years

  10. Other — Montreal AI Ethics Institute (Feb 1, 2025)

    History and criticism of AIDA, 45 civil society organizations opposed

  11. Other — McInnes Cooper (Mar 1, 2025)

    Key criticisms of AIDA including lack of independent oversight and exclusion of government AI

  12. Media — BetaKit (Oct 1, 2025)

    Solomon confirmed AIDA will not return in its original form; experts warn Canada is behind peers

Record details

Responses & Outcomes

Government of Canada · legislation · Repealed

Bill C-27 introduced June 16, 2022, including the Artificial Intelligence and Data Act as Part 3. Died on Order Paper January 6, 2025 when Parliament was prorogued.

AIDA never passed. Widely criticized for lacking independent oversight, excluding government AI use, and narrow harm definitions. Died before reaching a vote.

Government of Canada · guidance · Active

Launched September 2023. Entirely voluntary with no enforcement mechanism. Initial signatories included TELUS and BlackBerry; 30 signatories by December 2023.

No enforcement mechanism. Participation is voluntary. Does not address safety, incident reporting, or AI-specific harms.

Government of Canada · institutional action · Active

Announced November 2024. Budget of $50 million over five years. Mandate to advance scientific understanding of AI risks. No enforcement power. Executive Director appointed February 2025.

Research-only mandate. No enforcement power, no regulatory authority. Modest budget relative to the scale of frontier AI development.

Government of Canada · institutional action · Completed · Unknown

30-day public consultation launched September 2025 by Minister Solomon. Received over 11,300 responses. Results released February 2026. No legislation tabled as of March 2026.

Consultation completed but has not yet produced legislation or binding policy.

Policy Recommendations · assessed

Comprehensive federal AI legislation with risk-based tiering, independent oversight, and enforcement mechanisms

Canadian Centre for Policy Alternatives (Feb 12, 2026)

Recognize privacy as a fundamental right and make privacy legislation the foundation of AI regulation

Office of the Privacy Commissioner of Canada (Feb 2, 2026)

Mandatory pre-deployment safety evaluation and algorithmic impact assessment for high-risk AI systems, extending beyond federal government

45 civil society organizations (September 2023 open letter) (Sep 1, 2023)

International treaties for AI governance, not just domestic rules

Yoshua Bengio (Oct 1, 2025)

Editorial Assessment · assessed

Multiple AI-related incidents documented in CAIM — including law enforcement deployment of facial recognition, an AI company's decision not to report a safety-relevant finding, and a government chatbot providing incorrect information to millions — occurred in the absence of AI-specific regulatory frameworks. Canada's only attempt at comprehensive AI legislation (AIDA) lapsed in January 2025 and no replacement has been tabled. The current government has adopted a 'light, tight, right' approach, relying on existing laws and voluntary frameworks. Public opinion surveys indicate 85% support for AI regulation, while 92% of respondents are unaware of any existing AI laws.

Entities Involved

Related Records

Taxonomy · assessed

Domain
Public Services · Defence & Security · Law Enforcement · Finance & Banking · Healthcare · Education · Employment
Harm type
Other
AI pathway
Monitoring Absent · Oversight Absent
Lifecycle phase
Deployment · Monitoring

Changelog

Version · Date · Change
v1 · Mar 10, 2026 · Initial draft: agent-authored, requires editorial review
