AI Governance Gap in Canada
Canada's only AI bill (AIDA) lapsed when Parliament was prorogued in January 2025. No replacement has been tabled. The government has adopted a 'light, tight, right' approach. 85% of Canadians support AI regulation; 92% are unaware of any existing AI laws.
Canada's only attempt at comprehensive AI legislation — the Artificial Intelligence and Data Act (AIDA), Part 3 of Bill C-27 — died on the Order Paper when Parliament was prorogued on January 6, 2025. As of March 2026, no replacement has been tabled. The current government has explicitly adopted a "light, tight, right" approach to AI regulation, which it describes as balancing economic opportunity with governance.
AIDA was introduced on June 16, 2022 as part of the Digital Charter Implementation Act. It was widely criticized by 45 civil society organizations — including Amnesty International, the Assembly of First Nations, the Canadian Labour Congress, and the Writers Guild of Canada — for lacking independent oversight, excluding government use of AI, narrowing harm definitions to quantifiable individual damages, and having been developed through closed consultation with industry. The bill remained in the INDU committee through 2024 without reaching a vote.
The prorogation also killed Bill C-63 (the Online Harms Act) and Bill C-26 (cybersecurity legislation), compounding the regulatory gap. AI Minister Evan Solomon, appointed May 2025 as Canada's first minister responsible for AI, confirmed that AIDA will not return in its original form. The government launched a 30-day "national sprint" in September 2025 and received over 11,300 public responses, but as of March 2026 no legislation has been tabled.
What Canada does have is partial and fragmented. The Directive on Automated Decision-Making (DADM), effective since April 2019, applies only to federal institutions using automated systems for administrative decisions — and the CRA is excluded by operation of its enabling legislation (Canada Revenue Agency Act, s. 30(2)). A Voluntary Code of Conduct on Generative AI, launched September 2023, has no enforcement mechanism. PIPEDA and provincial privacy laws (Quebec's Law 25, Alberta and BC's PIPA) provide some data protection but were not designed for AI. No federal law addresses AI safety, mandatory incident reporting, biometric surveillance, deepfakes, or autonomous systems.
The result is that every AI incident documented in CAIM occurred in a jurisdiction with no comprehensive AI governance framework. The RCMP deployed Clearview AI facial recognition without public disclosure. OpenAI detected, but did not report, a user who went on to commit a mass shooting. The CRA served incorrect information to millions. IRCC deployed opaque algorithmic screening. In each case, no AI-specific regulatory framework applied.
A Leger poll in August 2025 found that 85% of Canadians believe AI tools should be regulated. A separate KPMG and University of Melbourne global study (surveying 1,025 Canadians in late 2024) found that 92% are unaware of any existing laws governing AI in Canada. The gap between public expectation and institutional reality is the defining feature of this hazard.
The current government's approach reflects a deliberate policy choice. Proponents argue that premature comprehensive legislation could stifle innovation and economic competitiveness, and that existing legal frameworks — privacy law, competition law, criminal law, consumer protection — already apply to AI systems. The Canadian AI Safety Institute (CAISI) and the Voluntary Code of Conduct represent non-legislative governance mechanisms. Critics counter that voluntary frameworks lack enforcement mechanisms and existing laws were not designed for AI-specific risks.
Harms
Canada has no comprehensive AI legislation, no independent AI oversight body with enforcement power, and no mandatory incident reporting for AI companies. Every AI incident in CAIM's dataset occurred under these conditions.
Editorial note: This is a structural condition, not a discrete event. Its severity derives from the cumulative effect across all domains where AI is deployed without governance.
AIDA (Bill C-27 Part 3) died on the Order Paper in January 2025 after being criticized by 45 civil society organizations. No replacement legislation has been tabled as of March 2026, and the government's 'light, tight, right' approach signals comprehensive AI legislation is not a near-term priority.
The federal DADM applies only to federal institutions, exempts major agencies, and has inconsistent compliance. Provincial and municipal AI deployments operate with no equivalent framework, creating fragmented governance across jurisdictions.
Evidence
12 reports
- LEGISinfo - Bill C-27 (primary source): AIDA was introduced as Part 3 of Bill C-27 on June 16, 2022.
- Prorogation's Digital Impact: Bills C-27, C-63, C-26, and More Die on the Order Paper (primary source): Bill C-27, including AIDA, died when Parliament was prorogued January 6, 2025.
- Minister Solomon advocates "light, tight, right" regulatory approach.
- 92% of Canadians unaware of any existing AI laws, regulations, or policies.
- Views on AI - August 2025 (primary source): 85% of Canadians believe AI tools should be regulated by governments.
- Canada still has no meaningful AI regulation (primary source): As of 2026, Canada has no meaningful AI regulation.
- DADM applies only to federal institutions for administrative decisions.
- Voluntary code has no enforcement mechanism.
- CAISI established with initial budget of $50 million over five years.
- History and criticism of AIDA; 45 civil society organizations opposed.
- Key criticisms of AIDA, including lack of independent oversight and exclusion of government AI.
- Solomon confirmed AIDA will not return in its original form; experts warn Canada is behind peers.
Record details
Responses & Outcomes
- Bill C-27 (AIDA): Introduced June 16, 2022, including the Artificial Intelligence and Data Act as Part 3. Died on Order Paper January 6, 2025 when Parliament was prorogued. Outcome: AIDA never passed; widely criticized for lacking independent oversight, excluding government AI use, and narrow harm definitions; died before reaching a vote.
- Voluntary Code of Conduct on Generative AI: Launched September 2023. Entirely voluntary with no enforcement mechanism. Initial signatories included TELUS and BlackBerry; 30 signatories by December 2023. Outcome: no enforcement mechanism; participation is voluntary; does not address safety, incident reporting, or AI-specific harms.
- Canadian AI Safety Institute (CAISI): Announced November 2024 with a budget of $50 million over five years and a mandate to advance scientific understanding of AI risks. Executive Director appointed February 2025. Outcome: research-only mandate with no enforcement power or regulatory authority, and a modest budget relative to the scale of frontier AI development.
- National "sprint" consultation: 30-day public consultation launched September 2025 by Minister Solomon. Received 11,000+ responses; results released February 2026. Outcome: consultation completed but has not yet produced legislation or binding policy as of March 2026.
Policy Recommendations
- Canadian Centre for Policy Alternatives (Feb 12, 2026): Comprehensive federal AI legislation with risk-based tiering, independent oversight, and enforcement mechanisms
- Office of the Privacy Commissioner of Canada (Feb 2, 2026): Recognize privacy as a fundamental right and make privacy legislation the foundation of AI regulation
- 45 civil society organizations, September 2023 open letter (Sep 1, 2023): Mandatory pre-deployment safety evaluation and algorithmic impact assessment for high-risk AI systems, extending beyond the federal government
- Yoshua Bengio (Oct 1, 2025): International treaties for AI governance, not just domestic rules
Editorial Assessment
Multiple AI-related incidents documented in CAIM — including law enforcement deployment of facial recognition, an AI company's decision not to report a safety-relevant finding, and a government chatbot providing incorrect information to millions — occurred in the absence of AI-specific regulatory frameworks. Canada's only attempt at comprehensive AI legislation (AIDA) lapsed in January 2025 and no replacement has been tabled. The current government has adopted a 'light, tight, right' approach, relying on existing laws and voluntary frameworks. Public opinion surveys indicate 85% support for AI regulation, while 92% of respondents are unaware of any existing AI laws.
Entities Involved
Related Records
- AI Safety Reporting and Disclosure Gaps
- Biometric Surveillance Technology Deployment in Canada
- AI in Canadian Government Automated Decision-Making
- AI Confabulation in Consequential Canadian Contexts
- Edmonton Police First to Deploy Facial Recognition Body Cameras; Privacy Commissioner Says Approval Not Obtained
- Three Ontario Regional Police Services Built a Shared Facial Recognition Database of 1.6 Million Images
- Frontier AI Models Demonstrating Deceptive and Self-Preserving Behavior
- IRCC Machine-Learning Triage Sorts Millions of Visa Applications Using Models Trained on Historical Decisions
- Canada's Dependency on Foreign AI Infrastructure
Taxonomy
Changelog
| Version | Date | Change |
|---|---|---|
| v1 | Mar 10, 2026 | Initial draft — agent-authored, requires editorial review |