Comprehensive AI Legislation in Canada
Canada's only AI bill (AIDA) died when Parliament was prorogued in January 2025. No replacement has been tabled. 85% of Canadians support AI regulation; 92% are unaware of any law governing AI.
Canada's only attempt at comprehensive AI legislation — the Artificial Intelligence and Data Act (AIDA), Part 3 of Bill C-27 — died on the Order Paper when Parliament was prorogued on January 6, 2025. As of March 2026, no replacement has been tabled. The current government has explicitly adopted a "light, tight, right" approach to AI regulation, which it describes as balancing economic opportunity with governance.
AIDA was introduced on June 16, 2022 as part of the Digital Charter Implementation Act. It was widely criticized by 45 civil society organizations — including Amnesty International, the Assembly of First Nations, the Canadian Labour Congress, and the Writers Guild of Canada — for lacking independent oversight, excluding government use of AI, narrowing harm definitions to quantifiable individual damages, and having been developed through closed consultation with industry. The bill remained in the INDU committee through 2024 without reaching a vote.
The prorogation also killed Bill C-63 (the Online Harms Act) and Bill C-26 (cybersecurity legislation), compounding the regulatory gap. Evan Solomon, appointed in May 2025 as Canada's first minister responsible for AI, confirmed that AIDA will not return in its original form. The government launched a 30-day "national sprint" in September 2025 and received over 11,300 public responses, but as of March 2026 no legislation has been tabled.
What Canada does have is partial and fragmented. The Directive on Automated Decision-Making (DADM), effective since April 2019, applies only to federal institutions using automated systems for administrative decisions — and the CRA is excluded by operation of its enabling legislation (Canada Revenue Agency Act, s. 30(2)). A Voluntary Code of Conduct on Generative AI, launched September 2023, has no enforcement mechanism. PIPEDA and provincial privacy laws (Quebec's Law 25, Alberta and BC's PIPA) provide some data protection but were not designed for AI. No federal law addresses AI safety, mandatory incident reporting, biometric surveillance, deepfakes, or autonomous systems.
The result is that every AI incident documented in CAIM occurred in a jurisdiction with no comprehensive AI governance framework. The RCMP deployed Clearview AI facial recognition without public disclosure. OpenAI detected but did not report a future mass shooter. The CRA served incorrect information to millions. IRCC deployed opaque algorithmic screening. In each case, the harm occurred in the absence of any applicable AI-specific regulatory framework.
A Leger poll in August 2025 found that 85% of Canadians believe AI tools should be regulated. A separate KPMG and University of Melbourne global study (surveying 1,025 Canadians in late 2024) found that 92% are unaware of any existing laws governing AI in Canada. The gap between public expectation and institutional reality is the defining feature of this hazard.
The current government's approach reflects a deliberate policy choice. Proponents argue that premature comprehensive legislation could stifle innovation and economic competitiveness, and that existing legal frameworks — privacy law, competition law, criminal law, consumer protection — already apply to AI systems. The Canadian AI Safety Institute (CAISI) and the Voluntary Code of Conduct represent non-legislative governance mechanisms. Critics counter that voluntary frameworks lack enforcement mechanisms and existing laws were not designed for AI-specific risks.
Harms
Canada has no comprehensive AI legislation, no independent oversight body with enforcement powers, and no incident-reporting obligation for AI companies. Every AI incident in the CAIM dataset occurred under these conditions.
Editorial note: This is a structural condition, not a discrete event. Its severity stems from the cumulative effect across every domain where AI is deployed without governance.
AIDA (Bill C-27, Part 3) died on the Order Paper in January 2025 after being criticized by 45 civil society organizations. No replacement legislation has been tabled as of March 2026, and the government's "light, tight, right" approach signals that comprehensive AI legislation is not a near-term priority.
The federal DADM applies only to federal institutions, exempts major agencies, and sees inconsistent compliance. Provincial and municipal AI deployments operate without an equivalent framework, creating fragmented governance across jurisdictions.
Evidence
12 reports
- LEGISinfo - Bill C-27 (primary source)
AIDA was introduced as Part 3 of Bill C-27 on June 16, 2022
- Prorogation's Digital Impact: Bills C-27, C-63, C-26, and More Die on the Order Paper (primary source)
Bill C-27 including AIDA died when Parliament was prorogued January 6, 2025
-
Minister Solomon advocates 'light, tight, right' regulatory approach
- Canada is lagging behind global peers in AI trust and literacy (primary source)
92% of Canadians unaware of any existing AI laws, regulations, or policies
- Views on AI - August 2025 (primary source)
85% of Canadians believe AI tools should be regulated by governments
- Canada still has no meaningful AI regulation (primary source)
As of 2026, Canada has no meaningful AI regulation
-
DADM applies only to federal institutions for administrative decisions
-
Voluntary code has no enforcement mechanism
-
CAISI established with initial budget of $50 million over five years
-
History and criticism of AIDA, 45 civil society organizations opposed
-
Key criticisms of AIDA including lack of independent oversight and exclusion of government AI
-
Solomon confirmed AIDA will not return in its original form; experts warn Canada is behind peers
Record details
Responses and outcomes
Bill C-27 tabled June 16, 2022, including the Artificial Intelligence and Data Act. Died on the Order Paper January 6, 2025.
AIDA was never adopted. It was widely criticized and died before reaching a vote.
Launched September 2023. Entirely voluntary, with no enforcement mechanism.
No enforcement mechanism. Participation is voluntary.
Announced November 2024. Budget of $50 million over five years. No enforcement powers.
Research mandate only. No enforcement powers.
30-day public consultation launched September 2025. Over 11,300 responses received. No legislation tabled as of March 2026.
Consultation completed but has not yet produced legislation.
Policy recommendations
- Comprehensive federal AI legislation with risk-based tiering, independent oversight, and enforcement mechanisms (Canadian Centre for Policy Alternatives, Feb. 12, 2026)
- Recognize privacy as a fundamental right and make privacy legislation the foundation of AI regulation (Office of the Privacy Commissioner of Canada, Feb. 2, 2026)
- Mandatory pre-deployment safety evaluation and algorithmic impact assessment for high-risk AI systems, extending beyond the federal government (45 civil society organizations, September 2023 open letter)
- International treaties for AI governance, not just domestic rules (Yoshua Bengio, Oct. 1, 2025)
Editorial assessment
Every incident documented by CAIM occurred in a jurisdiction without comprehensive AI governance. The RCMP covertly deployed facial recognition. OpenAI detected but did not report a future mass shooter. The CRA served incorrect information to millions of people. As AI grows more powerful, the same vacuum applies to ever more consequential deployments.
Entities involved
Related records
- AI Safety Reporting and Disclosure Gaps
- Biometric Surveillance Technology Deployment in Canada
- AI in Canadian Government Automated Decision-Making
- AI Confabulation in Consequential Canadian Contexts
- Edmonton Police First to Deploy Facial Recognition Body Cameras; Privacy Commissioner Says Approval Not Obtained
- Three Ontario Regional Police Services Built a Shared Facial Recognition Database of 1.6 Million Images
- Frontier AI Models Demonstrating Deceptive and Self-Preserving Behavior
- IRCC Machine-Learning Triage Sorts Millions of Visa Applications Using Models Trained on Historical Decisions
- Canada's Dependency on Foreign AI Infrastructure
Taxonomy
Change history
| Version | Date | Change |
|---|---|---|
| v1 | March 10, 2026 | Initial draft — agent-authored, requires editorial review |