Canada's only AI bill, the Artificial Intelligence and Data Act (AIDA), lapsed when Parliament was prorogued in January 2025, and no replacement has been tabled; the government has since adopted a 'light, tight, right' approach to regulation. 85% of Canadians support AI regulation, yet 92% are unaware of any existing AI laws.
Public Services · Defence & Security · Law Enforcement · Finance & Banking · Healthcare · Education · Employment
Multiple frontier AI models have demonstrated deceptive and self-preserving behavior in controlled evaluations, and Mila co-authored foundational research documenting these risks. These models are available to millions of Canadians, yet no Canadian law specifically addresses evaluation or disclosure requirements for AI systems exhibiting deceptive behavior.
Defence & Security · Public Services
Canada's signals intelligence agency assesses that AI is 'almost certainly' enhancing cyberattacks against Canadian targets. State actors and criminal groups are already using AI operationally in cyber operations, and hacktivists have breached Canadian critical infrastructure, reaching safety-critical industrial control systems.
Critical Infrastructure · Defence & Security · Telecommunications
AI chatbots are causing documented psychological harm, including reinforcing delusions and providing self-harm methods, yet Canadian law imposes no duty of care or safety-monitoring obligation on their providers.
Healthcare · Social Services
Canada's policy commits to appropriate human involvement in decisions to use lethal force, but allied militaries are deploying AI targeting systems (e.g., Maven, Lavender) that compress decision cycles from weeks to minutes. No framework exists for the Canadian Armed Forces (CAF) to manage this gap in coalition operations.
Defence & Security
Frontier AI models are demonstrating capabilities relevant to biological and chemical weapon development, and multiple developers cannot confidently rule out that their models provide meaningful uplift. Canada hosts BSL-4 infrastructure with a proven insider-threat history, chairs the international assessment identifying this risk, and has signed commitments recognizing it, yet it has no dedicated AI-biosecurity assessment or evaluation mandate.
Healthcare · Defence & Security