Clinical AI Systems in Canada: Deployed with Documented Evidence Gaps and Privacy Violations
AI systems are in clinical use in Canadian healthcare for virtual care, stroke detection, and clinical documentation. Alberta's privacy commissioner found that a virtual care platform used facial recognition without adequate consent and shared health information internationally without patient disclosure, across 31 findings. An AI scribe bot autonomously recorded and disseminated patient information at an Ontario hospital. Canada's national HTA body found no evidence meeting its review criteria on patient outcomes for a licensed Class III AI stroke detection device. Health Canada's regulatory framework exempts AI clinical decision support software from medical device oversight.
AI systems are in clinical use in Canadian healthcare — for virtual care, stroke detection, clinical documentation, and decision support. Provincial privacy investigations and a national health technology assessment have issued findings on these systems.
Alberta's Information and Privacy Commissioner issued two investigation reports (P2021-IR-02 and H2021-IR-01) on TELUS Health's Babylon virtual care platform, with 31 findings and 20 recommendations. The platform was promoted by provincial health services as a virtual care option. The investigations found that the platform used facial recognition for identity verification without notification or consent meeting the requirements of Alberta's Personal Information Protection Act and Health Information Act, shared personal health information with third-party service providers in the United States and Ireland without disclosing this to patients, and retained audio and video recordings of patient consultations beyond what the Commissioner determined was necessary for the stated purposes. TELUS Health Babylon launched the service before the OIPC had completed its review of the mandatory privacy impact assessments that had been submitted.
In September 2024, an Otter.ai AI notetaker bot autonomously joined a virtual hepatology rounds meeting at an Ontario hospital, recorded physicians discussing seven patients by name — including diagnoses and treatments — and emailed the transcript to 65 people, including a former physician who had left the hospital in June 2023. The bot joined via this physician's personal Otter.ai account, which was linked to a personal email calendar that still contained the recurring meeting invite. Ontario's Information and Privacy Commissioner investigated (HR24-00691). The hospital blocked AI scribe tools on its network. In January 2026, the IPC issued sector-wide guidance on AI scribes in healthcare.
CDA-AMC (formerly CADTH), Canada's national health technology assessment body, assessed RapidAI — a Class III medical device licensed by Health Canada for stroke detection. The assessment found no evidence meeting its review criteria on effects on patient harms, mortality, health-related quality of life, length of hospital stay, or cost-effectiveness. The expert review panel (HTERP) recommended that sites already using RapidAI continue to do so alongside clinician interpretation of imaging, but stated it could not recommend for or against new implementation at sites not already using the system. RapidAI is licensed for clinical use in Canadian hospitals.
Health Canada's regulatory framework for software as a medical device exempts software that is "only intended to support" clinical decision-making and is "not intended to replace clinical judgment." The Canadian Medical Protective Association has stated that physicians have "limited guidance on evaluating or mitigating the risks associated with AI tools" and that "a comprehensive regulatory framework for AI remains a work in progress."
Harms
- TELUS Health Babylon used facial recognition for identity verification without notification or consent meeting Alberta HIA requirements
- TELUS Health Babylon shared personal health information with third parties in the US and Ireland without disclosing this to patients
- AI scribe bot autonomously recorded physicians discussing seven patients and emailed the transcript to 65 people including a former employee
- National HTA body found no evidence meeting its review criteria on patient outcomes for a licensed Class III AI stroke detection device
Evidence
7 reports
- TELUS Health Babylon: facial recognition without adequate consent, cross-border health data sharing without disclosure, retention beyond necessity, launched before OIPC review of submitted privacy impact assessments was completed. 31 findings, 20 recommendations.
- Hospital privacy breach involving an AI scribe tool Primary source
Otter.ai bot autonomously joined hospital hepatology rounds, recorded seven patients by name, emailed transcript to 65 people including former employee
- RapidAI for Stroke Detection: Health Technology Assessment Primary source
National HTA found no evidence meeting review criteria on patient outcomes for licensed Class III AI stroke detection device; expert panel could not recommend for or against implementation
- Media coverage of Alberta OIPC investigation of TELUS Health Babylon
- Health Canada exempts clinical decision support software from medical device classification; documents regulatory gap for health-adjacent AI
- Physicians have limited guidance on evaluating or mitigating AI risks; comprehensive regulatory framework for AI remains a work in progress
- Sector-wide guidance on AI scribes in healthcare following investigation
Policy Recommendations
- Mandatory privacy impact assessments for AI health platforms before launch — Alberta OIPC (investigation recommendation) (Jul 1, 2021)
- Sector-wide guidance on AI scribes in healthcare — Ontario IPC (Jan 28, 2026)
- Comprehensive regulatory framework for AI in healthcare beyond medical device classification — Canadian Medical Protective Association (Jan 1, 2025)
Editorial Assessment
The documented findings span three categories: evidence gaps (a licensed AI medical device for which the national HTA body found no outcome evidence meeting its review criteria), privacy violations (a virtual care platform sharing health data internationally without disclosure and operating without a mandatory privacy impact assessment), and autonomous AI action in clinical environments (an AI tool recording and disseminating patient information without clinician initiation). Health Canada's regulatory framework exempts AI software classified as clinical decision support from medical device oversight, meaning some AI tools used in clinical settings do not undergo the safety evaluation required of medical devices. The CMPA's statement that healthcare providers lack guidance on AI risk evaluation indicates that the absence of guidance extends beyond legislation to clinical practice standards.
Related Records
- AI Confabulation in Consequential Canadian Contexts
- Agentic AI Deployment Outpacing Governance Frameworks
- AI-Driven Cognitive Deskilling and Automation Over-Reliance