Clinical AI systems in Canada: deployed with documented evidence gaps and privacy violations
AI systems are in clinical use in Canada for virtual care, stroke detection, and clinical documentation. Alberta's privacy commissioner found that a virtual care platform used facial recognition without adequate consent and shared health information internationally without disclosure to patients (31 findings). An AI scribe bot autonomously recorded and distributed patient information at an Ontario hospital. The national health technology assessment body found no evidence meeting its review criteria on patient outcomes for a Class III AI device licensed for stroke detection.
AI systems are in clinical use in Canadian healthcare — for virtual care, stroke detection, clinical documentation, and decision support. Provincial privacy investigations and a national health technology assessment have issued findings on these systems.
Alberta's Information and Privacy Commissioner issued two investigation reports (P2021-IR-02 and H2021-IR-01) on TELUS Health's Babylon virtual care platform, with 31 findings and 20 recommendations. The platform was promoted by provincial health services as a virtual care option. The investigations found that the platform used facial recognition for identity verification without notification or consent meeting the requirements of Alberta's Personal Information Protection Act and Health Information Act, shared personal health information with third-party service providers in the United States and Ireland without disclosing this to patients, and retained audio and video recordings of patient consultations beyond what the Commissioner determined was necessary for the stated purposes. TELUS Health Babylon launched the service before the OIPC had completed its review of the mandatory privacy impact assessments that had been submitted.
In September 2024, an Otter.ai AI notetaker bot autonomously joined a virtual hepatology rounds meeting at an Ontario hospital, recorded physicians discussing seven patients by name — including diagnoses and treatments — and emailed the transcript to 65 people, including a former physician who had left the hospital in June 2023. The bot joined via this physician's personal Otter.ai account, which was linked to a personal email calendar that still contained the recurring meeting invite. Ontario's Information and Privacy Commissioner investigated (HR24-00691). The hospital blocked AI scribe tools on its network. In January 2026, the IPC issued sector-wide guidance on AI scribes in healthcare.
CDA-AMC (formerly CADTH), Canada's national health technology assessment body, assessed RapidAI — a Class III medical device licensed by Health Canada for stroke detection. The assessment found no evidence meeting its review criteria on effects on patient harms, mortality, health-related quality of life, length of hospital stay, or cost-effectiveness. The expert review panel (HTERP) recommended that sites already using RapidAI continue to do so alongside clinician interpretation of imaging, but stated it could not recommend for or against new implementation at sites not already using the system. RapidAI is licensed for clinical use in Canadian hospitals.
Health Canada's regulatory framework for software as a medical device exempts software that is "only intended to support" clinical decision-making and is "not intended to replace clinical judgment." The Canadian Medical Protective Association has stated that physicians have "limited guidance on evaluating or mitigating the risks associated with AI tools" and that "a comprehensive regulatory framework for AI remains a work in progress."
Harms
TELUS Health Babylon used facial recognition for identity verification without notification or consent meeting the requirements of Alberta's HIA
TELUS Health Babylon shared personal health information with third parties in the United States and Ireland without disclosing this to patients
An AI scribe bot autonomously recorded physicians discussing seven patients and emailed the transcript to 65 people, including a former employee
The national health technology assessment body found no evidence meeting its review criteria on patient outcomes for a Class III AI device licensed for stroke detection
Evidence
7 reports
- Commissioner Releases Babylon by Telus Health Investigation Reports (Primary source)
TELUS Health Babylon: facial recognition without adequate consent, cross-border health data sharing without disclosure, retention beyond necessity, launched before OIPC review of submitted privacy impact assessments was completed. 31 findings, 20 recommendations.
- Hospital privacy breach involving an AI scribe tool (Primary source)
Otter.ai bot autonomously joined hospital hepatology rounds, recorded seven patients by name, emailed transcript to 65 people including former employee
- RapidAI for Stroke Detection: Health Technology Assessment (Primary source)
National HTA found no evidence meeting review criteria on patient outcomes for licensed Class III AI stroke detection device; expert panel could not recommend for or against implementation
- Media coverage of Alberta OIPC investigation of TELUS Health Babylon
- Health Canada exempts clinical decision support software from medical device classification; documents regulatory gap for health-adjacent AI
- Physicians have limited guidance on evaluating or mitigating AI risks; comprehensive regulatory framework for AI remains a work in progress
- Sector-wide guidance on AI scribes in healthcare following investigation
Card details
Policy recommendations
Mandatory privacy impact assessments for AI health platforms before launch
Alberta OIPC (investigation recommendation) (July 1, 2021)
Sector-wide guidance on AI scribes in healthcare
Ontario IPC (Jan. 28, 2026)
Comprehensive regulatory framework for AI in healthcare beyond medical device classification
Canadian Medical Protective Association (Jan. 1, 2025)
Editorial assessment
The documented findings span three categories: evidence gaps (a licensed AI medical device for which the national assessment body found no outcome evidence meeting its review criteria), privacy violations (a virtual care platform sharing health data internationally without disclosure), and autonomous AI action in clinical settings (an AI tool recording and distributing patient information without clinician initiation). Health Canada's regulatory framework exempts AI software classified as clinical decision support from medical device oversight, meaning some AI tools used in clinical settings are not subject to the safety assessment required for medical devices. The CMPA's statement that physicians lack guidance on evaluating AI risks indicates that the absence of guidance extends beyond legislation to clinical practice standards.
Related cards
- AI Confabulation in Consequential Canadian Contexts
- Agentic AI Deployment Outpacing Governance Frameworks
- AI-Driven Cognitive Deskilling and Automation Over-Reliance