Pilot phase
Escalating · Confidence: High · Potential severity: Important · Version 2

Confirmed incidents demonstrate that AI systems are being deployed as authoritative information sources in high-stakes contexts (tax advice, consumer rights, judicial proceedings, health information) without accuracy verification. The CMA documents that Canadians who follow AI health advice are five times more likely to experience harm, at population scale.

Identified: September 1, 2022 · Last assessed: March 8, 2026

Description

AI systems are being deployed as authoritative information sources across Canadian institutions and used by millions of Canadians — in tax administration, consumer services, legal proceedings, and health information — without accuracy verification before deployment and without monitoring after.

The Canada Revenue Agency spent $18 million on a chatbot (“Charlie”) that processed 18 million taxpayer queries. The Auditor General found it answered only 2 of 6 test questions correctly. Air Canada deployed a customer service chatbot that fabricated a bereavement fare discount policy; the BC Civil Resolution Tribunal held Air Canada liable for its chatbot’s representations. In Quebec, a court imposed the first judicial sanction for AI-hallucinated legal citations when a self-represented litigant submitted fabricated case law generated by a generative AI tool.

The Canadian Medical Association’s 2026 Health and Media Tracking Survey (conducted by Abacus Data with 5,000 Canadians in November 2025) documents that 52% of Canadians use AI search results for health information and 48% use them for treatment advice. Those who follow AI health advice are five times more likely to experience harms: confusion about health management (33%), mental stress or increased anxiety (31%), delay in seeking medical care (28%), lower trust in health professionals (27%), difficulty discussing health issues with healthcare providers (24%), strained personal relationships (23%), and avoidance of effective treatments due to misinformation (23%). Despite these outcomes, only 27% trust AI for health information — meaning a large proportion use tools they do not trust, likely driven by access barriers to professional health advice.

The consistent pattern: an institution, platform, or individual deploys AI as an authoritative source, treats its outputs as reliable, and discovers only after harm that the system confabulates. No regulatory framework requires accuracy verification before deploying AI systems in consequential information contexts. The Air Canada ruling established liability in one case but not a general standard. Health Canada’s regulatory scope for digital health products has not been extended to general-purpose AI tools widely used for health advice. This pattern scales directly with deployment — as more institutions and individuals adopt AI information systems, more consequential confabulation is inevitable without accuracy requirements.

Risk pathway

AI systems are deployed as authoritative information sources in contexts where incorrect information causes concrete harm (tax advice, consumer rights, judicial proceedings, health information) without accuracy verification before deployment. Half of Canadians use AI tools for health, and the CMA documents that those who follow AI health advice are five times more likely to experience harm. No general accuracy-verification requirement exists for AI systems deployed as public-facing information sources.

Assessment history

Escalating · Confidence: High · Severity: Important

Three confirmed incidents across public services, commerce, and justice. The CMA survey (n=5,000, Nov. 2025) documents that 52% of Canadians use AI for health information; those who follow its advice are 5 times more likely to experience harm, including delayed care (28%), treatment avoidance (23%), increased anxiety (31%), and reduced trust in health professionals (27%).

Initial assessment. Status set to escalating based on accelerating deployment of AI information systems without accuracy requirements. Updated to include CMA health misinformation evidence.

Triggers

  • Accelerating deployment of AI chatbots by Canadian institutions
  • Increasing use of generative AI for professional tasks (legal, medical, financial)
  • Cost pressure driving adoption of AI as replacement for human information services
  • Growing public trust in AI-generated information
  • Rising adoption of AI for health information (52% and growing)
  • Healthcare access barriers driving Canadians to AI as substitute for professional consultation
  • AI systems becoming more conversational and authoritative in tone

Mitigating factors

  • Air Canada tribunal ruling establishing organizational liability for chatbot outputs
  • Quebec court sanction creating precedent against AI-hallucinated legal content
  • Auditor General scrutiny of CRA chatbot accuracy
  • Professional associations beginning to address AI use standards
  • CMA public awareness campaign drawing attention to AI health misinformation
  • Health Canada's existing authority over digital health products (could be extended to AI health tools)
  • Provincial telehealth services providing free alternative to AI health advice

Risk controls

  • Accuracy verification requirements before deploying AI systems as authoritative information sources in consumer, public service, legal, and health contexts
  • Clear liability framework for AI-generated misinformation extending the Air Canada precedent into regulation
  • Mandatory disclosure that information is AI-generated in contexts with financial, legal, or health consequences
  • Professional responsibility standards for AI use in regulated contexts (legal, medical, financial advice)
  • Testing and monitoring requirements proportional to the consequence of errors
  • Require AI tools providing health information to carry clear disclaimers and actively refer users to qualified health professionals
  • Establish accuracy standards for AI systems widely used for health information in Canada, with mandatory testing against Canadian clinical guidelines

Materialized incidents

Affected populations

  • Canadian taxpayers receiving incorrect tax advice from CRA chatbot
  • Air travel consumers relying on chatbot fare information
  • Self-represented litigants using AI for legal research
  • Canadians using AI tools for health information (52% of population)
  • Patients delaying or avoiding medical care based on AI advice
  • Elderly and digitally less-literate populations relying on AI health information
  • Rural and underserved communities with limited healthcare access using AI as substitute
  • General public relying on AI-generated information for consequential decisions

Entities involved

Canada Revenue Agency
deployer

Deployed an $18 million chatbot that answered only 2 of 6 test questions correctly while processing 18 million queries

Air Canada
deployer

Deployed a customer service chatbot that provided false information about bereavement fares; held liable by the tribunal

AI systems involved

CRA AI Chatbot

The CRA's "Charlie" chatbot, which processed 18 million queries with documented accuracy failures

Air Canada Customer Service Chatbot

Customer service chatbot that fabricated a bereavement fare discount policy

Responses

Canada Revenue Agency

Auditor General report documented chatbot accuracy failures; CRA committed to improvements

Air Canada

BC Civil Resolution Tribunal held Air Canada liable for chatbot's inaccurate fare representations

Related cards

Taxonomy

Domain
Public services · Commerce · Justice · Health
Harm type
Misinformation · Economic harm · Operational failure · Safety failure · Psychological harm
AI involvement
Model confabulation · Deployment failure · Oversight gap
Lifecycle phase
Deployment · Monitoring · Evaluation

Sources

  1. Report 3 — Processing of Benefit and Credit Applications — Canada Revenue Agency. Regulatory, Office of the Auditor General of Canada, March 19, 2024
  2. Moffatt v. Air Canada, 2024 BCCRT 149. Judicial, British Columbia Civil Resolution Tribunal, February 14, 2024
  3. Doctors warn: Canadians are turning to AI for health information and it is hurting them. Official, Canadian Medical Association, February 10, 2026
  4. Canadians Who Turn to AI for Health Information Risk Harm. Media, Medscape, February 11, 2026
  5. Using AI for medical advice can cause you harm, Canadian doctors warn. Media, Global News, February 11, 2026
  6. About half of Canadians are turning to AI for health information, survey says. Media, Globe and Mail, March 4, 2026
  7. Experts divided as more people turning to AI for health advice. Media, CP24, February 11, 2026

Change history

Version  Date  Modification
v1  March 8, 2026  Initial publication
v2  March 9, 2026  Absorbed ai-health-misinformation-canadians hazard; added CMA 2026 survey evidence (52% AI health usage, 5x harm multiplier), health-specific sources, affected populations, governance dependencies, and health domain