AI Confabulation in High-Stakes Canadian Contexts
AI systems are presenting fabricated information as fact in tax advice, legal proceedings, and health queries; Canadians who follow AI health advice are five times more likely to experience harm.
AI systems are being deployed as authoritative information sources across Canadian institutions and used by millions of Canadians — in tax administration, consumer services, legal proceedings, and health information — without accuracy verification before deployment and without monitoring after.
The Canada Revenue Agency spent $18 million on a chatbot ("Charlie") that processed 18 million taxpayer queries. The Auditor General found it answered only 2 of 6 test questions correctly. Air Canada deployed a customer service chatbot that fabricated a bereavement fare discount policy; the BC Civil Resolution Tribunal held Air Canada liable for its chatbot's representations. In Quebec, a court imposed the first judicial sanction for AI-hallucinated legal citations when a self-represented litigant submitted fabricated case law generated by a generative AI tool.
The Canadian Medical Association's 2026 Health and Media Tracking Survey (conducted by Abacus Data with 5,000 Canadians in November 2025) documents that 52% of Canadians use AI search results for health information and 48% use them for treatment advice. Those who follow AI health advice are five times more likely to experience harms: confusion about health management (33%), mental stress or increased anxiety (31%), delay in seeking medical care (28%), lower trust in health professionals (27%), difficulty discussing health issues with healthcare providers (24%), strained personal relationships (23%), and avoidance of effective treatments due to misinformation (23%). Despite these outcomes, only 27% trust AI for health information — meaning a large proportion use tools they do not trust, likely driven by access barriers to professional health advice.
The consistent pattern: an institution, platform, or individual deploys AI as an authoritative source, treats its outputs as reliable, and discovers only after harm that the system confabulates. This pattern scales with deployment — as more institutions and individuals adopt AI information systems, the frequency of consequential confabulation increases proportionally.
Some institutions have taken corrective action following documented incidents. The CRA updated its chatbot after the Auditor General's report. Air Canada revised its customer service AI policies after the tribunal ruling. Several AI developers have implemented accuracy improvements and added citations to their outputs. The trajectory of these responses suggests institutional learning, though the pace of correction varies significantly across sectors.
Materialized incidents
- Air Canada Held Liable for Chatbot's Inaccurate Bereavement Fare Information
- Auditor General Found CRA's $18-Million AI Chatbot Gave Incorrect Tax Answers
- Deloitte's $1.6M Newfoundland Health Workforce Report Contained AI-Generated False Research Citations
- Google AI Overview Falsely Accused Canadian Musician Ashley MacIsaac of Sex Offenses, Leading to Concert Cancellation
- AI-Fabricated Legal Citations Sanctioned Across Canadian Courts
Harms
The CRA's "Charlie" chatbot processed 18 million taxpayer queries while answering only 2 of 6 test questions correctly, according to the Auditor General. Taxpayers received inaccurate information about their tax obligations from a system presented as an authoritative government source.
Air Canada's chatbot invented a bereavement fare policy, leading a passenger to book at full fare on the basis of false information. The BC Civil Resolution Tribunal held Air Canada liable for the chatbot's misrepresentations.
A Quebec court imposed the first judicial sanction ($5,000) for AI-hallucinated legal citations after a self-represented litigant submitted fabricated case law generated by a generative AI tool, undermining the integrity of judicial proceedings.
A CMA survey of 5,000 Canadians documents that 52% use AI for health information and that those who follow AI health advice are five times more likely to experience harms, including delayed medical care (28%), increased anxiety (31%), and avoidance of effective treatments (23%).
Evidence
7 reports
- Moffatt v. Air Canada, 2024 BCCRT 149 (primary source)
Air Canada held liable for chatbot's inaccurate bereavement fare information
- Report 2 — Contact Centres — Canada Revenue Agency (primary source)
CRA chatbot answered only 2 of 6 test questions correctly
- Doctors warn: Canadians are turning to AI for health information and it is hurting them (primary source)
Canadians who followed health advice from AI were 5x more likely to experience harms; 52% use AI for health info; specific harm types quantified
- Media coverage of CMA survey: Canadians who follow AI health advice are at greater risk of harm; corroborates 5x harm multiplier finding
- Global News coverage of CMA survey: AI medical advice can cause harm; 52% of Canadians using AI for health info
- CP24 coverage: experts divided on AI health advice; context on growing reliance and associated risks
- Globe and Mail coverage: about half of Canadians turning to AI for health information; details of Abacus Data survey methodology
Card details
Responses and outcomes
BC Civil Resolution Tribunal held Air Canada liable for chatbot's inaccurate fare representations
Auditor General report documented chatbot accuracy failures; CRA committed to improvements
Policy recommendations
- Accuracy verification requirements before deploying AI systems as authoritative information sources in public service contexts (Office of the Auditor General of Canada, March 19, 2024)
- Clear liability framework for AI-generated misinformation, extending the Air Canada precedent into regulation (British Columbia Civil Resolution Tribunal, February 14, 2024)
- Require AI tools providing health information to carry clear disclaimers and actively refer users to qualified health professionals (Canadian Medical Association, February 10, 2026)
- Establish accuracy standards for AI systems widely used for health information in Canada, with mandatory testing against Canadian clinical guidelines (Canadian Medical Association, February 10, 2026)
Editorial assessment
Confirmed incidents demonstrate that AI systems are being deployed as authoritative information sources in high-stakes contexts (tax advice, consumer rights, judicial proceedings, health information) without accuracy verification. The CMA documents that Canadians who follow AI health advice are five times more likely to experience harm, at population scale. As of 2026, Canadian law does not require accuracy verification before AI systems are deployed in these contexts.
Entities involved
AI systems involved
Customer service chatbot that invented a bereavement fare discount policy
The CRA's "Charlie" chatbot, which processed 18 million taxpayer queries with documented accuracy failures
Related cards
- Google AI Overview Falsely Accused Canadian Musician Ashley MacIsaac of Sex Offenses, Leading to Concert Cancellation
- AI in Canadian Government Automated Decision-Making
- AI Governance Gap in Canada
- Clinical AI Systems in Canada: Deployed with Documented Evidence Gaps and Privacy Violations
- AI-Driven Cognitive Deskilling and Automation Over-Reliance
Taxonomy
Revision history
| Version | Date | Change |
|---|---|---|
| v1 | March 8, 2026 | Initial publication |
| v2 | March 9, 2026 | Absorbed ai-health-misinformation-canadians hazard; added CMA 2026 survey evidence (52% AI health usage, 5x harm multiplier), health-specific sources, affected populations, governance dependencies, and health domain |