Pilot phase: CAIM is under construction. Records are provisional, based on public sources, and have not yet been peer-reviewed. Feedback welcome.
Escalating · Significant · Confidence: high

AI systems generate false information in tax administration, customer service, court proceedings, and health queries; Canadians following AI health advice are five times more likely to experience harm.

Identified: September 1, 2022 · Last assessed: March 8, 2026

AI systems are being deployed as authoritative information sources across Canadian institutions, spanning tax administration, consumer services, legal proceedings, and health information. Millions of Canadians use these systems, yet there is no accuracy verification before deployment and no monitoring after.

The Canada Revenue Agency spent $18 million on a chatbot ("Charlie") that processed 18 million taxpayer queries. The Auditor General found it answered only 2 of 6 test questions correctly. Air Canada deployed a customer service chatbot that fabricated a bereavement fare discount policy; the BC Civil Resolution Tribunal held Air Canada liable for its chatbot's representations. In Quebec, a court imposed the first judicial sanction for AI-hallucinated legal citations when a self-represented litigant submitted fabricated case law generated by a generative AI tool.

The Canadian Medical Association's 2026 Health and Media Tracking Survey (conducted by Abacus Data with 5,000 Canadians in November 2025) documents that 52% of Canadians use AI search results for health information and 48% use them for treatment advice. Those who follow AI health advice are five times more likely to experience harms: confusion about health management (33%), mental stress or increased anxiety (31%), delay in seeking medical care (28%), lower trust in health professionals (27%), difficulty discussing health issues with healthcare providers (24%), strained personal relationships (23%), and avoidance of effective treatments due to misinformation (23%). Despite these outcomes, only 27% trust AI for health information — meaning a large proportion use tools they do not trust, likely driven by access barriers to professional health advice.

The consistent pattern: an institution, platform, or individual deploys AI as an authoritative source, treats its outputs as reliable, and discovers only after harm that the system confabulates. The pattern scales with deployment: as more institutions and individuals adopt AI information systems, consequential confabulation becomes more frequent.

Some institutions have taken corrective action following documented incidents. The CRA updated its chatbot after the Auditor General's report. Air Canada revised its customer service AI policies after the tribunal ruling. Several AI developers have implemented accuracy improvements and added citations to their outputs. The trajectory of these responses suggests institutional learning, though the pace of correction varies significantly across sectors.

Materialized Incidents

Harms

CRA chatbot 'Charlie' processed 18 million taxpayer queries while answering only 2 of 6 test questions correctly, according to the Auditor General. Taxpayers received inaccurate information on tax obligations from a system presented as an authoritative government source.

Fraud & Impersonation · Service Disruption · Significant · Population

Air Canada's chatbot fabricated a bereavement fare discount policy, leading a passenger to book at full price based on false information. The BC Civil Resolution Tribunal held Air Canada liable for the chatbot's inaccurate representations.

Fraud & Impersonation · Economic Harm · Minor · Individual

A Quebec court imposed the first judicial sanction ($5,000) for AI-hallucinated legal citations when a self-represented litigant submitted fabricated case law generated by a generative AI tool, undermining the integrity of legal proceedings.

Misinformation · Moderate · Individual

CMA survey of 5,000 Canadians documents that 52% use AI for health information and that those who follow AI health advice are five times more likely to experience harms, including delayed medical care (28%), increased anxiety (31%), and avoidance of effective treatments due to misinformation (23%).

Safety Incident · Psychological Harm · Significant · Population

Evidence

7 reports

  1. Court — British Columbia Civil Resolution Tribunal (Feb 14, 2024)

    Air Canada held liable for chatbot's inaccurate bereavement fare information

  2. Regulatory — Office of the Auditor General of Canada (Oct 21, 2025)

    CRA chatbot answered only 2 of 6 test questions correctly

  3. Official — Canadian Medical Association (Feb 10, 2026)

    Canadians who followed health advice from AI were 5x more likely to experience harms; 52% use AI for health info; specific harm types quantified

  4. Media — Medscape (Feb 11, 2026)

    Media coverage of CMA survey: Canadians who follow AI health advice are at greater risk of harm; corroborates 5x harm multiplier finding

  5. Media — Global News (Feb 11, 2026)

    Global News coverage of CMA survey: AI medical advice can cause harm; 52% of Canadians using AI for health info

  6. Media — CP24 (Feb 11, 2026)

    CP24 coverage: experts divided on AI health advice; context on growing reliance and associated risks

  7. Media — Globe and Mail (Mar 4, 2026)

    Globe and Mail coverage: about half of Canadians turning to AI for health information; details of Abacus Data survey methodology

Record details

Responses & Outcomes

Air Canada · institutional action · Active

BC Civil Resolution Tribunal held Air Canada liable for chatbot's inaccurate fare representations

Canada Revenue Agency · institutional action · Active

Auditor General report documented chatbot accuracy failures; CRA committed to improvements

Policy Recommendations · assessed

Accuracy verification requirements before deploying AI systems as authoritative information sources in public service contexts

Office of the Auditor General of Canada (Mar 19, 2024)

Clear liability framework for AI-generated misinformation extending the Air Canada precedent into regulation

British Columbia Civil Resolution Tribunal (Feb 14, 2024)

Require AI tools providing health information to carry clear disclaimers and actively refer users to qualified health professionals

Canadian Medical Association (Feb 10, 2026)

Establish accuracy standards for AI systems widely used for health information in Canada, with mandatory testing against Canadian clinical guidelines

Canadian Medical Association (Feb 10, 2026)

Editorial Assessment · assessed

Documented incidents show AI systems deployed as authoritative information sources in consequential contexts (tax advice, consumer rights, court proceedings, health information) producing concrete harm from confabulated information. The CMA survey documents that Canadians who follow AI health advice are five times more likely to experience harms. Some institutions have taken corrective action after incidents: the CRA updated its chatbot, and Air Canada revised its customer service AI policies. As of 2026, no Canadian law requires accuracy verification before deploying AI systems in these contexts.

Entities Involved

AI Systems Involved

Air Canada Customer Service Chatbot

Customer service chatbot that fabricated bereavement fare discount policy

CRA AI Chatbot

CRA's "Charlie" chatbot processing 18 million queries with documented accuracy failures

Related Records

Taxonomy · assessed

Domain
Public Services · Retail & Commerce · Justice · Healthcare
Harm type
Misinformation · Economic Harm · Service Disruption · Safety Incident · Psychological Harm
AI pathway
Confabulation · Deployment Context · Monitoring Absent
Lifecycle phase
Deployment · Monitoring · Evaluation

Changelog

Version · Date · Change
v1 · Mar 8, 2026 · Initial publication
v2 · Mar 9, 2026 · Absorbed ai-health-misinformation-canadians hazard: added CMA 2026 survey evidence (52% AI health usage, 5x harm multiplier), health-specific sources, affected populations, governance dependencies, and health domain

Version 2