AI Confabulation in Consequential Canadian Contexts
Confirmed incidents demonstrate that AI systems are deployed as authoritative information sources in consequential contexts (tax advice, consumer rights, court proceedings, health information) without accuracy verification, producing concrete harm from confabulated information. At population scale, the CMA documents that Canadians who follow AI health advice are five times more likely to experience harm than those who do not. No regulatory framework requires accuracy verification before deployment.
Description
AI systems are being deployed as authoritative information sources across Canadian institutions and used by millions of Canadians — in tax administration, consumer services, legal proceedings, and health information — without accuracy verification before deployment and without monitoring after.
The Canada Revenue Agency spent $18 million on a chatbot (“Charlie”) that processed 18 million taxpayer queries. The Auditor General found it answered only 2 of 6 test questions correctly. Air Canada deployed a customer service chatbot that fabricated a bereavement fare discount policy; the BC Civil Resolution Tribunal held Air Canada liable for its chatbot’s representations. In Quebec, a court imposed the first judicial sanction for AI-hallucinated legal citations when a self-represented litigant submitted fabricated case law generated by a generative AI tool.
The Canadian Medical Association's 2026 Health and Media Tracking Survey (conducted by Abacus Data with 5,000 Canadians in November 2025) documents that 52% of Canadians use AI search results for health information and 48% use them for treatment advice. Those who follow AI health advice are five times more likely than those who do not to experience harms: confusion about health management (33%), mental stress or increased anxiety (31%), delay in seeking medical care (28%), lower trust in health professionals (27%), difficulty discussing health issues with healthcare providers (24%), strained personal relationships (23%), and avoidance of effective treatments due to misinformation (23%). Despite these outcomes, only 27% trust AI for health information, meaning a large proportion use tools they do not trust, likely driven by access barriers to professional health advice.
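As a hedged aside on what the multiplier means, with purely illustrative numbers (the survey's baseline rate is not restated here): "five times more likely" is a relative-risk comparison,

$$\mathrm{RR} = \frac{P(\text{harm} \mid \text{followed AI health advice})}{P(\text{harm} \mid \text{did not follow})} = 5,$$

so, for example, a 10% harm rate among non-followers would imply roughly a 50% rate among followers.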
The consistent pattern: an institution, platform, or individual deploys AI as an authoritative source, treats its outputs as reliable, and discovers only after harm occurs that the system confabulates. No regulatory framework requires accuracy verification before deploying AI systems in consequential information contexts. The Air Canada ruling established liability in one case but not a general standard. Health Canada's regulatory scope for digital health products has not been extended to general-purpose AI tools widely used for health advice. The pattern scales directly with deployment: as more institutions and individuals adopt AI information systems, more consequential confabulation becomes inevitable absent accuracy requirements.
Risk Pathway
The pathway is consistent: an institution or platform deploys an AI system as an authoritative information source, typically to reduce costs or increase throughput, in a context where wrong information causes concrete harm (tax advice, consumer rights, legal proceedings, health information). Its outputs are treated as reliable without accuracy verification before deployment, and the confabulation is discovered only after harm occurs. No general accuracy verification requirement exists for public-facing AI information sources; the Air Canada ruling established that organizations are liable for their chatbots' representations, but that is a judicial precedent in one case, not a regulatory framework. The scale is already large: the CRA spent $18 million on a chatbot that answered only 2 of 6 test questions correctly while processing 18 million queries, and half of Canadians use AI tools for health information, with the CMA documenting that those who follow AI health advice are five times more likely to experience harms, including delayed care, treatment avoidance, and increased anxiety. The risk scales directly with deployment: more institutions and individuals treating AI as an authoritative source means more consequential confabulation.
Assessment History
Three confirmed incidents across public services (CRA chatbot), commerce (Air Canada chatbot), and justice (AI-generated fake jurisprudence in Quebec court). The Auditor General documented the CRA failure. The BC Civil Resolution Tribunal established the Air Canada precedent. A Quebec court imposed the first sanction for AI-hallucinated legal citations. The CMA's 2026 Health and Media Tracking Survey (n=5,000) documents that 52% of Canadians use AI for health information, with those who follow AI health advice five times more likely to experience harms, including delayed care (28%), treatment avoidance (23%), increased anxiety (31%), and undermined trust in health professionals (27%). All demonstrate the same pattern: AI deployed as an authoritative source without accuracy verification. The hazard is escalating because AI deployments are accelerating across Canadian institutions and AI health-information use is growing, while no accuracy verification framework exists.
Initial assessment. Status set to escalating based on accelerating deployment of AI information systems without accuracy requirements. Updated to include CMA health misinformation evidence.
Triggers
- Accelerating deployment of AI chatbots by Canadian institutions
- Increasing use of generative AI for professional tasks (legal, medical, financial)
- Cost pressure driving adoption of AI as replacement for human information services
- Growing public trust in AI-generated information
- Rising adoption of AI for health information (52% and growing)
- Healthcare access barriers driving Canadians to AI as substitute for professional consultation
- AI systems becoming more conversational and authoritative in tone
Mitigating Factors
- Air Canada tribunal ruling establishing organizational liability for chatbot outputs
- Quebec court sanction creating precedent against AI-hallucinated legal content
- Auditor General scrutiny of CRA chatbot accuracy
- Professional associations beginning to address AI use standards
- CMA public awareness campaign drawing attention to AI health misinformation
- Health Canada's existing authority over digital health products (could be extended to AI health tools)
- Provincial telehealth services providing free alternative to AI health advice
Risk Controls
- Accuracy verification requirements before deploying AI systems as authoritative information sources in consumer, public service, legal, and health contexts (see the sketch after this list)
- Clear liability framework for AI-generated misinformation extending the Air Canada precedent into regulation
- Mandatory disclosure that information is AI-generated in contexts with financial, legal, or health consequences
- Professional responsibility standards for AI use in regulated contexts (legal, medical, financial advice)
- Testing and monitoring requirements proportional to the consequence of errors
- Require AI tools providing health information to carry clear disclaimers and actively refer users to qualified health professionals
- Establish accuracy standards for AI systems widely used for health information in Canada, with mandatory testing against Canadian clinical guidelines
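To make the accuracy-verification control concrete, here is a minimal sketch of a pre-deployment accuracy gate, in the spirit of the Auditor General's test-question audit of the CRA chatbot. Everything in it is a hypothetical assumption rather than an existing standard: the `ask` callable, the gold-standard question set, the string-containment scoring, and the 95% threshold are all illustrative.

```python
# Hypothetical sketch of a pre-deployment accuracy gate. Names and the
# threshold are illustrative assumptions, not an existing Canadian standard.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class TestCase:
    question: str
    verified_answer: str  # answer confirmed against the authoritative source


def answers_match(reply: str, verified_answer: str) -> bool:
    """Crude containment check; a real harness would use expert review
    or a structured rubric rather than string matching."""
    return verified_answer.lower() in reply.lower()


def accuracy_gate(ask: Callable[[str], str],
                  gold_set: List[TestCase],
                  threshold: float) -> bool:
    """Approve deployment only if accuracy on independently verified
    questions meets the threshold."""
    correct = sum(answers_match(ask(tc.question), tc.verified_answer)
                  for tc in gold_set)
    accuracy = correct / len(gold_set)
    print(f"{correct}/{len(gold_set)} correct ({accuracy:.0%}), "
          f"threshold {threshold:.0%}")
    return accuracy >= threshold


if __name__ == "__main__":
    # Hypothetical gold set; real questions would be drawn from the
    # authoritative source (e.g., published tax rules).
    gold_set = [
        TestCase("When is the personal tax filing deadline?", "April 30"),
        TestCase("Is the GST/HST credit taxable?", "No"),
    ]
    # Stub standing in for the deployed chatbot.
    stub = lambda q: "The filing deadline is April 30."
    if not accuracy_gate(stub, gold_set, threshold=0.95):
        print("Do not deploy: accuracy below threshold.")
```

The point of the sketch is the gate itself: scoring against independently verified answers happens before deployment, and the threshold would be set in proportion to the consequence of errors, per the testing-and-monitoring control above.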
Materialized Incidents
- Google AI Overview Falsely Accused Canadian Musician Ashley MacIsaac of Sex Offenses, Leading to Concert Cancellation
- Deloitte's $1.6M Newfoundland Health Workforce Report Contained AI-Fabricated Research Citations
Affected Populations
- Canadian taxpayers receiving incorrect tax advice from CRA chatbot
- Air travel consumers relying on chatbot fare information
- Self-represented litigants using AI for legal research
- Canadians using AI tools for health information (52% of population)
- Patients delaying or avoiding medical care based on AI advice
- Elderly and digitally less-literate populations relying on AI health information
- Rural and underserved communities with limited healthcare access using AI as substitute
- General public relying on AI-generated information for consequential decisions
Entities Involved
- Canada Revenue Agency (CRA): deployed the $18M "Charlie" chatbot, which answered only 2 of 6 test questions correctly while processing 18 million queries
- Air Canada: deployed a customer service chatbot that provided false bereavement fare information; held liable by the BC Civil Resolution Tribunal
AI Systems Involved
- CRA's "Charlie" chatbot, which processed 18 million queries with documented accuracy failures
- Air Canada's customer service chatbot, which fabricated a bereavement fare discount policy
Responses
- Auditor General report documented the chatbot's accuracy failures; the CRA committed to improvements
- BC Civil Resolution Tribunal held Air Canada liable for its chatbot's inaccurate fare representations
Related Records
- google-ai-overview-macisaac-defamation (related)
- AI in Government Automated Decision-Making Without Transparency (related)
Taxonomy
Sources
- Report 3 — Processing of Benefit and Credit Applications — Canada Revenue Agency
- Moffatt v. Air Canada, 2024 BCCRT 149
- Doctors warn: Canadians are turning to AI for health information and it is hurting them
- Canadians Who Turn to AI for Health Information Risk Harm
- Using AI for medical advice can cause you harm, Canadian doctors warn
- About half of Canadians are turning to AI for health information, survey says
- Experts divided as more people turning to AI for health advice
Changelog
| Version | Date | Change |
|---|---|---|
| v1 | Mar 8, 2026 | Initial publication |
| v2 | Mar 9, 2026 | Absorbed ai-health-misinformation-canadians hazard — added CMA 2026 survey evidence (52% AI health usage, 5x harm multiplier), health-specific sources, affected populations, governance dependencies, and health domain |