The Auditor General concluded that the CRA's $18 million AI chatbot provided incorrect tax information
The federal tax authority spent $18 million on an AI chatbot that the Auditor General found gave incorrect answers to basic tax questions. The chatbot processed over 18 million queries, raising concerns about the accuracy of tax information provided to Canadians through the system.
Narrative
The Canada Revenue Agency launched Charlie, an AI-powered chatbot, in February 2020 to answer taxpayer questions about tax filing, benefits, and CRA services. By the time of the Auditor General’s review in December 2025, the system had processed over 18 million questions at a cost of approximately $18 million.
The Auditor General’s December 2025 report found significant accuracy problems. When tested with common taxpayer questions, the chatbot answered only two out of six questions correctly. The incorrect responses included wrong information about tax obligations, filing requirements, and CRA procedures — precisely the types of questions taxpayers rely on the chatbot to answer accurately.
With 18 million queries processed, the error rate identified by the Auditor General raises concerns about the accuracy of tax information provided to Canadians through the system. Taxpayers who relied on Charlie’s responses may have received incorrect information about filing requirements, deadlines, or eligibility for benefits. The CRA chatbot carries the implicit authority of the federal tax agency, and users have no straightforward way to know when the chatbot’s answer is wrong.
The CRA disputed the Auditor General’s characterization, noting that Charlie had met a 70% accuracy threshold in internal testing and that the six-question test was not representative of the chatbot’s overall performance across millions of queries. The Secretary of State for CRA pushed back publicly against the findings. In November 2025, the CRA upgraded Charlie to a generative AI version and reported pre-release testing showed approximately 90% accuracy, though the agency acknowledged it could not confirm real-world accuracy.
Harms
When tested by the Auditor General with six common taxpayer questions, CRA's chatbot Charlie answered only two correctly, providing wrong information about tax obligations, filing requirements, and CRA procedures. The chatbot had processed an estimated 18 million queries over its lifetime.
CRA spent $18 million on an AI chatbot without adequate accuracy testing or ongoing quality monitoring, deploying it as an official government information source that carried the implicit authority of the federal tax agency.
Affected populations
- Canadian taxpayers
- Tax professionals
Entities involved
Canada Revenue Agency: Deployed the Charlie AI chatbot in February 2020 to answer taxpayer questions; spent $18 million on the system, which the Auditor General found gave incorrect answers to basic tax questions
AI systems involved
AI-powered chatbot 'Charlie' deployed to answer taxpayer questions about tax filing, benefits, and CRA services; processed over 18 million questions and was found by the Auditor General to answer only two out of six test questions correctly
Responses and outcomes
Canada Revenue Agency: Upgraded Charlie to a generative AI version following the Auditor General's findings; reported pre-release testing showed approximately 90% accuracy but acknowledged it could not confirm real-world accuracy
AI system context
Charlie is an AI-powered chatbot deployed by the Canada Revenue Agency to answer taxpayer questions about tax filing, benefits, and CRA services. The system processed over 18 million questions between its 2020 launch and the Auditor General's 2025 review.
Preventive measures
- Require rigorous accuracy testing of government AI chatbots before deployment, including domain-expert validation of responses to common queries
- Implement ongoing quality monitoring that tests AI chatbot accuracy against verified answers on a regular schedule, with public reporting of accuracy rates
- Mandate that government AI chatbots clearly disclose limitations and direct users to authoritative sources for consequential information such as tax obligations
- Establish accountability mechanisms for AI system procurement, including performance benchmarks that must be met before continued investment
Related records
Taxonomy
Sources
- In scathing report, AG finds CRA call centres are slow to answer and often inaccurate
- Auditor General Slams Ottawa's $18 Million CRA Chatbot 'Charlie'
- The CRA spent $18M on 'Charlie,' a new tax information chatbot that is wrong most of the time
- CRA must fix human responses before pursuing AI, experts say
AIID: Incident #1310
Change history
| Version | Date | Modification |
|---|---|---|
| v1 | March 8, 2026 | Initial publication |