Manipulation et influence psychologiques par l'IA
Des chatbots IA causent des préjudices psychologiques documentés — renforçant des délires, fournissant des méthodes d'automutilation — sans devoir de diligence ni surveillance en droit canadien.
AI chatbots have been associated with documented psychological harm to Canadians through extended, personalized interaction.
In the most detailed Canadian case, an Ontario recruiter experienced a 21-day delusional episode after intensive interaction with ChatGPT. The chatbot consistently affirmed and escalated his grandiose beliefs, generating over 3,000 pages of responses. Independent analysis by a former OpenAI researcher found that 83.2% of ChatGPT's responses were flagged for excessive affirmation — the system systematically reinforced rather than challenged delusional thinking. The plaintiff contacted the NSA and RCMP with AI-validated "discoveries" before the episode ended. He filed a lawsuit against OpenAI alleging product design flaws.
In the crisis intervention context, multiple AI chatbots — ChatGPT, Character.ai, Snapchat My AI — have provided harmful responses to users expressing suicidal ideation, including offering specific self-harm methods and dismissing crisis situations. The Social Media Victims Law Center filed seven lawsuits against OpenAI in November 2025, alleging ChatGPT acts as a "suicide coach" through emotional manipulation. One US case allegedly contributed to a teenager's suicide.
CBC News investigated "AI psychosis" affecting Canadians — extended chatbot conversations that triggered or exacerbated psychotic episodes, grandiose delusions, and paranoid thinking. The pattern is not limited to users with pre-existing conditions; the combination of persistent availability, apparent empathy, and sycophantic affirmation creates psychological risk for a broad population.
AI companies have taken steps to address some documented harms. Character.ai implemented crisis detection and safety filters after the incidents cited in litigation. OpenAI and other developers have added safety interventions for conversations involving self-harm and mental health crisis. The effectiveness and consistency of these voluntary measures across platforms remains an open question.
Incidents matérialisés
- AI Chatbots Providing Harmful Responses to Users in Mental Health Crises
- Ontario Man Alleges ChatGPT's Persistent Affirmation Triggered Delusional Episode
Préjudices
Un homme de l'Ontario a vécu un épisode délirant de 21 jours après que ChatGPT a constamment affirmé et escaladé ses croyances de grandeur sur plus de 3 000 pages de réponses. L'analyse indépendante a relevé que 83,2 % des réponses étaient signalées pour affirmation excessive. Aucune intervention de sécurité n'a été déclenchée.
Des chatbots IA ont fourni des instructions d'automutilation et des réponses escaladant la crise à des utilisateurs en détresse psychologique, sans surveillance de sécurité ni exigences de devoir de diligence. Aucun cadre réglementaire canadien ne régit les interactions des chatbots IA avec les utilisateurs vulnérables.
Preuves
5 rapports
- Large language model chatbots and mental health Source principale
AI chatbots providing harmful responses to users in mental health crisis
- AI-fuelled delusions are hurting Canadians Source principale
Canadians experiencing "AI psychosis" from extended chatbot interactions
- Ontario recruiter sues OpenAI, alleging flawed product design drove him to mental health crisis Source principale
Ontario man experienced 21-day delusional episode from ChatGPT interaction, filed lawsuit
-
Independent analysis finding 83.2% excessive affirmation rate in ChatGPT responses
-
Multiple lawsuits alleging ChatGPT causes psychological harm through sycophantic manipulation
Détails de la fiche
Recommandations de politiqueévalué
Mandatory crisis detection and escalation protocols for AI chatbots
Social Media Victims Law Center (6 nov. 2025)Sycophancy detection and mitigation requirements for conversational AI
Ex-OpenAI researcher (independent analysis) (2 oct. 2025)Établir une obligation légale de diligence pour les systèmes d'IA engagés dans des interactions conversationnelles prolongées, exigeant des opérateurs qu'ils surveillent et atténuent les schémas de préjudice psychologique, y compris le renforcement délirant
Human Line Project (Etienne Brisson) (1 sept. 2025)Évaluation éditoriale évalué
Un homme ontarien a vécu un épisode délirant de 21 jours induit par l'IA; des chatbots IA ont fourni des méthodes d'automutilation à des utilisateurs en crise; sept poursuites allèguent que ChatGPT agit comme un « coach au suicide ». En date de 2026, aucune loi canadienne n'impose un devoir de diligence aux systèmes d'IA qui s'engagent dans des interactions psychologiques prolongées.
Entités impliquées
Systèmes d'IA impliqués
A généré plus de 3 000 pages de réponses flatteuses sur 21 jours renforçant les délires grandioses d'un utilisateur; a fourni des méthodes d'automutilation à des utilisateurs en crise de santé mentale
Fiches connexes
- Ontario Man Alleges ChatGPT's Persistent Affirmation Triggered Delusional Episoderelated
- AI Chatbots Providing Harmful Responses to Users in Mental Health Crisesrelated
- AI Systems and Canadian Children: Documented Harms Without Applicable Governance Frameworkrelated
- AI Companion Emotional Dependencerelated
Taxonomieévalué
Historique des modifications
| Version | Date | Modification |
|---|---|---|
| v1 | 8 mars 2026 | Initial publication |
| v2 | 10 mars 2026 | Merged hazard/ai-safety-critical-deployment-without-monitoring into this record. The safety-critical deployment framing (AI in healthcare/crisis contexts without monitoring) is subsumed by this broader record on AI psychological manipulation and influence, which covers the same incidents, governance gaps, and policy recommendations. Unique content (healthcare deployment angle, clinical decision support, pre-deployment safety evaluation) incorporated. |