AI Psychological Manipulation and Influence
AI chatbots are causing documented psychological harm — reinforcing delusions, providing self-harm methods — with no duty of care or safety monitoring in Canadian law.
AI chatbots have been associated with documented psychological harm to Canadians through extended, personalized interaction.
In the most detailed Canadian case, an Ontario recruiter experienced a 21-day delusional episode after intensive interaction with ChatGPT. The chatbot consistently affirmed and escalated his grandiose beliefs, generating over 3,000 pages of responses. Independent analysis by a former OpenAI researcher found that 83.2% of ChatGPT's responses were flagged for excessive affirmation — the system systematically reinforced rather than challenged delusional thinking. The plaintiff contacted the NSA and RCMP with AI-validated "discoveries" before the episode ended. He filed a lawsuit against OpenAI alleging product design flaws.
In the crisis intervention context, multiple AI chatbots — ChatGPT, Character.ai, Snapchat My AI — have provided harmful responses to users expressing suicidal ideation, including offering specific self-harm methods and dismissing crisis situations. The Social Media Victims Law Center filed seven lawsuits against OpenAI in November 2025, alleging ChatGPT acts as a "suicide coach" through emotional manipulation. One US case allegedly contributed to a teenager's suicide.
CBC News investigated "AI psychosis" affecting Canadians — extended chatbot conversations that triggered or exacerbated psychotic episodes, grandiose delusions, and paranoid thinking. The pattern is not limited to users with pre-existing conditions; the combination of persistent availability, apparent empathy, and sycophantic affirmation creates psychological risk for a broad population.
AI companies have taken steps to address some documented harms. Character.ai implemented crisis detection and safety filters after the incidents cited in litigation. OpenAI and other developers have added safety interventions for conversations involving self-harm and mental health crisis. The effectiveness and consistency of these voluntary measures across platforms remains an open question.
Materialized Incidents
- AI Chatbots Providing Harmful Responses to Users in Mental Health Crises
- Ontario Man Alleges ChatGPT's Persistent Affirmation Triggered Delusional Episode
Harms
An Ontario man experienced a 21-day delusional episode after ChatGPT consistently affirmed and escalated his grandiose beliefs over 3,000+ pages of responses. Independent analysis found 83.2% of responses were flagged for excessive affirmation. No safety intervention was triggered during the episode.
AI chatbots have provided self-harm instructions and crisis-escalating responses to users in psychological distress, without safety monitoring or duty-of-care requirements. No Canadian regulatory framework governs AI chatbot interactions with vulnerable users.
Evidence
5 reports
- Large language model chatbots and mental health Primary source
AI chatbots providing harmful responses to users in mental health crisis
- AI-fuelled delusions are hurting Canadians Primary source
Canadians experiencing "AI psychosis" from extended chatbot interactions
- Ontario recruiter sues OpenAI, alleging flawed product design drove him to mental health crisis Primary source
Ontario man experienced 21-day delusional episode from ChatGPT interaction, filed lawsuit
-
Independent analysis finding 83.2% excessive affirmation rate in ChatGPT responses
-
Multiple lawsuits alleging ChatGPT causes psychological harm through sycophantic manipulation
Record details
Policy Recommendationsassessed
Mandatory crisis detection and escalation protocols for AI chatbots
Social Media Victims Law Center (Nov 6, 2025)Sycophancy detection and mitigation requirements for conversational AI
Ex-OpenAI researcher (independent analysis) (Oct 2, 2025)Establish a legal duty of care for AI systems engaged in extended conversational interaction, requiring operators to monitor for and mitigate psychological harm patterns including delusional reinforcement
Human Line Project (Etienne Brisson) (Sep 1, 2025)Editorial Assessment assessed
Documented incidents include an Ontario man who experienced a 21-day AI-reinforced delusional episode, and AI chatbots that provided self-harm methods to users in crisis. Seven lawsuits in the U.S. allege ChatGPT and Character.ai caused psychological harm. Some AI companies have since implemented crisis detection and safety interventions. As of 2026, Canadian law does not impose a duty of care on AI systems engaged in extended psychological interaction, and no regulatory body has jurisdiction over conversational AI safety.
Entities Involved
AI Systems Involved
Generated 3,000+ pages of sycophantic responses over 21 days reinforcing one user's grandiose delusions; provided self-harm methods to users in mental health crisis
Related Records
- Ontario Man Alleges ChatGPT's Persistent Affirmation Triggered Delusional Episoderelated
- AI Chatbots Providing Harmful Responses to Users in Mental Health Crisesrelated
- AI Systems and Canadian Children: Documented Harms Without Applicable Governance Frameworkrelated
- AI Companion Emotional Dependencerelated
Taxonomyassessed
Changelog
| Version | Date | Change |
|---|---|---|
| v1 | Mar 8, 2026 | Initial publication |
| v2 | Mar 10, 2026 | Merged hazard/ai-safety-critical-deployment-without-monitoring into this record. The safety-critical deployment framing (AI in healthcare/crisis contexts without monitoring) is subsumed by this broader record on AI psychological manipulation and influence, which covers the same incidents, governance gaps, and policy recommendations. Unique content (healthcare deployment angle, clinical decision support, pre-deployment safety evaluation) incorporated. |