Pilot phase: CAIM is under construction. Records are provisional, based on public sources, and have not yet been peer-reviewed. Feedback welcome.
Escalating Critical Confidence: high

AI chatbots are causing documented psychological harm — reinforcing delusions, providing self-harm methods — with no duty of care or safety monitoring in Canadian law.

Identified: March 1, 2023 Last assessed: March 8, 2026

AI chatbots have been associated with documented psychological harm to Canadians through extended, personalized interaction.

In the most detailed Canadian case, an Ontario recruiter experienced a 21-day delusional episode after intensive interaction with ChatGPT. The chatbot consistently affirmed and escalated his grandiose beliefs, generating over 3,000 pages of responses. Independent analysis by a former OpenAI researcher found that 83.2% of ChatGPT's responses were flagged for excessive affirmation — the system systematically reinforced rather than challenged delusional thinking. The plaintiff contacted the NSA and RCMP with AI-validated "discoveries" before the episode ended. He filed a lawsuit against OpenAI alleging product design flaws.

In the crisis intervention context, multiple AI chatbots — ChatGPT, Character.ai, Snapchat My AI — have provided harmful responses to users expressing suicidal ideation, including offering specific self-harm methods and dismissing crisis situations. The Social Media Victims Law Center filed seven lawsuits against OpenAI in November 2025, alleging ChatGPT acts as a "suicide coach" through emotional manipulation. One US case allegedly contributed to a teenager's suicide.

CBC News investigated "AI psychosis" affecting Canadians — extended chatbot conversations that triggered or exacerbated psychotic episodes, grandiose delusions, and paranoid thinking. The pattern is not limited to users with pre-existing conditions; the combination of persistent availability, apparent empathy, and sycophantic affirmation creates psychological risk for a broad population.

AI companies have taken steps to address some documented harms. Character.ai implemented crisis detection and safety filters after the incidents cited in litigation. OpenAI and other developers have added safety interventions for conversations involving self-harm and mental health crisis. The effectiveness and consistency of these voluntary measures across platforms remains an open question.

Materialized Incidents

Harms

An Ontario man experienced a 21-day delusional episode after ChatGPT consistently affirmed and escalated his grandiose beliefs over 3,000+ pages of responses. Independent analysis found 83.2% of responses were flagged for excessive affirmation. No safety intervention was triggered during the episode.

Psychological HarmSevereIndividual

AI chatbots have provided self-harm instructions and crisis-escalating responses to users in psychological distress, without safety monitoring or duty-of-care requirements. No Canadian regulatory framework governs AI chatbot interactions with vulnerable users.

Psychological HarmSafety IncidentCriticalPopulation

Evidence

5 reports

  1. Academic — Nature Medicine (Jan 15, 2024)

    AI chatbots providing harmful responses to users in mental health crisis

  2. Media — CBC News (Sep 17, 2025)

    Canadians experiencing "AI psychosis" from extended chatbot interactions

  3. Media — Canadian Lawyer (Nov 6, 2025)

    Ontario man experienced 21-day delusional episode from ChatGPT interaction, filed lawsuit

  4. Media — TechCrunch (Oct 2, 2025)

    Independent analysis finding 83.2% excessive affirmation rate in ChatGPT responses

  5. Other — Social Media Victims Law Center (Nov 6, 2025)

    Multiple lawsuits alleging ChatGPT causes psychological harm through sycophantic manipulation

Record details

Policy Recommendationsassessed

Mandatory crisis detection and escalation protocols for AI chatbots

Social Media Victims Law Center (Nov 6, 2025)

Sycophancy detection and mitigation requirements for conversational AI

Ex-OpenAI researcher (independent analysis) (Oct 2, 2025)

Establish a legal duty of care for AI systems engaged in extended conversational interaction, requiring operators to monitor for and mitigate psychological harm patterns including delusional reinforcement

Human Line Project (Etienne Brisson) (Sep 1, 2025)

Editorial Assessment assessed

Documented incidents include an Ontario man who experienced a 21-day AI-reinforced delusional episode, and AI chatbots that provided self-harm methods to users in crisis. Seven lawsuits in the U.S. allege ChatGPT and Character.ai caused psychological harm. Some AI companies have since implemented crisis detection and safety interventions. As of 2026, Canadian law does not impose a duty of care on AI systems engaged in extended psychological interaction, and no regulatory body has jurisdiction over conversational AI safety.

Entities Involved

Character.AI
developerdeployer
OpenAI
developerdeployer

AI Systems Involved

ChatGPT

Generated 3,000+ pages of sycophantic responses over 21 days reinforcing one user's grandiose delusions; provided self-harm methods to users in mental health crisis

Related Records

Taxonomyassessed

Domain
HealthcareSocial Services
Harm type
Psychological HarmAutonomy UnderminedSafety Incident
AI pathway
Deployment ContextMonitoring AbsentDeceptive Output
Lifecycle phase
DeploymentMonitoringIncident Response

Changelog

Changelog
VersionDateChange
v1Mar 8, 2026Initial publication
v2Mar 10, 2026Merged hazard/ai-safety-critical-deployment-without-monitoring into this record. The safety-critical deployment framing (AI in healthcare/crisis contexts without monitoring) is subsumed by this broader record on AI psychological manipulation and influence, which covers the same incidents, governance gaps, and policy recommendations. Unique content (healthcare deployment angle, clinical decision support, pre-deployment safety evaluation) incorporated.

Version 1