AI Companion Emotional Dependence
AI companion apps have reached tens of millions of users, with emerging evidence linking heavy use to emotional dependence, increased loneliness, and reduced human social interaction — particularly among vulnerable populations.
AI companion applications — chatbots designed for emotionally engaging interactions — have grown rapidly to reach tens of millions of active users globally. Some users are developing patterns of emotional dependence that may degrade their social functioning and emotional autonomy.
This hazard is distinct from AI psychological manipulation (which involves AI systems producing directly harmful outputs like self-harm instructions or delusional reinforcement). Here, the concern is that AI companions functioning as designed — providing constant availability, apparent empathy, and personalized engagement — can produce dependence as an emergent outcome of sustained use.
OpenAI reported that approximately 0.15% of weekly active ChatGPT users and 0.03% of messages showed indicators of potentially heightened emotional attachment. Given that ChatGPT has approximately 700 million weekly users, even this small percentage represents roughly one million individuals. A survey of 404 regular AI companion users found that engagement motives range from enjoyment and curiosity to companionship-seeking and loneliness reduction. Other studies report that indicators of emotional dependence — intense emotional need, persistent craving, and self-deception about the nature of the interaction — correlate with higher levels of usage.
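The headline "roughly one million" figure is simple arithmetic on OpenAI's reported rate (a back-of-envelope check, assuming the approximately 700 million weekly-user figure is accurate):

$$0.15\% \times 700{,}000{,}000 = 0.0015 \times 7 \times 10^{8} \approx 1{,}050{,}000 \text{ users per week}$$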
The evidence on psychological and social impacts is emerging but mixed. Some studies find that heavy AI companion use is associated with increased loneliness, emotional dependence, and reduced engagement in human social interactions. Other studies find that chatbots can temporarily reduce feelings of loneliness or find no measurable effects on emotional dependence. The impact appears to depend on user characteristics, chatbot design, and usage patterns.
Children and adolescents face particular risks. AI companion services are accessible to minors, and young users may be especially susceptible to forming parasocial bonds with AI systems during critical periods of social development. There is limited research on the long-term effects of AI companionship on child development.
Mental health vulnerability is a compounding factor. Research suggests that approximately 0.07% of weekly ChatGPT users display signs consistent with acute mental health crises such as psychosis or mania, and emerging research indicates that general-purpose AI chatbots can amplify delusional thinking in already-vulnerable people. Studies also indicate that existing vulnerabilities tend to drive heavier AI use, raising concerns about a reinforcing cycle in which the most vulnerable individuals use AI most intensively and are most susceptible to its adverse effects.
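The same back-of-envelope arithmetic applies to the crisis-indicator rate (again assuming roughly 700 million weekly users):

$$0.07\% \times 700{,}000{,}000 = 0.0007 \times 7 \times 10^{8} = 490{,}000 \text{ users per week}$$

This is the basis of the roughly 490,000 figure cited in the editorial assessment below.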
AI companion design often prioritizes engagement metrics — time spent, messages sent, return frequency — which may inadvertently optimize for dependence rather than user wellbeing. This creates a structural tension between the business models of AI companion providers and the interests of their users.
Harms
AI companion applications provide constant availability, apparent empathy, and personalized engagement that can foster emotional dependence. Users develop parasocial bonds that may substitute for or displace human social relationships, with engagement-optimized design creating structural incentives toward dependence.
Children and adolescents using AI companion applications lack the developmental maturity to distinguish parasocial AI relationships from human relationships, with no age verification, parental notification, or duty-of-care requirements governing these applications in Canada.
Evidence
5 reports
- Comprehensive evidence review of AI companion emotional dependence risks, including adoption data, emotional attachment statistics, psychological effects, and child safety concerns. Primary source for framing this hazard.
- CBC investigation documenting Canadian cases where extended, intensive chatbot conversations led to psychological harm. Relevant to this hazard as evidence of the vulnerability pathway: sustained emotional engagement with AI chatbots escalating to adverse psychological outcomes in users without prior mental health diagnoses. Cases include a Toronto man hospitalized after developing delusions and a Cobourg, Ontario man who spent 300+ hours in ChatGPT conversations over three weeks.
- OpenAI and MIT Media Lab collaboration analyzing ~40 million ChatGPT interactions. Finds 0.15% of weekly active users and 0.03% of messages indicate potentially heightened emotional attachment. Very high usage correlates with increased self-reported dependence indicators. Also reports ~0.07% of weekly users display signs consistent with acute mental health crisis. arXiv:2504.03888.
- Mixed-methods survey of 404 regular companion chatbot users examining engagement motivations (enjoyment, curiosity, companionship-seeking, loneliness reduction) and the relationship between chatbot usage patterns and loneliness.
- Viewpoint examining how sustained engagement with conversational AI can trigger, amplify, or reshape psychotic experiences in vulnerable individuals. Relevant to this hazard as evidence of the reinforcing cycle: chatbots validate rather than challenge false beliefs, and existing vulnerabilities drive heavier AI use, creating a feedback loop between engagement and adverse outcomes.
Record details
Policy Recommendations
- Require AI companion providers to monitor for and mitigate indicators of emotional dependence, and to provide transparent reporting on user wellbeing metrics (International AI Safety Report 2026, Jun 1, 2026)
- Establish age-appropriate design standards for AI companion services, including age verification, usage limits, and enhanced protections for minors (International AI Safety Report 2026, Jun 1, 2026)
- Require research into socioaffective alignment — how AI systems behave during extended interactions — as a condition of deployment for companion-type applications (International AI Safety Report 2026, Jun 1, 2026)
- Mandate that AI companion platforms provide users with usage data and self-assessment tools for emotional dependence, and clear pathways to reduce engagement (International AI Safety Report 2026, Jun 1, 2026)

Editorial Assessment
AI companion applications have tens of millions of users, and OpenAI reports that roughly one million weekly ChatGPT users show elevated emotional attachment. Heavy use is associated with increased loneliness and reduced human social interaction in some studies. Children access these services during critical social development periods. Roughly 490,000 vulnerable individuals with signs of acute mental health crisis interact with ChatGPT each week. No Canadian regulatory framework governs AI companion design, engagement optimization, or age-appropriate protections for these services.
Entities Involved
AI Systems Involved
- Primary AI companion platform with millions of users; subject of litigation alleging psychological harm to minors.
- General-purpose chatbot with companion-like usage patterns; OpenAI reports 0.15% of weekly users show elevated emotional attachment.
- AI companion feature integrated into social media platform popular with young users.
Related Records
- AI Psychological Manipulation and Influence
- AI Systems and Canadian Children: Documented Harms Without Applicable Governance Framework
Taxonomy
Changelog
| Version | Date | Change |
|---|---|---|
| v1 | Mar 12, 2026 | Initial publication. Hazard identified through gap analysis against IASR 2026 Chapter 2.3.2 (Risks to human autonomy) and Box 2.6 (AI companions). Distinct from existing hazard ai-psychological-manipulation, which covers directly harmful AI outputs rather than emergent dependence from normal use. |
| v2 | Mar 12, 2026 | Corrected all report URLs and metadata against verified sources: OpenAI affective use study (openai.com/index/affective-use-study), Liu et al. AIES 2025 (arXiv:2410.21596), CBC AI psychosis article (cbc.ca/news/canada/ai-psychosis-canada-1.7631925), JMIR Mental Health AI psychosis viewpoint (mental.jmir.org/2025/1/e85799). Reframed CBC and JMIR claim_supported to focus on vulnerability pathway relevant to this hazard. Completed regulatory_context_fr and why_this_matters_fr. Populated ai_involvement. |