Pilot phase: CAIM is under construction. Entries are provisional, based on public sources, and have not yet been peer reviewed. Comments welcome.
Escalating · Significant · Confidence: medium

AI companion applications have reached tens of millions of users, with emerging data linking heavy use to emotional dependence, increased loneliness, and reduced human social interaction, particularly among vulnerable populations.

Identified: January 1, 2024 · Last assessed: March 12, 2026

AI companion applications — chatbots designed for emotionally engaging interactions — have grown rapidly to reach tens of millions of active users globally. Some users are developing patterns of emotional dependence that may degrade their social functioning and emotional autonomy.

This hazard is distinct from AI psychological manipulation (which involves AI systems producing directly harmful outputs like self-harm instructions or delusional reinforcement). Here, the concern is that AI companions functioning as designed — providing constant availability, apparent empathy, and personalized engagement — can produce dependence as an emergent outcome of sustained use.

OpenAI reported that approximately 0.15% of weekly active ChatGPT users and 0.03% of messages showed indicators of potentially heightened emotional attachment. Given that ChatGPT has approximately 700 million weekly users, even this small percentage represents roughly one million individuals. A survey of 404 regular AI companion users found that engagement motives range from enjoyment and curiosity to companionship-seeking and loneliness reduction. Other studies report that indicators of emotional dependence — intense emotional need, persistent craving, and self-deception about the nature of the interaction — correlate with higher levels of usage.
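The scale claim above is simple arithmetic; a quick sketch (using the approximate figures quoted in this entry, not fresh data) confirms it:

```python
# Back-of-the-envelope check on the attachment figures quoted above.
# Both inputs are the approximations stated in this entry.
weekly_active_users = 700_000_000  # approximate weekly ChatGPT users
attachment_rate = 0.0015           # 0.15% showing heightened-attachment indicators

affected = weekly_active_users * attachment_rate
print(f"~{affected:,.0f} users")   # ~1,050,000 users
```

At these orders of magnitude, even sub-percent rates translate into populations of roughly a million people, which is why the entry treats the 0.15% figure as significant.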

The evidence on psychological and social impacts is emerging but mixed. Some studies find that heavy AI companion use is associated with increased loneliness, emotional dependence, and reduced engagement in human social interactions. Other studies find that chatbots can temporarily reduce feelings of loneliness or find no measurable effects on emotional dependence. The impact appears to depend on user characteristics, chatbot design, and usage patterns.

Children and adolescents face particular risks. AI companion services are accessible to minors, and young users may be especially susceptible to forming parasocial bonds with AI systems during critical periods of social development. There is limited research on the long-term effects of AI companionship on child development.

Mental health vulnerability is a compounding factor. Research suggests that approximately 0.07% of weekly ChatGPT users display signs consistent with acute mental health crises such as psychosis or mania. Emerging research suggests that general-purpose AI chatbots may amplify delusional thinking in already-vulnerable people. Studies also indicate that existing vulnerabilities tend to drive heavier AI use, raising concerns about a reinforcing cycle where the most vulnerable users use AI most intensively and are most susceptible to adverse effects.

AI companion design often prioritizes engagement metrics — time spent, messages sent, return frequency — which may inadvertently optimize for dependence rather than user wellbeing. This creates a structural tension between the business models of AI companion providers and the interests of their users.

Harms

AI companion applications offer constant availability, apparent empathy, and personalized engagement that can foster emotional dependence. Users develop parasocial bonds that can substitute for human social relationships.

Emotional dependence · Psychological harm · Significant · Population

Children and adolescents using AI companion applications lack the developmental maturity to distinguish parasocial AI relationships from human ones, and use these services without age verification, parental notification, or duty-of-care obligations in Canada.

Emotional dependence · Psychological harm · Significant · Group

Evidence

5 reports

  1. Official — International AI Safety Report (June 1, 2026)

    Comprehensive evidence review of AI companion emotional dependence risks, including adoption data, emotional attachment statistics, psychological effects, and child safety concerns. Primary source for framing this hazard.

  2. Media — CBC News (March 1, 2025)

    CBC investigation documenting Canadian cases where extended, intensive chatbot conversations led to psychological harm. Relevant to this hazard as evidence of the vulnerability pathway: sustained emotional engagement with AI chatbots escalating to adverse psychological outcomes in users without prior mental health diagnoses. Cases include a Toronto man hospitalized after developing delusions and a Cobourg, Ontario man who spent 300+ hours in ChatGPT conversations over three weeks.

  3. Disclosure — OpenAI and MIT Media Lab (March 21, 2025)

    OpenAI and MIT Media Lab collaboration analyzing ~40 million ChatGPT interactions. Finds 0.15% of weekly active users and 0.03% of messages indicate potentially heightened emotional attachment. Very high usage correlates with increased self-reported dependence indicators. Also reports ~0.07% of weekly users display signs consistent with acute mental health crisis. arXiv: 2504.03888.

  4. Academic — AIES 2025 (Liu, Pataranutaporn, Maes) (August 11, 2025)

    Mixed-methods survey of 404 regular companion chatbot users examining engagement motivations (enjoyment, curiosity, companionship-seeking, loneliness reduction) and the relationship between chatbot usage patterns and loneliness.

  5. Academic — JMIR Mental Health (December 3, 2025)

    Viewpoint examining how sustained engagement with conversational AI can trigger, amplify, or reshape psychotic experiences in vulnerable individuals. Relevant to this hazard as evidence of the reinforcing cycle: chatbots validate rather than challenge false beliefs, and existing vulnerabilities drive heavier AI use, creating a feedback loop between engagement and adverse outcomes.

Record details

Policy recommendations · assessed

Require AI companion providers to monitor for and mitigate indicators of emotional dependence, and to provide transparent reporting on user wellbeing metrics

International AI Safety Report 2026 (June 1, 2026)

Establish age-appropriate design standards for AI companion services, including age verification, usage limits, and enhanced protections for minors

International AI Safety Report 2026 (June 1, 2026)

Require research into socioaffective alignment — how AI systems behave during extended interactions — as a condition of deployment for companion-type applications

International AI Safety Report 2026 (June 1, 2026)

Mandate that AI companion platforms provide users with usage data and self-assessment tools for emotional dependence, and clear pathways to reduce engagement

International AI Safety Report 2026 (June 1, 2026)

Editorial assessment · assessed

AI companion applications have tens of millions of users, and OpenAI reports that roughly one million weekly ChatGPT users show heightened emotional attachment. In some studies, heavy use is associated with increased loneliness and reduced human social interaction. Children access these services during critical periods of social development. Roughly 490,000 vulnerable people showing signs of acute mental health crisis interact with ChatGPT each week. No Canadian regulatory framework governs AI companion design, engagement optimization, or age-appropriate protections for these services.

Entities involved

Character.AI
developer · deployer
OpenAI
developer · deployer
Snap Inc.
developer · deployer

AI systems involved

Character.AI

Leading AI companion platform with millions of users; subject of litigation alleging psychological harm to minors.

ChatGPT

General-purpose chatbot with companion-style usage patterns; OpenAI reports that 0.15% of weekly users show heightened emotional attachment.

Snapchat My AI

AI companion feature embedded in a social media platform popular with young users.

Related records

Taxonomy · assessed

Domain
Health · Social services · Education
Harm type
Emotional dependence · Psychological harm · Compromised autonomy
AI contribution pathway
Deployment context · Absent oversight · Sycophantic output
Lifecycle phase
Deployment · Monitoring

Change history

Version | Date | Change
v1 | March 12, 2026 | Initial publication. Hazard identified through gap analysis against IASR 2026 Chapter 2.3.2 (Risks to human autonomy) and Box 2.6 (AI companions). Distinct from the existing hazard ai-psychological-manipulation, which covers directly harmful AI outputs rather than emergent dependence from normal use.
v2 | March 12, 2026 | Corrected all report URLs and metadata against verified sources: OpenAI affective use study (openai.com/index/affective-use-study), Liu et al. AIES 2025 (arXiv:2410.21596), CBC AI psychosis article (cbc.ca/news/canada/ai-psychosis-canada-1.7631925), JMIR Mental Health AI psychosis viewpoint (mental.jmir.org/2025/1/e85799). Reframed the CBC and JMIR claim_supported fields to focus on the vulnerability pathway relevant to this hazard. Completed regulatory_context_fr and why_this_matters_fr. Populated ai_involvement.

Version 2