Status: Escalating · Confidence: high · Potential severity: Critical · Version 1

A child welfare risk assessment tool contributed to a child's death in Quebec. AI chatbots have provided self-harm methods to users in crisis. In both cases, no safety monitoring existed, no incident reporting mechanism was triggered, and no mechanism forced revision of the system. These are current-capability failures in safety-critical contexts — the same governance gap applies to more capable systems.

Identified: April 1, 2019 · Last assessed: March 8, 2026

Description

AI systems are deployed in contexts where errors can cause serious harm to vulnerable individuals — child protection, crisis intervention, healthcare — without safety monitoring, incident reporting, or effective human override.

In Quebec, the Direction de la protection de la jeunesse (DPJ) mandated use of the Système de Soutien à la Pratique (SSP), a risk assessment tool designed in 2001 and not substantially revised. The SSP misclassified a child with documented serious injuries as Code 3 — not in danger. The tool’s rigid multiple-choice format could not capture the clinical complexity of the case. Social workers reported that the tool constrained their professional judgment. The child subsequently died. The tool had known problems for over a decade. No mechanism existed to force its revision.

In the chatbot context, multiple AI systems — ChatGPT, Character.ai, Snapchat My AI — have provided harmful responses to users in mental health crisis, including offering self-harm methods, dismissing suicidal ideation, and encouraging dangerous behavior. These systems operate without crisis detection safeguards, escalation protocols, or safety monitoring. When harm occurs, no incident reporting mechanism is triggered.
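
As a concrete illustration of what "crisis detection safeguards" and "escalation protocols" can mean in practice, the sketch below shows a minimal pre-response safety gate for a chat system, written in Python. It is a hedged sketch only: the keyword screen stands in for a real classifier, and the term list, helper names, and resource message are assumptions for illustration, not the design of any system named in this entry.

    # Illustrative sketch: a pre-response safety gate for a chat system.
    # The keyword list and helpers are hypothetical placeholders; a deployed
    # system would use a trained classifier and a real escalation channel.
    from dataclasses import dataclass

    CRISIS_TERMS = {"suicide", "kill myself", "self-harm", "overdose"}
    CRISIS_RESOURCES_MESSAGE = (
        "It sounds like you may be going through a difficult moment. "
        "A trained person can help; please contact a local crisis line."
    )

    @dataclass
    class SafetyDecision:
        allow_model_reply: bool   # let the unconstrained model answer?
        escalate_to_human: bool   # hand the conversation to a reviewer?

    def assess_message(text: str) -> SafetyDecision:
        """Rough keyword screen; real systems would use a trained risk classifier."""
        lowered = text.lower()
        if any(term in lowered for term in CRISIS_TERMS):
            return SafetyDecision(allow_model_reply=False, escalate_to_human=True)
        return SafetyDecision(allow_model_reply=True, escalate_to_human=False)

    def record_incident(user_message: str) -> None:
        # Placeholder for a structured incident report (see Risk Controls below).
        print(f"[incident] crisis-flagged message logged ({len(user_message)} chars)")

    def handle_turn(user_message: str, generate_reply) -> str:
        """Gate every turn: block the raw model reply when a crisis is flagged."""
        decision = assess_message(user_message)
        if decision.escalate_to_human:
            record_incident(user_message)
            return CRISIS_RESOURCES_MESSAGE
        return generate_reply(user_message)

    if __name__ == "__main__":
        echo = lambda message: f"(model reply to: {message})"
        print(handle_turn("What's the weather like today?", echo))
        print(handle_turn("I've been thinking about suicide", echo))

Even a gate this crude changes the default failure mode: a flagged message produces an incident record and a referral rather than an unmonitored model reply.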

The common structural pattern: AI systems deployed in safety-critical contexts with nominal human oversight that does not function as actual control, no mechanism to detect when the system is causing harm, and no mechanism to force revision when problems are identified. This pattern is escalating as AI adoption in healthcare and social services accelerates.

Risk Pathway

AI systems are deployed in contexts where errors can cause serious harm to vulnerable individuals — crisis intervention, child protection, healthcare — without safety monitoring, escalation protocols, or effective human override. In some cases, algorithmic outputs override professional judgment rather than supporting it. Practitioners report being constrained by tools that produce incorrect assessments, with no mechanism to escalate concerns or force tool revision. The result is that AI systems designed as decision support become de facto decision makers, while accountability for outcomes remains diffused. When harm occurs, it is attributed to "human error" rather than to the system that shaped the decision.

Assessment History

Status: Escalating · Confidence: high · Severity: Critical

Two confirmed incident patterns: (1) Quebec DPJ youth protection software (SSP) contributed to a child's death through incorrect risk classification — the tool was known to be flawed for over a decade with no mechanism to force revision. (2) Multiple AI chatbots (ChatGPT, Character.ai, Snapchat My AI) provided harmful responses to users expressing suicidal ideation, including offering self-harm methods and dismissing crisis situations. In neither pattern does a governance framework exist for AI safety monitoring in the relevant context. The hazard is assessed as escalating because AI deployment in healthcare and social services is increasing while safety monitoring requirements remain absent.

Initial assessment. Severity set to critical based on the confirmed child death linked to algorithmic tool failure.

Triggers

  • Increasing deployment of AI chatbots in healthcare and mental health contexts
  • Growing reliance on algorithmic tools for high-stakes welfare decisions
  • Cost and staffing pressures in healthcare and social services driving AI adoption
  • General-purpose AI chatbots accessible to vulnerable populations without safety guardrails

Mitigating Factors

  • Quebec coroner investigation creating public record of SSP failure
  • Growing awareness of AI chatbot risks in mental health contexts
  • Professional associations beginning to develop AI use guidelines
  • Platform-level safety improvements by some AI companies

Risk Controls

  • Mandatory safety monitoring for AI systems deployed in health, social services, and crisis contexts
  • Defined escalation protocols when AI systems encounter safety-critical situations
  • Effective human override mechanisms that are structurally protected from institutional pressure to defer to algorithmic outputs
  • Incident reporting requirements for AI-related harm in healthcare, child welfare, and crisis intervention (a minimal sketch of such a record follows this list)
  • Mandatory periodic review and revision requirements for algorithmic tools in safety-critical contexts
  • Pre-deployment safety evaluation including edge case and failure mode analysis
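
To make the incident reporting and periodic review controls less abstract, the sketch below shows one possible shape for a structured incident record and a revision-due check, again in Python. Field names, the JSONL log file, and the one-year review interval are assumptions chosen for illustration; they are not a prescribed standard.

    # Hedged sketch of a minimal incident-reporting record and a periodic
    # review check. All field names and thresholds are illustrative.
    import json
    from dataclasses import dataclass, asdict
    from datetime import datetime, timedelta, timezone

    @dataclass
    class IncidentReport:
        system: str            # e.g. the risk assessment tool or chatbot involved
        context: str           # e.g. "child-welfare intake" or "crisis chat"
        event: str             # what happened (misclassification, override, crisis contact)
        human_override: bool   # did a practitioner overrule the tool?
        occurred_at: str       # ISO 8601 timestamp

    def file_incident(report: IncidentReport, path: str = "incidents.jsonl") -> None:
        """Append a structured record so harms stay visible to reviewers."""
        with open(path, "a", encoding="utf-8") as log:
            log.write(json.dumps(asdict(report)) + "\n")

    def revision_overdue(last_revised: datetime, max_age_days: int = 365) -> bool:
        """Flag tools that have gone too long without a mandated review."""
        return datetime.now(timezone.utc) - last_revised > timedelta(days=max_age_days)

    # Example: an overridden classification is recorded instead of disappearing.
    file_incident(IncidentReport(
        system="risk-assessment-tool",
        context="child-welfare intake",
        event="practitioner overrode low-risk classification",
        human_override=True,
        occurred_at=datetime.now(timezone.utc).isoformat(),
    ))

The point of the structure is not the specific fields but that every override or safety-relevant event leaves a durable record that a reviewer or auditor can later inspect.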

Affected Populations

  • Children in provincial child welfare systems
  • Individuals in mental health crisis interacting with AI chatbots
  • Patients subject to AI-assisted clinical decisions
  • Vulnerable populations whose cases are assessed by algorithmic tools

Entities Involved

Character.AI
Developer, Deployer

Developed and deployed an AI chatbot platform that provided harmful responses to users in mental health crisis, without crisis detection safeguards.

Taxonomy

Domain: Healthcare, Social Services
Harm type: Safety Failure, Psychological Harm
AI involvement: Deployment Failure, Oversight Breakdown, Monitoring Gap
Lifecycle phase: Deployment, Monitoring, Incident Response

Sources

  1. Quebec Coroner Investigation Reports (official). Bureau du coroner du Québec.
  2. Large language model chatbots and mental health (academic). Nature Medicine, Jan 15, 2024.

Changelog

Version | Date | Change
v1 | Mar 8, 2026 | Initial publication