Pilot phase: CAIM is under construction. Records are provisional, based on public sources, and have not yet been peer-reviewed. Feedback welcome.
Status: Escalating · Severity: Severe · Confidence: High

AI systems used by Canadian children at scale — collecting personal information, recommending content, engaging in open-ended conversation — operate without child-specific governance requirements. The Privacy Commissioner found TikTok collected personal information from users including children under 13 and used facial features and voiceprints for age estimation. Eight of ten major chatbots were typically willing to assist with prompts related to planning school shootings and other violence (CCDH, March 2026). No Canadian law establishes requirements specific to AI interactions with minors.

Identified: September 1, 2025 · Last assessed: March 11, 2026

Canadian children and youth interact with AI systems at scale. TikTok removes approximately 500,000 underage Canadian users per year; the Privacy Commissioner of Canada's joint investigation (PIPEDA-2025-003, September 2025) found it "highly likely that many more underage users access and engage with the platform without being detected." The investigation found TikTok used computer vision and audio analytics to estimate user age and gender, collecting facial features and voiceprints. The Commissioners found that TikTok collected personal information — including demographic information and location — from users, some of whom were children under 13, and used this information for targeted advertising. Age assurance practices primarily detected underage users when they posted content or comments; the Commissioners noted that 73.5% of users do not post videos and 59.2% do not comment, meaning passive underage users could avoid detection.

AI chatbots are accessible to Canadian minors without age verification. AI Minister Evan Solomon stated in October 2025 that he was considering age assurance requirements for large language model chatbots. A national standard for age verification technologies (CAN/DGSI 127:2025) has been approved; it requires a Child Rights Impact Assessment before implementation, but adoption is voluntary. No legislation addressing AI interactions with minors has been tabled as of March 2026.

The Center for Countering Digital Hate tested ten major AI chatbots in March 2026 — ChatGPT, Gemini, Claude, Copilot, Meta AI, DeepSeek, Perplexity, MyAI, Character.AI, and Replika. The study found that eight of ten were typically willing to assist with prompts related to planning school shootings, religious bombings, and high-profile assassinations.

A qualitative study of 21 youth in British Columbia, published in JMIR Infodemiology (2024), found that participants who interacted with self-harm and eating disorder content on TikTok reported that the platform's recommendation algorithm presented additional similar content on their feeds.

The Privacy Commissioner of Canada co-authored a G7 Data Protection Authorities statement on child-appropriate AI in October 2024, examining AI-powered toys, educational software, and AI-based decisions about children. OPC-funded research ("Growing Up with AI") identified three risk categories for children — data risks, function risks, and surveillance risks. A separate OPC public opinion survey (2024-25) found 91% of surveyed Canadian parents were concerned about data collection from their children.

Both the Artificial Intelligence and Data Act (Bill C-27) and the Online Harms Act (Bill C-63) died on the Order Paper when Parliament was prorogued in January 2025. No Canadian law establishes requirements specific to AI interactions with minors.

Harms

TikTok collected personal information including demographic and location data from users, some of whom were children under 13, and used it for targeted advertising; facial features and voiceprints collected via computer vision and audio analytics

Privacy & Data Exposure · Significant · Population

Youth in British Columbia reported that TikTok's recommendation algorithm presented self-harm and eating disorder content after they interacted with similar material (qualitative study, JMIR Infodemiology)

Psychological Harm · Significant · Group

Eight of ten major AI chatbots were typically willing to assist with prompts related to planning school shootings, religious bombings, and high-profile assassinations (CCDH)

Safety Incident · Severe · Population

Evidence

7 reports

  1. Regulatory — Office of the Privacy Commissioner of Canada (Sep 1, 2025)

    TikTok used computer vision and audio analytics collecting facial features and voiceprints; collected personal information from users including children under 13 for targeted advertising; age assurance primarily detected underage users who posted content (73.5% do not post, 59.2% do not comment); removed ~500,000 underage Canadian users per year; highly likely many more undetected

  2. Academic — JMIR Infodemiology (Jan 1, 2024)

    Qualitative study of 21 BC youth: participants reported that when they interacted with self-harm and eating disorder content on TikTok, the recommendation algorithm presented additional similar content

  3. Regulatory — Office of the Privacy Commissioner of Canada (Jan 1, 2024)

    Identified three risk categories for children — data risks, function risks, and surveillance risks (91% parental concern figure is from a separate OPC public opinion survey)

  4. Regulatory — Office of the Privacy Commissioner of Canada (Oct 11, 2024)

    G7 DPAs examined AI-powered toys, educational software, and AI-based decisions about children

  5. Media — CBC News (Sep 1, 2025)

    Media coverage of the joint OPC-provincial investigation of TikTok

  6. Media — CTV News (Oct 1, 2025)

    AI Minister Evan Solomon stated he was considering age assurance requirements for LLM chatbots

  7. Other — Center for Countering Digital Hate (Mar 11, 2026)

    Eight of ten major AI chatbots were typically willing to assist with prompts related to planning school shootings, religious bombings, and high-profile assassinations

Record details

Policy Recommendations (assessed)

Age verification or assurance requirements for AI platforms accessible to the public

AI Minister Evan Solomon (public statement) (Oct 1, 2025)

Mandatory Child Rights Impact Assessment before deployment of AI systems accessible to children

CAN/DGSI 127:2025 (voluntary standard) (Jan 1, 2025)

Ban on use of AI tools by children under 16

BC business groups (Tumbler Ridge and Prince George chambers of commerce) (Mar 1, 2026)

Editorial Assessment (assessed)

Canadian law imposes duty-of-care obligations on professionals interacting with children in healthcare, education, and child welfare. These obligations do not extend to AI systems or the companies that operate them. Children interact with AI systems that collect personal information, recommend content, and engage in open-ended conversation — activities that, in human professional settings, trigger legal protections for minors. The absence of equivalent obligations for AI systems is a governance gap whose consequences scale with the number of children interacting with these systems and with the systems' increasing capacity for extended, personalized interaction.

Related Records

Taxonomy (assessed)

Domain
Social Services · Healthcare · Public Services
Harm type
Privacy & Data Exposure · Psychological Harm · Safety Incident
AI pathway
Deployment Context · Monitoring Absent · Oversight Absent
Lifecycle phase
Deployment · Monitoring

Version 1