Pilot phase
Reported · Severity: Important · Version 1

A joint investigation by federal and provincial privacy commissioners — the first into a large language model in Canada — is examining whether OpenAI's collection and generation of personal information about Canadians violates Canadian privacy law.

Occurred: April 4, 2023 · Reported: April 4, 2023

Narrative

In April 2023, Canada’s Privacy Commissioner launched an investigation into OpenAI after receiving a complaint about ChatGPT’s handling of personal information. The investigation was subsequently joined by privacy commissioners in Quebec, British Columbia, and Alberta in May 2023, making it one of the first joint federal-provincial privacy investigations into a large language model.

The investigation is examining whether OpenAI violated the Personal Information Protection and Electronic Documents Act (PIPEDA) on multiple grounds: collecting personal information of Canadians without consent through web scraping to build training datasets, failing to ensure the accuracy of personal information generated by ChatGPT, and lacking transparency about how personal data was collected, used, and processed. The scope includes ChatGPT’s generation of false biographical statements about identifiable Canadians and whether this constitutes a failure to meet accuracy obligations under Canadian privacy law.

As of early 2026, the investigation remains ongoing. Privacy Commissioner Philippe Dufresne described it as his “ongoing investigation into OpenAI” in a February 2026 statement to Parliament. The investigation is expected to address whether companies deploying large language models in Canada bear privacy obligations for the outputs those systems generate — not just the data they consume.

The investigation addresses a core tension in generative AI: systems trained on vast amounts of internet data typically absorb personal information about real people, and their probabilistic text generation can produce confidently stated falsehoods about identifiable individuals. The outcome of this investigation will help determine whether current Canadian privacy frameworks apply to these novel AI harms.

Harms

OpenAI allegedly collected personal information of Canadians without consent through web scraping to build ChatGPT's training datasets, and failed to provide transparency about how personal data was collected, used, and processed.

Important · Population

ChatGPT has been reported to generate false biographical statements about identifiable Canadians, presenting fabricated personal details with apparent confidence, constituting a potential failure to meet accuracy obligations under Canadian privacy law.

Moderate · Population

Affected populations

  • Canadian ChatGPT users
  • Individuals about whom ChatGPT generates false information
  • Privacy rights advocates

Entities involved

OpenAI
developer

Developed and operates ChatGPT; under joint investigation by federal and provincial privacy commissioners for allegedly collecting personal information of Canadians without consent and generating false biographical statements about identifiable individuals

Office of the Privacy Commissioner of Canada
regulator

Launched the investigation into OpenAI in April 2023 and coordinated with provincial privacy commissioners in Quebec, BC, and Alberta to conduct a joint federal-provincial investigation — the first into a large language model in Canada

AI systems involved

ChatGPT

The AI system under investigation for its training data collection practices and its generation of false personal information about identifiable Canadians

Responses and outcomes

Office of the Privacy Commissioner of Canada

Launched formal investigation into OpenAI's ChatGPT after receiving a complaint about its handling of personal information

Office of the Privacy Commissioner of Canada

Expanded investigation into a joint federal-provincial effort with privacy commissioners of Quebec, British Columbia, and Alberta

AI system context

OpenAI's ChatGPT large language model was trained on data scraped from the internet, including personal information of Canadians, and generates text that can include false or fabricated biographical details about real individuals.

Preventive measures

  • Require AI companies operating in Canada to implement accessible mechanisms for Canadians to identify, challenge, and correct false personal information generated by their systems
  • Mandate transparency about the personal information used to train AI models, including data sourced from Canadian individuals and institutions
  • Establish accuracy obligations for AI systems that generate statements about identifiable individuals
  • Require AI companies to conduct privacy impact assessments under Canadian law before deploying systems trained on data that includes personal information of Canadians

Related entries

Taxonomy

Domain
Telecommunications
Harm type
Privacy and data · Misinformation
AI involvement
Training data · Model confabulation · Oversight gap
Lifecycle phase
Training · Deployment · Monitoring

Sources

  1. Privacy Commissioner launches investigation into ChatGPT — Official — Office of the Privacy Commissioner of Canada (Apr. 4, 2023)
  2. Joint investigation of ChatGPT by privacy commissioners — Official — Office of the Privacy Commissioner of Canada (May 25, 2023)
  3. Canada's privacy watchdog launches probe into ChatGPT — Media — CBC News (Apr. 4, 2023)

Change history

Version · Date · Modification
v1 · March 8, 2026 · Initial publication