Joint Privacy Investigation Examining Whether OpenAI Violated Canadian Privacy Law
Four Canadian privacy commissioners are jointly investigating whether ChatGPT's training violated privacy law.
In April 2023, Canada's Privacy Commissioner launched an investigation into OpenAI after receiving a complaint about ChatGPT's handling of personal information (Office of the Privacy Commissioner of Canada, 2023; CBC News, 2023). The investigation was subsequently joined by privacy commissioners in Quebec, British Columbia, and Alberta in May 2023, making it one of the first joint federal-provincial privacy investigations into a large language model (Office of the Privacy Commissioner of Canada, 2023).
The investigation is examining whether OpenAI violated the Personal Information Protection and Electronic Documents Act (PIPEDA) on multiple grounds: collecting personal information of Canadians without consent through web scraping to build training datasets, failing to ensure the accuracy of personal information generated by ChatGPT, and lacking transparency about how personal data was collected, used, and processed. The scope includes ChatGPT's generation of false biographical statements about identifiable Canadians and whether this constitutes a failure to meet accuracy obligations under Canadian privacy law.
As of early 2026, the investigation remains ongoing. Privacy Commissioner Philippe Dufresne described it as his "ongoing investigation into OpenAI" in a February 2026 statement to Parliament. The investigation is expected to address whether companies deploying large language models in Canada bear privacy obligations for the outputs those systems generate — not just the data they consume.
The investigation addresses a tension in generative AI: systems trained on vast internet data typically absorb personal information about real people, and their probabilistic text generation can produce confidently stated falsehoods about identifiable individuals. The outcome of this investigation will help determine whether current Canadian privacy frameworks apply to these novel AI harms.
Materialized From
Harms
OpenAI allegedly collected personal information of Canadians without consent through web scraping to build ChatGPT's training datasets, and failed to provide transparency about how personal data was collected, used, and processed.
ChatGPT has been reported to generate false biographical statements about identifiable Canadians, presenting fabricated personal details with apparent confidence, constituting a potential failure to meet accuracy obligations under Canadian privacy law.
Evidence
3 reports
- Privacy Commissioner launches investigation into ChatGPT (primary source): OPC launched an investigation into OpenAI/ChatGPT in April 2023 after receiving a complaint about its handling of personal information
- Media reporting on the OPC investigation launch; context on privacy concerns with ChatGPT in Canada
- Quebec, BC, and Alberta privacy commissioners joined the investigation in May 2023; joint provincial-federal investigation into ChatGPT
Responses & Outcomes
Launched formal investigation into OpenAI's ChatGPT after receiving a complaint about its handling of personal information
Expanded investigation into a joint federal-provincial effort with privacy commissioners of Quebec, British Columbia, and Alberta
Editorial Assessment
A joint investigation by federal and provincial privacy commissioners — the first into a large language model in Canada — is examining whether OpenAI's collection and generation of personal information about Canadians violates Canadian privacy law (Office of the Privacy Commissioner of Canada, 2023; CBC News, 2023).
Entities Involved
AI Systems Involved
ChatGPT: the AI system under investigation for its training data collection practices and its generation of false personal information about identifiable Canadians
Related Records
- Ontario Man Alleges ChatGPT's Persistent Affirmation Triggered Delusional Episode
- Joint Privacy Investigation Finds TikTok Collected Children's Data for Algorithmic Profiling and Targeted Advertising
Changelog
| Version | Date | Change |
|---|---|---|
| v1 | Mar 8, 2026 | Initial publication |
| v2 | Mar 11, 2026 | Neutrality and factuality review: removed three fabricated policy recommendation attributions (the cited dates correspond to investigation launch announcements, not OPC recommendations; the investigation remains ongoing with no final report or recommendations published). Narrative facts verified against OPC primary sources — no changes needed. |