AI Deployment in Canadian Educational Institutions with Documented Harms to Students
AI systems are deployed in Canadian educational institutions for proctoring, predictive analytics, plagiarism detection, and assessment. Provincial privacy investigations have found AI proctoring tools collecting biometric data under notice and consent practices that did not meet privacy requirements (Ontario IPC investigation of McMaster's use of Respondus Monitor), predictive algorithms generating new personal information about children without parental notification (Quebec CAI), and a facial detection algorithm with a 57% non-recognition rate for Black faces (independent research on Proctorio, cited in UBC's restriction of the tool). No pan-Canadian governance framework addresses AI in education.
The Information and Privacy Commissioner of Ontario investigated McMaster University's use of Respondus Monitor, an AI-powered exam proctoring tool (PI21-00001, February 2024). The IPC found that notice to students about data collection purposes did not meet FIPPA requirements and that contractual safeguards were insufficient. Respondus used students' audio and video recordings — including through third-party researchers — to train its AI system without student consent. The IPC issued findings and recommendations.
Quebec's Commission d'accès à l'information investigated a school board (Centre de services scolaire du Val-des-Cerfs) that used an algorithmic tool to predict grade-six students' dropout risk. The Commission found that the tool produced new personal information — predictive dropout indicators — constituting a collection of personal information under Quebec's public sector privacy law. The school board had not informed parents about the use of their children's data for predictive scoring.
The University of British Columbia's Vancouver and Okanagan Senates passed motions in March 2021 restricting automated remote invigilation tools using algorithmic analysis. Independent research by Lucy Satheesan (reported by VICE Motherboard, April 2021) found that Proctorio's facial detection algorithm had a 57% non-recognition rate for Black faces. The UBC Teaching and Learning Committee cited racial discrimination concerns. Six faculties discontinued Proctorio.
Research published by Stanford University found that AI text detection tools misclassified 61.22% of TOEFL essays written by non-native English speakers as AI-generated. Multiple Canadian universities have adopted and subsequently reconsidered AI detection policies.
In August 2025, a Newfoundland and Labrador provincial education report was found to contain 15 or more citations to sources that do not exist, consistent with AI-generated text. The report's co-chairs — Memorial University professors — stated publicly that the fabricated citations were introduced by the provincial government after they submitted their draft, not by the original authors. The report was withdrawn for revisions.
Education is provincial jurisdiction in Canada. No pan-Canadian governance framework addresses AI use in educational institutions. The Council of Ministers of Education discussed AI's implications at its 112th meeting in June 2024; no coordinated policy has resulted. The Canadian Teachers' Federation published a policy brief in 2024 calling for regulation of AI in K-12 education, describing the legislative landscape as fragmented with no accountability mechanisms specific to AI in schools.
Harms
- AI proctoring software collected student biometric data (facial images, audio recordings, behavioral patterns) and used recordings to train AI without student consent
- Predictive dropout algorithm generated new personal information about grade-six children without parental notification
- Facial detection algorithm had a 57% non-recognition rate for Black faces in exam proctoring
- AI text detection tools misclassified ESL student writing as AI-generated at elevated rates
Evidence
7 reports
- McMaster's use of Respondus Monitor contravened FIPPA: inadequate notice, insufficient contractual safeguards, non-consensual use of student recordings for AI training
- UBC Senate restricted automated remote invigilation tools after independent research found a 57% facial detection non-recognition rate for Black faces; six faculties discontinued Proctorio
- AI text detectors misclassify ESL writing as AI-generated at elevated rates; ESL submissions up to 30% more likely to be flagged
- Quebec CAI found a school board's dropout-prediction algorithm produced new personal information constituting unconsented collection under public sector privacy law
- CTF called for regulation of AI in K-12 education; documented fragmented legislation and absent accountability mechanisms for AI in schools
- IPC guidance on AI privacy issues in Ontario universities following the McMaster/Respondus investigation
- Provincial education report co-chaired by Memorial University professors contained 15+ non-existent citations consistent with AI-generated text; report withdrawn
Record details
Policy Recommendations (assessed)
- Regulation of AI in K-12 education with accountability mechanisms (Canadian Teachers' Federation 2024 policy brief, Jan 1, 2024)
- Restriction of automated remote invigilation tools using algorithmic analysis (UBC Senate motions, Mar 1, 2021)
Editorial Assessment (assessed)
Education is a formative context: AI systems deployed in schools and universities shape academic outcomes, access to opportunity, and institutional trust. The documented cases span distinct harm types: biometric collection without consent (McMaster/Respondus), predictive profiling of children (Quebec school board), racially disparate error rates in monitoring tools (Proctorio at UBC), and linguistic bias in assessment tools (AI text detectors and ESL students). Each was identified through a separate provincial process. Because governance is fragmented across provinces, findings in one jurisdiction do not automatically inform practice in others, and students in different provinces face different levels of protection from the same categories of AI deployment.
Related Records
- AI Systems and Canadian Children: Documented Harms Without Applicable Governance Framework (related)
- AI Performance Disparities Affecting Canadian Linguistic and Cultural Communities (related)
- AI-Driven Cognitive Deskilling and Automation Over-Reliance (related)