Deployment of AI in Canadian educational institutions with documented harms to students
AI systems are being deployed in Canadian educational institutions for surveillance, predictive analytics, plagiarism detection, and assessment. Provincial investigations have found monitoring tools collecting biometric data without adequate consent, predictive algorithms generating new personal information about children without parental notification, and facial detection with a 57% non-recognition rate for Black faces. No pan-Canadian governance framework addresses AI in education.
AI systems are deployed in Canadian educational institutions for student monitoring, risk prediction, plagiarism detection, and assessment. Multiple provincial privacy investigations have issued findings concerning AI systems affecting students.
The Information and Privacy Commissioner of Ontario investigated McMaster University's use of Respondus Monitor, an AI-powered exam proctoring tool (PI21-00001, February 2024). The IPC found that notice to students about data collection purposes did not meet FIPPA requirements and that contractual safeguards were insufficient. Respondus used students' audio and video recordings — including through third-party researchers — to train its AI system without student consent. The IPC issued findings and recommendations.
Quebec's Commission d'accès à l'information investigated a school board (Centre de services scolaire du Val-des-Cerfs) that used an algorithmic tool to predict grade-six students' dropout risk. The Commission found that the tool produced new personal information — predictive dropout indicators — constituting a collection of personal information under Quebec's public sector privacy law. The school board had not informed parents about the use of their children's data for predictive scoring.
The University of British Columbia's Vancouver and Okanagan Senates passed motions in March 2021 restricting automated remote invigilation tools using algorithmic analysis. Independent research by Lucy Satheesan (reported by VICE Motherboard, April 2021) found that Proctorio's facial detection algorithm had a 57% non-recognition rate for Black faces. The UBC Teaching and Learning Committee cited racial discrimination concerns. Six faculties discontinued Proctorio.
Research published by Stanford University found that AI text detection tools misclassified 61.22% of TOEFL essays written by non-native English speakers as AI-generated. Multiple Canadian universities have adopted and subsequently reconsidered AI detection policies.
In August 2025, a Newfoundland and Labrador provincial education report was found to contain 15 or more citations to sources that do not exist, consistent with AI-generated text. The report's co-chairs — Memorial University professors — stated publicly that the fabricated citations were introduced by the provincial government after they submitted their draft, not by the original authors. The report was withdrawn for revisions.
Education is provincial jurisdiction in Canada. No pan-Canadian governance framework addresses AI use in educational institutions. The Council of Ministers of Education discussed AI's implications at its 112th meeting in June 2024; no coordinated policy has resulted. The Canadian Teachers' Federation published a policy brief in 2024 calling for regulation of AI in K-12 education, describing the legislative landscape as fragmented with no accountability mechanisms specific to AI in schools.
Harms
- AI-powered monitoring software collected students' biometric data (facial images, audio recordings, behavioural patterns) and used the recordings to train its AI without student consent
- A predictive dropout algorithm generated new personal information about grade-six children without parental notification
- A facial detection algorithm used in exam proctoring had a 57% non-recognition rate for Black faces
- AI text detection tools misclassified ESL students' writing as AI-generated at elevated rates
Evidence
7 reports
- IPC Decision PI21-00001: McMaster University / Respondus Monitor (primary source)
McMaster's use of Respondus Monitor contravened FIPPA: inadequate notice, insufficient contractual safeguards, non-consensual use of student recordings for AI training
- UBC Senate restricted automated remote invigilation tools; Teaching and Learning Committee found a 57% facial detection non-recognition rate for Black faces; six faculties discontinued Proctorio
- AI text detectors misclassify ESL writing as AI-generated at elevated rates; ESL submissions up to 30% more likely to be flagged
- Quebec CAI found a school board dropout prediction algorithm produced new personal information constituting unconsented collection under public sector privacy law
- CTF called for regulation of AI in K-12 education; documented fragmented legislation and absent accountability mechanisms for AI in schools
- IPC guidance on AI privacy issues in Ontario universities following the McMaster/Respondus investigation
- Provincial education report co-chaired by Memorial University professors contained 15+ non-existent citations consistent with AI-generated text; report withdrawn
Record details
Policy recommendations (assessed)
- Regulation of AI in K-12 education with accountability mechanisms. Source: Canadian Teachers' Federation, 2024 policy brief (Jan. 1, 2024)
- Restriction of automated remote invigilation tools using algorithmic analysis. Source: UBC Senate motions (March 2021)
Editorial assessment (assessed)
Education is a formative context: AI systems deployed in schools and universities shape academic outcomes, access to opportunity, and institutional trust. The documented cases span distinct harm types: biometric collection without consent (McMaster/Respondus), predictive profiling of children (the Quebec school service centre), racially disparate error rates in proctoring tools (Proctorio at UBC), and linguistic bias in assessment tools (AI text detectors and ESL students). Each was identified through a separate provincial process. Governance fragmentation across provinces means that findings in one jurisdiction do not automatically inform practice in the others.
Related records
- AI Systems and Canadian Children: Documented Harms Without Applicable Governance Framework
- AI Performance Disparities Affecting Canadian Linguistic and Cultural Communities
- AI-Driven Cognitive Deskilling and Automation Over-Reliance