Pilot phase: CAIM is under construction. Cards are provisional, based on public sources, and have not yet been peer-reviewed. Comments welcome.
Active · Major · Confidence: medium

Routine AI use is associated with measurable declines in critical thinking, professional competence, and error detection — effects that could undermine the human oversight on which AI governance depends.

Identified: 1 June 2023 · Last assessed: 12 March 2026

Emerging evidence indicates that routine use of AI systems for cognitive tasks can degrade users' critical thinking skills, professional competence, and ability to detect errors — a pattern described as "cognitive deskilling."

In one clinical study, clinicians who used AI-assisted colonoscopy for several months showed approximately 6 percentage points lower adenoma detection rates when the AI assistance was removed, compared to their baseline before AI exposure. The finding suggests that sustained reliance on AI support can erode professional skills that are essential when the AI is unavailable or incorrect.

A study of 666 participants found that heavier AI tool use was associated with lower self-assessed critical thinking, mediated by cognitive offloading — the tendency to delegate cognitive work to external systems rather than engaging with it directly. While cognitive offloading can improve efficiency, the research suggests it may come at the cost of maintaining the reasoning skills that underpin autonomous decision-making.

Automation bias — the tendency to over-rely on automated outputs while discounting contradictory information — compounds these effects. In a randomized experiment with 2,784 participants, people were significantly less likely to correct erroneous AI suggestions when doing so required extra effort or when they held favorable attitudes toward AI. This pattern has been documented across domains, from aviation monitoring to medical diagnostics.

The phenomenon extends to everyday AI use. In a study of 1,506 participants, those who used an opinionated AI writing assistant had both the opinions expressed in their text and their own subsequently reported opinions shifted toward those suggested by the model — often without realizing the shift had occurred. More broadly, analysis of ChatGPT usage patterns shows that a large share of interactions involve cognitively demanding activities such as writing, problem-solving, and information-seeking — precisely the tasks where delegation to AI risks skill atrophy.

The Canadian implications are significant. Health Canada has issued guidance on AI as a medical device, but does not address the deskilling risks to clinicians who become dependent on AI-assisted diagnosis. The TBS Directive on Automated Decision-Making governs federal AI use but does not require monitoring of public servants' decision-making competence over time. Canadian educational institutions are rapidly integrating AI tools without systematic assessment of effects on student learning and skill development. If AI systems become unreliable or are withdrawn, a deskilled workforce may lack the competence to compensate.

Harms

A clinical study found that clinicians who had used AI-assisted colonoscopy for several months showed adenoma detection rates roughly 6 percentage points lower once the AI assistance was withdrawn, suggesting that sustained AI use can degrade professional diagnostic skills.

Cognitive deskilling · Moderate · Sector

Automation bias leads users to accept AI outputs uncritically, even when they contain errors. This creates a self-reinforcing cycle: as users practise less, their ability to detect AI errors declines, increasing their dependence on the very system that is degrading their skills.

Cognitive deskilling · Safety incident · Major · Population

Evidence

6 reports

  1. Official — International AI Safety Report (1 June 2026)

    Comprehensive evidence review of cognitive deskilling and automation over-reliance risks from general-purpose AI. Documents clinical study on clinician skill degradation (~6 percentage points in adenoma detection), critical thinking correlation study (n=666), automation bias experiment (n=2,784), and AI writing influence study.

  2. Academic — ACM CHI 2023 (Jakesch et al.) (23 Apr 2023)

    Randomized experiment with 1,506 participants finding that those who used an opinionated AI writing assistant had both the opinions expressed in their text and their own subsequently reported opinions shifted toward those suggested by the model. Participants were largely unaware of the opinion shift.

  3. Academic — Societies (MDPI) — Gerlich (1 Jan 2025)

    Mixed-method study of 666 participants finding that heavier AI tool use was associated with lower self-assessed critical thinking, mediated by cognitive offloading — the tendency to delegate cognitive work to external systems rather than engaging with it directly.

  4. Academic — NBER (Chatterji et al.) (1 June 2025)

    Analysis of ChatGPT usage patterns based on ~18 billion weekly messages from ~700 million users. Finds that cognitively demanding activities — writing, problem-solving, information-seeking — constitute a large share of interactions, precisely the domains where delegation to AI risks skill atrophy.

  5. Academic — Lancet Gastroenterology & Hepatology (13 Aug 2025)

    Multicentre observational study finding that endoscopists' adenoma detection rate declined by approximately 6 percentage points (from ~28% to ~22%) for colonoscopies performed without AI assistance, after the introduction of AI-assisted colonoscopy. Evidence of clinician deskilling through dependence on AI decision support.

  6. Academic — arXiv (Beck, Eckman, Kern, Kreuter) (10 Sept 2025)

    Randomized experiment with 2,784 participants finding that requiring corrections for flagged AI errors reduced engagement and increased the tendency to accept incorrect suggestions. Individual attitudes toward AI were the strongest predictor of performance: skeptical evaluators detected errors more effectively, while those favorable toward AI showed overreliance on automated suggestions.

Card details

Policy recommendations (assessed)

Mandatory post-deployment monitoring of human decision-making competence in safety-critical domains where AI is deployed

International AI Safety Report 2026 (1 June 2026)

Periodic competency testing for professionals who routinely use AI decision-support, assessing performance both with and without AI assistance

International AI Safety Report 2026 (1 June 2026)

AI literacy programs that teach effective AI use while maintaining independent reasoning skills

International AI Safety Report 2026 (1 June 2026)

Design requirements for AI systems in professional settings to include periodic user engagement prompts that counteract automation bias

International AI Safety Report 2026 (1 June 2026)

Editorial assessment (assessed)

Studies document a loss of clinicians' accuracy in adenoma detection after months of AI-assisted colonoscopy, and lower critical-thinking scores among AI users. In a randomized experiment, participants failed to correct AI errors when correction required effort. As AI tools spread through Canadian health care, public services, and education, deskilling risks producing a population less able to detect AI failures — precisely when oversight matters most. No Canadian regulatory framework addresses this risk.

Entities involved

Related cards

Taxonomy (assessed)

Domain
Health · Education · Public services
Harm type
Cognitive deskilling · Compromised autonomy · Safety incident
AI contribution pathway
Deployment context · Absent oversight
Lifecycle phase
Deployment · Monitoring

Change history

v1 (12 March 2026): Initial publication. Hazard identified through gap analysis against IASR 2026 Chapter 2.3.2 (Risks to human autonomy).
v2 (12 March 2026): Corrected all report references against verified sources. Fixed 5 of 6 reports: Lancet endoscopy study (not Nature Medicine radiology — DOI fabricated), Gerlich/MDPI Societies (not Thinking Skills and Creativity — DOI pointed to wrong paper), Beck et al. arXiv automation bias study (not CHI — DOI pointed to different paper), Jakesch et al. CHI 2023 co-writing study (not Science — DOI fabricated), NBER w34255 ChatGPT usage (not w33894 which is about gas tax). Corrected narrative: radiologists→clinicians, tumours→adenomas, colonoscopy context. Completed FR narrative (added missing final paragraph) and harm_mechanism_fr. Removed sycophantic_output from ai_pathways. Added TBS and Health Canada entity linkages. Populated ai_involvement.

Version 2