AI-Driven Cognitive Deskilling and Automation Over-Reliance
Routine AI use is associated with measurable declines in critical thinking, professional competence, and error detection — effects that may undermine the human oversight AI governance depends on.
Emerging evidence indicates that routine use of AI systems for cognitive tasks can degrade users' critical thinking skills, professional competence, and ability to detect errors — a pattern described as "cognitive deskilling."
In one clinical study, clinicians who used AI-assisted colonoscopy for several months showed adenoma detection rates approximately 6 percentage points below their pre-AI baseline when performing colonoscopies without AI assistance. The finding suggests that sustained reliance on AI support can erode professional skills that remain essential when the AI is unavailable or incorrect.
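For scale, note that the absolute and relative sizes of this decline differ. Taking the approximate figures from the underlying study (detection rates falling from about 28% to about 22%; see the Evidence section below), the 6 percentage point absolute drop corresponds to a relative decline of roughly one fifth:

\[
\text{relative decline} \approx \frac{28\% - 22\%}{28\%} \approx 21\%
\]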
A study of 666 participants found that heavier AI tool use was associated with lower self-assessed critical thinking, mediated by cognitive offloading — the tendency to delegate cognitive work to external systems rather than engaging with it directly. While cognitive offloading can improve efficiency, the research suggests it may come at the cost of maintaining the reasoning skills that underpin autonomous decision-making.
Automation bias, the tendency to over-rely on automated outputs while discounting contradictory information, compounds these effects. In a randomized experiment with 2,784 participants, subjects were significantly less likely to correct erroneous AI suggestions when doing so required extra effort or when they held favorable attitudes toward AI. This pattern has been documented across domains, from aviation monitoring to medical diagnostics.
The phenomenon extends to everyday AI use. In a study of 1,506 participants, those who used an opinionated AI writing assistant shifted both the opinions expressed in their text and their own subsequently reported opinions toward the stance suggested by the model, often without realizing the shift had occurred. More broadly, analysis of ChatGPT usage patterns shows that a large share of interactions involve cognitively demanding activities such as writing, problem-solving, and information-seeking, precisely the tasks where delegation to AI risks skill atrophy.
The Canadian implications are significant. Health Canada has issued guidance on AI as a medical device, but does not address the deskilling risks to clinicians who become dependent on AI-assisted diagnosis. The TBS Directive on Automated Decision-Making governs federal AI use but does not require monitoring of public servants' decision-making competence over time. Canadian educational institutions are rapidly integrating AI tools without systematic assessment of effects on student learning and skill development. If AI systems become unreliable or are withdrawn, a deskilled workforce may lack the competence to compensate.
Harms
A clinical study found that clinicians who used AI-assisted colonoscopy for several months showed adenoma detection rates approximately 6 percentage points lower in procedures performed without AI assistance, suggesting that sustained AI use can degrade professional diagnostic skill.
Automation bias leads users to accept AI outputs uncritically, even when those outputs contain errors. This creates a reinforcing cycle: as users practice less, their ability to catch AI errors declines, increasing dependence on the system that is degrading their skill.
Evidence
6 reports
- International AI Safety Report 2026 — Chapter 2: Risks (primary source)
Comprehensive evidence review of cognitive deskilling and automation over-reliance risks from general-purpose AI. Documents clinical study on clinician skill degradation (~6 percentage points in adenoma detection), critical thinking correlation study (n=666), automation bias experiment (n=2,784), and AI writing influence study.
- Jakesch et al., CHI 2023 co-writing study
Randomized experiment with 1,506 participants finding that those who used an opinionated AI writing assistant had both the opinions expressed in their text and their own subsequently reported opinions shifted toward those suggested by the model. Participants were largely unaware of the opinion shift.
- Gerlich, MDPI Societies
Mixed-method study of 666 participants finding that heavier AI tool use was associated with lower self-assessed critical thinking, mediated by cognitive offloading — the tendency to delegate cognitive work to external systems rather than engaging with it directly.
- NBER Working Paper w34255 (ChatGPT usage)
Analysis of ChatGPT usage patterns based on ~18 billion weekly messages from ~700 million users. Finds that cognitively demanding activities — writing, problem-solving, information-seeking — constitute a large share of interactions, precisely the domains where delegation to AI risks skill atrophy.
- Lancet endoscopy study
Multicentre observational study finding that endoscopists' adenoma detection rate declined by approximately 6 percentage points (from ~28% to ~22%) for colonoscopies performed without AI assistance, after the introduction of AI-assisted colonoscopy. Evidence of clinician deskilling through dependence on AI decision support.
- Beck et al., arXiv automation bias study
Randomized experiment with 2,784 participants finding that requiring corrections for flagged AI errors reduced engagement and increased the tendency to accept incorrect suggestions. Individual attitudes toward AI were the strongest predictor of performance: skeptical evaluators detected errors more effectively, while those favorable toward AI showed overreliance on automated suggestions.
Policy Recommendations
- Mandatory post-deployment monitoring of human decision-making competence in safety-critical domains where AI is deployed (International AI Safety Report 2026, Jun 1, 2026)
- Periodic competency testing for professionals who routinely use AI decision-support, assessing performance both with and without AI assistance (International AI Safety Report 2026, Jun 1, 2026)
- AI literacy programs that teach effective AI use while maintaining independent reasoning skills (International AI Safety Report 2026, Jun 1, 2026)
- Design requirements for AI systems in professional settings to include periodic user engagement prompts that counteract automation bias (International AI Safety Report 2026, Jun 1, 2026)
Editorial Assessment
Studies document clinicians losing adenoma detection accuracy after months of AI-assisted colonoscopy, and AI users scoring lower on critical thinking measures. In a randomized experiment, people failed to correct AI errors when correction required effort. As AI tools spread through Canadian healthcare, public services, and education, deskilling risks creating a population less capable of detecting AI failures — precisely when oversight matters most. No Canadian regulatory framework addresses this.
Entities Involved
- Health Canada
- Treasury Board of Canada Secretariat (TBS)
Related Records
- AI Confabulation in Consequential Canadian Contexts
- AI Deployment in Canadian Educational Institutions with Documented Harms to Students
- Clinical AI Systems in Canada: Deployed with Documented Evidence Gaps and Privacy Violations
Taxonomy
Changelog
| Version | Date | Change |
|---|---|---|
| v1 | Mar 12, 2026 | Initial publication. Hazard identified through gap analysis against IASR 2026 Chapter 2.3.2 (Risks to human autonomy). |
| v2 | Mar 12, 2026 | Corrected all report references against verified sources. Fixed 5 of 6 reports: Lancet endoscopy study (not Nature Medicine radiology — DOI fabricated), Gerlich/MDPI Societies (not Thinking Skills and Creativity — DOI pointed to wrong paper), Beck et al. arXiv automation bias study (not CHI — DOI pointed to different paper), Jakesch et al. CHI 2023 co-writing study (not Science — DOI fabricated), NBER w34255 ChatGPT usage (not w33894 which is about gas tax). Corrected narrative: radiologists→clinicians, tumours→adenomas, colonoscopy context. Completed FR narrative (added missing final paragraph) and harm_mechanism_fr. Removed sycophantic_output from ai_pathways. Added TBS and Health Canada entity linkages. Populated ai_involvement. |