The Commonplace

AI accelerates clinicians’ work in the short run but slowly dulls diagnostic intuition, causing skill atrophy and identity commoditization among cancer specialists; the authors propose a sociotechnical framework to detect and reverse such erosion while preserving human expertise.

From Future of Work to Future of Workers: Addressing Asymptomatic AI Harms to Foster Dignified Human-AI Interaction
Upol Ehsan, Samir Passi, Koustuv Saha, Todd McNutt, Mark Riedl, Sara R. Alcorn · April 13, 2026
Source: OpenAlex · Type: descriptive · Evidence strength: low · Relevance: 7/10 · DOI · Source · PDF
A year-long study of cancer specialists finds that AI yields immediate productivity gains but produces gradual 'intuition rust'—erosion of expert judgment, skill atrophy, and identity commoditization—prompting a proposed sociotechnical framework to preserve expertise and worker dignity.

In the future of work discourse, AI is touted as the ultimate productivity amplifier. Yet, beneath the efficiency gains lie subtle erosions of human expertise and agency. This paper shifts focus from the future of work to the future of workers by navigating the AI-as-Amplifier Paradox: AI’s dual role as enhancer and eroder, simultaneously strengthening performance while eroding underlying expertise. We present a year-long study on the longitudinal use of AI in a high-stakes workplace among cancer specialists. Initial operational gains hid “intuition rust”: the gradual dulling of expert judgment. These asymptomatic effects evolved into chronic harms, such as skill atrophy and identity commoditization. Building on these findings, we offer a framework for dignified Human-AI interaction co-constructed with professional knowledge workers facing AI-induced skill erosion without traditional labor protections. The framework operationalizes sociotechnical immunity through dual-purpose mechanisms that serve institutional quality goals while building worker power to detect, contain, and recover from skill erosion, and preserve human identity. Evaluated across healthcare and software engineering, our work takes a foundational step toward dignified human-AI interaction futures by balancing productivity with the preservation of human expertise.

Summary

Main Finding

AI acts as an “amplifier” that simultaneously enhances short‑term performance and erodes the tacit skills, judgment, and professional identity that underlie long‑run human expertise. In a year‑long study of AI use by cancer specialists, initial productivity and operational gains concealed a slow, asymptomatic decline in clinicians’ intuition and decision skills (“intuition rust”), which progressed into chronic harms (skill atrophy, identity commoditization). The authors propose a sociotechnical framework for “dignified Human–AI interaction” that embeds dual‑purpose mechanisms to preserve institutional quality while empowering workers to detect, contain, and recover from AI‑induced skill erosion.

Key Points

  • AI‑as‑Amplifier Paradox: AI magnifies both performance and underlying vulnerabilities — it can boost throughput while weakening the human capabilities that sustain high‑stakes judgments.
  • Intuition rust: Skill degradation can be gradual and asymptomatic; users may not notice loss of tacit judgment until harms manifest.
  • Chronic harms identified:
    • Skill atrophy: erosion of domain expertise and pattern recognition built through practice.
    • Identity commoditization: professionals’ roles become standardized and commodified around AI outputs, undermining autonomy and professional dignity.
  • Sociotechnical immunity: The proposed remedy is a framework that integrates institutional quality goals with worker empowerment — mechanisms that simultaneously maintain safety/performance and preserve human expertise and agency.
  • Cross‑domain relevance: The authors evaluated and illustrated the framework in healthcare (oncology) and software engineering, suggesting the paradox and mitigation strategies generalize across high‑skill domains.

Data & Methods

  • Study design: A longitudinal investigation spanning one year focused on AI use in a high‑stakes clinical setting (cancer specialists). The paper traces behavioral and skill changes that emerged over time with routine AI integration.
  • Evidence types (as reported):
    • Longitudinal observations of workplace AI usage patterns.
    • Qualitative data documenting clinicians’ decision processes, self‑reports, and evolving interactions with AI tools.
    • Analytic comparison of outcomes, workflows, and professional practices before and after sustained AI adoption.
    • Cross‑domain evaluation applying the proposed framework to healthcare and software engineering contexts to test transferability.
  • Analytic approach: The authors synthesize qualitative longitudinal evidence to surface latent, asymptomatic effects (intuition rust), characterize downstream harms, and derive design and institutional interventions; they then map these interventions to dual technical and organizational mechanisms.

Note: The paper emphasizes longitudinal, qualitative insight into tacit skill dynamics rather than short‑term productivity metrics alone.

Implications for AI Economics

  • Human capital depreciation risk: Widespread AI adoption may temporarily raise measured productivity while reducing the accumulation and maintenance of tacit human capital, lowering long‑run labor quality and increasing dependence on AI systems.
  • Misleading productivity signals: Firms and markets that reward near‑term throughput gains risk underinvesting in practices that sustain worker expertise; measured GDP or firm performance gains could mask growing systemic fragility.
  • Labor bargaining and rent allocation: As expertise erodes and roles become commodified, workers’ bargaining power may decline, altering wage dynamics and rent distribution between capital (AI owners) and labor.
  • Task allocation and skill complementarities: The paradox complicates models of automation versus augmentation — tasks that appear complementary in the short run can become substitutes over longer horizons as human capacity atrophies.
  • Externalities and safety: Tacit‑skill erosion creates negative externalities (quality degradation, increased systemic risk) that markets may not internalize, signaling a role for regulation or industry standards.
  • Policy and firm responses:
    • Monitor tacit skill trajectories: Invest in metrics and audits that detect skill erosion (not just output).
    • Dual‑purpose interventions: Design AI deployment rules that simultaneously preserve quality and encourage skill maintenance (e.g., rotation, human oversight requirements, periodic unguided decision tasks, documentation and knowledge retention incentives).
    • Labor protections and co‑governance: Build institutional mechanisms (worker participation, review boards, retraining mandates, compensation for maintaining non‑AI skills) to preserve worker agency and long‑run human capital.
    • Rethink incentives: Align firm incentives with long‑run competence maintenance — e.g., performance metrics that reward human expertise preservation and resilience.
  • Research agenda: Develop quantitative measures of tacit skill erosion, model the long‑run macroeconomic impacts of AI‑induced human capital depreciation, and experimentally evaluate sociotechnical interventions across sectors.
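The "monitor tacit skill trajectories" recommendation above can be sketched concretely. The following is a minimal, hypothetical illustration (not from the paper): it compares a clinician's accuracy on periodic unaided spot-check exercises against a pre-adoption baseline and flags a decline beyond a chosen threshold. All names, scores, and the 10% threshold are illustrative assumptions.

```python
from statistics import mean

def detect_skill_erosion(baseline_scores, recent_scores, drop_threshold=0.10):
    """Flag possible skill erosion when mean unaided performance falls more
    than `drop_threshold` (as a fraction) below the baseline mean.
    Function name, scores, and threshold are illustrative assumptions."""
    base = mean(baseline_scores)
    recent = mean(recent_scores)
    drop = (base - recent) / base  # fractional decline relative to baseline
    return drop > drop_threshold, drop

# Hypothetical accuracy on quarterly unguided diagnostic exercises
baseline = [0.92, 0.90, 0.93, 0.91]  # before sustained AI adoption
recent = [0.86, 0.82, 0.80, 0.78]    # after a year of AI-assisted work

flagged, drop = detect_skill_erosion(baseline, recent)
print(flagged, round(drop, 2))  # → True 0.11
```

In practice such audits would need richer measures (calibration, case-mix adjustment, inter-rater checks), but even a simple baseline comparison makes skill trajectories visible alongside ordinary output metrics.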

Overall, the paper reframes AI’s economic role: beyond productivity amplification, AI reshapes the dynamics of human capital accumulation, agency, and bargaining — requiring coordinated sociotechnical and policy responses to avoid chronic harms that undermine long‑run economic value.

Assessment

Paper Type: descriptive
Evidence Strength: low — Findings come from a year-long qualitative/ethnographic study and cross-domain evaluation rather than from experimental or quasi-experimental identification; results are plausibly causal but not established with counterfactuals, statistical controls, or representative sampling.
Methods Rigor: medium — Longitudinal engagement in a high-stakes setting and mixed qualitative methods (observations, interviews, artifact analysis) provide rich temporal evidence of skill change, but the paper lacks (or does not report) systematic, generalizable sampling, quantitative measurement of skill decline, and rigorous controls for alternative explanations.
Sample: A year-long longitudinal qualitative study of cancer specialists using AI in a high-stakes clinical setting, supplemented by cross-domain (healthcare and software engineering) evaluation and a co‑constructed framework with professional knowledge workers; methods appear to include interviews, workplace observation, and artifact/workflow analysis (no representative survey or large administrative dataset reported).
Themes: human_ai_collab, skills_training
Generalizability:
  • Single-profession focus (cancer specialists) limits transferability to other occupations.
  • High-stakes clinical context may amplify effects compared with lower-stakes settings.
  • Unknown or small, non-representative sample size and site(s).
  • Findings are qualitative and contextual rather than statistically generalizable.
  • Variation in AI tools, integration modes, and organizational supports could change outcomes.
  • One-year horizon may not capture longer-term adaptation or recovery.

Claims (9)

Format: claim — Outcome · Direction · Confidence · Details (score)

  • AI plays a dual role as enhancer and eroder, simultaneously strengthening performance while eroding underlying expertise (the 'AI-as-Amplifier Paradox'). — Skill Obsolescence · mixed · high · preservation of underlying expertise vs. short-term performance (0.18)
  • We conducted a year-long longitudinal study of AI use in a high-stakes workplace among cancer specialists. — Other · null_result · high · longitudinal usage and effects of AI among clinicians (0.18)
  • Initial operational gains from AI use masked a phenomenon called 'intuition rust' — a gradual dulling of expert judgment. — Decision Quality · negative · high · expert judgment (intuition/clinical reasoning) (0.09)
  • Asymptomatic effects of AI use evolved into chronic harms such as skill atrophy and identity commoditization among workers. — Skill Obsolescence · negative · high · skill atrophy and worker identity commoditization (0.09)
  • We offer a framework for dignified Human-AI interaction co-constructed with professional knowledge workers facing AI-induced skill erosion without traditional labor protections. — Governance And Regulation · positive · high · design of human-AI interaction frameworks to mitigate skill erosion and protect workers (0.03)
  • The framework operationalizes 'sociotechnical immunity' via dual-purpose mechanisms that both serve institutional quality goals and build worker power to detect, contain, and recover from skill erosion while preserving human identity. — Governance And Regulation · positive · high · mechanisms for detection/containment/recovery from skill erosion and preservation of identity (0.03)
  • The proposed framework was evaluated across healthcare and software engineering. — Adoption Rate · null_result · medium · cross-domain evaluation (applicability/generalizability) of the framework (0.05)
  • AI delivers initial operational/productivity gains in high-stakes work settings. — Organizational Efficiency · positive · high · operational gains / productivity (0.09)
  • This work takes a foundational step toward dignified human-AI interaction futures by balancing productivity with the preservation of human expertise. — Skill Obsolescence · positive · high · balance between productivity and preservation of expertise (0.03)
