AI is reshaping knowledge work by augmenting human roles rather than simply replacing them, forcing managers to rebalance autonomy, capability development and ethical governance; organizations confront trade-offs between decentralized human judgment and centralized orchestration as dependency on predictive and generative AI grows.
Purpose
This paper examines how artificial intelligence (AI) reshapes contemporary work by augmenting, rather than substituting, human roles, engaging explicitly with substitution, augmentation and co-evolutionary perspectives on AI and the future of work. It introduces the concept of augmented work agency to refine sociotechnical debates on agency, control and coordination in AI-mediated settings, and investigates how AI integration transforms managerial practices, workforce identities and organizational coordination within evolving infrastructures that combine predictive and generative AI.
Design/methodology/approach
A qualitative interpretivist research design was used, drawing on semi-structured interviews with 28 managers and professionals from 12 organizations across technology, finance and knowledge-intensive service sectors in Europe and Asia. Using thematic and interpretive analysis, supported by organizational document review, the study identifies patterns of adaptation in organizations implementing AI at strategic and operational levels, paying particular attention to experiences of technostress and anxiety and to the micro-political negotiation of AI tools in everyday work.
Findings
The study develops an emergent framework of AI–human co-adaptation, illustrating how cognitive, relational and structural changes accompany AI integration across three interrelated dimensions: technological alignment, cognitive calibration and ethical anchoring. It uncovers three central tensions (autonomy versus orchestration, capability versus dependency, and experimentation versus ethics) that collectively shape the evolving dynamics of AI-mediated work. These tensions condition how organizations navigate competing priorities while fostering productive human–AI collaboration, and how employees experience and contest AI integration through forms of individual and collective agency.
Originality/value
This paper advances understanding of augmented workforce design by introducing the concept of augmented work agency as a multi-level, interpretive form of human agency in algorithmically mediated environments, extending sociotechnical systems, algorithmic management and institutional-logics perspectives on agency, control and coordination. It conceptualizes AI as a co-evolving organizational capability rather than a deterministic technology and shows how augmented work agency is shaped by generative and non-generative AI applications, by employees' experiences of anxiety and technostress, and by the micro-politics through which teams and employee groups negotiate the boundaries of AI use and AI ethics. The study offers actionable insights for leaders seeking to balance innovation, capability development and ethical governance in AI-enabled workplaces while sustaining human interpretive authority, accountability and responsibility over time.
Summary
Main Finding
AI integration in contemporary workplaces primarily augments—not simply substitutes—human roles by producing a co-evolutionary process of AI–human adaptation. The paper introduces "augmented work agency" as a multi-level interpretive form of human agency in algorithmically mediated settings and shows that organizational outcomes of AI depend on technological alignment, cognitive calibration, and ethical anchoring. Three central tensions (autonomy vs orchestration; capability vs dependency; experimentation vs ethics) shape how organizations realize productivity, skill, and governance trade-offs.
Key Points
- Conceptual contribution:
  - Introduces "augmented work agency" to capture how individuals and groups negotiate, contest and exercise interpretive authority, accountability and responsibility in AI-mediated work.
  - Frames AI as a co-evolving organizational capability rather than a deterministic force; outcomes depend on managerial practices, workforce identities and institutional politics.
- Three interrelated dimensions of AI–human co-adaptation:
  - Technological alignment: matching AI affordances (predictive/generative) with workflows, interfaces, data pipelines and coordination practices.
  - Cognitive calibration: how workers adapt mental models, trust, confidence and routines to interpret and use AI outputs (including re-skilling and changing expertise boundaries).
  - Ethical anchoring: institutionalizing norms, rules and micro-political negotiations that delimit acceptable AI use, responsibility and accountability.
- Three central tensions that mediate outcomes:
  - Autonomy vs orchestration: trade-offs between worker discretion and centralized AI-driven coordination.
  - Capability vs dependency: gains in worker capability contrasted with risks of over-reliance and skill erosion.
  - Experimentation vs ethics: pressure to innovate quickly versus the need to embed ethical safeguards and governance.
- Affective and political dynamics:
  - Technostress, anxiety and micro-politics are core to how employees experience and contest AI; these dynamics affect uptake, productive use and the distribution of gains.
- Practical insight:
  - Leaders must balance innovation, capability development and ethical governance to sustain human interpretive authority over time.
Data & Methods
- Design: Qualitative interpretivist study.
- Sample: 28 semi-structured interviews with managers and professionals drawn from 12 organizations across technology, finance and knowledge-intensive service sectors in Europe and Asia.
- Complementary data: Organizational documents and artifacts reviewed to contextualize practices and policies.
- Analytical approach: Thematic and interpretive analysis to identify patterns of adaptation, experiences of technostress/anxiety, and micro-political negotiation of AI tools across strategic and operational levels.
- Output: An emergent framework of AI–human co-adaptation (technological alignment, cognitive calibration, ethical anchoring) and identification of three core tensions shaping AI-mediated work.
Implications for AI Economics
- Rethink substitution-centric models:
  - Standard models that treat AI as a pure labor substitute understate complementarities. Economic models should incorporate augmentation channels (productivity-enhancing complementarities between AI and human skills).
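One way to make this augmentation channel concrete is a CES production function, where the elasticity of substitution determines whether AI capital behaves as a substitute for or a complement to human labor. The sketch below is purely illustrative; all parameter values are hypothetical.

```python
# Illustrative CES production function contrasting AI as a near-substitute
# (rho close to 1) with AI as a strong complement (rho < 0).
# Elasticity of substitution: sigma = 1 / (1 - rho). Hypothetical parameters.

def ces_output(k_ai, l_human, alpha=0.4, rho=0.5, tfp=1.0):
    """CES output from AI capital k_ai and human labor l_human."""
    return tfp * (alpha * k_ai**rho + (1 - alpha) * l_human**rho) ** (1 / rho)

# Doubling AI capital while holding human labor fixed:
substitutes = ces_output(2.0, 1.0, rho=0.9)   # near-perfect substitutes
complements = ces_output(2.0, 1.0, rho=-2.0)  # strong complements

# Under complementarity, the gain from AI alone is capped by the scarce
# human input -- the augmentation channel binds on human skills.
print(f"substitutes: {substitutes:.3f}, complements: {complements:.3f}")
```

The comparison illustrates the modeling point in the bullet above: with strong complementarity, AI investment without matching human-skill investment yields much smaller output gains than a substitution-centric calibration would predict.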
- Labor demand and skill composition:
  - Expect heterogeneous changes in task content: greater demand for interpretive, coordination, governance and AI-supervisory skills; potential hollowing of routine cognitive tasks but not wholesale job loss.
  - Wage effects will depend on who captures augmented productivity (organizations, managers or workers) and on whether the relevant skills are scarce; incorporate bargaining and firm-level allocation mechanisms.
- Productivity measurement:
  - Productivity gains from AI are contingent on organizational alignment, training and governance, so cross-firm heterogeneity is likely to be large. Micro-level measures (task-level output, time use, error rates, decision quality) are important.
- Human capital investment:
  - Firms and workers face incentives to invest in cognitive calibration (training, reskilling). Models should include adjustment costs arising from technostress, learning curves and micro-political frictions.
- Organizational capital and governance as economic inputs:
  - Treat ethical anchoring and managerial practices as inputs (or constraints) affecting the returns to AI investment. Compliance, deliberation and slower experimentation can lower short-run adoption but preserve long-run value and legitimacy.
- Adoption dynamics and diffusion:
  - Adoption is shaped by affective responses, micro-political negotiation and sectoral institutional contexts, implying path dependence and non-linear diffusion. Include social and organizational frictions in diffusion models.
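Such frictions can be dropped into even the simplest diffusion model. The toy simulation below (hypothetical functional form and parameters) damps a logistic imitation process with a single friction term standing in for micro-political negotiation and affective resistance:

```python
# Toy logistic diffusion of AI adoption with an organizational "friction"
# parameter. The friction term and all parameter values are hypothetical,
# standing in for micro-political and affective resistance to adoption.

def diffuse(periods, speed=0.6, friction=0.0, seed_share=0.02):
    """Share of firms adopting each period; friction damps the imitation
    effect, producing slower, path-dependent uptake."""
    share = seed_share
    path = [share]
    for _ in range(periods):
        effective_speed = speed * (1 - friction)
        share += effective_speed * share * (1 - share)
        path.append(min(share, 1.0))
    return path

low_friction = diffuse(20, friction=0.1)
high_friction = diffuse(20, friction=0.6)
# High-friction settings lag well behind at every horizon, so observed
# adoption curves will differ across sectors even with identical technology.
```

Even this crude sketch reproduces the qualitative claim: two populations facing the same technology diverge persistently when organizational frictions differ.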
- Policy and regulation:
  - Policies that only subsidize AI capital without addressing governance, training and workplace voice may produce uneven outcomes. Support for workforce training, norms for accountability and measurement of AI impacts (including technostress) will influence aggregate welfare.
- Empirical suggestions for economists:
  - Data to collect: firm-level AI type (predictive vs generative), intensity of use, managerial coordination modes, training investments, measures of technostress/anxiety, work design changes, worker bargaining power, performance metrics and turnover.
  - Research designs: combine matched employer-employee panels, case studies, difference-in-differences around AI rollouts, instrumenting adoption with exogenous variation (e.g. vendor availability, policy shocks) and longitudinal qualitative follow-ups to capture co-adaptation dynamics.
  - Key outcomes to estimate: task reallocation, wage premiums/penalties by skill, productivity conditional on governance inputs, and heterogeneity of adoption returns across firm types and institutional contexts.
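As a sketch of the difference-in-differences design suggested above, the following simulates a firm panel with a known treatment effect and recovers it with the canonical 2x2 estimator. The data, rollout timing and effect size are all invented for illustration:

```python
import numpy as np

# Stylized difference-in-differences around an AI rollout: half the firms
# adopt at period t=5; the outcome is a productivity index. Synthetic data
# with a known true treatment effect of 2.0.

rng = np.random.default_rng(0)
n_firms, n_periods, rollout, true_effect = 200, 10, 5, 2.0

firm = np.repeat(np.arange(n_firms), n_periods)
period = np.tile(np.arange(n_periods), n_firms)
treated = (firm < n_firms // 2).astype(float)   # adopting firms
post = (period >= rollout).astype(float)
y = (
    rng.normal(size=firm.size) * 0.5            # idiosyncratic noise
    + 0.1 * period                              # common time trend
    + 0.5 * treated                             # pre-existing level gap
    + true_effect * treated * post              # treatment effect
)

# 2x2 DiD: (treated post - treated pre) - (control post - control pre).
# The common trend and the level gap difference out by construction.
did = (
    y[(treated == 1) & (post == 1)].mean() - y[(treated == 1) & (post == 0)].mean()
) - (
    y[(treated == 0) & (post == 1)].mean() - y[(treated == 0) & (post == 0)].mean()
)
print(f"DiD estimate: {did:.2f}")  # close to the true effect of 2.0
```

In real applications the adoption timing would come from rollout records and the identifying variation from the exogenous shocks listed above, rather than being assigned by construction.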
- Broader economic modeling:
  - Incorporate augmented work agency as a mechanism linking technology adoption to realized productivity and distributional outcomes. Modeling should allow firm-level endogenous governance and worker agency to shape the returns to AI investments.
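A minimal way to encode this mechanism, under entirely hypothetical functional forms, is to scale the return on AI capital by complementary governance and worker-agency inputs, so that identical AI spending yields heterogeneous realized productivity:

```python
# Minimal sketch (hypothetical functional forms): realized returns to AI
# investment depend on firm-level governance and worker-agency multipliers,
# so identical AI spending produces heterogeneous productivity outcomes.

def realized_return(ai_investment, governance, worker_agency):
    """Return to AI capital scaled by complementary organizational inputs
    (each in [0, 1]); without them, nominal AI capacity goes unrealized."""
    multiplier = governance * worker_agency
    return ai_investment ** 0.5 * multiplier

well_governed = realized_return(100, governance=0.9, worker_agency=0.8)
poorly_governed = realized_return(100, governance=0.3, worker_agency=0.4)
# Same AI spend, very different realized productivity.
```

Treating the multipliers as choice variables with their own investment costs would then make governance and agency endogenous, as the bullet above proposes.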
Overall, the paper argues that economists should model AI as an organizational capability whose returns depend critically on human–AI coordination, governance and affective responses, not merely on capital substitution. This implies richer microdata and richer models for predicting labor market and productivity effects.
Assessment
Claims (10)
| Claim | Topic | Direction | Confidence | Outcome | Details |
|---|---|---|---|---|---|
| AI reshapes contemporary work by augmenting, rather than substituting, human roles. | Job Displacement | positive | high | nature of human roles (augmentation vs substitution) | n=28; 0.18 |
| The paper introduces the concept of 'augmented work agency' as a multi-level, interpretive form of human agency in algorithmically mediated environments. | Organizational Efficiency | null_result | high | agency, control and coordination in algorithmic workplaces | n=28; 0.03 |
| AI integration transforms managerial practices, workforce identities and organizational coordination. | Organizational Efficiency | mixed | high | managerial practices, workforce identities, organizational coordination | n=28; 0.18 |
| The study develops an emergent framework of AI–human co-adaptation comprising three interrelated dimensions: technological alignment, cognitive calibration and ethical anchoring. | Organizational Efficiency | null_result | high | dimensions of AI–human co-adaptation | n=28; 0.18 |
| The analysis uncovers three central tensions shaping AI-mediated work: autonomy versus orchestration; capability versus dependency; and experimentation versus ethics. | Organizational Efficiency | mixed | high | tensions influencing dynamics of AI-mediated work | n=28; 0.18 |
| Employees experience technostress, anxiety and micro-political negotiation around AI tools in everyday work. | Worker Satisfaction | negative | high | technostress and anxiety among employees | n=28; 0.18 |
| AI should be conceptualized as a co-evolving organizational capability rather than a deterministic technology. | Organizational Efficiency | null_result | high | conceptual framing of AI within organizations | n=28; 0.03 |
| Augmented work agency is shaped by whether applications are generative or non-generative, by employees' experiences of anxiety and technostress, and by micro-politics through which teams negotiate AI use and AI ethics. | Organizational Efficiency | mixed | high | determinants shaping augmented work agency | n=28; 0.18 |
| The study offers actionable insights for leaders seeking to balance innovation, capability development and ethical governance in AI-enabled workplaces while sustaining human interpretive authority, accountability and responsibility over time. | Governance And Regulation | positive | high | guidance for leadership on balancing innovation and governance | n=28; 0.03 |
| The study used a qualitative interpretivist research design drawing on semi-structured interviews with 28 managers and professionals from 12 organizations across technology, finance and knowledge-intensive service sectors in Europe and Asia, using thematic and interpretive analysis supported by organizational document review. | Other | null_result | high | research design and sample characteristics | n=28; 0.3 |