The Commonplace

AI is reshaping knowledge work by augmenting human roles rather than simply replacing them, forcing managers to rebalance autonomy, capability development and ethical governance; organizations confront trade-offs between decentralized human judgment and centralized orchestration as dependency on predictive and generative AI grows.

Reimagining work in the age of intelligent automation: a qualitative inquiry into AI-augmented workforce dynamics and managerial redesign
Isabel Barbosa, Elizabeth Real de Oliveira · May 13, 2026 · Journal of Modelling in Management
Source: OpenAlex · Paper type: descriptive · Evidence: low · Relevance: 7/10 · DOI
Based on interviews across 12 firms, the paper argues that AI integration drives co-adaptive changes (technological alignment, cognitive calibration and ethical anchoring) that produce an "augmented work agency" and surface tensions of autonomy versus orchestration, capability versus dependency, and experimentation versus ethics.

Purpose: This paper examines how artificial intelligence (AI) reshapes contemporary work by augmenting, rather than substituting, human roles, engaging explicitly with substitution, augmentation and co-evolutionary perspectives on AI and the future of work. It introduces the concept of augmented work agency to refine sociotechnical debates on agency, control and coordination in AI-mediated settings, and investigates how AI integration transforms managerial practices, workforce identities and organizational coordination within evolving infrastructures that combine predictive and generative AI.

Design/methodology/approach: A qualitative interpretivist research design was used, drawing on semi-structured interviews with 28 managers and professionals from 12 organizations across technology, finance and knowledge-intensive service sectors in Europe and Asia. Using thematic and interpretive analysis, supported by organizational document review, the study identifies patterns of adaptation in organizations implementing AI at strategic and operational levels, paying particular attention to experiences of technostress, anxiety and the micro-political negotiation of AI tools in everyday work.

Findings: The study develops an emergent framework of AI–human co-adaptation, illustrating how cognitive, relational and structural changes accompany AI integration across three interrelated dimensions: technological alignment, cognitive calibration and ethical anchoring. It uncovers three central tensions (autonomy versus orchestration, capability versus dependency, and experimentation versus ethics) that collectively shape the evolving dynamics of AI-mediated work, condition how organizations navigate competing priorities while fostering productive human–AI collaboration, and frame how employees experience and contest AI integration through forms of individual and collective agency.

Originality/value: This paper advances understanding of augmented workforce design by introducing the concept of augmented work agency as a multi-level, interpretive form of human agency in algorithmically mediated environments, extending sociotechnical systems, algorithmic management and institutional-logics perspectives on agency, control and coordination. It conceptualizes AI as a co-evolving organizational capability rather than a deterministic technology and shows how augmented work agency is shaped by generative and non-generative AI applications, employees' experiences of anxiety and technostress, and the micro-politics through which teams and employee groups negotiate the boundaries of AI use and AI ethics. The study offers actionable insights for leaders seeking to balance innovation, capability development and ethical governance in AI-enabled workplaces while sustaining human interpretive authority, accountability and responsibility over time.

Summary

Main Finding

AI integration in contemporary workplaces primarily augments—not simply substitutes—human roles by producing a co-evolutionary process of AI–human adaptation. The paper introduces "augmented work agency" as a multi-level interpretive form of human agency in algorithmically mediated settings and shows that organizational outcomes of AI depend on technological alignment, cognitive calibration, and ethical anchoring. Three central tensions (autonomy vs orchestration; capability vs dependency; experimentation vs ethics) shape how organizations realize productivity, skill, and governance trade-offs.

Key Points

  • Conceptual contribution:
    • Introduces "augmented work agency" to capture how individuals and groups negotiate, contest and exercise interpretive authority, accountability and responsibility in AI-mediated work.
    • Frames AI as a co-evolving organizational capability rather than a deterministic force; outcomes depend on managerial practices, workforce identities and institutional politics.
  • Three interrelated dimensions of AI–human co-adaptation:
    • Technological alignment: matching AI affordances (predictive/generative) with workflows, interfaces, data pipelines and coordination practices.
    • Cognitive calibration: how workers adapt mental models, trust, confidence and routines to interpret and use AI outputs (including re-skilling and changing expertise boundaries).
    • Ethical anchoring: institutionalizing norms, rules and micro-political negotiations that delimit acceptable AI use, responsibility and accountability.
  • Three central tensions that mediate outcomes:
    • Autonomy vs orchestration: trade-offs between worker discretion and centralized AI-driven coordination.
    • Capability vs dependency: gains in worker capability contrasted with risks of over-reliance and skill erosion.
    • Experimentation vs ethics: pressure to innovate quickly versus need to embed ethical safeguards and governance.
  • Affective and political dynamics:
    • Technostress, anxiety and micro-politics are core to how employees experience and contest AI—these affect uptake, productive use and the distribution of gains.
  • Practical insight:
    • Leaders must balance innovation, capability development and ethical governance to sustain human interpretive authority over time.

Data & Methods

  • Design: Qualitative interpretivist study.
  • Sample: 28 semi-structured interviews with managers and professionals drawn from 12 organizations across technology, finance and knowledge-intensive service sectors in Europe and Asia.
  • Complementary data: Organizational documents and artifacts reviewed to contextualize practices and policies.
  • Analytical approach: Thematic and interpretive analysis to identify patterns of adaptation, experiences of technostress/anxiety, and micro-political negotiation of AI tools across strategic and operational levels.
  • Output: An emergent framework of AI–human co-adaptation (technological alignment, cognitive calibration, ethical anchoring) and identification of three core tensions shaping AI-mediated work.

Implications for AI Economics

  • Rethink substitution-centric models:
    • Standard models that treat AI as a pure labor substitute understate complementarities. Economic models should incorporate augmentation channels (productivity-enhancing complementarities between AI and human skills).
  • Labor demand and skill composition:
    • Expect heterogeneous changes in task content—greater demand for interpretive, coordination, governance and AI-supervisory skills; potential hollowing of routine cognitive tasks but not wholesale job loss.
    • Wage effects will depend on who captures augmented productivity (organizations, managers, or workers) and whether skills are scarce; incorporate bargaining and firm-level allocation mechanisms.
  • Productivity measurement:
    • Productivity gains from AI are contingent on organizational alignment, training, and governance; cross-firm heterogeneity likely large. Micro-level measures (task-level output, time use, error rates, decision quality) are important.
  • Human capital investment:
    • Firms and workers face incentives to invest in cognitive calibration (training, reskilling). Models should include adjustment costs due to technostress, learning curves and micro-political frictions.
  • Organizational capital and governance as economic inputs:
    • Treat ethical anchoring and managerial practices as inputs (or constraints) affecting the returns to AI investment. Compliance, deliberation and slower experimentation can lower short-run adoption but preserve long-run value and legitimacy.
  • Adoption dynamics and diffusion:
    • Adoption is shaped by affective responses, micro-political negotiation and sectoral institutional contexts—implying path dependence and non-linear diffusion. Include social/organizational frictions in diffusion models.
  • Policy and regulation:
    • Policies that only subsidize AI capital without addressing governance, training and workplace voice may produce uneven outcomes. Support for workforce training, norms for accountability, and measurement of AI impacts (including technostress) will influence aggregate welfare.
  • Empirical suggestions for economists:
    • Data to collect: firm-level AI type (predictive vs generative), intensity of use, managerial coordination modes, training investments, measures of technostress/anxiety, work design changes, worker bargaining power, performance metrics and turnover.
    • Research designs: combine matched employer-employee panels, case studies, difference-in-differences around AI rollouts, instrumenting adoption with exogenous variation (e.g., vendor availability, policy shocks), and longitudinal qualitative follow-ups to capture co-adaptation dynamics.
    • Key outcomes to estimate: task reallocation, wage premiums/penalties by skill, productivity conditional on governance inputs, heterogeneity of adoption returns across firm types and institutional contexts.
  • Broader economic modeling:
    • Incorporate augmented work agency as a mechanism linking technology adoption to realized productivity and distributional outcomes. Modeling should allow firm-level endogenous governance and worker agency to shape returns to AI investments.

Overall, the paper argues that economists should model AI as an organizational capability whose returns depend critically on human–AI coordination, governance and affective responses—not merely on capital substitution—implying richer microdata and models to predict labor market and productivity effects.

Assessment

Paper Type: descriptive
Evidence Strength: low — Findings are based on a small, purposive qualitative sample of interviews and document review without counterfactuals or causal identification; evidence is interpretive and suited to theory-building rather than establishing causal effects on economic outcomes.
Methods Rigor: medium — Appropriate qualitative methods (semi-structured interviews, thematic and interpretive analysis, document review) and cross-organizational sampling (12 firms, multiple sectors and regions) support internal credibility, but the study is limited by a modest non-random sample (28 participants), potential self-selection and reporting biases, limited triangulation and lack of longitudinal data.
Sample: 28 semi-structured interviews with managers and professionals from 12 organizations in technology, finance and knowledge-intensive service sectors across Europe and Asia, supplemented by organizational document review; purposive, interpretive qualitative sample focusing on managerial/professional perspectives.
Themes: human_ai_collab, org_design, governance, skills_training
Generalizability:
  • Small, purposive and non-random sample limits statistical generalizability.
  • Focus on managers and professionals excludes frontline or lower-skilled workers.
  • Sectors covered (tech, finance, knowledge services) bias toward knowledge-intensive contexts.
  • Geographic coverage (Europe and Asia) may not capture dynamics in other regions.
  • Cross-sectional, interview-based design limits inference about long-run co-evolutionary processes.

Claims (10)

  • AI reshapes contemporary work by augmenting, rather than substituting, human roles.
    Outcome: Job Displacement · Direction: positive · Confidence: high · Details: nature of human roles (augmentation vs substitution) · n=28 · 0.18
  • The paper introduces the concept of "augmented work agency" as a multi-level, interpretive form of human agency in algorithmically mediated environments.
    Outcome: Organizational Efficiency · Direction: null_result · Confidence: high · Details: agency, control and coordination in algorithmic workplaces · n=28 · 0.03
  • AI integration transforms managerial practices, workforce identities and organizational coordination.
    Outcome: Organizational Efficiency · Direction: mixed · Confidence: high · Details: managerial practices, workforce identities, organizational coordination · n=28 · 0.18
  • The study develops an emergent framework of AI–human co-adaptation comprising three interrelated dimensions: technological alignment, cognitive calibration and ethical anchoring.
    Outcome: Organizational Efficiency · Direction: null_result · Confidence: high · Details: dimensions of AI–human co-adaptation · n=28 · 0.18
  • The analysis uncovers three central tensions shaping AI-mediated work: autonomy versus orchestration; capability versus dependency; and experimentation versus ethics.
    Outcome: Organizational Efficiency · Direction: mixed · Confidence: high · Details: tensions influencing dynamics of AI-mediated work · n=28 · 0.18
  • Employees experience technostress, anxiety and micro-political negotiation around AI tools in everyday work.
    Outcome: Worker Satisfaction · Direction: negative · Confidence: high · Details: technostress and anxiety among employees · n=28 · 0.18
  • AI should be conceptualized as a co-evolving organizational capability rather than a deterministic technology.
    Outcome: Organizational Efficiency · Direction: null_result · Confidence: high · Details: conceptual framing of AI within organizations · n=28 · 0.03
  • Augmented work agency is shaped by whether applications are generative or non-generative, by employees' experiences of anxiety and technostress, and by micro-politics through which teams negotiate AI use and AI ethics.
    Outcome: Organizational Efficiency · Direction: mixed · Confidence: high · Details: determinants shaping augmented work agency · n=28 · 0.18
  • The study offers actionable insights for leaders seeking to balance innovation, capability development and ethical governance in AI-enabled workplaces while sustaining human interpretive authority, accountability and responsibility over time.
    Outcome: Governance And Regulation · Direction: positive · Confidence: high · Details: guidance for leadership on balancing innovation and governance · n=28 · 0.03
  • The study used a qualitative interpretivist research design drawing on semi-structured interviews with 28 managers and professionals from 12 organizations across technology, finance and knowledge-intensive service sectors in Europe and Asia, using thematic and interpretive analysis supported by organizational document review.
    Outcome: Other · Direction: null_result · Confidence: high · Details: research design and sample characteristics · n=28 · 0.3
