The Commonplace


From Junior to Senior: Allocating Agency and Navigating Professional Growth in Agentic AI-Mediated Software Engineering
Dana Feng, Bhada Yun, April Wang · April 13, 2026
Source: OpenAlex · Paper type: descriptive · Evidence strength: low · Relevance: 7/10 · Links: DOI, Source, PDF
Experienced developers retain agency by using detailed delegation and mentorship to steer AI tools, while juniors oscillate between over-reliance and cautious avoidance, and organizational policies more than individual preferences determine who holds agency.

Juniors enter as AI-natives; seniors adapted mid-career. AI is not just changing how engineers code: it is reshaping who holds agency across work and professional growth. We contribute junior and senior accounts of agentic AI use through a three-phase mixed-methods study: Applied Cognitive Task Analysis (ACTA) combined with a Delphi process with 5 seniors, an AI-assisted debugging task with 10 juniors, and blind reviews of junior prompt histories by 5 additional seniors. We found that agency in software engineering is constrained primarily by organizational policies rather than individual preferences, with experienced developers maintaining control through detailed delegation while novices oscillate between over-reliance and cautious avoidance. Seniors leverage pre-AI foundational instincts to steer modern tools and hold valuable perspectives for mentoring juniors in their early, AI-mediated career development. Synthesizing these results, we suggest three practices for preserving agency in software engineering, spanning coding, learning, and mentorship, especially as AI grows increasingly autonomous.

Summary

Main Finding

AI is reshaping agency in software engineering: organizational policies, not individual preferences, primarily constrain who can exercise control. Senior developers preserve agency by delegating precisely and leveraging pre-AI instincts; junior developers—who enter as “AI‑natives”—tend to oscillate between over-reliance and avoidance. Seniors’ perspectives are crucial for mentoring and for preserving human control as AI becomes more autonomous.

Key Points

  • Agency shifts are sociotechnical: organizational rules and norms (access, review, responsibility) matter more than individual tastes.
  • Seniors adapt mid-career by learning to specify and steer agentic AI, maintaining authority through detailed delegation and oversight.
  • Juniors are comfortable using AI tools but struggle with when to trust them, leading to two failure modes: over-reliance (accepting unsafe recommendations) and cautious avoidance (under-using valuable automation).
  • Seniors’ foundational instincts (debugging heuristics, risk judgment, design thinking) remain valuable and complementary to AI capabilities.
  • Blind review of junior prompt histories shows seniors can identify errors and mentoring opportunities, indicating concrete channels for knowledge transfer.
  • The authors propose three practice areas oriented to preserving agency: (1) coding (designing delegation patterns and checks), (2) learning (structured curricula/feedback on AI use), and (3) mentorship (senior-guided prompt design and review).

Data & Methods

  • Mixed-methods, three-phase design:
    1. ACTA (Applied Cognitive Task Analysis) paired with a Delphi process involving 5 senior developers to surface expert judgments about AI’s impact on tasks and agency.
    2. An AI-assisted debugging task with 10 junior developers to observe in-situ interactions and failure modes while using agentic tools.
    3. Blind reviews: 5 additional seniors evaluated anonymized junior prompt histories to assess common errors and mentoring leverage points.
  • Triangulation: qualitative interviews/ACTA, controlled task observation, and external expert review to synthesize behavioral patterns and normative recommendations.
  • Sample size and domain: small, focused on software engineering practitioners; emphasis on depth and triangulation rather than broad representativeness.

Implications for AI Economics

  • Skill-biased augmentation and returns to experience:
    • Seniors’ ability to steer AI suggests complementarity between experience and AI—potentially increasing wage premiums for experienced engineers who can preserve and exercise agency.
    • Juniors’ over-reliance risks deskilling if organizations don’t invest in guided learning, shifting returns from hands-on coding skills toward meta-skills (prompting, oversight).
  • Organizational policy as a key margin:
    • Firm-level rules (access, review requirements, accountability structures) will shape who captures productivity gains from AI. Policies that centralize control can concentrate rents; decentralized, mentorship-oriented policies distribute benefits and human capital accumulation.
  • Labor supply, training, and mobility:
    • Investment in mentorship and structured AI-use training can accelerate junior human capital accumulation, affecting career ladders, hiring demand for senior mentors, and the pace of automation-induced displacement.
  • Productivity measurement and task allocation:
    • Economic models should account for task-level delegation patterns and the value of “steering” skills; output measures must separate raw code generation from quality, maintainability, and safety that seniors protect.
  • Policy/regulatory considerations:
    • Encouraging practices that preserve human agency (audit trails, review protocols, required human-in-the-loop for critical code) could mitigate risks of over-automation and unequal gains.
  • Research implications:
    • Need for larger-scale, longitudinal studies linking changes in AI-use practices to wages, promotion rates, and firm performance to quantify distributional effects and dynamic returns to experience.
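The complementarity claim above (experience raising the returns to AI use) can be made concrete with a toy production function. This sketch is an illustration, not a model from the paper; the functional form and the "steering ability" term $s(e)$ are assumptions:

```latex
% Toy model: output y of an engineer with experience e using AI of capability A.
% s(e) in [0,1] is "steering ability" (prompting, oversight), increasing in e.
% f(e) is unaided output; g(A) is the raw gain AI can offer.
y(e, A) = f(e) + s(e)\,g(A)

% Complementarity: the marginal product of AI capability rises with experience,
% so wage premiums for experienced engineers can grow as A improves.
\frac{\partial^2 y}{\partial A\,\partial e} = s'(e)\,g'(A) > 0
```

Under this reading, the paper's "deskilling" risk for juniors corresponds to organizations letting $g(A)$ substitute for investment in $f(e)$ and $s(e)$ rather than complementing it.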

Limitations to keep in mind: small, domain-specific sample; qualitative emphasis limits causal claims about labor-market outcomes—follow-up empirical work is needed to measure macroeconomic impacts.

Assessment

  • Paper Type: descriptive
  • Evidence Strength: low — The study is qualitative and descriptive with very small, convenience samples (5 seniors in Delphi, 10 juniors in a lab debugging task, 5 seniors reviewing prompts). There is no causal identification strategy, no counterfactual, and findings are based on triangulated impressions rather than statistical inference, so the evidence supports plausibility and hypotheses rather than generalizable causal claims.
  • Methods Rigor: medium — Rigor is bolstered by a mixed-methods design and triangulation (ACTA + Delphi + task + blind reviews) and by using expert senior perspectives, but is limited by tiny sample sizes, potential selection and confirmation biases, lack of pre-registration or standardized measurement, an unclear sampling frame, and the limited ecological validity of the lab task.
  • Sample: Convenience sample of software engineers: 5 senior engineers participated in a Delphi process, 10 junior engineers completed an AI-assisted debugging task, and a separate set of 5 senior engineers performed blind reviews of junior prompt histories; demographic, organizational, geographic, and tool-specific details not reported.
  • Themes: human_ai_collab, skills_training, org_design
  • Generalizability: very small sample size limits statistical generalizability; convenience sampling likely non-representative of broader developer populations; single domain (software engineering), not generalizable to other occupations; unknown geographic/cultural context and organizational settings; findings may depend on the specific AI tools and task design (lab vs. real-world); definitions of "junior" and "senior" may vary across firms and countries.

Claims (7)

| Claim | Topic | Direction | Confidence | Outcome | Details |
|---|---|---|---|---|---|
| Juniors enter as AI-natives, seniors adapted mid-career. | Skill Acquisition | positive | high | Whether developers began their careers with AI tools (AI-native status) versus adopted them mid-career | n=20; 0.09 |
| AI is not just changing how engineers code — it is reshaping who holds agency across work and professional growth. | Task Allocation | mixed | high | Distribution of agency (decision-making control) across roles and career development | n=20; 0.09 |
| We contribute junior–senior accounts on their usage of agentic AI through a three-phase mixed-methods study: ACTA combined with a Delphi process with 5 seniors, an AI-assisted debugging task with 10 juniors, and blind reviews of junior prompt histories by 5 more seniors. | Other | null_result | high | Study design / data collection approach (ACTA + Delphi; task experiment; blind reviews) | n=20; 0.3 |
| Agency in software engineering is primarily constrained by organizational policies rather than individual preferences. | Governance And Regulation | negative | high | Primary source of constraint on developer agency (organizational policy vs. individual preference) | n=20; 0.09 |
| Experienced developers maintain control through detailed delegation while novices struggle between over-reliance and cautious avoidance. | Automation Exposure | mixed | high | Control over AI tools (detailed delegation) vs. patterns of novice behavior (over-reliance or avoidance) | n=20; 0.09 |
| Seniors leverage pre-AI foundational instincts to steer modern tools and possess valuable perspectives for mentoring juniors in their early AI-encouraged career development. | Training Effectiveness | positive | high | Seniors' ability to direct AI tools based on prior foundations and their perceived mentoring value | n=10; 0.09 |
| From synthesis of results, we suggest three practices that focus on preserving agency in software engineering for coding, learning, and mentorship, especially as AI grows increasingly autonomous. | Governance And Regulation | positive | high | Recommended practices intended to preserve developer agency | 0.03 |
