The Commonplace

Balanced AI use boosts perceived team effectiveness, but both under-use and over-reliance erode outcomes; active e-leadership — framing, validation checkpoints and human authorship — determines whether AI augments or undermines collaboration.

E-leadership and human-AI collaboration: socio-technical alignment in project-based teams
Geshwaree Huzooree · Fetched May 03, 2026 · Journal of Organizational Effectiveness
Source: Semantic Scholar · Paper type: descriptive · Evidence: low · Relevance: 7/10
Managers report a curvilinear 'bounded augmentation' pattern where team effectiveness peaks under balanced AI use and is mediated by proactive e-leadership practices that frame, validate and integrate AI outputs.

The rapid integration of artificial intelligence (AI) tools into project work alters how collaboration unfolds and how e-leadership is exercised, with implications for performance. This study explores how e-leadership practices shape the relationship between human-AI collaboration and perceived team effectiveness in project-based settings. A qualitative design was adopted, drawing on 34 semi-structured interviews with project managers across five UK industries. Sampling targeted managers as boundary spanners across diverse project types, from site-based construction to innovation-driven squads, to capture the socio-technical alignment process. Data were analyzed thematically using a Gioia-informed approach to identify how e-leadership practices interact with varying orientations of AI integration. The analysis identifies a curvilinear pattern of bounded augmentation, where effectiveness peaks in a zone of balanced use but declines under both under-use and over-reliance. This trajectory is governed by e-leadership practices. Proactive engagement combined with creation-oriented use generated the highest effectiveness, while reactive approaches paired with automation or creation produced breakdowns. These dynamics are synthesized in an e-leadership–AI orientation matrix mapping how social (leadership engagement, trust, ownership, mediation and alignment) and technical (automation, creation, reliability, distraction and integration) subsystems combine to enable or erode team effectiveness. To achieve balanced augmentation, leaders must proactively frame AI's role, embedding validation checkpoints and human authorship clauses to maintain accountability. Organizations should cultivate a culture of critical engagement with AI outputs, while e-leadership development must focus on building competencies in mediating, filtering and legitimizing AI contributions within digital workflows.
The study integrates e-leadership and human-AI collaboration within a socio-technical systems lens. It refines team effectiveness theory by showing how mediators such as trust, cohesion and accountability are reshaped when AI-generated contributions enter collaboration, and by demonstrating that augmentation is bounded rather than linear.

Summary

Main Finding

AI augmentation of project teams follows a curvilinear, "bounded augmentation" pattern: team effectiveness improves as AI is adopted up to a balanced zone, then declines with both under-use and over-reliance. E-leadership practices—how leaders actively frame, mediate and integrate AI—govern this trajectory. Proactive leaders who steer creation-oriented AI use achieve the highest effectiveness; reactive leadership combined with either blind automation or unmediated creation leads to breakdowns. The study synthesizes these patterns in an e-leadership–AI orientation matrix that links social subsystems (e.g., trust, ownership, mediation) and technical subsystems (e.g., automation vs creation, reliability, integration).

Key Points

  • Bounded augmentation: effectiveness is non‑linear — low with under-use, highest at balanced augmentation, and declines with over-reliance on AI.
  • E-leadership matters: the leader’s posture (proactive vs reactive) and practices (framing, validation, accountability enforcement) moderate AI’s impact on team outcomes.
  • Best configuration: Proactive engagement + creation-oriented AI use (AI as collaborative generator, with leader-mediated validation) yields peak effectiveness.
  • Failure modes: Reactive leadership paired with automation or unvetted creation causes errors, trust erosion, and coordination breakdowns.
  • Socio-technical matrix: effectiveness depends on combinations of social factors (engagement, trust, ownership, mediation, alignment) and technical factors (automation vs creation, reliability, distraction, integration).
  • Practical safeguards: validation checkpoints, human authorship clauses, explicit accountability lines, and routines for critical review of AI outputs.
  • Theoretical contribution: reframes team effectiveness and human-AI collaboration through a socio-technical lens and demonstrates that augmentation effects are bounded rather than monotonic.
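The configurations the key points describe can be sketched as a simple lookup over the e-leadership–AI orientation matrix. This is an illustrative reconstruction, not code from the study: only the cells the abstract characterizes are filled in, and the function and dictionary names are assumptions.

```python
# Illustrative sketch of the e-leadership–AI orientation matrix:
# (leadership posture, AI orientation) -> outcome reported in the study.
# Only cells described in the abstract are filled; others are unstated.
ORIENTATION_MATRIX = {
    ("proactive", "creation"): "highest effectiveness (balanced augmentation)",
    ("reactive", "automation"): "breakdowns (errors, trust erosion)",
    ("reactive", "creation"): "breakdowns (unvetted AI contributions)",
}


def expected_outcome(posture: str, ai_orientation: str) -> str:
    """Return the outcome the study reports for a given configuration,
    or note that the abstract does not characterize that cell."""
    return ORIENTATION_MATRIX.get(
        (posture, ai_orientation), "not characterized in the abstract"
    )


if __name__ == "__main__":
    print(expected_outcome("proactive", "creation"))
    print(expected_outcome("reactive", "automation"))
```

The lookup makes the asymmetry explicit: the same technical orientation (creation) produces peak effectiveness under proactive leadership but breakdowns under reactive leadership.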

Data & Methods

  • Design: Qualitative study using 34 semi-structured interviews.
  • Participants: Project managers across five UK industries (site-based construction to innovation-driven squads), sampled for boundary-spanning roles across diverse project types.
  • Analytical approach: Thematic analysis informed by the Gioia methodology to build first-order concepts, second-order themes, and aggregate dimensions; identified patterns of AI orientation and corresponding e-leadership practices.
  • Outcome measurement: Perceived team effectiveness reported by interviewed managers; dynamics inferred via participants’ accounts rather than quantitative performance metrics.

Implications for AI Economics

  • Nonlinear productivity effects: AI adoption yields increasing returns up to a point, then diminishing and negative returns if governance and leadership do not adapt—economic models should allow for curvilinear adoption-payoff relationships rather than assuming monotonic productivity gains.
  • Complementarity and task reallocation: Demand shifts toward managerial and e-leadership skills that mediate AI outputs (filtering, legitimizing, integrating). This implies a skills premium for boundary-spanning leaders and complementarities between AI and managerial labor.
  • Investment priorities: Firms should invest not only in AI tools but also in e-leadership training, validation infrastructures, and integration processes. The returns to such investments may be large because leadership determines whether AI yields positive or negative net productivity.
  • Organizational heterogeneity and strategy: Optimal AI deployment is context-dependent (project type, reliability needs, integration complexity). Firm-level heterogeneity in e-leadership capacity can explain cross-firm differences in AI adoption payoffs.
  • Measurement and empirical research agenda:
    • Quantify the curvilinear relationship between AI use intensity and objective productivity/team outcomes.
    • Measure e-leadership practices as moderators (indexes for proactive framing, mediation routines, accountability clauses).
    • Use panel data, field experiments, or quasi-experiments to identify causal effects and tipping points where additional AI reduces effectiveness.
    • Disaggregate AI orientation into automation vs creation and assess heterogeneous impacts across tasks and industries.
  • Policy and governance: Regulations or standards requiring human-in-the-loop validation, disclosure of AI-assisted outputs (human authorship clauses), and accountability protocols could reduce negative externalities from over-reliance.
  • Labor-market implications: Potential upskilling demand for leaders and project managers who can mediate AI, and a possible relative decline in roles susceptible to unmediated automation; wage and employment models should incorporate changing returns to coordination and mediation skills.

Actionable takeaway for economists and managers: incorporate leadership capacity and governance into cost-benefit analyses of AI adoption; model nonlinearity and complementarities explicitly; prioritize investments in human mediation capabilities to realize the productivity potential of AI without triggering the downsides of over-reliance.
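The curvilinear specification the measurement agenda calls for can be sketched as a quadratic model of effectiveness in AI-use intensity, where a negative squared term yields the inverted-U "bounded augmentation" shape and its vertex marks the balanced zone. This is a minimal sketch on synthetic data; the coefficients and variable names are assumptions, not estimates from the paper.

```python
import numpy as np

# Synthetic illustration (not the paper's data): effectiveness follows
# b0 + b1*ai + b2*ai^2 with b2 < 0, i.e. an inverted-U in AI-use intensity.
rng = np.random.default_rng(0)
ai_use = rng.uniform(0.0, 1.0, 200)          # AI-use intensity in [0, 1]
signal = 0.2 + 1.6 * ai_use - 1.5 * ai_use**2
effectiveness = signal + rng.normal(0.0, 0.05, 200)

# Fit the quadratic; np.polyfit returns coefficients highest degree first.
b2, b1, b0 = np.polyfit(ai_use, effectiveness, 2)

# The interior peak of an inverted-U sits at -b1 / (2*b2): the tipping
# point past which additional AI use reduces (perceived) effectiveness.
peak = -b1 / (2.0 * b2)
print(f"fitted curvature b2 = {b2:.2f}, balanced-zone peak at ai_use ≈ {peak:.2f}")
```

In an empirical version, the sign and significance of `b2` test whether augmentation is bounded, and interactions of `ai_use` with e-leadership indexes would capture the moderation the study describes.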

Assessment

  • Paper Type: descriptive
  • Evidence Strength: low — Findings are based on qualitative, self-reported interview data from 34 managers and describe perceived relationships and mechanisms rather than estimating causal effects or measuring objective performance outcomes; the evidence is therefore strong for rich, contextual insight but weak for causal inference or magnitude.
  • Methods Rigor: medium — The study uses purposive sampling across five industries and a Gioia-informed thematic approach, which are appropriate and well-regarded methods for theory-building qualitative work; however, it relies on single-informant perceptions, lacks objective performance measures or triangulation, and cannot rule out selection or social-desirability biases.
  • Sample: 34 semi-structured interviews with project managers (boundary spanners) across five UK industries, covering diverse project types from site-based construction to innovation-driven squads; purposive sampling targeted managers rather than frontline staff or non-manager stakeholders.
  • Themes: human_ai_collab, org_design
  • Generalizability:
    • Small, non-random sample limits statistical generalizability.
    • UK-only context may not apply to other national or regulatory environments.
    • Managers-only sample excludes frontline workers, clients, and other stakeholders.
    • Findings reflect perceived team effectiveness, not measured productivity or economic outcomes.
    • Industries sampled may not represent all sectors (e.g., heavy manufacturing, services with different AI maturities).

Claims (10)

  • A qualitative design was adopted, drawing on 34 semi-structured interviews with project managers across five UK industries.
    Outcome: Other · Direction: null_result · Confidence: high · Details: study_design_and_sample · n=34 · 0.3
  • Analysis identifies a curvilinear pattern of bounded augmentation, where effectiveness peaks in a zone of balanced use but declines under both under-use and over-reliance.
    Outcome: Team Performance · Direction: mixed · Confidence: high · Details: perceived team effectiveness · n=34 · 0.18
  • The trajectory of the curvilinear relationship is governed by e-leadership practices.
    Outcome: Team Performance · Direction: positive · Confidence: high · Details: perceived team effectiveness (as moderated by e-leadership) · n=34 · 0.18
  • Proactive engagement combined with creation-oriented use generated the highest effectiveness.
    Outcome: Team Performance · Direction: positive · Confidence: high · Details: perceived team effectiveness · n=34 · 0.18
  • Reactive approaches paired with automation or creation produced breakdowns (reduced effectiveness).
    Outcome: Team Performance · Direction: negative · Confidence: high · Details: perceived team effectiveness (breakdowns) · n=34 · 0.18
  • Social (leadership engagement, trust, ownership, mediation and alignment) and technical (automation, creation, reliability, distraction and integration) subsystems combine to enable or erode team effectiveness, summarized in an e-leadership–AI orientation matrix.
    Outcome: Team Performance · Direction: mixed · Confidence: high · Details: perceived team effectiveness (as a function of social and technical subsystems) · n=34 · 0.09
  • To achieve balanced augmentation, leaders must proactively frame AI's role, embedding validation checkpoints and human authorship clauses to maintain accountability.
    Outcome: Team Performance · Direction: positive · Confidence: high · Details: accountability / balanced augmentation (implied improvement in team effectiveness) · n=34 · 0.03
  • Organizations should cultivate a culture of critical engagement with AI outputs, and e-leadership development must focus on building competencies in mediating, filtering and legitimizing AI contributions within digital workflows.
    Outcome: Organizational Efficiency · Direction: positive · Confidence: high · Details: organizational practices / e-leadership competencies (intended to improve team/organizational outcomes) · n=34 · 0.03
  • Mediators such as trust, cohesion and accountability are reshaped when AI-generated contributions enter collaboration.
    Outcome: Team Performance · Direction: mixed · Confidence: high · Details: trust, cohesion, accountability · n=34 · 0.18
  • Augmentation is bounded rather than linear (i.e., human-AI augmentation shows diminishing or negative returns past a balanced zone).
    Outcome: Team Performance · Direction: mixed · Confidence: high · Details: perceived team effectiveness as a function of AI-use intensity · n=34 · 0.18

Notes