The Commonplace

Generative AI can sharpen managerial performance — but only under emotionally intelligent leadership; without EI-driven trust calibration, AI-enabled delegation and communications risk amplifying errors and eroding team dynamics.

LEADER EMOTIONAL INTELLIGENCE IN THE GENERATIVE AI ERA: “HUMAN–AI LEADERSHIP” AS A NEW MANAGERIAL COMPETENCE
L. Golovkova, K. Hannouf · Fetched April 01, 2026 · REVIEW OF TRANSPORT ECONOMICS AND MANAGEMENT
Source: Semantic Scholar · Paper type: theoretical · Evidence strength: n/a · Relevance: 7/10 · DOI · Source
The paper argues that leader emotional intelligence moderates whether generative AI improves managerial decision quality, delegation, and communication, with high-EI leaders better able to calibrate trust and realize benefits while low-EI leaders risk over-reliance and poorer interpersonal outcomes.

Purpose. To conceptualize human–AI leadership as an integrated managerial competence and explain how leader emotional intelligence (EI) moderates decision quality, delegation, and managerial communication when generative AI tools (Copilot/ChatGPT) are used in corporate management.

Methodology. The paper applies conceptual modeling that integrates EI theory, psychological safety, and trust calibration in human–AI collaboration. A “Package B” rapid empirical design is proposed: a randomized online experiment that manipulates access to GenAI in core managerial tasks (decision, delegation, team communication), combined with EI measurement and trust-calibration indicators. As a follow-up validation path, a two-wave time-lag design and 180° assessment (leader + subordinates) are proposed to reduce common-method bias.

Results. An EI-moderated human–AI model is formulated. EI strengthens the positive impact of GenAI on managerial outcomes when trust is properly calibrated and psychological safety is maintained. Under low EI, the model predicts higher risks of over-reliance, emotionally detached communication, and weaker delegation quality. The paper provides an operationalization toolkit: GenAI use intensity; delegation quality indices (clarity, boundaries, success criteria); communication quality indices (empathy, tone, transparency); psychological safety markers; and behavioral trust-calibration measures.

Scientific novelty. The article introduces an EI-driven trust-calibration framework as an explanatory mechanism showing when GenAI improves leadership effectiveness and when it amplifies managerial errors.

Practical value. The results support corporate GenAI policies, leadership development programs, and HR assessment of leader readiness for GenAI-enabled delegation and communication.

Summary

Main Finding

Leader emotional intelligence (EI) is a key moderator of how generative AI (GenAI, e.g., Copilot/ChatGPT) affects managerial outcomes. When EI is high and leaders calibrate trust appropriately while maintaining psychological safety, GenAI use improves decision quality, delegation, and managerial communication. When EI is low, GenAI use risks over‑reliance, emotionally detached communication, and poorer delegation outcomes.

Key Points

  • Conceptual contribution: Introduces an EI-driven trust‑calibration framework that explains when GenAI augments leadership effectiveness versus when it amplifies managerial errors.
  • Mechanisms emphasized: leader EI, psychological safety in teams, and behavioral trust calibration toward GenAI outputs.
  • Predicted heterogeneous effects:
    • High EI + calibrated trust → stronger positive impact of GenAI on decisions, delegation clarity, and empathetic team communication.
    • Low EI → higher chances of over‑trust/automation bias, weaker delegation, and less emotionally attuned communications.
  • Practical toolkit proposed: operational measures for GenAI use intensity; delegation quality (clarity, boundaries, success criteria); communication quality (empathy, tone, transparency); psychological safety markers; and behavioral trust‑calibration metrics.
  • Practical relevance: informs corporate GenAI policies, leadership development, and HR assessment of leader readiness for GenAI‑enabled tasks.
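The predicted heterogeneous effects amount to a standard moderation (interaction) specification. A minimal formalization, with symbol names chosen here for illustration rather than taken from the paper:

```latex
\mathrm{Outcome} = \beta_0 + \beta_1\,\mathrm{GenAI} + \beta_2\,\mathrm{EI}
  + \beta_3\,(\mathrm{GenAI} \times \mathrm{EI}) + \varepsilon
```

The framework's central prediction corresponds to β₃ > 0: GenAI access and leader EI are complements, so the marginal effect of GenAI, β₁ + β₃·EI, is larger for high-EI leaders and can shrink toward zero (or turn harmful via over-reliance) when EI is low.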

Data & Methods

  • Research design: conceptual modeling integrating EI theory, psychological safety, and trust calibration.
  • Empirical proposal (“Package B” rapid design):
    • Randomized online experiment manipulating access to GenAI across core managerial tasks (decision making, delegation, team communication).
    • Measurement of leader EI and trust‑calibration indicators; outcomes include decision quality, delegation quality, and communication quality indices.
  • Follow‑up validation:
    • Two‑wave time‑lag design and 180° assessment (leader + subordinates) to reduce common‑method bias and observe temporal dynamics.
  • Operational measures recommended:
    • Treatment variables: GenAI access vs no access; intensity/frequency of GenAI use.
    • Outcome indices: delegation clarity, boundary specification, success criteria; communication empathy, tone, transparency; decision accuracy/quality metrics.
    • Moderators/mediators: EI scales, psychological safety scales, behavioral measures of trust calibration (e.g., frequency of human overrides, acceptance rates of AI suggestions).
  • Note: Paper proposes the empirical approach and toolkit; it formulates a model and predictions rather than reporting field experimental results.
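The behavioral trust-calibration measures listed above (human overrides, acceptance rates of AI suggestions) could be computed from a simple decision log. A minimal sketch, assuming a hypothetical log schema (`accepted`, `ai_correct`) that is not specified in the paper:

```python
# Hypothetical sketch: behavioral trust-calibration measures from logged
# manager decisions on AI suggestions. The log schema is an assumption,
# not the paper's specification.

def calibration_metrics(log):
    """log: list of dicts with keys 'accepted' (manager took the AI
    suggestion) and 'ai_correct' (suggestion was later judged correct)."""
    n = len(log)
    accepted = sum(1 for d in log if d["accepted"])
    over_trust = sum(1 for d in log if d["accepted"] and not d["ai_correct"])
    under_trust = sum(1 for d in log if not d["accepted"] and d["ai_correct"])
    return {
        "acceptance_rate": accepted / n,      # share of AI suggestions taken
        "override_rate": 1 - accepted / n,    # share of human overrides
        "over_trust_rate": over_trust / n,    # accepted, but AI was wrong
        "under_trust_rate": under_trust / n,  # rejected, but AI was right
    }

log = [
    {"accepted": True,  "ai_correct": True},
    {"accepted": True,  "ai_correct": False},
    {"accepted": False, "ai_correct": True},
    {"accepted": False, "ai_correct": False},
]
m = calibration_metrics(log)
```

Well-calibrated trust would show low over- and under-trust rates simultaneously; over-reliance appears as a high acceptance rate paired with a high over-trust rate.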

Implications for AI Economics

  • Productivity and organizational capital: EI‑conditioned complementarities imply GenAI’s productivity gains depend on managerial soft skills; returns to GenAI adoption are heterogeneous across firms depending on leadership quality.
  • Labor allocation and task composition: Improved delegation quality under high‑EI leaders could reallocate manager time toward higher‑value tasks; low‑EI environments risk inefficient automation and reduced managerial effectiveness.
  • Skill bias and training investments: Findings suggest investment in EI and trust‑calibration training may be high‑return complements to GenAI adoption—affecting training budgets, HR strategy, and labor demand composition.
  • Adoption diffusion and firm heterogeneity: Firm‑level adoption benefits will vary with managerial EI stocks, creating cross‑firm dispersion in productivity responses to GenAI that macro models should account for.
  • Measurement for empirical economics: Provides concrete behavioral and survey measures that can be incorporated into firm‑level datasets to study GenAI impacts (use intensity, delegation/communication quality, psychological safety, trust calibration).
  • Policy and governance: Insights help design corporate governance and regulation (e.g., mandates for human oversight, auditing of AI-enabled managerial decisions) and inform workforce policy on upskilling for effective human–AI leadership.
  • Cost‑benefit and ROI analyses: When evaluating GenAI investments, economists should factor in complementary investments in leadership development and psychological‑safety building to realize predicted productivity gains.
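If the proposed measures were assembled into a manager-level dataset, the EI-conditioned complementarity could be estimated as an interaction term. A minimal sketch on simulated data; all coefficients, effect sizes, and variable names here are illustrative assumptions, not results from the paper:

```python
# Hypothetical moderation regression an economist might run on manager-level
# data: outcome on GenAI access, leader EI, and their interaction.
# Data are simulated; the true coefficients below are arbitrary choices.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
genai = rng.integers(0, 2, n)   # randomized GenAI access (treatment)
ei = rng.normal(0.0, 1.0, n)    # standardized leader EI score
# Simulated "truth": GenAI helps on average, more so for high-EI leaders.
y = 0.3 * genai + 0.2 * ei + 0.5 * genai * ei + rng.normal(0.0, 1.0, n)

# OLS with an interaction term: y ~ 1 + GenAI + EI + GenAI*EI
X = np.column_stack([np.ones(n), genai, ei, genai * ei])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
# beta[3] estimates the EI x GenAI complementarity; the paper's model
# predicts this term is positive.
```

With randomized GenAI access, the interaction coefficient has a causal interpretation for the treatment margin; EI itself remains observational, which is one reason the paper's two-wave, multi-rater follow-up matters.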

Assessment

Paper type: theoretical

Evidence strength: n/a — The manuscript is primarily conceptual and presents a formalized model plus a proposed empirical design; it does not report implemented empirical tests or causal estimates, so there is no direct evidence to evaluate.

Methods rigor: medium — The proposed design is thoughtful: an RCT manipulation of GenAI access, moderation by measured EI, and a time-lagged follow-up with multi-rater (180°) assessment address many common threats (selection, reverse causality, common-method bias). However, key implementation details (sampling frame, incentive-compatible tasks, ecological validity, power calculations, measurement validation in field settings) are not provided, and practical challenges (behavioral fidelity of online tasks, rapidly evolving GenAI) remain.

Sample: No empirical sample is analyzed; the paper proposes an online randomized experiment involving managers (leaders) performing core managerial tasks with/without GenAI and their subordinates providing 180° evaluations in a two-wave design — i.e., leader participants plus subordinate raters across corporate teams, with behavioral and survey measures of EI, trust calibration, psychological safety, delegation quality, and communication indices.

Themes: human_ai_collab, skills_training, org_design

Identification: Proposes a randomized online experiment that manipulates access to generative AI (Copilot/ChatGPT) across core managerial tasks (decision, delegation, team communication), with leader emotional intelligence (EI) measured as a moderator; includes planned two-wave time-lag data collection and 180° (leader + subordinate) assessments to reduce common-method bias and improve causal interpretation.
Generalizability:

  • Proposed online experiment may not replicate real-world managerial pressures and stakes (limited ecological validity).
  • Participant pool (online-recruited managers) may not represent senior corporate leaders or all sectors/functions.
  • Rapidly evolving GenAI models mean results could be time- and model-specific.
  • Cultural and organizational context (norms around delegation and communication) may limit transferability across countries and firm types.
  • Measured short-term outcomes may not map cleanly to longer-run productivity or firm-level performance.

Claims (9)

  • The paper conceptualizes human–AI leadership as an integrated managerial competence.
    Outcome: Organizational Efficiency · Direction: positive · Confidence: high · Details: human–AI leadership competence (integrated managerial competence) (0.02)
  • Leader emotional intelligence (EI) moderates decision quality, delegation, and managerial communication when generative AI tools (Copilot/ChatGPT) are used in corporate management.
    Outcome: Decision Quality · Direction: mixed · Confidence: high · Details: decision quality (and delegation quality, managerial communication) (0.02)
  • Emotional intelligence strengthens the positive impact of generative AI on managerial outcomes when trust is properly calibrated and psychological safety is maintained.
    Outcome: Decision Quality · Direction: positive · Confidence: high · Details: managerial outcomes (e.g., decision quality) (0.02)
  • Under low emotional intelligence, the model predicts higher risks of over-reliance on AI, emotionally detached communication, and weaker delegation quality.
    Outcome: Task Allocation · Direction: negative · Confidence: high · Details: delegation quality (and over-reliance / communication quality) (0.02)
  • The paper proposes a 'Package B' rapid empirical design: a randomized online experiment manipulating access to generative AI in core managerial tasks (decision, delegation, team communication), combined with EI measurement and trust-calibration indicators.
    Outcome: Research Productivity · Direction: positive · Confidence: high · Details: experimental test of human–AI leadership effects (0.02)
  • As a follow-up validation path, the paper proposes a two-wave time-lag design and 180° assessment (leader + subordinates) to reduce common-method bias.
    Outcome: Research Productivity · Direction: positive · Confidence: high · Details: robustness/validity of empirical findings (reduction of common-method bias) (0.02)
  • The paper provides an operationalization toolkit including measures: GenAI use intensity; delegation quality indices (clarity, boundaries, success criteria); communication quality indices (empathy, tone, transparency); psychological safety markers; and behavioral trust-calibration measures.
    Outcome: Organizational Efficiency · Direction: positive · Confidence: high · Details: measurement constructs for empirical studies (e.g., GenAI use intensity, delegation quality, communication quality) (0.02)
  • The article introduces an EI-driven trust-calibration framework as an explanatory mechanism showing when generative AI improves leadership effectiveness and when it amplifies managerial errors.
    Outcome: Research Productivity · Direction: positive · Confidence: high · Details: leadership effectiveness (and amplification of managerial errors) (0.02)
  • The results (conceptual/model results) support corporate GenAI policies, leadership development programs, and HR assessment of leader readiness for GenAI-enabled delegation and communication.
    Outcome: Adoption Rate · Direction: positive · Confidence: high · Details: policy and HR adoption/application (0.02)

Notes