Software engineers treat AI as an intellectual collaborator rather than a social teammate, expecting fewer socio-emotional attributes from models and instead valuing functional capabilities like contextual adaptation and responsibility negotiation; the authors argue teams should design for 'functional equivalents' rather than mimic human socio-emotional intelligence.
As GenAI models are adopted to support software engineers and their development teams, understanding effective human-AI collaboration (HAIC) is increasingly important. Socio-emotional intelligence (SEI) enhances collaboration among human teammates, but its role in HAIC remains unclear. Current AI systems lack the SEI capabilities that humans bring to teamwork, creating a potential gap in collaborative dynamics. In this study, we investigate how software practitioners perceive the socio-emotional gap in HAIC and what capabilities AI systems require for effective collaboration. Through semi-structured interviews with 10 practitioners, we examine how they think about collaborating with human versus AI teammates, focusing on their SEI expectations and the AI capabilities they envision. Results indicate that practitioners currently view AI models as intellectual teammates rather than social partners and expect fewer SEI attributes from them than from human teammates. However, they see the socio-emotional gap not as AI's failure to exhibit SEI traits, but as a functional gap in collaborative capabilities (AI's inability to negotiate responsibilities, adapt contextually, or maintain sustained partnerships). We introduce the concept of functional equivalents: technical capabilities (internal cognition, contextual intelligence, adaptive learning, and collaborative intelligence) that achieve collaborative outcomes comparable to human SEI attributes. Our findings suggest that effective collaboration with AI for SE tasks may benefit from functional design rather than replicating human SEI traits, thereby redefining collaboration as functional alignment.
Summary
Main Finding
Software practitioners regard GenAI teammates primarily as intellectual (task-focused) collaborators, not social partners. They expect fewer socio-emotional intelligence (SEI) attributes from AI and view the “socio-emotional gap” less as a failure to emulate human affect and more as a gap in functional collaborative capabilities. The paper introduces "functional equivalents"—technical capacities (internal cognition, contextual intelligence, adaptive learning, collaborative intelligence)—that can reproduce the collaborative outcomes of human SEI. Designing for these functional equivalents (functional alignment) may be more effective for engineering teamwork than attempting to replicate human socio-emotional traits.
Key Points
- Practitioners distinguish human vs. AI teammates by role:
  - Humans: intellectual + social partners (trust, negotiation, long-term partnership).
  - AIs: primarily intellectual contributors (code generation, suggestions).
- SEI expectations are lower for AI: practitioners do not demand empathy, affect, or conventional social behaviors from AI the way they do from humans.
- The socio-emotional gap is reframed as a functional gap: AIs lack abilities to negotiate responsibilities, adapt to evolving team contexts, sustain partnerships across time, and coordinate social processes.
- Functional equivalents proposed:
  - Internal cognition: model self-awareness about capabilities, limitations, uncertainty.
  - Contextual intelligence: sense and use team- and project-specific context (codebase, norms, priorities).
  - Adaptive learning: update based on feedback across interactions and evolve behavior to team needs.
  - Collaborative intelligence: negotiate, coordinate tasks, and integrate with human workflows and social norms.
- Design implication: prioritize building these functional capabilities rather than anthropomorphic SEI features to improve HAIC outcomes in software engineering tasks.
- Small-scale, qualitative evidence: findings based on semi-structured interviews with 10 software practitioners.
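The four functional equivalents listed above can be sketched as a minimal toy interface. This is purely illustrative: the class name, method names, and decision rules are assumptions for exposition, not an implementation described in the paper.

```python
from dataclasses import dataclass, field


@dataclass
class ToyCollaborator:
    """Toy sketch of the four functional-equivalent capabilities.

    All names and behaviors are hypothetical illustrations.
    """
    context: dict = field(default_factory=dict)
    feedback_log: list = field(default_factory=list)

    def report_confidence(self, task: str) -> float:
        """Internal cognition: self-report confidence for a task,
        higher when the task falls in a known codebase area."""
        known = task in self.context.get("codebase_areas", [])
        return 0.9 if known else 0.4

    def ingest_context(self, **ctx) -> None:
        """Contextual intelligence: absorb team/project-specific context."""
        self.context.update(ctx)

    def apply_feedback(self, note: str) -> None:
        """Adaptive learning: record feedback that later behavior
        could condition on."""
        self.feedback_log.append(note)

    def propose_task_split(self, tasks: list) -> dict:
        """Collaborative intelligence: negotiate a task allocation,
        deferring low-confidence tasks to the human teammate."""
        return {t: ("ai" if self.report_confidence(t) >= 0.5 else "human")
                for t in tasks}
```

The point of the sketch is that each capability is an engineering surface (an API, a log, a policy), not a simulated emotion.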
Data & Methods
- Method: Qualitative study using semi-structured interviews.
- Sample: 10 software practitioners (roles not exhaustively enumerated in the summary).
- Procedure: interviews explored perceptions of collaborating with human vs. AI teammates, SEI expectations, and desired AI capabilities.
- Analysis: Thematic coding and interpretation to identify practitioner heuristics and the emergent concept of "functional equivalents."
- Limitations: small sample size and qualitative design limit external generalizability; findings are exploratory and hypothesis-generating.
Implications for AI Economics
- Productivity and task allocation
  - Functional capabilities (contextual intelligence, adaptive learning, collaborative intelligence) are likely the key drivers of productivity gains, not human-like socio-emotional behavior per se.
  - Complementarities: AIs that provide reliable functional equivalents will complement human engineers (augmenting productivity) rather than simply substituting for tasks that require social negotiation or long-term coordination.
- Labor demand and skill composition
  - Demand shifts toward skills that manage, integrate, and coordinate with AI (designing workflows, monitoring AI, handling ambiguous coordination problems).
  - Potential for skill-biased technological change: a premium on workers who can elicit, govern, and adapt AI’s functional capabilities.
- Adoption and diffusion
  - Firms’ adoption decisions will favor tools demonstrating measurable functional alignment with team workflows (context awareness, adaptive performance) over anthropomorphic interfaces.
  - Procurement criteria should prioritize metrics of functional effectiveness (reduction in coordination costs, latency in task handoffs, error rates), affecting market competition and pricing.
- Product design, valuation, and investment
  - R&D and product strategy should invest in feature sets that provide functional equivalence to human SEI (e.g., context-aware models, online adaptation, negotiation APIs).
  - Valuation of AI products should rest on their contribution to team-level outcomes (throughput, rework reduction, debugging time), enabling clearer ROI calculations for firms.
- Firm organization and workflow design
  - Organizations may redesign roles and incentive systems to leverage AI as persistent intellectual partners while human teammates handle social and contractual coordination.
  - Contracting and liability frameworks need to recognize AI’s role in task allocation and decision support (who is responsible when coordination fails).
- Measurement and externalities
  - Metrics for AI benefit should capture collaborative outcomes (e.g., time-to-merge, incidence of miscommunication, rework) rather than user-reported "likeability" or anthropomorphic measures.
  - Externalities: widely deployed AIs lacking functional equivalents could increase coordination frictions across firms/teams, while well-aligned systems could reduce search/coordination costs in labor markets.
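Collaborative-outcome metrics of the kind named above are straightforward to compute from development records. A minimal sketch, assuming hypothetical PR records with an `opened`/`merged` timestamp pair and a boolean `reworked` flag (field names are illustrative, not from any particular tool):

```python
from datetime import datetime, timedelta


def time_to_merge(opened: datetime, merged: datetime) -> timedelta:
    """Elapsed time from opening a pull request to merging it."""
    return merged - opened


def rework_rate(prs: list) -> float:
    """Share of PRs flagged as needing rework (e.g., review forced
    substantial follow-up commits). Each PR is a dict with a
    boolean 'reworked' field; returns 0.0 for an empty sample."""
    if not prs:
        return 0.0
    return sum(1 for p in prs if p["reworked"]) / len(prs)
```

Tracking such metrics before and after tool adoption gives firms an outcome-based, rather than sentiment-based, view of an AI teammate's contribution.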
- Policy and regulation
  - Regulatory focus may shift toward standards for collaborative capability (transparency about limits, update behavior, negotiation/logging interfaces) and safety in coordination-sensitive tasks.
- Future empirical research directions (for economists)
  - Quantify productivity gains from functional-equivalent capabilities using firm- or team-level experiments.
  - Estimate complementarities between AI features and human skills; identify which human roles are most complemented vs. substituted.
  - Study adoption dynamics: which product features drive uptake and willingness-to-pay among engineering teams.
  - Welfare analysis: distributional impacts across worker types, and implications for training and labor-market policy.
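For the first research direction, a team-level experiment would typically be analyzed with a difference-in-differences design. A minimal sketch over hypothetical per-team outcome means (e.g., weekly merged PRs), comparing teams given the AI tool against control teams:

```python
def diff_in_diff(treated_pre, treated_post, control_pre, control_post):
    """Difference-in-differences estimate of a tool's effect on a
    team-level outcome. Each argument is a list of per-team outcome
    means; all data are hypothetical. The control trend is subtracted
    to net out time effects common to both groups."""
    mean = lambda xs: sum(xs) / len(xs)
    treated_change = mean(treated_post) - mean(treated_pre)
    control_change = mean(control_post) - mean(control_pre)
    return treated_change - control_change
```

A real study would add team fixed effects and clustered standard errors; the sketch only shows the identifying comparison.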
Short takeaway: For AI economics, the practical value of GenAI in software engineering will hinge more on building measurable, context-aware, adaptive collaborative capabilities than on making AIs socially human-like. This restructures how firms invest in, value, and adopt AI tools and reshapes labor demand toward coordination and governance skills.
Assessment
Claims (10)
| Claim | Direction | Confidence | Outcome | Details |
|---|---|---|---|---|
| Socio-emotional intelligence (SEI) enhances collaboration among human teammates. (Team Performance) | positive | high | quality of collaboration among human teammates | 0.18 |
| Current AI systems lack SEI capabilities that humans bring to teamwork, creating a potential gap in collaborative dynamics. (Team Performance) | negative | high | presence of SEI capabilities in AI systems (vs. humans) | 0.09 |
| This study uses semi-structured interviews with 10 practitioners to examine perceptions of collaborating with human versus AI teammates. (Other) | null_result | high | methodological description (data collection approach) | n=10; 0.3 |
| Practitioners currently view AI models as intellectual teammates rather than social partners and expect fewer SEI attributes from them than from human teammates. (Team Performance) | negative | high | practitioners' expectations of SEI attributes in AI versus human teammates | n=10; 0.18 |
| Practitioners see the socio-emotional gap not as AI's failure to exhibit SEI traits, but as a functional gap in collaborative capabilities. (Task Allocation) | mixed | high | framing of the AI–human socio-emotional gap (functional vs. emotional) | n=10; 0.18 |
| Practitioners identified a specific functional deficiency in AI: inability to negotiate responsibilities. (Task Allocation) | negative | high | AI capability to negotiate responsibilities in teamwork | n=10; 0.18 |
| Practitioners identified a specific functional deficiency in AI: inability to adapt contextually. (Organizational Efficiency) | negative | high | AI capability for contextual adaptation in collaborative work | n=10; 0.18 |
| Practitioners identified a specific functional deficiency in AI: inability to maintain sustained partnerships. (Team Performance) | negative | high | AI capability to maintain sustained collaborative partnerships | n=10; 0.18 |
| The authors introduce the concept of "functional equivalents": technical capabilities (internal cognition, contextual intelligence, adaptive learning, and collaborative intelligence) that achieve collaborative outcomes comparable to human SEI attributes. (Team Performance) | positive | high | ability of technical capabilities to achieve collaborative outcomes comparable to human SEI | 0.03 |
| Effective collaboration with AI for software engineering (SE) tasks may benefit from functional design rather than replicating human SEI traits, thereby redefining collaboration as functional alignment. (Team Performance) | positive | high | effectiveness of human-AI collaboration in SE tasks | n=10; 0.03 |