The Commonplace

A 30‑minute teamwork training makes human–agent pairs more effective: trained players delegated more tasks and used clearer strategies, and trained teams kept higher performance when the game's difficulty rose.

Teaming Up With an AI Agent: Training Humans to Develop Human-Agent Teamwork Skills
Yvonne A. Farah, Jiwon W. Kim, Amanda K. Newendorp, Ghazal Shah Abadi, Michael C. Dorneich, Stephen B. Gilbert · Fetched March 23, 2026 · Journal of Cognitive Engineering and Decision Making
Tags: semantic_scholar · rct · medium evidence · 7/10 relevance
A brief (<30 minute) teamwork-competency training caused human players paired with a scripted agent to delegate more tasks, use explicit strategies when assigning work, and sustain higher task performance as game difficulty increased in the cooperative game KeyWe.

This study investigated whether humans can be efficiently trained in human-agent team (HAT) teamwork competencies to improve HAT collaboration. In HATs, humans and artificially intelligent (AI) agents collaborate on shared tasks, which requires teamwork. However, human and agent approaches to teamwork differ, posing challenges in HATs. These challenges raise the need to train humans to develop teamwork competencies that they can effectively apply in HAT settings. The cooperative video game KeyWe was used as a testbed, in which human participants completed tasks with a scripted agent. A HAT training intervention that took less than 30 minutes was developed to train humans on seven teamwork competencies. The training was not associated with the KeyWe game task itself. Half of the participants received the training, and half did not. Participants who received the training delegated a higher percentage of tasks to the agent and more often assigned tasks to the agent by defining strategies than participants who did not receive teamwork training. Trained teams demonstrated resilience by achieving higher task performance when the game difficulty increased. This study demonstrated that training humans to develop teamwork competencies, independent from task training, can enhance collaboration and performance in HATs.

Summary

Main Finding

A short (<30 minute), domain-general teamwork training for humans improved human–agent team (HAT) collaboration in a cooperative game testbed. Trained participants delegated more tasks to a scripted agent, used strategy-based task assignment more often, and produced higher task performance under increased difficulty, demonstrating greater resilience.

Key Points

  • Context: Human–agent teams (HATs) require teamwork competencies, but human and AI approaches differ, creating collaboration challenges.
  • Testbed: Cooperative video game KeyWe; humans worked with a scripted (non-adaptive) agent on shared tasks.
  • Intervention: A brief training (<30 minutes) teaching seven teamwork competencies; training content was independent of the game tasks (domain-general teamwork skills).
  • Design: Between-subject comparison — half of participants received the teamwork training, half did not.
  • Behavioral outcomes:
    • Trained participants delegated a higher percentage of tasks to the agent.
    • They more frequently assigned tasks by specifying strategies (i.e., higher-level coordination) rather than ad hoc actions.
  • Performance outcomes:
    • Trained teams maintained or increased task performance when game difficulty rose, indicating greater resilience.
  • Conclusion: Task-independent teamwork competency training can transfer to improved HAT coordination and robustness.

Data & Methods

  • Experimental setting: Cooperative video game KeyWe with human participants partnered with a scripted AI agent.
  • Intervention: Short (under 30 minutes) training module covering seven teamwork competencies; not tied to KeyWe mechanics.
  • Assignment: Two-group comparison (trained vs. untrained); half received the teamwork training.
  • Measures:
    • Delegation rate: proportion of tasks assigned to the agent.
    • Coordination mode: whether participants assigned tasks via strategy definitions.
    • Task performance: success metrics in the game, including under increased difficulty to test resilience.
  • Agent characteristics: Scripted, likely predictable policy (not a learning/adaptive AI).
  • Limitations (noted or implied): single-game testbed, scripted agent, no long-term follow-up reported in the summary, sample size and randomization details not provided here.
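The behavioral measures above (delegation rate and strategy-based coordination) could be computed from a per-task assignment log along these lines. This is an illustrative sketch: the `TaskAssignment` record, its field names, and the sample log are assumptions for exposition, not the paper's actual instrumentation.

```python
# Hypothetical sketch of the paper's behavioral measures, computed from an
# assumed in-game event log. Field names are illustrative, not from the study.
from dataclasses import dataclass

@dataclass
class TaskAssignment:
    assignee: str        # "agent" or "human"
    via_strategy: bool   # assigned by defining a strategy vs. ad hoc

def delegation_rate(log: list[TaskAssignment]) -> float:
    """Proportion of all tasks assigned to the agent."""
    return sum(a.assignee == "agent" for a in log) / len(log)

def strategy_rate(log: list[TaskAssignment]) -> float:
    """Proportion of agent assignments made by defining a strategy."""
    agent_tasks = [a for a in log if a.assignee == "agent"]
    return sum(a.via_strategy for a in agent_tasks) / len(agent_tasks)

log = [
    TaskAssignment("agent", True),
    TaskAssignment("agent", False),
    TaskAssignment("human", False),
    TaskAssignment("agent", True),
]
print(delegation_rate(log))  # 0.75
print(strategy_rate(log))    # 2/3 of agent tasks assigned via strategy
```

The study compared exactly these kinds of proportions between trained and untrained participants.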

Implications for AI Economics

  • Productivity and returns to short training: Low-cost, brief teamwork training can yield measurable productivity gains in HATs, suggesting high ROI for firms that deploy human–AI teams and invest in quick, transferable human training.
  • Complementarity between humans and AI: Training that teaches humans how to delegate and set strategies increases effective complementarity — humans can better leverage agent capabilities rather than compete with them.
  • Labor demand and skill composition: Employers may shift hiring/training toward teamwork and coordination skills (soft skills that facilitate HATs), altering the mix of skills in demand vs. purely technical retraining.
  • Resilience and risk management: Trained HATs are more robust to increased task difficulty; organizations relying on human–AI collaboration may reduce failure risk under stress by investing in teamwork competencies.
  • Product and AI design: Designers should enable easy delegation and strategy-specification interfaces for humans. Agents that support or cue human strategy use may amplify training effects.
  • Policy and workforce development: Workforce programs and upskilling initiatives could include brief teamwork modules tailored for human–AI collaboration to increase employment resilience and productivity.
  • Research and evaluation needs: Economic assessment should estimate cost-benefit at scale (training costs vs. productivity gains), heterogeneity of effects across worker types and tasks, durability of training gains, and interactions with adaptive/learning AI agents in real-world settings.
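To make the cost-benefit point concrete, a minimal break-even sketch follows. All figures (training cost, output value, hours of HAT work) are hypothetical assumptions chosen for illustration; the paper reports no economic data.

```python
# Back-of-envelope ROI sketch for a brief teamwork training.
# All numbers are hypothetical assumptions, not from the paper.
def breakeven_gain(training_cost: float, hourly_output_value: float,
                   hours_in_hat_work: float) -> float:
    """Fractional productivity gain needed to recoup the training cost."""
    return training_cost / (hourly_output_value * hours_in_hat_work)

# Assume: 0.5 h of training at a $40/h fully loaded cost = $20 per worker,
# and the worker spends 500 h/year on human-agent-team tasks valued at $40/h.
required = breakeven_gain(training_cost=20.0,
                          hourly_output_value=40.0,
                          hours_in_hat_work=500.0)
print(f"{required:.4%}")  # 0.1000% -- even a tiny gain breaks even
```

Under these assumed numbers, a productivity gain of one tenth of one percent recoups the training cost, which is why brief, transferable interventions of this kind can have a high expected ROI.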

Caveats: findings are from one cooperative game with a scripted agent and may not generalize across domains or to adaptive AI; further work should test scalability, long-term persistence, and heterogeneous impacts.

Assessment

Paper Type: rct
Evidence Strength: medium — Randomized assignment gives strong internal validity for the causal effect of the training on in-game behaviors and short-term performance, but the experiment is confined to a single cooperative video game with a scripted agent, likely a convenience sample, short follow-up, and no real-world productivity or economic outcomes, limiting external validity and policy relevance.
Methods Rigor: medium — Design uses randomization and objective behavioral/performance metrics, which are rigorous for assessing causal effects; however, important methodological details are missing or unclear (sample size and composition not reported here, agent is scripted rather than a deployed ML system, potential lack of pre-registration or blinding, and limited ecological validity), so methodological rigor is solid for lab inference but not for broader generalization.
Sample: Human participants (details on N, demographics, and recruitment not reported here) recruited to play the cooperative video game KeyWe, each paired with a scripted agent; participants were randomized to a <30-minute teamwork-competency training or control (no teamwork training), and in-game behaviors (delegation, strategy assignment) and task performance under varying difficulty were measured.
Themes: human_ai_collab, skills_training
Identification: Between-subjects randomized controlled trial: human participants paired with a scripted AI agent were randomly assigned to receive a <30-minute teamwork-training intervention or no training, and subsequent behavior and task performance were compared across groups.
Generalizability:
  • Lab/game setting (KeyWe) may not map to workplace or real-world team tasks.
  • Agent was scripted, not a deployed, adaptive AI system, which limits relevance to current ML-based agents.
  • Short-term intervention and immediate outcomes; no evidence on long-term retention or transfer.
  • Likely convenience sample (e.g., students); demographic and occupational diversity uncertain.
  • Single task domain; findings may not generalize across task types, industries, or more complex HATs.

Claims (7)

  • Claim: A HAT training intervention that took less than 30 minutes was developed to train humans on seven teamwork competencies.
    Category: Training Effectiveness · Direction: positive · Confidence: high (0.6) · Outcome: training_duration_and_content (existence of <30 min training on seven competencies)
  • Claim: Half of the participants received the teamwork training and half did not (between-subjects comparison).
    Category: Other · Direction: null_result · Confidence: high (1.0) · Outcome: experimental_assignment (trained vs. untrained)
  • Claim: Participants who received the training delegated a higher percentage of tasks to the agent than participants who did not receive teamwork training.
    Category: Task Allocation · Direction: positive · Confidence: high (0.6) · Outcome: percentage_of_tasks_delegated_to_agent
  • Claim: Trained participants more often assigned tasks to the agent by defining strategies compared to participants who did not receive teamwork training.
    Category: Task Allocation · Direction: positive · Confidence: high (0.6) · Outcome: frequency_of_strategy-based_task_assignment
  • Claim: Trained teams demonstrated resilience by achieving higher task performance when the game difficulty increased.
    Category: Team Performance · Direction: positive · Confidence: medium (0.36) · Outcome: task_performance_under_increased_difficulty
  • Claim: Training humans to develop teamwork competencies, independent from task training, can enhance collaboration and performance in human-agent teams (HATs).
    Category: Team Performance · Direction: positive · Confidence: medium (0.36) · Outcome: collaboration_and_performance_in_HATs (composite claim based on delegation, assignment method, and performance)
  • Claim: The cooperative video game KeyWe, with a scripted agent, served as a valid testbed for studying human-agent teamwork and the effects of the training intervention.
    Category: Other · Direction: null_result · Confidence: high (0.3) · Outcome: experimental_testbed_description
