The Commonplace

Trust in AI is not an individual attitude but a socially negotiated, situational process embedded in team routines, with transparency, role clarity and feedback determining whether AI is used as an augmenting complement or an overlooked substitute. Poorly calibrated trust reshapes oversight, erodes informal coordination and shifts organizations toward metric-driven evaluations—so design, training and governance matter as much as algorithmic accuracy.

AI in project teams: how trust calibration reconfigures team's collaboration and performance
Viraj Dawarka, M. Doargajudhur, Vincent Dutot · Fetched March 18, 2026 · International Journal of Managing Projects in Business
Source: Semantic Scholar · Paper type: descriptive · Evidence strength: low · Relevance: 7/10
Trust in AI within project-based teams is a situational, socially distributed socio-technical process—maintained through ongoing boundary work and trust-calibration practices (enabled by transparency, role clarity, UX, norms, and feedback)—that shapes delegation, communication patterns, and performance outcomes.

As artificial intelligence (AI) becomes increasingly embedded in project-based work, trust calibration (ensuring that trust in AI systems is neither excessive nor insufficient) emerges as a key factor for effective collaboration. This study explores how project professionals calibrate trust in AI and how this process influences team collaboration and performance in technology-mediated project environments. Guided by socio-technical systems theory (STS) complemented by adaptive structuration theory (AST), the study draws on 40 semi-structured interviews with project professionals across diverse UK industries. Thematic analysis is used to explore participants' lived experiences of trust calibration, collaboration mechanisms and perceived team performance in AI-supported settings. The results indicate that trust in AI is situational, socially distributed and shaped through ongoing boundary work between human and machine inputs. Enablers such as transparency, role clarity, user experience, cultural norms and system feedback shape calibration processes. These processes, in turn, influence collaboration (e.g. delegation of oversight and erosion of informal communication) and performance (e.g. metric-driven evaluation and strategic augmentation of human expertise). The study contributes to project management and AI adoption research by conceptualising trust calibration as a socio-technical process embedded in team routines, rather than as an individual attitude. It offers an initial and a revised conceptual model linking the enablers, practices and outcomes of trust calibration, demonstrating how trust mediates the relationship between AI integration, collaboration and performance. Beyond applying existing frameworks, the research extends STS and AST by developing new theoretical insights into trust calibration as a mechanism linking AI design, collaboration dynamics and project performance. Findings provide practical guidance for designing trust-aware, human-centred AI practices in project environments.

Summary

Main Finding

Trust in AI within project-based work is a situational, socially distributed, socio-technical process—created and maintained through ongoing boundary work between humans and machines—rather than a stable individual attitude. Trust calibration (enabled by transparency, role clarity, UX, cultural norms and system feedback) shapes collaboration patterns (e.g., delegation of oversight, erosion of informal communication) and performance outcomes (e.g., metric-driven evaluation, strategic augmentation of human expertise). The authors present an initial and a revised conceptual model linking enablers → trust-calibration practices → collaboration dynamics → project performance.

Key Points

  • Theoretical framing: socio-technical systems theory (STS) and adaptive structuration theory (AST) guide the analysis of how AI integrates with team routines.
  • Trust calibration is embedded in team practices and routines, not solely an individual cognitive state or attitude.
  • Trust is situational and socially distributed: different team members calibrate trust differently depending on role, task and context; trust is negotiated collectively through interactions.
  • Boundary work: teams continuously negotiate which inputs are treated as human versus machine, shaping responsibilities and oversight.
  • Enablers of effective trust calibration: transparency/explainability, clear role definitions, good user experience, supportive cultural norms, and timely system feedback.
  • Collaboration effects: typical changes include delegation of oversight to systems or specialists, changes in who communicates with whom, and erosion of informal, ad hoc communications that previously carried tacit coordination.
  • Performance effects: organizations move toward metric-driven evaluation of AI outputs and often use AI to strategically augment human expertise; risks include overreliance or inappropriate metric focus.
  • Contribution: advances theory by reframing trust calibration as a socio-technical mechanism linking AI design and team dynamics; provides a conceptual model for design and governance of human-centered AI in projects.

Data & Methods

  • Empirical base: 40 semi-structured interviews with project professionals across multiple industries in the UK.
  • Sampling: cross-industry project practitioners (roles and industries not enumerated in detail in the summary).
  • Analysis: thematic qualitative analysis to derive patterns in lived experience of trust calibration, collaboration mechanisms, and perceived performance.
  • Theory integration: findings interpreted through STS and AST, with theoretical extension to model trust calibration as a mediating socio-technical process.
  • Nature of evidence: qualitative, interpretive—rich contextual insights but not quantitative effect sizes.

Implications for AI Economics

  • Productivity and complementarities
    • Trust calibration determines the extent to which AI is used as a complement to human labor versus a substitute. Well-calibrated trust encourages complementary use (augmentation), raising effective productivity; miscalibration can lead to over/underuse and productivity losses.
  • Measurement and incentive design
    • Metric-driven evaluation of AI outputs can reorient incentives and induce gaming or narrow optimization. Economists should account for shifts in measured output vs. unmeasured coordination/tacit work when evaluating AI’s productivity effects.
  • Monitoring, transaction costs and governance
    • Delegation of oversight and reallocation of monitoring tasks change transaction costs within teams and firms. These shifts affect optimal organizational design, contracting, and investments in verification/audit mechanisms.
  • Labor demand, skills and task allocation
    • Socially distributed trust and boundary work influence task allocation and skill requirements: demand likely increases for roles managing AI oversight, explanation, and boundary negotiation (e.g., AI integrators, translators), while routine roles may be displaced or reframed.
  • Diffusion and adoption dynamics
    • Organizational norms and UX affect adoption rates. Economists modeling diffusion should incorporate social calibration processes and team-level mediators, not only individual cost–benefit calculations.
  • Externalities and market outcomes
    • Erosion of informal communication and tacit coordination may impose negative externalities on team efficiency not captured in short-run metrics; conversely, improved calibration can generate positive externalities via better collective decision-making.
  • Policy and regulation implications
    • Policies promoting transparency, standard feedback channels, and auditability can improve trust calibration and hence economic returns to AI investments. Subsidies for training in AI oversight roles or standards for explainability could increase welfare by reducing miscalibration costs.
  • Empirical research priorities for economists
    • Quantify how trust calibration mediates AI’s effect on productivity, error rates, and team-level outcomes.
    • Estimate labor reallocation between oversight/coordination roles and task-execution roles.
    • Measure the welfare effects of metric-driven evaluation and potential gaming induced by AI integration.
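The first research priority above (quantifying how trust calibration mediates AI's effect on performance) can be sketched as a linear mediation decomposition. Everything here is illustrative: the variable names, the simulated data, and the 0.6 and 0.5 coefficients are assumptions for the sketch, not estimates from the paper.

```python
import numpy as np

# Hypothetical simulation: trust calibration (M) mediates the effect of
# AI integration (X) on team performance (Y). Coefficients are assumed.
rng = np.random.default_rng(0)
n = 5000
ai = rng.normal(size=n)                              # X: AI integration intensity
calib = 0.6 * ai + rng.normal(size=n)                # M: trust calibration
perf = 0.2 * ai + 0.5 * calib + rng.normal(size=n)   # Y: team performance

def ols_slopes(y, *xs):
    """OLS of y on the given regressors plus an intercept; returns slopes."""
    X = np.column_stack([np.ones_like(y), *xs])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]

total = ols_slopes(perf, ai)[0]           # total effect of X on Y
a = ols_slopes(calib, ai)[0]              # path X -> M
direct, b = ols_slopes(perf, ai, calib)   # paths X -> Y and M -> Y, jointly
indirect = a * b                          # mediated effect (product of coefficients)

print(f"total={total:.2f} direct={direct:.2f} indirect={indirect:.2f}")
# In linear OLS the decomposition total = direct + indirect holds exactly;
# with n=5000 the indirect effect recovers roughly 0.6 * 0.5 = 0.3.
```

In practice the regressors would be survey or behavioral measures (e.g. calibration scores from team audits) rather than simulated draws, and identification of the mediated path would need design beyond cross-sectional OLS, which is exactly the gap the digest flags.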

Practical takeaways for economists advising firms or policymakers: evaluate AI investments not just by algorithmic performance but by anticipated effects on team routines, role clarity, monitoring needs, and measurement systems; incorporate socio-technical interventions (transparency, training, feedback loops, governance) into cost–benefit and diffusion models.

Assessment

  • Paper Type: descriptive
  • Evidence Strength: low — Findings are based on qualitative, interpretive evidence from 40 semi-structured interviews; the study offers rich, plausibility-building insights but provides no causal identification, effect sizes, or generalizable estimates, so it cannot establish causal magnitudes or population-level effects.
  • Methods Rigor: medium — The study uses appropriate qualitative methods (theoretically informed semi-structured interviews and thematic analysis grounded in STS and AST) and draws on a reasonably sized interview sample, but it lacks detail on sampling strategy, respondent characteristics, triangulation with other data sources, and transparency about coding/validation procedures, which limits reproducibility and robustness.
  • Sample: 40 semi-structured interviews with project professionals across multiple industries in the UK; roles and industries are not fully enumerated in the summary, sampling appears purposive/cross-industry but non-random, and data are qualitative self-reports about practices, perceptions and routines.
  • Themes: human_ai_collab, productivity, org_design, skills_training, adoption
  • Generalizability:
    • Non-random, small qualitative sample — not population-representative
    • UK-only context — cultural and regulatory differences may limit applicability elsewhere
    • Focused on project-based work — may not apply to continuous operational roles or other organizational forms
    • Roles and industry sectors not fully described — unclear coverage of firm sizes, sectoral tech maturity, or AI use-cases
    • Cross-sectional interview data — limited ability to assess dynamics over time or causal direction

Claims (13)

  • Trust in AI within project-based work is situational and socially distributed across team members, rather than a stable individual attitude.
    Outcome: Team Performance · Direction: mixed · Confidence: medium · Details: trust in AI (nature/distribution of trust across individuals and situations) · n=40 · 0.05
  • Trust calibration is produced and maintained through ongoing boundary work between humans and machines (i.e., teams continuously negotiate which inputs/responsibilities are treated as human versus machine).
    Outcome: Task Allocation · Direction: mixed · Confidence: medium · Details: trust calibration practices / boundary work (who is responsible for tasks/inputs) · n=40 · 0.05
  • Five enablers support effective trust calibration: transparency/explainability, clear role definitions, good user experience (UX), supportive cultural norms, and timely system feedback.
    Outcome: Team Performance · Direction: positive · Confidence: medium · Details: quality/appropriateness of trust calibration · n=40 · 0.05
  • Trust calibration shapes collaboration patterns, including delegation of oversight to systems or specialists, changes in communication networks (who talks to whom), and erosion of informal ad hoc communications used previously for tacit coordination.
    Outcome: Team Performance · Direction: mixed · Confidence: medium · Details: collaboration dynamics (oversight delegation, communication patterns, informal coordination) · n=40 · 0.05
  • Trust calibration influences project performance outcomes: organizations tend toward metric-driven evaluation of AI outputs and use AI to strategically augment human expertise, but miscalibration risks overreliance or inappropriate metric focus that can harm performance.
    Outcome: Output Quality · Direction: mixed · Confidence: medium · Details: project performance (measured outputs, augmentation of expertise, error rates/quality as perceived) · n=40 · 0.05
  • Trust in AI should be conceptualized as a socio-technical, team-level mechanism (trust calibration) that mediates between AI design/enablers and downstream collaboration and performance, rather than an individual-level stable attitude.
    Outcome: Research Productivity · Direction: positive · Confidence: medium · Details: conceptual framing (mediating mechanism linking design/enablers to collaboration/performance) · n=40 · 0.05
  • Well-calibrated trust tends to encourage AI being used as a complement to human labor (augmentation), increasing effective productivity; miscalibration (over- or under-trust) can lead to productivity losses.
    Outcome: Firm Productivity · Direction: positive · Confidence: low · Details: productive use of AI (complementarity vs substitution) and effective productivity · n=40 · 0.03
  • Delegation of oversight and reallocation of monitoring tasks due to AI integration changes transaction costs and affects organizational design and governance needs (e.g., more verification/audit effort or specialist oversight roles).
    Outcome: Organizational Efficiency · Direction: neutral · Confidence: low · Details: transaction/monitoring costs and governance arrangements · n=40 · 0.03
  • Socially distributed trust and boundary work will increase demand for roles focused on AI oversight, explanation, and boundary negotiation (e.g., AI integrators, translators), while routine roles may be displaced or reframed.
    Outcome: Task Allocation · Direction: mixed · Confidence: low · Details: labor demand and task allocation (demand for oversight/expertise roles vs routine roles) · n=40 · 0.03
  • Organizational norms and UX influence adoption rates and diffusion of AI: social calibration processes at the team level matter for adoption beyond individual cost–benefit calculations.
    Outcome: Adoption Rate · Direction: positive · Confidence: low · Details: AI adoption/diffusion rates at team/organization level · n=40 · 0.03
  • Erosion of informal communication and tacit coordination driven by AI integration can create negative externalities on team efficiency that are not captured by short-run metrics.
    Outcome: Team Performance · Direction: negative · Confidence: low · Details: team efficiency and unmeasured coordination/tacit work · n=40 · 0.03
  • Policy interventions that promote transparency, standardized feedback channels, auditability, and training for oversight roles can improve trust calibration and economic returns to AI investments.
    Outcome: Governance And Regulation · Direction: positive · Confidence: speculative · Details: quality of trust calibration and economic returns from AI investments · n=40 · 0.01
  • The study's empirical base consists of 40 semi-structured interviews with cross-industry project practitioners in the UK, analyzed using thematic qualitative methods.
    Outcome: Research Productivity · Direction: null_result · Confidence: high · Details: study sample and methodology (empirical basis) · n=40 · 0.09

Notes