The Commonplace

A new Trust–Complementarity model argues organisations make better knowledge‑intensive decisions when calibrated trust in AI is paired with complementary human–machine capability use, creating feedback loops that amplify learning; the framework gives executives levers (training, metrics, team design) but remains untested empirically.

Optimising Human–AI Decision Performance: A Trust and Capability Framework for Knowledge Management
Eduardo Carlos Dittmar, Martin Sposato · Fetched March 17, 2026 · Knowledge and Process Management
semantic_scholar · theoretical · evidence: n/a · relevance: 7/10 · DOI · Source
The Trust–Complementarity Model of Collective Intelligence argues that optimal organisational decision performance arises when calibrated trust in AI combines with complementary utilisation of human and AI capabilities, reinforced by dynamic feedback loops.

Organisations struggle to optimise human–AI collaboration in knowledge‐intensive decision‐making. This paper proposes the Trust–Complementarity Model of Collective Intelligence (TCM‐CI), explaining how calibrated trust and complementary capability utilisation drive superior organisational performance. Through systematic synthesis of human–AI interaction and knowledge management research, we identify three core mechanisms: (1) calibrated trust maximises collective intelligence by balancing appropriate reliance with necessary oversight, (2) complementarity–trust interaction determines optimal performance when high capability utilisation combines with appropriate trust levels and (3) dynamic feedback loops create reinforcing organisational learning cycles. The framework provides practical guidance for executives designing human–AI teams, developing trust calibration training, and establishing performance metrics. By integrating psychological trust factors with cognitive capability optimisation, this model offers actionable insights for knowledge management practitioners implementing AI‐augmented decision systems while advancing theoretical understanding of human–AI collaboration effectiveness.

Summary

Main Finding

The paper introduces the Trust–Complementarity Model of Collective Intelligence (TCM‑CI): organisational performance in knowledge‑intensive decision making is maximised when calibrated trust in AI is combined with high utilisation of complementary human and AI capabilities, and when dynamic feedback loops embed learning. Calibrated trust, capability complementarity, and reinforcing organisational feedback loops together explain when and why human–AI teams outperform humans or AI alone.

Key Points

  • Core mechanisms
    • Calibrated trust: appropriate reliance on AI outputs paired with necessary human oversight prevents both underuse (distrust) and overreliance (automation bias).
    • Complementarity–trust interaction: optimal outcomes occur when high capability utilisation (each agent — human or AI — applied to tasks it does best) aligns with calibrated trust; misalignment reduces or reverses gains.
    • Dynamic feedback loops: performance metrics and learning processes create reinforcing cycles (better metrics → better trust calibration/training → improved capability utilisation → higher performance).
  • Practical guidance for organisations: design teams around complementary task allocation, invest in trust‑calibration training, and implement performance metrics and feedback mechanisms to sustain learning.
  • Theoretical contribution: integrates psychological models of trust with cognitive capability optimisation to explain collective intelligence in AI‑augmented decision systems.
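The three mechanisms above can be illustrated with a toy simulation. The paper presents no formal model, so every functional form and parameter here is our own illustrative assumption: trust drifts toward observed AI reliability (calibration), calibration unlocks higher capability utilisation, and their product proxies team performance.

```python
# Toy sketch of the TCM-CI reinforcing feedback loop.
# All functional forms and parameters are illustrative assumptions,
# not taken from the paper.

def simulate(rounds=20, ai_reliability=0.8, trust=0.3, utilisation=0.4, lr=0.25):
    """Each round: measure performance, then let feedback adjust trust
    toward the AI's true reliability and nudge utilisation upward in
    proportion to how well-calibrated trust currently is."""
    history = []
    for _ in range(rounds):
        calibration = 1.0 - abs(trust - ai_reliability)   # 1 = perfectly calibrated
        performance = calibration * utilisation           # complementarity x trust
        history.append(performance)
        trust += lr * (ai_reliability - trust)            # feedback: metrics recalibrate trust
        utilisation = min(1.0, utilisation + lr * calibration * (1.0 - utilisation))
    return history

perf = simulate()
# Performance rises round over round as trust calibrates and utilisation grows
```

The point of the sketch is the reinforcing cycle: because each variable improves the others, gains compound rather than arriving in one step, which is the "dynamic feedback loop" claim in miniature.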

Data & Methods

  • Approach: systematic synthesis of literature across human–AI interaction, trust psychology, collective intelligence, and knowledge management. The contribution is primarily conceptual/theoretical rather than empirical.
  • Methods used in the paper: structured review and cross‑disciplinary integration to derive a unifying model (TCM‑CI), identification of mechanisms from prior empirical and theoretical studies, and formulation of actionable design principles.
  • Evidence base: aggregated findings from experimental studies, field observations, and organisational research on trust calibration, automation bias, task allocation, and learning/feedback effects. (No new primary quantitative dataset reported.)

Implications for AI Economics

  • Complementarity versus substitution: TCM‑CI operationalises when AI acts as a complement (higher productivity through specialised task allocation and calibrated trust) versus when it effectively substitutes human capital (potential displacement when trust is miscalibrated or complementarities are unused).
  • Returns to AI investment: value from AI depends not only on model accuracy but on organisational capability to calibrate trust and redeploy human skills—suggesting large heterogeneity in measured treatment effects of AI across firms and tasks.
  • Dynamic productivity gains: feedback loops imply non‑linear, path‑dependent productivity increases (learning-by-doing in human–AI coordination). Short‑run estimates may understate long‑run returns if learning is not accounted for.
  • Measurable constructs for empirical work: calibrated trust indices (survey/behavioral measures), capability utilisation rates (task mapping and time allocation), performance metrics (decision accuracy, speed, error rates), and learning loop indicators (rate of trust adjustment, frequency of feedback updates).
  • Policy and managerial relevance: policies encouraging transparency, explainability, training subsidies, and performance reporting can increase aggregate welfare by improving trust calibration and complementarity. Managers should focus on reallocation of tasks and measurement systems to capture combined human–AI value.
  • Research agenda for economists: estimate causal impacts of trust‑calibration interventions (RCTs, field experiments), incorporate trust/complementarity parameters into production‑function and firm‑level models, quantify heterogeneity across industries and task types, and model dynamic trajectories of human–AI adoption and learning.
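One way to make the production‑function suggestion above concrete is a CES sketch in which trust calibration scales the effective AI input and organisational learning accumulates with use. The notation is ours, not the paper's; it is a sketch of how the trust and complementarity parameters might enter, not a model the authors estimate.

```latex
% Illustrative only: the paper gives no formal model.
% CES production in human capability H_t and AI capability M_t,
% with the effective AI input scaled by trust calibration \tau_t \in [0,1]:
Y_t = A_t \left[ \alpha H_t^{\rho} + (1-\alpha)\,(\tau_t M_t)^{\rho} \right]^{1/\rho},
\qquad
\tau_t = 1 - \lvert \mathrm{trust}_t - \mathrm{reliability}_t \rvert .

% Reinforcing feedback loop: organisational learning grows with use,
A_{t+1} = A_t \,(1 + \lambda Y_t), \qquad \lambda > 0,
% so short-run estimates of AI's effect understate long-run returns.
```

Under this parameterisation, miscalibrated trust (\(\tau_t < 1\)) wastes AI capability even when \(M_t\) is high, and the learning term \(\lambda\) generates the path dependence noted above.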

Assessment

  • Paper Type: theoretical
  • Evidence Strength: n/a — Paper proposes a conceptual model based on literature synthesis and does not present new empirical tests or causal estimates, so it provides no direct causal evidence.
  • Methods Rigor: medium — Presents a systematic synthesis and integrates multiple literatures (human–AI interaction, knowledge management, psychology) to build a coherent framework, but the abstract gives no details on search/selection protocols, weighting of evidence, or formal model validation, and no empirical evaluation is reported.
  • Sample: A systematic synthesis of prior research in human–AI interaction and knowledge management; no original datasets or primary empirical sample are used (scope and selection criteria not specified in abstract).
  • Themes: human_ai_collab, org_design, productivity, skills_training, adoption
  • Generalizability:
    • Model is conceptual and not empirically validated, limiting claims about real‑world causal effects.
    • Focused on knowledge‑intensive decision‑making; may not apply to routine or low‑skill tasks.
    • Assumes availability of capable AI systems and organisational data infrastructures.
    • Neglects potential cross‑industry, firm‑size, and cross‑cultural variation in trust dynamics.
    • Presumes managers can measure and calibrate trust and capability utilisation, which may be hard to implement in practice.

Claims (8)

  • Organisations struggle to optimise human–AI collaboration in knowledge‑intensive decision‑making. — Outcome: Organizational Efficiency · Direction: negative · Confidence: medium · Measured outcome: ability to optimise human–AI collaboration / effectiveness of knowledge‑intensive decision‑making · Details: 0.01
  • The Trust–Complementarity Model of Collective Intelligence (TCM‑CI) explains how calibrated trust and complementary capability utilisation drive superior organisational performance. — Outcome: Organizational Efficiency · Direction: positive · Confidence: medium · Measured outcome: organisational performance · Details: 0.01
  • Calibrated trust maximises collective intelligence by balancing appropriate reliance with necessary oversight. — Outcome: Team Performance · Direction: positive · Confidence: medium · Measured outcome: collective intelligence (performance of human–AI team decision‑making) · Details: 0.01
  • Complementarity–trust interaction determines optimal performance when high capability utilisation combines with appropriate trust levels. — Outcome: Decision Quality · Direction: positive · Confidence: medium · Measured outcome: optimal performance of human–AI teams / decision outcomes · Details: 0.01
  • Dynamic feedback loops create reinforcing organisational learning cycles. — Outcome: Organizational Efficiency · Direction: positive · Confidence: medium · Measured outcome: organisational learning / reinforcement of human–AI collaboration practices · Details: 0.01
  • The framework provides practical guidance for executives designing human–AI teams, developing trust calibration training, and establishing performance metrics. — Outcome: Training Effectiveness · Direction: positive · Confidence: low · Measured outcome: practical outcomes (team design quality, training effectiveness, performance measurement adoption) · Details: 0.01
  • By integrating psychological trust factors with cognitive capability optimisation, this model offers actionable insights for knowledge management practitioners implementing AI‑augmented decision systems while advancing theoretical understanding of human–AI collaboration effectiveness. — Outcome: Organizational Efficiency · Direction: positive · Confidence: low · Measured outcome: actionability for practitioners / advancement of theoretical understanding / overall effectiveness of human–AI collaboration · Details: 0.01
  • The paper identifies three core mechanisms underlying calibrated trust and complementarity: (1) calibrated trust balancing reliance and oversight, (2) complementarity–trust interaction for optimal performance, and (3) dynamic feedback loops producing reinforcing learning cycles. — Outcome: Other · Direction: null_result · Confidence: high · Measured outcome: n/a (identification of theoretical mechanisms) · Details: 0.02

Notes