A survey of 627 studies identifies four distinct AI–human decision-making paradigms, shaped by AI–human dynamics and decision typologies; the framework helps organizations choose among intuitive, algorithmic, analytical, and hybrid decision modes as they integrate AI into operations.
Abstract
The interplay between humans and artificial intelligence (AI) in decision-making has become increasingly intricate and significant. Despite rapid advancements, the literature remains fragmented, with limited integrative frameworks to explain how AI-human dynamics and decision-making typologies shape outcomes. This study addresses this critical gap by conducting a systematic review and bibliometric analysis of 627 articles, culminating in a novel conceptual framework. The framework identifies two critical dimensions, AI-human dynamics and decision typologies, that shape decision outcomes and introduces four distinct paradigms of AI-human collaborative decision-making: adaptive intuitive decision, programmed algorithmic decision, interpretive analytical decision and integrative hybrid decision. By synthesizing these paradigms, this research advances the theoretical understanding of hybrid decision-making systems and provides actionable insights for organizations navigating complex and AI-driven environments. By elucidating the mechanisms and trade-offs inherent in AI-human collaboration, this work lays a robust foundation for future research on adaptive decision systems in an era marked by accelerating technological change.
Summary
Main Finding
Li & Tian (2026) synthesize 627 management-focused studies on AI–human decision-making and propose a conceptual framework that locates collaborative decision processes along a bounded–augmented rationality continuum. They identify four distinct AI–human decision-making paradigms—adaptive intuitive, programmed algorithmic, interpretive analytical, and integrative hybrid—each reflecting different distributions of cognitive labor, levels of autonomy, and decision contexts. The work argues that AI can substitute for, augment, or reconfigure human rationality depending on the paradigm and decision stage.
Key Points
- Literature gap and approach
- Research on AI–human collaboration is fragmented across information systems, management, psychology, and engineering. The authors combine a systematic literature review (SLR) with bibliometric mapping to integrate these streams.
- Four paradigms of AI–human collaborative decision-making
- Adaptive intuitive decision: human-centered, heuristic and experience-driven decisions where AI mainly supports perception/acceptance and small-scale augmentation.
- Programmed algorithmic decision: algorithm-dominant, structured automation (predictive models, optimization) where AI prescribes decisions or executes routine choices.
- Interpretive analytical decision: joint human–AI analytic deliberation; humans interpret model outputs, manage uncertainty, and handle ethics and bias.
- Integrative hybrid decision: exploratory, generative, high-uncertainty contexts (e.g., innovation); humans and AI co-create alternatives and strategies.
- Theoretical framing
- Uses bounded rationality as the core lens; reframes AI as a co-constructor of rationality that can extend, redistribute, or impose new bounds on human decision-making.
- Proposes a bounded–augmented rationality continuum to capture how rationality shifts with AI capability, human literacy/trust, and task context.
- Risks and trade-offs
- AI can reduce informational and computational constraints but may introduce new constraints: model opacity, bias amplification, misaligned training data, algorithmic dominance or underutilization (due to algorithm aversion/appreciation).
- Contributions
- Cross-disciplinary synthesis, typology of four paradigms, and the bounded–augmented rationality concept to guide future empirical and theoretical work.
Data & Methods
- Data source and scope
- Web of Science Core Collection (SCI-EXPANDED + SSCI), coverage 1956–Oct 2025.
- An initial keyword search combining AI-related terms and decision-making terms produced ~56,616 records; these were filtered to Management & Business categories and to A/A* journals on the Australian Business Deans Council (ABDC) list.
- Document-type restricted to articles and reviews; after abstract/keyword screening and duplicate removal the final sample was 627 articles.
- Analysis methods
- Systematic literature review for theoretical synthesis.
- Bibliometric science mapping using VOSviewer:
- Unit: all keywords; full counting.
- Link strength: association strength normalization.
- Keyword co-occurrence network clustered into four thematic groups (visualized with force-directed layout).
- PRISMA-style selection and thresholding applied to ensure conceptual salience.
- Methodological caveats (implicit)
- Dataset restricted to management/business and top-ranked journals, so technical computer-science literature and gray literature are underrepresented.
- Keyword selection and WoS indexing choices affect coverage and generalizability.
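The association-strength normalization used in the VOSviewer mapping above can be sketched in a few lines. The keyword records below are invented for illustration (the actual corpus is 627 Web of Science records), and VOSviewer's real pipeline adds occurrence thresholds, clustering, and force-directed layout on top of this normalization:

```python
from collections import Counter
from itertools import combinations

# Toy corpus: each record is the keyword list of one article.
# These keywords are hypothetical stand-ins for the WoS data.
records = [
    ["artificial intelligence", "decision-making", "bounded rationality"],
    ["artificial intelligence", "machine learning", "decision-making"],
    ["algorithm aversion", "decision-making"],
    ["artificial intelligence", "bounded rationality"],
]

# Full counting: each keyword counts once per record, and every
# within-record keyword pair counts as one co-occurrence.
occ = Counter(k for rec in records for k in set(rec))
co = Counter(frozenset(p) for rec in records
             for p in combinations(sorted(set(rec)), 2))

def association_strength(a, b):
    """Association-strength normalization (as in VOSviewer):
    AS(a, b) = c_ab / (occ_a * occ_b), proportional to the ratio of
    observed to expected co-occurrences under independence."""
    return co[frozenset((a, b))] / (occ[a] * occ[b])

print(association_strength("artificial intelligence", "decision-making"))
# 2 co-occurrences / (3 * 3 occurrences) = 2/9
```

The normalized link strengths would then feed a clustering step to produce the four thematic groups the authors report.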
Implications for AI Economics
- Modeling firm behavior and decision-making
- Incorporate paradigm heterogeneity: economic models should distinguish decision contexts where AI substitutes for labor and decision authority (programmed algorithmic) from those where it complements them (interpretive and integrative hybrid).
- Endogenize the distribution of cognitive labor: treat the allocation of tasks between humans and AI as a firm-level decision influenced by AI capability, worker skills, trust, and regulatory constraints.
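One way to make "endogenize the distribution of cognitive labor" concrete is a toy model in which the firm chooses the share of decision tasks delegated to AI under a CES aggregator. The functional form and every parameter value here are illustrative assumptions, not anything from the paper:

```python
def output(ai_share, human_skill=1.0, ai_capability=1.5, rho=0.5):
    """Toy CES aggregator over AI-handled and human-handled decision tasks.

    rho = 0.5 gives an elasticity of substitution of 1 / (1 - rho) = 2,
    so the two inputs are imperfect substitutes (partial complements).
    """
    ai_term = (ai_capability * ai_share) ** rho
    human_term = (human_skill * (1.0 - ai_share)) ** rho
    return (ai_term + human_term) ** (1.0 / rho)

# Grid search over the AI task share: the firm-level "allocation of
# cognitive labor" becomes a one-dimensional choice variable.
best_output, best_share = max((output(s / 100), s / 100)
                              for s in range(1, 100))
print(best_share, best_output)  # best share ~= 0.6, output ~= 2.5
```

For these parameters the closed-form optimum is s* = A^(rho/(1-rho)) / (A^(rho/(1-rho)) + H^(rho/(1-rho))) = 1.5 / 2.5 = 0.6, which the grid search recovers; richer versions would make capability, skills, trust, and regulation arguments of the allocation problem, as the bullet above suggests.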
- Productivity, adoption, and complementarities
- Productivity gains will vary by paradigm: large efficiency returns in algorithmic/routine domains; potentially larger innovation returns in hybrid paradigms but with higher uncertainty.
- Predict non-linear adoption dynamics driven by human literacy, algorithm aversion/appreciation, and organizational capability to interpret/model outputs.
- Reassess capital–labor substitution: AI may replace some routine cognitive tasks while complementing high-level interpretive skills, altering skill-biased technical change models.
- Market structure, competition, and strategy
- Algorithmic decision modes can centralize decision authority and scale standardized strategies, potentially increasing returns to scale and market concentration in some industries.
- Hybrid/generative paradigms may shift competitive advantage toward firms that combine AI capabilities with organizational learning and interpretive human capital.
- Welfare, distributional effects, and inequality
- Biases and data underrepresentation embedded in AI systems can translate into systematic market frictions and distributional harms (e.g., credit, hiring), requiring economists to quantify welfare impacts of algorithmic bias.
- Labor market implications: demand rises for diagnostic/interpretive skills; routine decision roles shrink—implications for retraining, wage distribution, and unemployment risk.
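The point about quantifying algorithmic bias can be illustrated with the "four-fifths rule", a screening heuristic from US employment-selection guidance, applied to hypothetical approval counts. All numbers below are invented for the sketch; the paper reports no such data:

```python
# Hypothetical approval decisions for two applicant groups under
# a single algorithmic screening model (made-up counts).
approvals = {
    "group_a": {"approved": 80, "total": 100},
    "group_b": {"approved": 50, "total": 100},
}

def approval_rate(group):
    d = approvals[group]
    return d["approved"] / d["total"]

# Disparate-impact ratio in the style of the four-fifths rule:
# min group rate / max group rate; values below 0.8 flag possible
# adverse impact worth a welfare analysis.
rates = [approval_rate(g) for g in approvals]
impact_ratio = min(rates) / max(rates)
print(impact_ratio)  # 0.5 / 0.8 = 0.625, below the 0.8 threshold
```

A welfare analysis would then attach surplus losses to the wrongly rejected applicants in the disadvantaged group, turning the descriptive ratio into the kind of distributional-harm estimate the bullet above calls for.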
- Policy and regulation
- Regulation should be paradigm-sensitive: algorithmic regimes need standards for validation, auditing, and liability; interpretive and hybrid regimes need transparency, explainability, and norms for human oversight.
- Antitrust and governance frameworks must consider how AI-enabled decision centralization affects competitive dynamics.
- Measurement and empirical work
- Need for new microdata capturing AI usage type (paradigm), degree of autonomy, human oversight, and decision outcomes to identify causal effects on productivity, prices, and labor demand.
- Encouraged methods: field experiments on algorithmic assistance, firm-level panel data linking AI deployment to performance, matched employer-employee studies to trace skill complementarities.
- Research agenda suggestions
- Theorize and empirically test how bounded–augmented rationality shifts firm optimality conditions, investment in human capital, and organizational design.
- Study cross-industry heterogeneity in which paradigms dominate and the implications for sectoral productivity and inequality.
- Incorporate behavioral responses (algorithm aversion/appreciation) into economic models of adoption and diffusion.
Concise takeaway: the paper provides a useful, management-oriented typology and theoretical lens (bounded–augmented rationality) that economists can operationalize to model heterogeneous impacts of AI on firm decisions, market structure, productivity, labor demand, and welfare. Empirical work, however, will require richer, paradigm-aware data and paradigm-specific policy responses.
Assessment
Claims (6)
| Claim | Category | Direction | Confidence | Outcome | Details |
|---|---|---|---|---|---|
| We conducted a systematic review and bibliometric analysis of 627 articles. | Other | null_result | high | Number of articles reviewed / literature corpus | n=627; 0.4 |
| The literature remains fragmented, with limited integrative frameworks to explain how AI-human dynamics and decision-making typologies shape outcomes. | Research Productivity | negative | high | Degree of integration/coherence of the academic literature; presence of integrative frameworks | n=627; 0.24 |
| We developed a novel conceptual framework that identifies two critical dimensions, AI-human dynamics and decision typologies, that shape decision outcomes. | Decision Quality | positive | high | Identification of critical dimensions affecting decision outcomes | n=627; 0.24 |
| The framework introduces four distinct paradigms of AI-human collaborative decision-making: adaptive intuitive decision, programmed algorithmic decision, interpretive analytical decision and integrative hybrid decision. | Decision Quality | positive | high | Classification of AI-human collaborative decision-making into four paradigms | n=627; 0.24 |
| By synthesizing these paradigms, this research advances the theoretical understanding of hybrid decision-making systems and provides actionable insights for organizations navigating complex and AI-driven environments. | Decision Quality | positive | high | Theoretical advancement and provision of actionable organizational insights | n=627; 0.24 |
| By elucidating the mechanisms and trade-offs inherent in AI-human collaboration, this work lays a robust foundation for future research on adaptive decision systems. | Research Productivity | positive | high | Foundation for future research on adaptive decision systems | n=627; 0.04 |