The Commonplace

AI boosts organizational decision-making only when paired with human judgment and institutional design; treating algorithms as replacements risks poorer outcomes. Firms must design task allocation, feedback, interpretability, and accountability systems so algorithmic strengths amplify—not displace—managerial sense-making.

Reframing Organizational Decision-Making in the Age of Artificial Intelligence: A Conceptual Review of Human–AI Augmentation
Dr. Pratik B. Upase · Fetched March 15, 2026 · International Journal of Scientific Research in Engineering and Management
Semantic Scholar · Type: review_meta · Evidence: n/a · Relevance: 8/10 · DOI · Source PDF
AI is best understood as an augmentation mechanism for managerial judgment—decision quality and organizational outcomes emerge from interactions among human judgment, algorithmic intelligence, and organizational context, so effective use requires socio-technical design, governance, and preserved human oversight.

Abstract

The increasing integration of artificial intelligence (AI) into organizational decision-making has fundamentally reshaped how managers analyze information, evaluate alternatives, and exercise judgment. Traditional decision-making theories emphasize human cognition, experience, and intuition, yet extensive research demonstrates that human judgment is constrained by bounded rationality, cognitive biases, and information-processing limitations. In parallel, advances in algorithmic intelligence have enabled organizations to augment human decision-making through data-driven insights, predictive analytics, and automated reasoning systems. Despite growing adoption, existing research on AI-driven decision-making remains fragmented and often framed through substitution-oriented narratives that position AI as a replacement for human judgment. This study presents a conceptual meta-analysis of interdisciplinary literature on AI-augmented decision-making in organizations. By synthesizing research from decision sciences, management, and information systems, the paper traces the evolution of organizational decision-making from human-centric models to hybrid human–AI systems. Building on this synthesis, the study develops an integrative conceptual framework that explains how human judgment, algorithmic intelligence, and organizational context interact to shape decision quality and organizational outcomes. The paper contributes to theory by reframing AI as an augmentation mechanism rather than a substitute for managerial judgment and by extending organizational decision theory to account for socio-technical decision systems. It further identifies key research gaps and proposes a future research agenda focused on human–AI interaction, organizational governance, and ethical accountability. From a practical perspective, the study highlights the importance of designing decision systems that leverage AI’s analytical strengths while preserving human oversight, responsibility, and strategic sense-making.

Keywords: AI-augmented decision-making; human judgment; algorithmic intelligence; organizational decision-making; conceptual meta-analysis

Summary

Main Finding

The paper argues that AI in organizations should be conceptualized primarily as an augmentation mechanism for human judgment—not a pure substitute. Through a conceptual meta-analysis, the author builds an integrative socio-technical framework showing that decision quality and organizational outcomes emerge from structured interaction between human judgment, algorithmic intelligence, and organizational context. Value arises from complementarity, human oversight, and governance, not raw automation.

Key Points

  • Framing shift: Moves from substitution-centric narratives (AI replaces managers) to augmentation/complementarity (human + algorithm hybrid cognitive system).
  • Strengths and limits:
    • Human judgment: context sensitivity, tacit knowledge, ethical reasoning, accountability; limited by bounded rationality and cognitive biases.
    • Algorithmic intelligence: scalability, pattern recognition, consistency, speed; limited by data bias, contextual blindness, opacity, and ethical/legitimacy concerns.
  • Interaction matters: Decision outcomes depend on how human actors perceive, interpret, and interact with algorithmic outputs (e.g., automation bias, algorithm aversion).
  • Core assumptions of the proposed framework:
    • Decision-making is socio-technical.
    • Human and algorithmic capabilities are complementary.
    • Decision quality depends on interaction design, not mere substitution.
    • Organizational context (governance, culture, decision rights) conditions AI effectiveness.
  • Conceptual constructs: human judgment (context, ethics, responsibility), algorithmic intelligence (ML, predictive analytics), and mediating/organizational factors (interface design, governance, training, incentives).
  • Research gaps/high-priority agenda: empirical tests of human–AI complementarities, design of human-in-the-loop systems, organizational governance of AI, metrics for accountability, distributional and ethical consequences.

Data & Methods

  • Type of study: Conceptual meta-analysis / thematic meta-synthesis (no new empirical data).
  • Literature base: Interdisciplinary synthesis across decision sciences, management, information systems, operations research, and AI/CS scholarship. Key reference anchors include Simon (bounded rationality), Kahneman & Tversky (heuristics/bias), Brynjolfsson & McAfee (AI and productivity), Jarrahi (human–AI interaction), Dietvorst et al. (algorithm aversion), Parasuraman & Riley (automation bias).
  • Methodological approach:
    • Systematic thematic synthesis of prior studies to identify recurring patterns, complementarities, and tensions.
    • Theory-building via integration of behavioral, technical, and organizational findings into a conceptual framework.
  • Limitations noted: conceptual (theory-building) rather than empirical; calls for future empirical programs to operationalize and test framework constructs.

Implications for AI Economics

  • Modeling implications:
    • Replace pure-substitution models with task-based models that allow complementarity between AI and labor (e.g., AI augments decision tasks, raising the marginal product of complementary human labor).
    • Incorporate organizational frictions (governance costs, monitoring, trust) and interaction/design features into production functions and adoption choice models.
    • Model heterogeneous firm adoption where returns to AI depend on managerial capability, data assets, and organizational governance—explaining firm-level productivity dispersion.
  • Labor-market and distributional effects:
    • Expect skill-biased augmentation: complementarities can raise demand for workers who interpret, govern, and coordinate AI (upskilling), while repetitive rule-based tasks may be automated.
    • Short-term displacement risk is mediated by firms’ ability to reallocate tasks and retrain; long-term wage and employment effects depend on complementarity strength and diffusion.
  • Measurement and empirical strategy recommendations:
    • Use firm-level panel data on AI adoption, task content (task-level measures), productivity, and wages to estimate complementarity elasticities.
    • Exploit randomized interventions (A/B tests) or natural experiments where algorithmic aids are rolled out to measure causal effects on decision quality, outcomes, and labor inputs.
    • Collect microdata on decision outcomes (accuracy, speed, consistency), human oversight intensity, interface design, and governance practices.
  • Market structure and competition:
    • Data- and algorithm-driven advantages may create increasing returns to scale and concentration—economists should model how data access and algorithmic performance translate into market power.
    • Assess regulatory implications for transparency, liability, and fairness that can affect adoption costs and competitive dynamics.
  • Policy implications:
    • Policies should support retraining and complementary human capabilities, mandate transparency/record-keeping for high-stakes decisions, and design governance incentives to ensure accountability.
    • Antitrust and privacy regulation considerations: data governance affects entry, competition, and distribution of gains from AI.
  • Empirical variables to operationalize from the framework:
    • Algorithm accuracy/performance; error types and frequency.
    • Human oversight level (manual override rates, review time).
    • Decision quality metrics (error rates, downstream outcomes, welfare measures).
    • Organizational governance (decision rights, audit processes, incentives).
    • Labor outcomes (task reallocation, wages, employment by task).
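The task-based modeling recommendation above can be made concrete with a toy CES production function in which a decision task combines human judgment H and algorithmic input A; the elasticity of substitution σ governs whether the two behave as complements (σ < 1, the augmentation regime) or substitutes (σ > 1). This is a sketch, not anything estimated in the paper — all parameter values are illustrative:

```python
def ces_output(H, A, alpha=0.5, sigma=0.5):
    """Decision-task output from human judgment H and algorithmic input A.

    CES aggregator: sigma < 1 means the inputs are complements
    (augmentation regime); sigma > 1 means substitutes (replacement
    regime). alpha and sigma here are hypothetical, not estimates.
    """
    rho = (sigma - 1) / sigma      # CES exponent
    if rho == 0:                   # sigma == 1: Cobb-Douglas limit
        return H ** alpha * A ** (1 - alpha)
    return (alpha * H ** rho + (1 - alpha) * A ** rho) ** (1 / rho)

# Under strong complementarity, piling on more AI without human
# judgment yields sharply diminishing returns:
low_human = ces_output(H=1, A=4)   # 1.60
more_ai   = ces_output(H=1, A=8)   # ~1.78 -- doubling A adds little
balanced  = ces_output(H=4, A=4)   # 4.00 -- balanced inputs dominate
```

The firm-heterogeneity point then falls out directly: firms with more of the complementary inputs (managerial capability, data assets, governance) earn higher returns from the same algorithmic input, which is one candidate explanation for firm-level productivity dispersion.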

Overall, the paper suggests economists studying AI should move beyond “substitute vs. complement” rhetoric to model the institutional and interactional channels through which algorithmic tools augment human decision-making, and empirically quantify how these socio-technical complementarities shape productivity, distributional outcomes, and market structure.
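The A/B-rollout measurement strategy recommended above can be sketched end to end on simulated data: a 2×2 difference-in-differences comparing decision error rates in treated versus control firms, before and after an algorithmic aid rolls out. The baseline error rate, common trend, and treatment effect below are invented purely for illustration:

```python
import random

random.seed(0)

def error_rate(treated, post, n=5000,
               base=0.20, trend=-0.02, effect=-0.05):
    """Simulated share of erroneous decisions for one firm-period cell.

    `effect` is the causal error reduction from the algorithmic aid,
    applied only to treated firms after rollout. All numbers are
    hypothetical, chosen only to demonstrate the design.
    """
    p = base + (trend if post else 0.0) + (effect if treated and post else 0.0)
    return sum(random.random() < p for _ in range(n)) / n

# 2x2 design: treated vs. control firms, before vs. after rollout
y_t0 = error_rate(treated=True,  post=False)
y_t1 = error_rate(treated=True,  post=True)
y_c0 = error_rate(treated=False, post=False)
y_c1 = error_rate(treated=False, post=True)

# Difference-in-differences nets out the common trend and recovers
# an estimate close to the true -0.05 effect:
did = (y_t1 - y_t0) - (y_c1 - y_c0)
```

In a real study, `error_rate` would be replaced by observed decision-quality outcomes from firm microdata, with oversight intensity, interface design, and governance practices collected as moderators.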

Assessment

Paper Type: review_meta

Evidence Strength: n/a — This is a conceptual synthesis and theoretical framework with no primary empirical estimation or causal identification; it draws on existing empirical and theoretical literatures but does not produce new causal evidence.

Methods Rigor: medium — The paper systematically integrates interdisciplinary literatures and builds a coherent socio-technical framework, showing careful theoretical reasoning and literature mapping; however, it lacks formal empirical tests, formal models with derived predictions, or systematic meta-analytic quantitative synthesis that would raise rigor to high.

Sample: No primary sample or new dataset; the paper synthesizes prior work across decision sciences, management, information systems, and relevant empirical studies on algorithms and organizational decision-making.

Themes: human_ai_collab, org_design, governance, productivity, skills_training, labor_markets

Generalizability:
  • Conceptual rather than empirical — conclusions are not directly quantified and require empirical validation across contexts.
  • The framework may perform differently across sectors (e.g., high-frequency trading vs. healthcare) where decision stakes, regulatory constraints, and data availability differ.
  • Relies on extant literature, which may be biased toward settings with published studies (Anglo-American firms, certain industries).
  • Does not produce parameter estimates that can be directly applied to macro or cross-country policy or aggregate productivity projections.

Claims (9)

Each claim is listed with its outcome category, direction, confidence, and outcome details (original score in parentheses):

  • "The increasing integration of artificial intelligence (AI) into organizational decision-making has fundamentally reshaped how managers analyze information, evaluate alternatives, and exercise judgment." — Outcome: Decision Quality; Direction: mixed; Confidence: medium; Details: managerial decision processes (information analysis, alternative evaluation, judgment) (0.02)
  • "Human judgment is constrained by bounded rationality, cognitive biases, and information-processing limitations." — Outcome: Decision Quality; Direction: negative; Confidence: high; Details: human judgment accuracy/quality and cognitive processing capacity (0.04)
  • "Advances in algorithmic intelligence have enabled organizations to augment human decision-making through data-driven insights, predictive analytics, and automated reasoning systems." — Outcome: Decision Quality; Direction: positive; Confidence: medium; Details: augmentation of decision-making (availability/use of data-driven insights, predictive analytics, automated reasoning) (0.02)
  • "Existing research on AI-driven decision-making remains fragmented and often framed through substitution-oriented narratives that position AI as a replacement for human judgment." — Outcome: Research Productivity; Direction: negative; Confidence: medium; Details: research framing (substitution-oriented vs augmentation-oriented narratives in literature) (0.02)
  • "This study presents a conceptual meta-analysis of interdisciplinary literature on AI-augmented decision-making in organizations." — Outcome: Research Productivity; Direction: null_result; Confidence: high; Details: scope and integration of interdisciplinary literature (conceptual synthesis) (0.04)
  • "The paper develops an integrative conceptual framework that explains how human judgment, algorithmic intelligence, and organizational context interact to shape decision quality and organizational outcomes." — Outcome: Decision Quality; Direction: positive; Confidence: high; Details: decision quality and organizational outcomes as shaped by interaction among human judgment, algorithmic intelligence, and context (0.04)
  • "The study reframes AI as an augmentation mechanism rather than a substitute for managerial judgment and extends organizational decision theory to account for socio-technical decision systems." — Outcome: Decision Quality; Direction: positive; Confidence: high; Details: theoretical framing of AI's role in organizational decision theory (augmentation vs substitution) (0.04)
  • "The paper identifies key research gaps and proposes a future research agenda focused on human–AI interaction, organizational governance, and ethical accountability." — Outcome: Research Productivity; Direction: null_result; Confidence: high; Details: presence and topics of recommended future research (human–AI interaction, governance, ethics) (0.04)
  • "From a practical perspective, the study highlights the importance of designing decision systems that leverage AI’s analytical strengths while preserving human oversight, responsibility, and strategic sense-making." — Outcome: Decision Quality; Direction: positive; Confidence: medium; Details: design principles for decision systems (balance of AI analytics and human oversight/responsibility/sense-making) (0.02)

Notes