The Commonplace

Treat analytics as the interpretive bridge: a five-layer Human-AI decision framework argues that combining AI outputs with business-analytics interpretation, managerial judgment and feedback loops can raise decision quality, reduce algorithmic bias and boost employee confidence; however, the framework is conceptual and requires empirical validation to quantify productivity and labor impacts.

Designing Human–AI Collaborative Decision Analytics Frameworks to Enhance Managerial Judgment and Organizational Performance
Jannatul Ferdousi, Md Shokran, Md Saiful Islam · Fetched March 15, 2026 · Journal of Business and Management Studies
Source: Semantic Scholar · Paper type: theoretical · Evidence strength: low · Relevance: 7/10
The paper proposes a five-layer Human-AI Collaborative Decision Analytics Framework that positions analytics as the interpretive bridge between AI outputs and managerial judgment to improve decision quality, accountability, and workforce confidence while mitigating automation bias.

The rapid spread of artificial intelligence (AI) across U.S. organizations has radically altered managerial decision-making, while also increasing decision complexity, uncertainty, and accountability pressures. Despite the predictive and prescriptive potential of AI-based analytics, most organizations struggle to convert algorithmic outputs into sustainable managerial decisions. Overreliance on automation and weak explainability have produced low trust, poor integration between AI systems and human judgment, and weak organizational outcomes. The existing literature has largely taken an automation-centered view of decision support, offering little insight into how human experience and AI intelligence can be systematically coordinated through analytics. This paper addresses that gap by outlining a Human-AI Collaborative Decision Analytics Framework intended to improve managerial decisions and organizational performance. Following a conceptual research design, the study integrates interdisciplinary literature on managerial decision-making theory, business analytics, and AI governance to establish an integrative framework in which analytics serves as the focal interpretive intermediary between AI outputs and human decision-makers. The framework comprises five overlapping layers (data, AI analytics, business-analytics interpretation, human judgment, and feedback learning) that together support transparency, accountability, and contextual decision-making. It is applied across key organizational areas, with a primary focus on strategic management and workforce decision-making and a secondary focus on finance, operations, and marketing.
By embedding managerial control, ethical reasoning, and contextual evaluation into AI-assisted decision workflows, the framework mitigates algorithmic and automation bias and enhances workforce confidence. The study contributes to theory by developing a human-grounded view of decision analytics, and to practice by offering actionable guidance to executives and analytics leaders. The framework thereby supports responsible AI use, productivity, and long-term U.S. economic competitiveness.

Summary

Main Finding

A Human‑AI Collaborative Decision Analytics Framework can improve managerial decision-making and organizational performance by positioning analytics as the interpretive bridge between AI outputs and human judgment. The framework’s five overlapping layers (data, AI analytics, business-analytics interpretation, human judgment, feedback learning) enhance transparency, accountability, contextualization, and workforce confidence, reducing automation and algorithmic biases and supporting responsible AI adoption.

Key Points

  • Problem: Rapid AI deployment has increased decision complexity and accountability stress; organizations struggle to translate algorithmic outputs into sustainable managerial decisions due to low trust, weak explainability, overreliance on automation, and poor integration with human judgment.
  • Conceptual contribution: Proposes a multi‑layer Human‑AI Collaborative Decision Analytics Framework that treats analytics as the focal interpretive intervention between machine outputs and managers.
  • Framework layers:
    • Data — quality, provenance, preprocessing.
    • AI analytics — model development and outputs.
    • Business-analytics interpretation — translating model outputs into actionable insights.
    • Human judgment — managerial contextualization, ethical reasoning, control.
    • Feedback learning — monitoring outcomes and updating data/models/workflows.
  • Organizational focus: Primary emphasis on strategic management and workforce decision-making; secondary relevance for finance, operations, and marketing.
  • Governance and safeguards: Embeds managerial controls, ethical reasoning, and contextual evaluation to mitigate algorithmic and automation biases and to boost employee confidence.
  • Practical value: Offers actionable guidance for executives and analytics leaders to operationalize collaborative AI decision processes.
  • Theoretical value: Advances a human‑grounded view of decision analytics, moving beyond purely automation-centric perspectives.
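The five layers above describe a pipeline architecture with a feedback loop. As a minimal illustrative sketch only (all class and method names here are hypothetical, not from the paper), the layer ordering and the human-override safeguard could be modeled like this:

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical sketch of the five-layer pipeline; names and logic are
# illustrative stand-ins, not the authors' implementation.

@dataclass
class Decision:
    action: str
    rationale: str                 # human-readable justification (accountability)
    accepted_by_manager: bool      # False when the manager overrode the model

@dataclass
class CollaborativePipeline:
    feedback_log: List[dict] = field(default_factory=list)

    def run(self, raw_records: List[dict]) -> Decision:
        data = self.data_layer(raw_records)        # layer 1: quality, provenance
        score = self.ai_analytics(data)            # layer 2: model output
        insight = self.interpret(score)            # layer 3: business-analytics translation
        decision = self.human_judgment(insight)    # layer 4: managerial contextualization
        self.feedback_learning(decision)           # layer 5: monitor and update
        return decision

    def data_layer(self, records):
        # drop records failing a simple quality check
        return [r for r in records if r.get("value") is not None]

    def ai_analytics(self, data):
        # stand-in for a predictive model: mean of observed values
        return sum(r["value"] for r in data) / len(data)

    def interpret(self, score):
        # translate the raw score into an actionable, explainable insight
        return {"recommendation": "expand" if score > 0.5 else "hold",
                "explanation": f"model score {score:.2f} vs. threshold 0.50"}

    def human_judgment(self, insight):
        # the manager may override the model (automation-bias safeguard)
        override = insight["recommendation"] == "expand" and not self.context_ok()
        action = "hold" if override else insight["recommendation"]
        return Decision(action, insight["explanation"],
                        accepted_by_manager=not override)

    def context_ok(self):
        # placeholder for contextual and ethical evaluation
        return True

    def feedback_learning(self, decision):
        # record outcomes so data, models, and workflows can be revised
        self.feedback_log.append({"action": decision.action,
                                  "overridden": not decision.accepted_by_manager})
```

The key design point the framework argues for is that layers 3 and 4 sit between the model and the action: the recommendation always passes through an interpretable explanation and an explicit human acceptance step, and every outcome is logged for feedback learning.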

Data & Methods

  • Research design: Conceptual, integrative literature review.
  • Sources: Interdisciplinary synthesis across managerial decision-making theory, business analytics, and AI governance (no original empirical dataset reported).
  • Methodological scope: Framework development through theory integration and conceptual mapping of decision layers and organizational domains.
  • Limitations of method: No empirical validation, causal inference, or quantitative effect sizes provided; implementation complexity and heterogeneity across firms/sectors not empirically assessed.

Implications for AI Economics

  • Productivity & growth: Effective Human‑AI decision integration may increase firm-level productivity and aggregate competitiveness by improving decision quality and reducing costly errors from misapplied automation.
  • Diffusion & returns to AI investment: Economic returns to AI depend on organizational ability to implement interpretive analytics and human oversight—explainability, training, and governance are key complements that shape adoption rates and realized gains.
  • Labor and skills: Framework emphasizes augmenting human judgment, implying demand for managerial/analytic skills, training investments, and potential reshaping (not pure replacement) of some job tasks.
  • Risk mitigation & regulation: Embedding accountability and ethical controls can lower regulatory/compliance risks and reduce social costs associated with biased or opaque AI-driven decisions.
  • Measurement & policy research needs: Calls for empirical work measuring how the framework’s practices affect productivity, hiring, wages, sectoral adoption, and distributional outcomes; need for cost–benefit analyses of implementation and heterogeneous impacts across firm size and industries.
  • Directions for economists: Testable hypotheses include (a) firms that implement interpretive analytics and feedback loops achieve higher ROI from AI, (b) such implementations moderate negative labor displacement effects by facilitating task reallocation, and (c) governance measures reduce costly litigation/regulatory interventions and increase consumer/employee trust.

Limitations to note: conceptual-only evidence; empirical validation required to quantify economic impacts and generalizability across sectors and firm sizes.

Assessment

  • Paper type: theoretical
  • Evidence strength: low — Conceptual and integrative literature review with no original empirical data, no causal identification strategy, and no quantitative effect estimates; claims are plausibly theorized but not empirically validated.
  • Methods rigor: medium — Presents a coherent, interdisciplinary synthesis and a clearly articulated five-layer framework that is useful for theory-building and practice; however, it lacks a documented systematic review protocol, empirical tests, robustness checks, or operationalization of constructs.
  • Sample: No original dataset; an interdisciplinary synthesis of existing literature in managerial decision-making, business analytics, AI/ML governance, and organizational studies (conceptual sources and prior empirical studies cited but not re-analyzed).
  • Themes: human_ai_collab · org_design · adoption · productivity · skills_training · governance
  • Generalizability: No empirical validation, so external validity to firms or sectors is uncertain; heterogeneity across industries and business functions not assessed; unclear applicability across firm sizes and resource constraints; implementation complexity and costs vary widely across organizational contexts; cross-country and regulatory differences not addressed; worker heterogeneity (skills, roles, cultures) and union/regulatory environments not empirically considered.

Claims (10)

  • The fast spread of artificial intelligence (AI) in U.S. organizations has radically altered the managerial decision-making process.
    Outcome area: Decision Quality · Direction: mixed · Confidence: medium · Measures: managerial decision-making process (structure, speed, inputs)
  • AI adoption has augmented complexity, uncertainty in decision-making, and accountability stresses for managers.
    Outcome area: Decision Quality · Direction: negative · Confidence: medium · Measures: decision complexity, decision uncertainty, accountability stresses
  • Most organizations have difficulties converting algorithmic results into sustainable managerial decisions due to low levels of trust, lack of explanation, and poor integration between AI systems and human judgment.
    Outcome area: Decision Quality · Direction: negative · Confidence: medium · Measures: conversion of algorithmic outputs into sustainable managerial decisions; trust; explainability; system–human integration
  • Current literature has primarily focused on automation-based views of decision support and lacks insight into systematic human–AI coordination aided by analytics.
    Outcome area: Other · Direction: negative · Confidence: medium · Measures: coverage of topics in AI decision-support literature (automation-centric vs. human–AI coordination)
  • This paper outlines a Human–AI Collaborative Decision Analytics Framework integrating five overlapping layers: data, AI analytics, business analytics interpretation, human judgment, and feedback learning.
    Outcome area: Other · Direction: null_result · Confidence: high · Measures: structure/components of the proposed Human–AI Collaborative Decision Analytics Framework
  • The framework is depicted across organization areas with primary focus on strategic management and workforce decision-making and secondary focus on finance, operations, and marketing.
    Outcome area: Organizational Efficiency · Direction: null_result · Confidence: high · Measures: organizational domains targeted by the framework (strategic management, workforce, finance, operations, marketing)
  • Embedding managerial control, ethical reasoning, and contextual evaluation in AI-assisted workflows minimizes effects of algorithmic bias and automation bias and enhances workforce confidence.
    Outcome area: AI Safety And Ethics · Direction: positive · Confidence: low · Measures: algorithmic bias, automation bias, workforce confidence
  • Analytics can serve as the focal interpretive intermediary between AI outputs and human decision-makers, facilitating transparency, accountability, and contextual decision-making.
    Outcome area: Decision Quality · Direction: positive · Confidence: medium · Measures: transparency, accountability, contextualization in decision-making mediated by analytics
  • The study contributes to theory by developing a human-grounded decision analytics perspective and to practice by providing practical advice to executives and analytics leaders.
    Outcome area: Other · Direction: positive · Confidence: high · Measures: theoretical contribution (human-grounded perspective); practical guidance for executives/analytics leaders
  • The presented framework contributes to the responsible use of AI, productivity, and long-term economic competitiveness in the United States.
    Outcome area: Fiscal And Macroeconomic · Direction: positive · Confidence: speculative · Measures: responsible AI adoption, organizational productivity, long-term economic competitiveness (U.S.-level)

Notes