The Commonplace

Firms that adopt generative AI report better decision quality and greater strategic agility, and these gains are linked to stronger organizational performance. However, the benefits depend on company culture and technological readiness, and the evidence is associative rather than causal.

The Strategic Impact of Generative Artificial Intelligence on Organizational Decision Making
Muhammad Nawaz Khan · March 22, 2026 · Bolan International Journal of Research Insights (BIJRI)
openalex · correlational · low evidence · 7/10 relevance · DOI · Source PDF
Using a multi-industry survey and PLS-SEM, the paper finds that firms reporting generative AI adoption also report higher decision quality and strategic agility, which are associated with improved organizational performance, with effects moderated by organizational culture and technological readiness.

Generative Artificial Intelligence (AI) has emerged as a transformative technology with profound implications for organizational decision-making processes. Unlike traditional AI systems, generative AI can autonomously produce novel content, including text, images, models, and scenarios, enabling organizations to analyze complex data, forecast trends, and simulate strategic alternatives. Its application spans business intelligence, strategic planning, risk assessment, marketing, and innovation management. While generative AI offers substantial benefits for improving decision quality, speed, and efficiency, it also raises challenges related to trust, interpretability, and ethical deployment within corporate environments.

Organizational decision-making is increasingly data-driven, and generative AI systems facilitate the synthesis of structured and unstructured information from diverse sources. These systems enable managers to explore multiple decision pathways, identify potential risks, and optimize strategic choices. Furthermore, generative AI can augment human creativity by producing innovative solutions and scenario-planning alternatives that may not emerge through conventional analytical approaches. However, reliance on automated content generation also introduces risks of cognitive overreliance, algorithmic bias, and strategic misalignment.

This study examines the strategic impact of generative AI on organizational decision-making by developing a conceptual framework that investigates the relationships between generative AI adoption, decision quality, strategic agility, and organizational performance. Empirical data were collected from senior managers, decision-makers, and AI adoption specialists across multiple industries. Partial Least Squares Structural Equation Modeling (PLS-SEM), implemented in SmartPLS, was applied to assess the relationships between constructs.

The results indicate that generative AI adoption significantly enhances decision quality and strategic agility, which in turn positively influence organizational performance. Moreover, organizational culture and technological readiness moderate the effectiveness of generative AI integration in decision-making processes. This study contributes to the literature on AI-driven strategic management by providing empirical evidence of generative AI's role in shaping organizational decision-making outcomes and offering actionable insights for successful AI integration in corporate strategy.

Summary

Main Finding

Generative AI adoption significantly improves organizational decision quality and strategic agility, and these improvements in turn positively affect organizational performance. Organizational culture and technological readiness moderate how effectively generative AI translates into better decisions and greater agility. (Survey-based evidence from 190 senior managers, decision-makers, and AI specialists; analysis via PLS-SEM.)

Key Points

  • Scope: Examines the strategic impact of generative AI (e.g., foundation models, automated scenario generation) on strategic decision-making, using a conceptual model grounded in the Technology Acceptance Model, the Resource-Based View, and strategic decision-making theory.
  • Core constructs: Generative AI Adoption (GAA), Decision Quality (DQ), Strategic Agility (SA), Organizational Performance (OP); moderators: Organizational Culture (OC) and Technological Readiness (TR).
  • Hypotheses tested:
    • H1: GAA → Decision Quality (positive)
    • H2: GAA → Strategic Agility (positive)
    • H3: Decision Quality → Organizational Performance (positive)
    • H4: Strategic Agility → Organizational Performance (positive)
    • H5: Organizational Culture moderates GAA → Decision Quality
    • H6: Technological Readiness moderates GAA → Strategic Agility
  • Empirical result summary: All main hypotheses supported—generative AI adoption correlates with higher decision quality and agility, which are associated with improved performance. Organizational culture and technological readiness amplify these effects.
  • Measurement reliability/validity:
    • Cronbach’s alpha: 0.86–0.90 across constructs (all > 0.70).
    • Composite reliability: ~0.90–0.93 (strong).
    • AVE: 0.65–0.71 (convergent validity).
    • Fornell-Larcker: square roots of AVE (0.812–0.843) exceed inter-construct correlations.
    • HTMT: all values 0.57–0.74 (< 0.85), supporting discriminant validity.
  • Caveats noted by authors: cross-sectional design, no industry-specific longitudinal causal estimates, and ethical/interpretability risks that require governance.
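For readers who want to see how the measurement-model statistics listed above are computed, the following is a minimal Python sketch on synthetic Likert-style data. The construct and item names (GAA1…, DQ1…), the factor loadings, and the sample draws are all illustrative assumptions, not the paper's instrument or data; the formulas for Cronbach's alpha, AVE, and the HTMT ratio are the standard ones.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

def cronbach_alpha(items: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

def ave(loadings: np.ndarray) -> float:
    """Average Variance Extracted: mean of squared standardized loadings."""
    return float(np.mean(np.square(loadings)))

def htmt(items_a: pd.DataFrame, items_b: pd.DataFrame) -> float:
    """Heterotrait-monotrait ratio: mean cross-construct item correlation
    over the geometric mean of the two within-construct mean correlations."""
    corr = pd.concat([items_a, items_b], axis=1).corr().to_numpy()
    ka = items_a.shape[1]
    hetero = corr[:ka, ka:].mean()
    def mono(block):
        return block[np.triu_indices_from(block, k=1)].mean()
    return hetero / np.sqrt(mono(corr[:ka, :ka]) * mono(corr[ka:, ka:]))

# Simulate two correlated constructs, each measured by 4 items (N = 190,
# matching the study's valid responses; loadings are arbitrary choices).
n = 190
f1 = rng.normal(size=n)
f2 = 0.6 * f1 + 0.8 * rng.normal(size=n)   # inter-construct correlation ~0.6
gaa = pd.DataFrame({f"GAA{i}": 0.8 * f1 + 0.6 * rng.normal(size=n) for i in range(1, 5)})
dq = pd.DataFrame({f"DQ{i}": 0.8 * f2 + 0.6 * rng.normal(size=n) for i in range(1, 5)})

print(f"alpha(GAA)    = {cronbach_alpha(gaa):.2f}")  # high when items load strongly
print(f"HTMT(GAA, DQ) = {htmt(gaa, dq):.2f}")        # < 0.85 supports discriminant validity
print(f"AVE (example loadings) = {ave(np.array([0.81, 0.84, 0.79, 0.83])):.2f}")
```

With loadings around 0.8, alpha lands in the high-0.8 range reported by the paper; the HTMT value tracks the true inter-construct correlation and stays well under the 0.85 threshold.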

Data & Methods

  • Data: Online structured questionnaire; target sample = senior managers, decision-makers, and AI adoption specialists across multiple industries. 250 questionnaires distributed → 190 valid responses used (a 76% usable-response rate).
  • Measurement: 5-point Likert items adapted from prior AI adoption and strategic management literature.
  • Analysis:
    • SmartPLS (PLS-SEM) used to evaluate measurement and structural models.
    • Reliability metrics: Cronbach’s alpha, composite reliability.
    • Validity: AVE, Fornell-Larcker criterion, HTMT ratios.
    • Structural assessment: path coefficients, t-values, R² (authors report significant paths and meaningful R² but do not report specific coefficient values or p-values in the excerpt).
  • Study limitations: non-experimental cross-sectional survey (limits causal inference), potential self-reporting / common-method bias (not explicitly detailed), sample size moderate (N=190).
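The hypothesized structure (H1-H6) can be sketched as regressions with interaction terms for the two moderation hypotheses. This is an OLS analogue on observed composites, not the paper's latent-variable PLS-SEM, and all construct values, path coefficients, and noise levels below are synthetic assumptions chosen to mirror the hypothesized positive signs.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 190  # matches the study's valid responses

def ols(y: np.ndarray, x_cols: list, names: list) -> dict:
    """Least-squares fit with an intercept; returns {name: coefficient}."""
    X = np.column_stack([np.ones(len(y))] + x_cols)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return dict(zip(["const"] + names, beta))

# Synthetic standardized composites for the constructs.
GAA = rng.normal(size=n)  # Generative AI Adoption
OC = rng.normal(size=n)   # Organizational Culture (moderator, H5)
TR = rng.normal(size=n)   # Technological Readiness (moderator, H6)

# Outcomes generated with positive direct paths and positive interactions;
# the "true" values (0.5, 0.3, ...) are arbitrary illustrative choices.
DQ = 0.5 * GAA + 0.3 * OC + 0.3 * GAA * OC + rng.normal(scale=0.8, size=n)
SA = 0.5 * GAA + 0.3 * TR + 0.3 * GAA * TR + rng.normal(scale=0.8, size=n)
OP = 0.4 * DQ + 0.4 * SA + rng.normal(scale=0.8, size=n)

# H1 + H5: decision quality on adoption, culture, and their product
# (the coefficient on GAAxOC is the moderation test).
m_dq = ols(DQ, [GAA, OC, GAA * OC], ["GAA", "OC", "GAAxOC"])
# H2 + H6: strategic agility on adoption, readiness, and their product.
m_sa = ols(SA, [GAA, TR, GAA * TR], ["GAA", "TR", "GAAxTR"])
# H3 + H4: organizational performance on decision quality and agility.
m_op = ols(OP, [DQ, SA], ["DQ", "SA"])

print({k: round(v, 2) for k, v in m_dq.items()})
```

A positive, significant interaction coefficient (GAAxOC or GAAxTR) is what "moderation" means operationally: the return to adoption is larger in firms with stronger culture or readiness.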

Implications for AI Economics

  • Productivity and firm performance:
    • Evidence that generative AI can be a firm-level productivity enhancer by improving decision accuracy and speed and by increasing strategic agility—suggesting positive returns to AI investments conditional on organizational readiness.
    • Heterogeneous returns likely: benefits depend on organizational culture and technological readiness (complementarities with human capital and infrastructure).
  • Complementarity vs. substitution:
    • Results support AI as a complementary strategic asset (augmenting managerial cognition and scenario analysis). This implies potential demand for higher-skilled managerial and technical labor rather than pure labor displacement in decision-intensive roles—though substitution risks remain in routine tasks.
  • Adoption constraints and diffusion:
    • Technological readiness and culture are binding constraints on capture of AI gains—policy and firm strategy that lower adoption frictions (infrastructure, training, governance frameworks) will raise realized economic gains and speed diffusion.
  • Market structure & competition:
    • Firms that successfully integrate generative AI—especially those with stronger culture/infrastructure—may obtain sustained competitive advantages, potentially increasing industry concentration. Economists should monitor whether AI adoption contributes to persistent productivity gaps across firms and to market power dynamics.
  • Measurement & causal inference needs in AI economics:
    • Cross-sectional survey evidence is useful but insufficient for causal claims about productivity/market effects. Economic research should prioritize:
      • Longitudinal and panel datasets to measure within-firm changes in performance pre/post-AI adoption.
      • Quasi-experimental designs (difference-in-differences, instrumental variables, regression discontinuity) to identify causal returns.
      • Fine-grained measures of decision quality and strategic outcomes that map to economic performance (revenues, margins, innovation outputs).
  • Externalities, governance costs, and biases:
    • Generative AI risks (algorithmic bias, opacity, ethical issues) impose potential governance and reputational costs that offset some benefits. Economic models should incorporate these internalization costs, compliance expenses, and potential regulatory impacts.
  • Policy and managerial implications:
    • For policymakers: promote digital infrastructure, workforce upskilling, data access frameworks, and responsible-AI governance to amplify social returns of AI adoption.
    • For firms: invest in technological readiness, develop cultures that support experimentation and interpretability, and set governance to mitigate bias and overreliance.
  • Research agenda for AI economists:
    • Quantify heterogeneity in returns by industry, firm size, and data endowments.
    • Estimate complementarity between generative AI, data assets, and managerial skills.
    • Assess macroeconomic implications: labor market transitions, productivity dispersion, and welfare effects of large-scale generative AI diffusion.
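The quasi-experimental designs called for above can be illustrated with a minimal two-group difference-in-differences sketch on a synthetic firm panel. The firm counts, adoption timing, and the 0.10 log-point "true" effect are all hypothetical; the point is only to show the identification logic that the cross-sectional survey cannot deliver.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
firms, years = 200, 6
adopt_year = 3  # treated firms adopt generative AI in year 3

panel = pd.DataFrame(
    [(f, t) for f in range(firms) for t in range(years)],
    columns=["firm", "year"],
)
treated = rng.random(firms) < 0.5
panel["treated"] = panel["firm"].map(dict(enumerate(treated))).astype(int)
panel["post"] = (panel["year"] >= adopt_year).astype(int)
panel["D"] = panel["treated"] * panel["post"]

# Outcome: log productivity with firm fixed effects, a common year trend,
# and a true treatment effect of 0.10 log points after adoption.
firm_fe = rng.normal(scale=0.5, size=firms)
panel["log_prod"] = (
    panel["firm"].map(dict(enumerate(firm_fe)))
    + 0.02 * panel["year"]
    + 0.10 * panel["D"]
    + rng.normal(scale=0.1, size=len(panel))
)

# Classic 2x2 DiD: (treated post - treated pre) - (control post - control pre).
# Firm effects and the common trend difference out of this contrast.
g = panel.groupby(["treated", "post"])["log_prod"].mean()
did = (g.loc[(1, 1)] - g.loc[(1, 0)]) - (g.loc[(0, 1)] - g.loc[(0, 0)])
print(f"DiD estimate of adoption effect: {did:.3f}")
```

In real applications the same contrast would be estimated with two-way fixed effects on firm-level administrative or financial data, with pre-trend checks standing in for the parallel-trends assumption that is built into this simulation.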


Assessment

Paper Type: correlational

Evidence Strength: low — Findings are based on self-reported, cross-sectional survey data and PLS-SEM, which identify associations but do not establish causal effects; risks include common-method bias, reverse causality, omitted-variable bias, and potential sample selection that weaken causal inference.

Methods Rigor: medium — The study applies accepted multivariate techniques (PLS-SEM) and tests measurement properties and moderating relationships, which is methodologically appropriate for theory testing; however, reliance on cross-sectional self-reports, limited information about the sampling frame and size, and the absence of strategies to address endogeneity lower overall rigor.

Sample: Cross-sectional survey respondents were senior managers, decision-makers, and AI adoption specialists drawn from multiple industries; measures are self-reported perceptions of generative AI adoption, decision quality, strategic agility, organizational culture, technological readiness, and firm performance; the sampling frame, recruitment method, sample size, and geographic scope are not specified in the summary.

Themes: org_design, productivity, human_ai_collab, adoption

Identification: Cross-sectional survey of senior managers and AI adoption specialists; uses Partial Least Squares Structural Equation Modeling (PLS-SEM) to estimate associations and moderation effects between generative AI adoption, decision quality, strategic agility, and firm performance; identification rests on statistical controls, measurement-model validity, and model fit rather than exogenous variation, randomization, or longitudinal/exogenous shocks.
Generalizability:
  • Self-reported perceptions may not map to objective productivity/performance metrics.
  • Limited to senior-manager respondents and AI specialists; excludes lower-level employees and frontline production contexts.
  • Potential sector, firm-size, or country sampling bias (geographic/sampling frame unspecified).
  • Cross-sectional design precludes temporal generalization about long-term impacts.
  • Convenience or non-random sampling likely limits representativeness.

Claims (8)

| Claim | Category | Direction | Confidence | Outcome | Details |
|---|---|---|---|---|---|
| Generative AI adoption significantly enhances decision quality. | Decision Quality | positive | high | decision quality | 0.3 |
| Generative AI adoption significantly enhances strategic agility. | Organizational Efficiency | positive | high | strategic agility | 0.3 |
| Decision quality and strategic agility positively influence organizational performance. | Firm Productivity | positive | high | organizational performance | 0.3 |
| Organizational culture and technological readiness moderate the effectiveness of generative AI integration in decision-making processes. | Organizational Efficiency | mixed | high | effectiveness of generative AI integration in decision-making (moderation effect) | 0.3 |
| Generative AI augments human creativity by producing innovative solutions and scenario-planning alternatives that may not emerge through conventional analytical approaches. | Creativity | positive | high | augmentation of human creativity / production of innovative solutions and scenarios | 0.15 |
| Reliance on automated content generation introduces risks of cognitive overreliance, algorithmic bias, and strategic misalignment. | AI Safety and Ethics | negative | high | risks to decision-making including cognitive overreliance, algorithmic bias, strategic misalignment | 0.15 |
| Generative AI facilitates the synthesis of structured and unstructured information from diverse sources, enabling managers to explore multiple decision pathways, identify potential risks, and optimize strategic choices. | Decision Quality | positive | high | ability to synthesize information and support exploration of decision pathways (decision-making capability) | 0.15 |
| Generative AI can autonomously produce novel content, including text, images, models, and scenarios. | Other | positive | high | autonomous generation of novel content (text, images, models, scenarios) | 0.05 |

Notes