The Commonplace

Engineering teams with stronger AI capabilities report higher innovation performance, an association that runs through improved decision-making quality; making algorithms more transparent amplifies that benefit.

AI meets engineering ingenuity: how AI capability enhances innovation performance through decision-making quality
Ling Xiang, Fuda Li · Fetched April 30, 2026 · Engineering Construction and Architectural Management
semantic_scholar · correlational · low evidence · 7/10 relevance · DOI · Source
In a time-lagged survey of 435 engineering respondents, higher AI capability is associated with greater innovation performance, with decision-making quality mediating this relationship and algorithmic transparency strengthening the AI→decision-quality link.

This study aims to examine the relationships among AI capability, decision-making quality, and innovation performance, and to investigate the moderating effect of algorithmic transparency on the relationship between AI capability and decision-making quality. A questionnaire survey was conducted using the Credamo data platform. To reduce common method bias, a time-lagged survey design was adopted. Data on AI capability, algorithmic transparency, decision-making quality, and innovation performance were collected from 435 participants. Established scales from authoritative foreign journals were used for measurement, and appropriate translation and verification procedures were carried out. (1) AI capability is positively associated with innovation performance. (2) Decision-making quality mediates the relationship between AI capability and innovation performance. (3) Algorithmic transparency positively moderates the relationship between AI capability and decision-making quality. This study enriches AI capability research by incorporating engineering perspectives. It extends organizational learning theory by examining how AI capability shapes decision-making processes within engineer–AI collaboration contexts, identifying decision-making quality as a mediator and algorithmic transparency as a moderator. The findings offer practical insights for construction firms to enhance innovation performance through effective AI integration while helping engineers better leverage AI tools in design and project management workflows.

Summary

Main Finding

  • AI capability in firms positively affects innovation performance, and this effect operates (at least partly) through improved decision-making quality. Algorithmic transparency strengthens the positive effect of AI capability on decision-making quality.

Key Points

  • Direct effect: Higher AI capability → higher innovation performance.
  • Mediating mechanism: Decision-making quality mediates the relationship between AI capability and innovation performance (AI capability → better decisions → better innovation outcomes).
  • Moderation: Algorithmic transparency positively moderates the AI capability → decision-making quality link (more transparent algorithms amplify the benefits of AI capability for decision quality).
  • Theoretical contribution: Integrates engineering perspectives into AI capability research and extends organizational learning theory by identifying decision-making quality as the mechanism and algorithmic transparency as a boundary condition.
  • Practical target: Findings are framed for construction firms and engineer–AI collaboration contexts (design and project management workflows).

Data & Methods

  • Data source: Questionnaire survey administered via the Credamo platform.
  • Sample: 435 participants (engineers / practitioner respondents in construction contexts).
  • Design: Time-lagged survey to reduce common method bias.
  • Measures: Established scales from authoritative journals for AI capability, algorithmic transparency, decision-making quality, and innovation performance; scales translated and verified appropriately.
  • Analysis: Mediation analysis (to test decision-making quality as mediator) and moderation analysis (to test algorithmic transparency as moderator). Results indicate significant mediation and moderation effects.
  • Limitations (noted or implied): Survey-based, observational design—time-lagging mitigates but does not eliminate endogeneity or causal inference concerns; context-specific (construction/engineer–AI settings) may limit generalizability.

Implications for AI Economics

  • Complementarities and returns to AI investment
    • Empirical evidence that AI capability raises innovation performance by improving decision quality implies positive productivity and innovation returns to AI investments when organizational decision processes can leverage AI outputs.
    • Algorithmic transparency acts as a complement that increases the marginal returns to AI capability — transparency investments can enhance the value of existing AI capital.
  • Human–AI complementarity
    • Decision-making quality as the mediating channel underscores that gains from AI depend on human agents’ ability to interpret and act on AI outputs. Labor upgrading (skills/training) and organizational routines matter for realizing economic gains.
  • Policy and governance
    • Results support policies and firm-level standards promoting algorithmic transparency: greater transparency can unlock more effective use of AI and higher innovation returns, suggesting public incentives or regulation for disclosure/interpretability may raise social returns to AI.
  • Measurement and evaluation
    • For economic assessment of AI, consider survey-based organizational-capability measures plus intermediate outcomes (decision quality) rather than only final financial metrics; this helps trace mechanisms and heterogeneity in returns.
  • Investment strategy
    • Firms should view spending on AI capability and on transparency/interpretability efforts as complementary investments. Evaluations of AI projects should account for the interactive effect of transparency on realized benefits.
  • Research implications for AI economics
    • Calls for causal and micro-level evidence linking AI capability to firm productivity and market outcomes (e.g., using experiments, longitudinal firm data, IV/DID designs).
    • Need to quantify costs of achieving algorithmic transparency and compare to expected gains in innovation/performance to assess net returns.
    • Explore heterogeneity in effects across sectors and task types to inform allocation of AI-capability investments in the economy.
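
The complementarity claim above has a simple algebraic reading. In a hypothetical linear model with an interaction term (the coefficients below are illustrative, not the paper's estimates), the marginal return to AI capability rises with transparency whenever the interaction coefficient is positive:

```python
# Toy model: DQ = b1*AI + b2*T + b3*AI*T, so the marginal return to AI
# capability is dDQ/dAI = b1 + b3*T, which increases in transparency T
# whenever b3 > 0 -- the sense in which transparency is a complement.
b1, b2, b3 = 0.4, 0.1, 0.2  # hypothetical coefficients for illustration

def marginal_return_to_ai(transparency):
    """Marginal effect of AI capability on decision quality at a given T."""
    return b1 + b3 * transparency

low = marginal_return_to_ai(0.0)   # opaque algorithms
high = marginal_return_to_ai(1.0)  # transparent algorithms
print(low, high)  # higher transparency -> larger marginal return to AI
```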

Assessment

  • Paper Type: correlational
  • Evidence Strength: low. Findings are based on cross-sectional/time-lagged self-reports from a convenience sample without random assignment or external instruments; although temporal separation reduces some common-method concerns, potential reverse causality, omitted-variable bias, and measurement bias remain, so causal interpretation is weak.
  • Methods Rigor: medium. The study uses established scales from reputable journals, translation/verification procedures, a reasonably sized sample (n=435), and a time-lagged design to mitigate common-method variance; however, reliance on a non-probability online panel (Credamo), self-reported outcomes, limited information on controls and robustness checks, and the lack of objective performance measures constrain rigor.
  • Sample: 435 respondents recruited via the Credamo online platform (survey targeted at engineers/participants in construction firm contexts); respondents completed validated translated scales measuring AI capability, algorithmic transparency, decision-making quality, and innovation performance in a time-lagged design; country/context not explicitly specified in the summary.
  • Themes: human_ai_collab, innovation
  • Identification: Time-lagged self-report survey with mediation and moderation tested via regression/SEM using validated scales; no experimental or quasi-experimental source of exogenous variation, so causal claims rest on temporal separation, control variables, and statistical mediation rather than identification from an exogenous shock or random assignment.
  • Generalizability:
    • Convenience sample from an online panel (Credamo), not a probability sample
    • Likely concentrated in construction/engineering contexts; may not generalize to other industries
    • Probable single-country/cultural context (not specified) limits international generalizability
    • Self-reported measures of innovation performance and decision quality may not reflect objective outcomes
    • Cross-sectional/time-lagged design limits inference about long-run effects and dynamics

Claims (8)

  • AI capability is positively associated with innovation performance.
    Direction: positive · Confidence: high · Outcome: innovation performance (Innovation Output) · n=435 · 0.3
  • Decision-making quality mediates the relationship between AI capability and innovation performance.
    Direction: positive · Confidence: high · Outcome: innovation performance (Innovation Output) · n=435 · 0.3
  • Algorithmic transparency positively moderates the relationship between AI capability and decision-making quality.
    Direction: positive · Confidence: high · Outcome: decision-making quality (Decision Quality) · n=435 · 0.3
  • Data on AI capability, algorithmic transparency, decision-making quality, and innovation performance were collected from 435 participants using the Credamo data platform.
    Direction: null_result · Confidence: high · Outcome: other (Other) · n=435 · 0.5
  • A time-lagged survey design was adopted to reduce common method bias.
    Direction: null_result · Confidence: high · Outcome: other (Other) · n=435 · 0.15
  • Established scales from authoritative foreign journals were used for measurement, with appropriate translation and verification procedures carried out.
    Direction: null_result · Confidence: high · Outcome: other (Other) · n=435 · 0.5
  • The findings offer practical insights for construction firms to enhance innovation performance through effective AI integration and help engineers better leverage AI tools in design and project management workflows.
    Direction: positive · Confidence: high · Outcome: innovation performance (Innovation Output) · n=435 · 0.05
  • This study enriches AI capability research by incorporating engineering perspectives and extends organizational learning theory by examining how AI capability shapes decision-making processes within engineer–AI collaboration contexts.
    Direction: null_result · Confidence: high · Outcome: other (Other) · n=435 · 0.05

Notes