Evidence (4560 claims)
- Adoption (5267 claims)
- Productivity (4560 claims)
- Governance (4137 claims)
- Human-AI Collaboration (3103 claims)
- Labor Markets (2506 claims)
- Innovation (2354 claims)
- Org Design (2340 claims)
- Skills & Training (1945 claims)
- Inequality (1322 claims)
Evidence Matrix
Claim counts by outcome category and direction of finding.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 378 | 106 | 59 | 455 | 1007 |
| Governance & Regulation | 379 | 176 | 116 | 58 | 739 |
| Research Productivity | 240 | 96 | 34 | 294 | 668 |
| Organizational Efficiency | 370 | 82 | 63 | 35 | 553 |
| Technology Adoption Rate | 296 | 118 | 66 | 29 | 513 |
| Firm Productivity | 277 | 34 | 68 | 10 | 394 |
| AI Safety & Ethics | 117 | 177 | 44 | 24 | 364 |
| Output Quality | 244 | 61 | 23 | 26 | 354 |
| Market Structure | 107 | 123 | 85 | 14 | 334 |
| Decision Quality | 168 | 74 | 37 | 19 | 301 |
| Fiscal & Macroeconomic | 75 | 52 | 32 | 21 | 187 |
| Employment Level | 70 | 32 | 74 | 8 | 186 |
| Skill Acquisition | 89 | 32 | 39 | 9 | 169 |
| Firm Revenue | 96 | 34 | 22 | — | 152 |
| Innovation Output | 106 | 12 | 21 | 11 | 151 |
| Consumer Welfare | 70 | 30 | 37 | 7 | 144 |
| Regulatory Compliance | 52 | 61 | 13 | 3 | 129 |
| Inequality Measures | 24 | 68 | 31 | 4 | 127 |
| Task Allocation | 75 | 11 | 29 | 6 | 121 |
| Training Effectiveness | 55 | 12 | 12 | 16 | 96 |
| Error Rate | 42 | 48 | 6 | — | 96 |
| Worker Satisfaction | 45 | 32 | 11 | 6 | 94 |
| Task Completion Time | 78 | 5 | 4 | 2 | 89 |
| Wages & Compensation | 46 | 13 | 19 | 5 | 83 |
| Team Performance | 44 | 9 | 15 | 7 | 76 |
| Hiring & Recruitment | 39 | 4 | 6 | 3 | 52 |
| Automation Exposure | 18 | 17 | 9 | 5 | 50 |
| Job Displacement | 5 | 31 | 12 | — | 48 |
| Social Protection | 21 | 10 | 6 | 2 | 39 |
| Developer Productivity | 29 | 3 | 3 | 1 | 36 |
| Worker Turnover | 10 | 12 | — | 3 | 25 |
| Skill Obsolescence | 3 | 19 | 2 | — | 24 |
| Creative Output | 15 | 5 | 3 | 1 | 24 |
| Labor Share of Income | 10 | 4 | 9 | — | 23 |
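The matrix lends itself to simple programmatic queries, e.g. the share of claims in each outcome that report a positive finding. A minimal sketch (rows transcribed from the table above; note that in some rows the four listed direction counts sum to slightly less than the printed Total column, so shares here are over the listed directions only):

```python
# Rows transcribed from the evidence matrix: (positive, negative, mixed,
# null) claim counts; None marks a dash in the table.
rows = {
    "Firm Productivity": (277, 34, 68, 10),
    "AI Safety & Ethics": (117, 177, 44, 24),
    "Job Displacement": (5, 31, 12, None),
}

def positive_share(counts):
    """Fraction of listed claims that report a positive finding."""
    vals = [c for c in counts if c is not None]
    return counts[0] / sum(vals)

for name, counts in rows.items():
    print(f"{name}: {positive_share(counts):.0%}")
# → Firm Productivity: 71%
# → AI Safety & Ethics: 32%
# → Job Displacement: 10%
```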
Productivity
Widespread adoption of validated predictive models and curated multi‑omics datasets will shift R&D costs and productivity in biotech/pharma—reducing marginal costs of experiments, shortening timelines, and increasing returns to high‑quality data and models.
Economic analysis and inferred implications from reported improvements in in silico screening, diagnostics, and prognostics; no empirical R&D cost study provided in summary (conceptual projection).
The program can reduce skill mismatches and increase effective labor supply in targeted sectors, altering relative demand for AI-complementary vs. AI-substitutable tasks.
Economic argument in paper (theoretical); no empirical tests or sample reported.
Better-aligned curricula can raise the productivity and employability of graduates, shifting returns to human capital and affecting wage distribution by skill.
Theoretical economic reasoning and program rationale presented in paper; no empirical causal evidence provided.
Advantages of the program include traceability, improved career-alignment and employability, audit readiness, and support for innovation through modelling and data analysis.
Paper lists these as intended advantages (asserted benefits); no empirical outcome data provided.
Regulation and workforce policy should be calibrated to the level of human–AI interaction: stronger oversight and validation for AI-augmented and automated systems, plus workforce policies (reskilling, credentialing) to manage the transition to Human+ roles.
Policy recommendations based on the taxonomy and implications drawn from the four qualitative case studies and conceptual analysis.
Reduced processing times and better cash-flow visibility lower working-capital requirements and financing costs for EPC firms.
Economic implication drawn in the paper from reported KPI improvements (processing time, cash-flow visibility); it is inferential rather than directly measured in the reported pilots, and no quantified finance metrics (e.g., working-capital reduction in currency terms or interest saved) were provided.
Practitioners should combine the manufacturing operation tree with AI methods and real operational data to create validated, policy‑aware simulation tools that support economic decision making.
Practical guidance and proposed integration steps in the paper; presented as recommended practice rather than demonstrated case examples.
The proposed roadmap can produce simulations that are realistic, validated against industry data, and useful for decision makers—supporting agility, resilience, and data‑driven planning.
Conceptual roadmap and recommendations in the paper; no empirical demonstrations or validation studies included.
Policy implication: develop data governance, interoperability, and safeguards to encourage public–private collaboration while protecting smallholders.
Authors' policy recommendation informed by thematic findings on governance and inclusion challenges in the review.
Policy implication: prioritize funding for localized AI solutions (context-specific models, language/extension support) and rural digital infrastructure (connectivity, data platforms, stable electricity).
Authors' recommendations based on synthesis of barriers, enabling factors, and observed impacts in the reviewed literature.
Advanced pilot implementations report maintenance cost reductions of 10–25%.
Maintenance cost outcomes reported in case studies and pilot implementations contained in the review.
Advanced pilot implementations report energy reductions in the range 15–30%.
Energy performance figures taken from selected high‑performing pilot cases and deployments in the reviewed literature.
Advanced pilot implementations report schedule acceleration of around 2 months.
Reported case results from advanced pilots and implementations included in the review (single‑project/case evidence).
Advanced pilot implementations report cost savings of approximately 5%.
Case‑level results from high‑performing pilot deployments and pilot studies identified in the review.
Advanced pilot implementations report rework and logistics reductions of up to ~80%.
Quantitative figures drawn from case‑level results and advanced pilot deployments reported in the reviewed studies (not aggregated industry averages).
The functional and instrumental value of AI systems can speed organizational adoption by increasing trust, implying that demonstrable productivity gains and clear ROI carry economic weight.
Interpretation/implication drawn from the study's empirical finding that functional/instrumental values increase initial trust and that trust positively affects adoption; this is an inference rather than a directly tested macroeconomic effect in the paper.
Public funding for open models, shared compute infrastructures, and curated public datasets could counteract concentration and promote broad innovation.
Paper advocates this in 'Policy and public‑goods considerations' as a prescriptive policy option; it is a proposed mitigation rather than an empirically tested intervention in the text.
Policy instruments such as open-data mandates, compute-sharing incentives, and conditionality in R&D funding can help ensure equitable validation and local engagement in climate-AI development.
Policy recommendations grounded in normative analysis and analogies to existing public-good interventions; no empirical evaluation of these specific instruments provided in the paper.
Economists should prioritize research to quantify returns to investments in CDPI versus private compute, estimate economic costs of maladaptation from biased AI outputs, and design incentive-compatible mechanisms for data sharing and co-production.
Research agenda and recommendations presented by the authors; this is a suggested empirical/theoretical program rather than a tested result.
Establishing Climate Digital Public Infrastructure (CDPI)—shared, interoperable data and compute resources, standards, and governance—can democratize access and reduce inequities in climate-AI.
Policy proposal and normative argument drawing analogies to public goods (observational networks, satellites); no empirical evaluation of CDPI implementations presented.
Shifting from a model-centric to a data-centric approach (improving data quality, representativeness, and governance) will mitigate the harms caused by current infrastructural asymmetries.
Normative recommendation grounded in conceptual arguments and illustrative examples; not supported by empirical interventions or randomized/controlled comparisons in the paper.
Operationally, platform designers should monitor dependency-graph structure as a systemic risk indicator for price volatility and provide integrator abstractions to encapsulate cross-cutting complexity.
Practical implication drawn from simulation findings (not a direct empirical test on production systems): hybrid integrator results and topology-dominance results motivate these recommendations; no real-world deployment data presented.
Procedural material modeling (Perlin noise) is a promising technique for robust policy learning and can reduce the need for extensive real-world data collection.
Implication stated in the paper's discussion: authors suggest procedural variation via Perlin noise aided robust policy learning and improved sim-to-real transfer; empirical quantification of reduced real data needs is not provided in the summary.
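As an illustrative sketch of the procedural-variation idea (not the paper's code), smooth 1-D value noise, a simpler cousin of Perlin noise, can randomize a material parameter per training episode for domain randomization; the friction parameter and ranges below are hypothetical:

```python
import random

# Smooth 1-D value noise over a ring of random grid values.
def value_noise(x, grid, seedvals):
    i = int(x)
    t = x - i
    t = t * t * (3 - 2 * t)  # smoothstep interpolation, as in Perlin noise
    return seedvals[i % grid] * (1 - t) + seedvals[(i + 1) % grid] * t

random.seed(1)
grid = 8
seedvals = [random.random() for _ in range(grid)]

# Sample a smoothly varying friction profile for one simulated episode,
# rescaled into an assumed plausible range [0.2, 0.8].
friction_profile = [0.2 + 0.6 * value_noise(x / 2.0, grid, seedvals)
                    for x in range(16)]
print(min(friction_profile), max(friction_profile))
```

Re-seeding per episode yields a fresh but spatially coherent material variation, which is the property that makes procedural noise useful for sim-to-real robustness.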
Perception input providing the material's location inside the vial was used to guide the agent.
Paper summary states perception input (material location) was provided to the agent; sensing modality and accuracy/details of perception are not specified.
Surrogate-accelerated workflows reduce energy consumption and carbon footprint per discovery because they require fewer expensive evaluations.
Stated implication in the paper linking fewer expensive quantum-chemistry/DFT evaluations to lower energy use; no measured energy/emissions data provided in the summary.
Order-of-magnitude reductions in expensive evaluations enable faster R&D cycles and higher throughput for exploration of potential-energy landscapes in materials science, catalysis, and drug design.
Policy/economic implication argued in the paper based on empirical reductions in expensive evaluations; no direct time-to-discovery experiments reported in the summary.
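The surrogate pattern behind these claims can be sketched minimally (all names and the toy energy function are illustrative stand-ins, not the paper's workflow): a cheap predictor screens many candidates so the expensive evaluator is called only on the most promising one per round.

```python
import random

# Toy stand-in for a DFT-level energy call: pretend each call costs hours.
def expensive_energy(x):
    return (x - 0.7) ** 2

# Cheap surrogate: predict a candidate's energy from the nearest
# already-evaluated point in the archive.
def surrogate_predict(x, archive):
    nearest = min(archive, key=lambda pt: abs(pt[0] - x))
    return nearest[1]

random.seed(0)
archive = [(x, expensive_energy(x)) for x in (0.0, 0.5, 1.0)]  # seed data
budget = 5  # expensive calls allowed after seeding

for _ in range(budget):
    candidates = [random.random() for _ in range(200)]
    # Screen 200 candidates cheaply, spend the one expensive call on the
    # best-ranked candidate, and grow the surrogate's training archive.
    best = min(candidates, key=lambda x: surrogate_predict(x, archive))
    archive.append((best, expensive_energy(best)))

print(min(archive, key=lambda pt: pt[1]))  # best structure found so far
```

The economics in the claim come from the ratio: 1000 surrogate predictions against 8 expensive evaluations in this toy run.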
Organizations should consider LLM-generated feedback as a high-return, lower-cost pseudo-relevance feedback (PRF) option for low-resource retrieval tasks, reducing expenses tied to corpus annotation or expensive retrieval pipelines.
Implication drawn from the paper's cost-effectiveness results (LLM-generated feedback performing well per LLM invocation cost across the evaluated BEIR tasks).
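A minimal sketch of the PRF pattern referenced here (hypothetical names and toy data, not the paper's pipeline): an LLM is asked for feedback terms, which are appended to the query before a second term-matching retrieval pass; the stub stands in for a real LLM call.

```python
def llm_feedback_terms(query):
    # Stub: a real system would prompt an LLM, e.g. "List terms relevant
    # to: {query}", and parse its reply into expansion terms.
    return ["cardiac", "arrhythmia"] if "heart" in query else []

def score(query_terms, doc):
    # Simple term-overlap score as a stand-in for BM25-style retrieval.
    doc_terms = doc.lower().split()
    return sum(doc_terms.count(t) for t in query_terms)

docs = [
    "Cardiac arrhythmia treatment guidelines",
    "Heart healthy diet overview",
    "Gardening tips for spring",
]

query = "heart rhythm disorder"
expanded = query.lower().split() + llm_feedback_terms(query)
ranked = sorted(docs, key=lambda d: score(expanded, d), reverse=True)
print(ranked[0])  # → Cardiac arrhythmia treatment guidelines
```

The expansion lets the vocabulary-mismatched document outrank the literal-overlap one, which is the mechanism the cost-effectiveness claim rests on.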
QCSC capabilities could change the economics of certain AI model classes that rely on expensive scientific simulations for training data by producing richer, cheaper training datasets.
Theoretical link between simulation output quality/cost and training-data generation for physics-informed ML and generative chemistry models; no empirical studies or cost estimates presented.
QCSC-enabled faster, higher-fidelity simulation can compress R&D cycles in chemistry and materials, lowering time-to-discovery and increasing returns to computational investment for firms.
Use-case analysis linking simulation fidelity/turnaround to R&D timelines; relies on assumed speedups and fidelity improvements but provides no measured speedup data.
The proposed approach will increase demand for edge/embedded ML expertise, GNN optimization, and HAPS integration, shifting supplier ecosystems and labor requirements.
Workforce and supply-chain implication stated in the paper's discussion of economic impacts; based on projected capabilities required to implement FL+GNN solutions, not on labor-market measurements.
FL reduces raw-data movement across jurisdictions, easing regulatory compliance for cross-border NTN services and supporting privacy-preserving business models.
Implication derived from the federated approach (local model updates vs. raw-data transfer) noted in the paper; no legal/regulatory case studies or measurements provided.
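The federated mechanism behind this claim can be sketched as a minimal federated-averaging loop (illustrative toy model, not the paper's system): each site fits a local model on data that never leaves it, and only the model weights cross the network.

```python
# One gradient step of a 1-D least-squares model y = w * x on local data.
def local_update(weights, local_data, lr=0.1):
    w = weights
    grad = sum(2 * (w * x - y) * x for x, y in local_data) / len(local_data)
    return w - lr * grad

sites = {
    "site_a": [(1.0, 2.1), (2.0, 3.9)],   # raw data stays here
    "site_b": [(1.5, 3.2), (3.0, 6.1)],   # and here
}

w_global = 0.0
for _ in range(20):
    updates = [local_update(w_global, data) for data in sites.values()]
    w_global = sum(updates) / len(updates)  # only weights were shared

print(round(w_global, 2))  # → 2.03
```

Only scalar weights traverse jurisdictions each round, which is the property the compliance argument relies on.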
HAPS-as-aggregator creates a distributed service layer between satellites and terrestrial infrastructure, enabling new roles (HAPS operators, FL orchestration providers) and revenue streams.
Paper's market-structure implications: conceptual argument that HAPS aggregation in an FL architecture yields opportunities for new service roles and monetization; no market or revenue analysis provided.
Lightweight GNNs enable more intelligence on-board or at HAPS without requiring major hardware upgrades, potentially deferring capital expenditures (CapEx).
Economic/operational implication in the paper based on the stated compactness of the GNN model and its suitability for edge/on-board deployment; no quantified hardware or CapEx comparison provided.
Improved predictive beam selection (from the proposed GNN/FL approach) reduces link outages and retransmissions, cutting operational costs and improving user experience.
Economic implication stated in the paper linking better beam prediction/stability (experimentally observed) to reduced outages and retransmissions; no direct measurement of outages/retransmissions or operational cost savings reported in the summary.
Adopting DPS-like efficiencies reduces the marginal compute cost of online prompt-selection workflows (dominated by rollouts), thereby shortening finetuning cycles and increasing developer productivity.
Paper's implications section: logical inference from reported reduction in rollouts and rollout compute; not an empirical market study—no dollar or industry-scale numbers provided.
There is a strong complementarity between AI investments and organizational change: firms with better leadership, cross-functional processes, and data practices capture disproportionate benefits, implying increasing returns to scale and potential winner-take-most dynamics.
Authors' theoretical inference from cross-case patterns and economic reasoning; supported qualitatively by cases showing disproportionate gains in better-managed firms.
Policy should incentivize transparency, auditability, standards for human–AI interfaces, workforce development, certification of teaming practices, and liability frameworks to ensure accountability and equitable outcomes.
Normative recommendation based on ethical and governance considerations synthesized in the paper; not supported by policy evaluation evidence within the paper.
Orchestrating attention and interrogation through interface and workflow design helps manage what humans and AI focus on and how they challenge/verify each other, thereby reducing errors and misuse.
Prescriptive claim grounded in human factors and HCI literature synthesized by the authors; the paper suggests these mechanisms but does not report empirical trials demonstrating effects.
Design principles (define goals/constraints, partition roles, orchestrate attention/interrogation, build knowledge infrastructures, continuous training/evaluation) are necessary design levers to build high-performing, transparent, trustworthy, and equitable Human–AI teams.
Prescriptive synthesis from reviewed literatures and conceptual modeling; these principles are proposed heuristics rather than empirically validated interventions in the paper.
Embedding AI produces operational gains: automation of routine tasks, fewer errors, faster decision cycles, and continuous model learning/refinement.
Operational claim articulated conceptually with suggested evaluation metrics (forecast accuracy, latency, false positive/negative rates); the paper does not present empirical measurement, sample sizes, or deployment results.
Risk management can accelerate AI adoption by lowering uncertainty for managers and investors, thereby affecting diffusion and productivity gains from AI.
Conceptual implication derived from the review's synthesis and discussion (policy/implication section); not supported by primary empirical testing within the reviewed literature.
Firms that adopt structured risk management for AI projects can reduce model failure, operational losses, and reputational costs—improving risk-adjusted returns on AI investment.
Theoretical and practical extrapolation from general RM frameworks and thematic findings in the literature; no AI-specific primary empirical studies included in the review.
Structured risk management can produce potential cost savings via reduced loss events and more efficient capital allocation.
Reported as a benefit across some reviewed studies and practitioner reports; the review notes lack of primary empirical quantification of effect sizes.
Firms that design processes to preserve human diversity and elicit diverse AI outputs may capture greater productivity gains, increasing returns to organizational capability rather than to raw model access.
Theoretical implication and prescriptive recommendation based on observed homogenization; no direct causal firm-level evidence presented, inference based on economic reasoning.
Investments to build trust in AI (transparency, reliability, training) are likely to have positive returns via higher adoption rates and realized AI benefits.
This is presented as an implication derived from observed positive associations between trust and outcomes; the study did not conduct cost–benefit or longitudinal causal tests of such investments in the reported analyses.
Practical levers to increase AI trust include transparency of AI models, demonstrated reliability, and manager-focused AI literacy/training.
Paper proposes these levers based on study findings and discussion (recommendations), but they were not tested experimentally in the reported cross-sectional survey.
A stronger data-driven decision culture that stems from AI trust yields better operational and academic outcomes.
Study reports positive associations between AI trust → data-driven culture → operational and academic outcomes in survey-based analyses; however, the summary does not specify which operational/academic metrics were measured or the sample size.
On-Premise RAG provides a viable path for SMEs sensitive to security and cost to adopt advanced language capabilities without perpetual vendor fees or data exposure.
Synthesis of technology, organizational, and environment/security analyses (TOE framework) and implications section arguing SMEs can adopt on-prem RAG; presented as an implication rather than proven adoption data.
The dissertation implies that policy interventions (subsidies, tax incentives, training and integration assistance) can accelerate welfare-improving AI adoption by helping firms overcome the early negative portion of the U-shaped profit profile.
Policy implication derived from the theoretical U-shaped profit relationship and model interpretation; not supported by randomized or quasi-experimental policy evaluation in the provided summary.
Vendors that embed robust cognitive interlocks into development platforms can command premium pricing by reducing downstream risk; verification features may become a competitive moat.
Market-structure and product-differentiation reasoning in the paper; no market data, pricing studies, or competitive analyses presented.