Evidence (5126 claims)
- Adoption: 5126 claims
- Productivity: 4409 claims
- Governance: 4049 claims
- Human-AI Collaboration: 2954 claims
- Labor Markets: 2432 claims
- Org Design: 2273 claims
- Innovation: 2215 claims
- Skills & Training: 1902 claims
- Inequality: 1286 claims
Evidence Matrix
Claim counts by outcome category and direction of finding.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 369 | 105 | 58 | 432 | 972 |
| Governance & Regulation | 365 | 171 | 113 | 54 | 713 |
| Research Productivity | 229 | 95 | 33 | 294 | 655 |
| Organizational Efficiency | 354 | 82 | 58 | 34 | 531 |
| Technology Adoption Rate | 277 | 115 | 63 | 27 | 486 |
| Firm Productivity | 273 | 33 | 68 | 10 | 389 |
| AI Safety & Ethics | 112 | 177 | 43 | 24 | 358 |
| Output Quality | 228 | 61 | 23 | 25 | 337 |
| Market Structure | 105 | 118 | 81 | 14 | 323 |
| Decision Quality | 154 | 68 | 33 | 17 | 275 |
| Employment Level | 68 | 32 | 74 | 8 | 184 |
| Fiscal & Macroeconomic | 74 | 52 | 32 | 21 | 183 |
| Skill Acquisition | 85 | 31 | 38 | 9 | 163 |
| Firm Revenue | 96 | 30 | 22 | — | 148 |
| Innovation Output | 100 | 11 | 20 | 11 | 143 |
| Consumer Welfare | 66 | 29 | 35 | 7 | 137 |
| Regulatory Compliance | 51 | 61 | 13 | 3 | 128 |
| Inequality Measures | 24 | 66 | 31 | 4 | 125 |
| Task Allocation | 64 | 6 | 28 | 6 | 104 |
| Error Rate | 42 | 47 | 6 | — | 95 |
| Training Effectiveness | 55 | 12 | 10 | 16 | 93 |
| Worker Satisfaction | 42 | 32 | 11 | 6 | 91 |
| Task Completion Time | 71 | 5 | 3 | 1 | 80 |
| Wages & Compensation | 38 | 13 | 19 | 4 | 74 |
| Team Performance | 41 | 8 | 15 | 7 | 72 |
| Hiring & Recruitment | 39 | 4 | 6 | 3 | 52 |
| Automation Exposure | 17 | 15 | 9 | 5 | 46 |
| Job Displacement | 5 | 28 | 12 | — | 45 |
| Social Protection | 18 | 8 | 6 | 1 | 33 |
| Developer Productivity | 25 | 1 | 2 | 1 | 29 |
| Worker Turnover | 10 | 12 | — | 3 | 25 |
| Creative Output | 15 | 5 | 3 | 1 | 24 |
| Skill Obsolescence | 3 | 18 | 2 | — | 23 |
| Labor Share of Income | 7 | 4 | 9 | — | 20 |
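The matrix above is easiest to read as direction shares within each category. A minimal sketch, with a few rows transcribed by hand (note the Total column can exceed the sum of the four directions, presumably when a claim carries no coded direction):

```python
# Direction counts transcribed from the evidence matrix above:
# (positive, negative, mixed, null, total)
matrix = {
    "Firm Productivity":  (273, 33, 68, 10, 389),
    "AI Safety & Ethics": (112, 177, 43, 24, 358),
    "Job Displacement":   (5, 28, 12, 0, 45),
}

def direction_shares(row):
    """Share of a category's claims coded in each direction."""
    pos, neg, mixed, null, total = row
    return {
        "positive": pos / total,
        "negative": neg / total,
        "mixed": mixed / total,
        "null": null / total,
    }

for name, row in matrix.items():
    s = direction_shares(row)
    print(f"{name}: {s['positive']:.0%} positive, {s['negative']:.0%} negative")
```

For example, Firm Productivity claims skew strongly positive (273 of 389), while AI Safety & Ethics and Job Displacement skew negative.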
Adoption
Advanced pilot implementations report schedule acceleration of around 2 months.
Reported case results from advanced pilots and implementations included in the review (single‑project/case evidence).
Advanced pilot implementations report cost savings of approximately 5%.
Case‑level results from high‑performing pilot deployments and pilot studies identified in the review.
Advanced pilot implementations report rework and logistics reductions of up to ~80%.
Quantitative figures drawn from case‑level results and advanced pilot deployments reported in the reviewed studies (not aggregated industry averages).
The functional and instrumental value of AI systems can speed organizational adoption by increasing trust, which implies that demonstrable productivity gains and clear ROI are economically important.
Interpretation/implication drawn from the study's empirical finding that functional/instrumental values increase initial trust and that trust positively affects adoption; this is an inference rather than a directly tested macroeconomic effect in the paper.
Destinations that invest in trustworthy AI ecosystems and credible sustainability narratives can capture greater market share, increasing competitive pressure among destinations and platforms.
Conceptual market-structure argument and literature synthesis; illustrated with Kebumen as an emergent destination example; no empirical testing offered.
AI personalization can increase demand by improving match quality between tourists and offerings, raising consumer surplus and potentially willingness-to-pay.
Theoretical economic reasoning in the AI economics section of the paper; no empirical estimates or data provided.
These effects operate largely through consumer trust in technology (digital trust) as a mediator, with destination image serving as an additional mediator between trust and behavioral intention.
Theoretical mediation model proposed in the paper based on sustainable marketing theory and prior literature; illustrated via case discussion; no empirical testing reported.
Digital experience quality, AI-driven personalization, sustainability communication, and social proof jointly shape destination image and tourists’ visit intention.
Conceptual integrative framework and literature synthesis presented in the paper; illustrated using Kebumen UNESCO Global Geopark as a case example; no primary empirical data collected.
Public funding for open models, shared compute infrastructures, and curated public datasets could counteract concentration and promote broad innovation.
Paper advocates this in 'Policy and public‑goods considerations' as a prescriptive policy option; it is a proposed mitigation rather than an empirically tested intervention in the text.
Demand for security engineers, privacy specialists, human moderators, and behavioral scientists will rise, increasing wages in these specialties and altering labor allocations in AI/VR firms.
Authors' labor‑market inference drawn from increased needs implied by TVR‑Sec implementation and literature on moderation/security demand; no labor‑market data or forecasts provided.
Platforms that credibly offer strong privacy and socio‑behavioral protections may capture user trust and monetization opportunities (e.g., enterprise, healthcare, education), making safety features a potential competitive differentiator.
Authors' market‑structure reasoning based on synthesized literature and economic theory; no empirical adoption or revenue data provided to validate this claim.
Harmonized international norms and transparency measures would reduce transaction costs, limit market fragmentation, and lower the likelihood of destabilizing arms‑race dynamics, thereby improving the environment for cross‑border investment and trade in AI.
Authors' normative/economic argumentation based on comparative findings; proposed as a policy implication rather than an empirically validated result.
Aligning domestic rules with international risk‑mitigation norms, increasing transparency in defence procurement/AI operations, and strengthening multilateral confidence measures would reduce escalation and abuse.
Authors' policy argumentation and normative reasoning based on comparative findings (not empirically tested in the paper).
Better consent mechanisms (granular, transferable, delegable) can change the marginal value and liquidity of personal data—enabling new pricing/contracting models (subscriptions, pay-for-privacy, data dividends).
Normative and conceptual claim from the workshop's economics discussion and design provocations; not empirically evaluated within the workshop summary.
We need to move beyond explicit, one-time decisions to broader ways users can influence data use (e.g., delegation, preferences over inference/usage).
Workshop recommendation emerging from co-design exercises, futures scenarios, and position papers; presented as a normative/design agenda rather than an empirically tested intervention.
Policy instruments such as open-data mandates, compute-sharing incentives, and conditionality in R&D funding can help ensure equitable validation and local engagement in climate-AI development.
Policy recommendations grounded in normative analysis and analogies to existing public-good interventions; no empirical evaluation of these specific instruments provided in the paper.
Economists should prioritize research to quantify returns to investments in CDPI versus private compute, estimate economic costs of maladaptation from biased AI outputs, and design incentive-compatible mechanisms for data sharing and co-production.
Research agenda and recommendations presented by the authors; this is a suggested empirical/theoretical program rather than a tested result.
Establishing Climate Digital Public Infrastructure (CDPI)—shared, interoperable data and compute resources, standards, and governance—can democratize access and reduce inequities in climate-AI.
Policy proposal and normative argument drawing analogies to public goods (observational networks, satellites); no empirical evaluation of CDPI implementations presented.
Shifting from a model-centric to a data-centric approach (improving data quality, representativeness, and governance) will mitigate the harms caused by current infrastructural asymmetries.
Normative recommendation grounded in conceptual arguments and illustrative examples; not supported by empirical interventions or randomized/controlled comparisons in the paper.
Policy and governance should preserve worker agency (participatory design, transparency, clear accountability) and support training and institutional mechanisms (collective bargaining, workplace representation) to negotiate value-sharing from AI productivity gains.
Normative policy recommendation by authors derived from qualitative findings (workshops with 15 UX designers) that highlighted agency and distributional concerns.
Operationally, platform designers should monitor dependency-graph structure as a systemic risk indicator for price volatility and provide integrator abstractions to encapsulate cross-cutting complexity.
Practical implication drawn from simulation findings (not a direct empirical test on production systems): hybrid integrator results and topology-dominance results motivate these recommendations; no real-world deployment data presented.
Clinic-aware designs and reliable validation can enable clearer evidence of value, facilitating payer reimbursement, value-based care contracts, and new pricing models for AI-enabled medical devices and services.
Policy and reimbursement implications discussed by clinicians and industry participants during the workshop and summarized in the workshop report (NSF workshop, Sept 26–27, 2024).
Scalable validation ecosystems and continuous objective measures reduce information asymmetries between developers, clinicians, and payers, lowering commercialization and regulatory risk, which raises private returns and speeds adoption.
Economic implications and causal argument set out in the workshop summary based on expert judgement and theory discussed at the NSF workshop (Sept 26–27, 2024).
Procedural material modeling (Perlin noise) is a promising technique for robust policy learning and can reduce the need for extensive real-world data collection.
Implication stated in the paper's discussion: authors suggest procedural variation via Perlin noise aided robust policy learning and improved sim-to-real transfer; empirical quantification of reduced real data needs is not provided in the summary.
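The procedural-variation idea above can be sketched as follows. This is an illustrative gradient-noise generator for domain randomization, not the paper's implementation; `perlin_grid`, the lattice `scale`, and the per-episode seeding are assumptions.

```python
import math
import random

def perlin_grid(width, height, scale=4.0, seed=0):
    """2D gradient ("Perlin-style") noise field; values lie in [-1, 1]."""
    rng = random.Random(seed)
    grads = {}

    def grad(ix, iy):
        # One fixed random unit gradient per integer lattice point.
        if (ix, iy) not in grads:
            a = rng.uniform(0.0, 2.0 * math.pi)
            grads[(ix, iy)] = (math.cos(a), math.sin(a))
        return grads[(ix, iy)]

    def fade(t):
        # Perlin's quintic interpolant: smooth at lattice boundaries.
        return t * t * t * (t * (t * 6 - 15) + 10)

    def lerp(a, b, t):
        return a + t * (b - a)

    field = []
    for y in range(height):
        row = []
        for x in range(width):
            fx, fy = x / scale, y / scale
            ix, iy = int(fx), int(fy)

            def corner(cx, cy):
                # Dot product of corner gradient with offset to the sample.
                gx, gy = grad(cx, cy)
                return gx * (fx - cx) + gy * (fy - cy)

            u, v = fade(fx - ix), fade(fy - iy)
            top = lerp(corner(ix, iy), corner(ix + 1, iy), u)
            bottom = lerp(corner(ix, iy + 1), corner(ix + 1, iy + 1), u)
            row.append(lerp(top, bottom, v))
        field.append(row)
    return field

# Domain randomization: a fresh seed per training episode yields a new
# procedural material texture, so the policy never overfits one appearance.
episode_textures = [perlin_grid(16, 16, seed=s) for s in range(3)]
```

Varying the noise field per episode is what lets a policy trained in simulation tolerate appearance shifts at sim-to-real transfer time.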
Perception input providing the material's location inside the vial was used to guide the agent.
Paper summary states perception input (material location) was provided to the agent; sensing modality and accuracy/details of perception are not specified.
Using CFR avoids the computational and development costs of retraining T2I models to improve color fidelity, providing a lower-cost path to better color authenticity.
Paper emphasizes CFR is training-free and applies at inference, claiming improved color authenticity without model retraining; cost implication is inferred from lack of retraining (quantitative compute savings not provided in the summary).
Once trained, these simulation-trained summary networks are fast to evaluate and can be used as amortized estimators to enable large-scale counterfactuals, sensitivity analyses, and Monte Carlo-based policy evaluation with much lower per-evaluation cost.
Practical implication claim: based on amortization principle (neural network inference is fast at evaluation time) and reported ability to replace repeated runs of iterative algorithms; the summary asserts reduced per-evaluation cost but does not provide quantitative runtime benchmarks or speedup ratios in the provided text.
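The amortization principle behind this claim can be illustrated with a toy stand-in: pay a one-time cost to precompute an expensive Monte Carlo simulator over a parameter grid, then answer counterfactual queries at negligible marginal cost. The interpolation table below stands in for the paper's trained summary networks; `simulator` and its quadratic-loss target are invented for illustration.

```python
import bisect
import random

def simulator(theta, n_draws=2000, seed=0):
    """Expensive stand-in simulator: Monte Carlo estimate of E[(x - theta)^2]
    for x ~ Uniform(0, 1). True value is 1/12 + (0.5 - theta)^2."""
    rng = random.Random(seed)
    return sum((rng.random() - theta) ** 2 for _ in range(n_draws)) / n_draws

# One-time "training" cost: run the expensive simulator over a parameter grid.
grid = [i / 20 for i in range(21)]
table = [simulator(t) for t in grid]

def amortized(theta):
    """Cheap amortized estimate: linear interpolation over the precomputed
    table (a stand-in for a trained summary network)."""
    i = min(max(bisect.bisect_left(grid, theta), 1), len(grid) - 1)
    t0, t1 = grid[i - 1], grid[i]
    w = (theta - t0) / (t1 - t0)
    return (1 - w) * table[i - 1] + w * table[i]

# Large-scale counterfactuals now cost one interpolation each, not a full
# Monte Carlo run: sweep thousands of parameter values cheaply.
sweep = [amortized(k / 5000) for k in range(5001)]
```

The economic point is the cost asymmetry: the simulator runs 21 times up front, after which 5001 policy-evaluation queries reuse that sunk computation.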
Organizations should consider LLM-generated feedback as a high-return, lower-cost PRF option for low-resource retrieval tasks to reduce expenses tied to corpus annotation or expensive retrieval pipelines.
Implication drawn from the paper's cost-effectiveness results (LLM-generated feedback performing well per LLM invocation cost across the evaluated BEIR tasks).
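The mechanism behind LLM-based pseudo-relevance feedback (PRF) can be sketched as Rocchio-style query expansion, with a generated passage standing in for retrieved feedback documents. All names, weights, and the mocked passage below are illustrative, not the paper's pipeline.

```python
from collections import Counter

def expand_query(query, llm_feedback, alpha=1.0, beta=0.6):
    """Rocchio-style expansion: original query term weights plus normalized
    term weights from an LLM-generated pseudo-relevant passage."""
    q = Counter(query.lower().split())
    f = Counter(llm_feedback.lower().split())
    total = sum(f.values()) or 1
    weights = {t: alpha * float(c) for t, c in q.items()}
    for t, c in f.items():
        weights[t] = weights.get(t, 0.0) + beta * c / total
    return weights

def score(doc, weights):
    """Bag-of-words dot product between a document and the expanded query."""
    d = Counter(doc.lower().split())
    return sum(w * d[t] for t, w in weights.items())

# Classic PRF expands the query with terms from top-k retrieved documents;
# here an LLM is prompted to write a passage about the query instead, which
# costs one LLM invocation rather than corpus annotation or a heavier
# retrieval pipeline (passage is mocked below).
weights = expand_query(
    "treatment for malaria",
    "artemisinin combination therapy is the standard malaria treatment",
)
# A relevant document sharing no literal query term is now reachable.
relevant = "artemisinin therapy guidelines"
irrelevant = "history of the roman empire"
```

With the unexpanded query, `relevant` scores zero (no shared terms); the generated feedback bridges the vocabulary gap, which is the effect driving the cost-effectiveness result cited above.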
QCSC capabilities could change the economics of certain AI model classes that rely on expensive scientific simulations for training data by producing richer, cheaper training datasets.
Theoretical link between simulation output quality/cost and training-data generation for physics-informed ML and generative chemistry models; no empirical studies or cost estimates presented.
QCSC-enabled faster, higher-fidelity simulation can compress R&D cycles in chemistry and materials, lowering time-to-discovery and increasing returns to computational investment for firms.
Use-case analysis linking simulation fidelity/turnaround to R&D timelines; relies on assumed speedups and fidelity improvements but provides no measured speedup data.
The proposed approach will increase demand for edge/embedded ML expertise, GNN optimization, and HAPS integration, shifting supplier ecosystems and labor requirements.
Workforce and supply-chain implication stated in the paper's discussion of economic impacts; based on projected capabilities required to implement FL+GNN solutions, not on labor-market measurements.
FL reduces raw-data movement across jurisdictions, easing regulatory compliance for cross-border NTN services and supporting privacy-preserving business models.
Implication derived from the federated approach (local model updates vs. raw-data transfer) noted in the paper; no legal/regulatory case studies or measurements provided.
HAPS-as-aggregator creates a distributed service layer between satellites and terrestrial infrastructure, enabling new roles (HAPS operators, FL orchestration providers) and revenue streams.
Paper's market-structure implications: conceptual argument that HAPS aggregation in an FL architecture yields opportunities for new service roles and monetization; no market or revenue analysis provided.
Lightweight GNNs enable more intelligence on-board or at HAPS without requiring major hardware upgrades, potentially deferring capital expenditures (CapEx).
Economic/operational implication in the paper based on the stated compactness of the GNN model and its suitability for edge/on-board deployment; no quantified hardware or CapEx comparison provided.
Improved predictive beam selection (from the proposed GNN/FL approach) reduces link outages and retransmissions, cutting operational costs and improving user experience.
Economic implication stated in the paper linking better beam prediction/stability (experimentally observed) to reduced outages and retransmissions; no direct measurement of outages/retransmissions or operational cost savings reported in the summary.
Adopting DPS-like efficiencies reduces the marginal compute cost of online prompt-selection workflows (dominated by rollouts), thereby shortening finetuning cycles and increasing developer productivity.
Paper's implications section: logical inference from reported reduction in rollouts and rollout compute; not an empirical market study—no dollar or industry-scale numbers provided.
There is a strong complementarity between AI investments and organizational change: firms with better leadership, cross-functional processes, and data practices capture disproportionate benefits, implying increasing returns to scale and potential winner-take-most dynamics.
Authors' theoretical inference from cross-case patterns and economic reasoning; supported qualitatively by cases showing disproportionate gains in better-managed firms.
Firms that can credibly supply explainability and governance may capture a premium—explainability can be a competitive differentiator and a signal of quality and lower regulatory risk.
Conceptual synthesis and market-structure arguments from the reviewed literature; reviewed studies provide theoretical and some qualitative support but not systematic market-price estimates.
Embedding AI produces operational gains: automation of routine tasks, fewer errors, faster decision cycles, and continuous model learning/refinement.
Operational claim articulated conceptually with suggested evaluation metrics (forecast accuracy, latency, false positive/negative rates); the paper does not present empirical measurement, sample sizes, or deployment results.
Risk management can accelerate AI adoption by lowering uncertainty for managers and investors, thereby affecting diffusion and productivity gains from AI.
Conceptual implication derived from the review's synthesis and discussion (policy/implication section); not supported by primary empirical testing within the reviewed literature.
Firms that adopt structured risk management for AI projects can reduce model failure, operational losses, and reputational costs—improving risk-adjusted returns on AI investment.
Theoretical and practical extrapolation from general RM frameworks and thematic findings in the literature; no AI-specific primary empirical studies included in the review.
Structured risk management can produce potential cost savings via reduced loss events and more efficient capital allocation.
Reported as a benefit across some reviewed studies and practitioner reports; the review notes lack of primary empirical quantification of effect sizes.
Firms that design processes to preserve human diversity and elicit diverse AI outputs may capture greater productivity gains, increasing returns to organizational capability rather than to raw model access.
Theoretical implication and prescriptive recommendation based on observed homogenization; no direct causal firm-level evidence presented, inference based on economic reasoning.
Investments to build trust in AI (transparency, reliability, training) are likely to have positive returns via higher adoption rates and realized AI benefits.
This is presented as an implication derived from observed positive associations between trust and outcomes; the study did not conduct cost–benefit or longitudinal causal tests of such investments in the reported analyses.
Practical levers to increase AI trust include transparency of AI models, demonstrated reliability, and manager-focused AI literacy/training.
Paper proposes these levers based on study findings and discussion (recommendations), but they were not tested experimentally in the reported cross-sectional survey.
A stronger data-driven decision culture that stems from AI trust yields better operational and academic outcomes.
Study reports positive associations between AI trust → data-driven culture → operational and academic outcomes in survey-based analyses; however, the summary does not specify which operational/academic metrics were measured or sample size.
On-Premise RAG provides a viable path for SMEs sensitive to security and cost to adopt advanced language capabilities without perpetual vendor fees or data exposure.
Synthesis of technology, organizational, and environment/security analyses (TOE framework) and implications section arguing SMEs can adopt on-prem RAG; presented as an implication rather than proven adoption data.
Procurement contracts for AI systems can require staged validation (pilot, local fine-tuning) and performance-linked payments to align incentives and reduce adoption risk.
Policy recommendation drawn from procurement and incentive-design literature synthesized in the review; not an empirical claim about observed outcomes but a proposed intervention to mitigate identified risks.
Clear regulatory standards for synthetic data quality, provenance, and acceptable validation pipelines will lower transaction costs, reduce liability risk, and stimulate private-sector offerings (synthetic-data services, marketplaces).
Policy and governance analyses in the review arguing that regulatory clarity reduces uncertainty and promotes market activity; this is a policy inference supported by comparative regulatory studies rather than direct causal empirical proof specific to African markets.
The dissertation implies policy interventions (subsidies, tax incentives, training and integration assistance) can accelerate welfare-improving AI adoption by helping firms overcome the early negative part of the U-shaped profit profile.
Policy implication derived from the theoretical U-shaped profit relationship and model interpretation; not supported by randomized or quasi-experimental policy evaluation in the provided summary.