Evidence (8269 claims)
- Adoption: 5674 claims
- Productivity: 4951 claims
- Governance: 4451 claims
- Human-AI Collaboration: 3529 claims
- Labor Markets: 2705 claims
- Innovation: 2619 claims
- Org Design: 2574 claims
- Skills & Training: 2060 claims
- Inequality: 1399 claims
Evidence Matrix
Claim counts by outcome category and direction of finding.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 433 | 117 | 68 | 490 | 1124 |
| Governance & Regulation | 434 | 207 | 125 | 65 | 848 |
| Research Productivity | 268 | 100 | 34 | 303 | 710 |
| Organizational Efficiency | 421 | 100 | 73 | 43 | 641 |
| Technology Adoption Rate | 329 | 130 | 75 | 42 | 581 |
| Firm Productivity | 312 | 38 | 70 | 12 | 437 |
| Output Quality | 264 | 74 | 27 | 30 | 395 |
| AI Safety & Ethics | 121 | 182 | 46 | 27 | 378 |
| Market Structure | 111 | 129 | 85 | 14 | 344 |
| Decision Quality | 177 | 78 | 40 | 19 | 318 |
| Fiscal & Macroeconomic | 89 | 58 | 33 | 22 | 209 |
| Employment Level | 77 | 34 | 78 | 9 | 200 |
| Skill Acquisition | 100 | 36 | 41 | 9 | 186 |
| Innovation Output | 122 | 12 | 26 | 13 | 174 |
| Firm Revenue | 98 | 36 | 24 | — | 158 |
| Consumer Welfare | 77 | 35 | 37 | 7 | 156 |
| Task Allocation | 92 | 17 | 35 | 8 | 153 |
| Inequality Measures | 25 | 78 | 33 | 5 | 141 |
| Regulatory Compliance | 54 | 61 | 13 | 3 | 131 |
| Task Completion Time | 91 | 7 | 4 | 3 | 105 |
| Error Rate | 45 | 53 | 6 | — | 104 |
| Training Effectiveness | 59 | 13 | 12 | 16 | 101 |
| Worker Satisfaction | 47 | 34 | 11 | 7 | 99 |
| Wages & Compensation | 55 | 15 | 20 | 5 | 95 |
| Team Performance | 50 | 13 | 15 | 8 | 87 |
| Automation Exposure | 28 | 28 | 11 | 7 | 77 |
| Job Displacement | 7 | 40 | 13 | — | 60 |
| Hiring & Recruitment | 40 | 4 | 7 | 3 | 54 |
| Developer Productivity | 38 | 4 | 4 | 3 | 49 |
| Social Protection | 22 | 11 | 6 | 2 | 41 |
| Creative Output | 17 | 8 | 6 | 1 | 32 |
| Skill Obsolescence | 3 | 23 | 2 | — | 28 |
| Labor Share of Income | 12 | 6 | 10 | — | 28 |
| Worker Turnover | 10 | 12 | — | 3 | 25 |
Destinations that invest in trustworthy AI ecosystems and credible sustainability narratives can capture greater market share, increasing competitive pressure among destinations and platforms.
Conceptual market-structure argument and literature synthesis; illustrated with Kebumen as an emergent destination example; no empirical testing offered.
AI personalization can increase demand by improving match quality between tourists and offerings, raising consumer surplus and potentially willingness-to-pay.
Theoretical economic reasoning in the AI economics section of the paper; no empirical estimates or data provided.
These effects operate largely through consumer trust in technology (digital trust) as a mediator, with destination image serving as an additional mediator between trust and behavioral intention.
Theoretical mediation model proposed in the paper based on sustainable marketing theory and prior literature; illustrated via case discussion; no empirical testing reported.
Digital experience quality, AI-driven personalization, sustainability communication, and social proof jointly shape destination image and tourists’ visit intention.
Conceptual integrative framework and literature synthesis presented in the paper; illustrated using Kebumen UNESCO Global Geopark as a case example; no primary empirical data collected.
Public funding for open models, shared compute infrastructures, and curated public datasets could counteract concentration and promote broad innovation.
Paper advocates this in 'Policy and public‑goods considerations' as a prescriptive policy option; it is a proposed mitigation rather than an empirically tested intervention in the text.
Demand for security engineers, privacy specialists, human moderators, and behavioral scientists will rise, increasing wages in these specialties and altering labor allocations in AI/VR firms.
Authors' labor‑market inference drawn from increased needs implied by TVR‑Sec implementation and literature on moderation/security demand; no labor‑market data or forecasts provided.
Platforms that credibly offer strong privacy and socio‑behavioral protections may capture user trust and monetization opportunities (e.g., enterprise, healthcare, education), making safety features a potential competitive differentiator.
Authors' market‑structure reasoning based on synthesized literature and economic theory; no empirical adoption or revenue data provided to validate this claim.
Harmonized international norms and transparency measures would reduce transaction costs, limit market fragmentation, and lower the likelihood of destabilizing arms‑race dynamics, thereby improving the environment for cross‑border investment and trade in AI.
Authors' normative/economic argumentation based on comparative findings; proposed as a policy implication rather than an empirically validated result.
Aligning domestic rules with international risk‑mitigation norms, increasing transparency in defence procurement/AI operations, and strengthening multilateral confidence measures would reduce escalation and abuse.
Authors' policy argumentation and normative reasoning based on comparative findings (not empirically tested in the paper).
Better consent mechanisms (granular, transferable, delegable) can change the marginal value and liquidity of personal data—enabling new pricing/contracting models (subscriptions, pay-for-privacy, data dividends).
Normative and conceptual claim from the workshop's economics discussion and design provocations; not empirically evaluated within the workshop summary.
We need to move beyond explicit, one-time decisions to broader ways users can influence data use (e.g., delegation, preferences over inference/usage).
Workshop recommendation emerging from co-design exercises, futures scenarios, and position papers; presented as a normative/design agenda rather than an empirically tested intervention.
Policy instruments such as open-data mandates, compute-sharing incentives, and conditionality in R&D funding can help ensure equitable validation and local engagement in climate-AI development.
Policy recommendations grounded in normative analysis and analogies to existing public-good interventions; no empirical evaluation of these specific instruments provided in the paper.
Economists should prioritize research to quantify returns to investments in CDPI versus private compute, estimate economic costs of maladaptation from biased AI outputs, and design incentive-compatible mechanisms for data sharing and co-production.
Research agenda and recommendations presented by the authors; this is a suggested empirical/theoretical program rather than a tested result.
Establishing Climate Digital Public Infrastructure (CDPI)—shared, interoperable data and compute resources, standards, and governance—can democratize access and reduce inequities in climate-AI.
Policy proposal and normative argument drawing analogies to public goods (observational networks, satellites); no empirical evaluation of CDPI implementations presented.
Shifting from a model-centric to a data-centric approach (improving data quality, representativeness, and governance) will mitigate the harms caused by current infrastructural asymmetries.
Normative recommendation grounded in conceptual arguments and illustrative examples; not supported by empirical interventions or randomized/controlled comparisons in the paper.
Policy and governance should preserve worker agency (participatory design, transparency, clear accountability) and support training and institutional mechanisms (collective bargaining, workplace representation) to negotiate value-sharing from AI productivity gains.
Normative policy recommendation by authors derived from qualitative findings (workshops with 15 UX designers) that highlighted agency and distributional concerns.
THETA outputs can be used to create domain-tailored textual covariates (e.g., narrative indices, topic intensity) for regressions or forecasting, provided researchers validate outputs with human coding and sensitivity checks.
Practical recommendation and implication for economists in the discussion; not an empirical claim directly tested in the reported experiments.
THETA can surface domain-specific frames, stakeholder positions, and emergent arguments from large comment corpora or filings, assisting policy and regulatory analysis.
Stated implication and example applications (regulatory comment corpora, filings); no direct case-study results or downstream policy-analytic validations included in the summary.
THETA's DAFT plus the agent workflow reduces the marginal cost of coding and classification, making large-N qualitative analysis more feasible.
Argued implication based on use of parameter-efficient LoRA and human-in-the-loop agent design; no cost analyses, time studies, or economic comparisons provided in the summary.
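The cost argument rests on parameter-efficient fine-tuning: LoRA freezes the pretrained weight matrix and trains only a low-rank additive update. A minimal numpy sketch of the idea (the dimensions and rank here are illustrative, not THETA's actual configuration):

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, r = 64, 64, 4  # illustrative layer size and low rank

W = rng.normal(size=(d, k))          # frozen pretrained weight (not trained)
A = rng.normal(size=(d, r)) * 0.01   # trainable low-rank factor
B = np.zeros((r, k))                 # starts at zero, so the adapter is a no-op initially

W_eff = W + A @ B                    # effective weight used at inference

# Only A and B are trained, so the trainable-parameter count drops sharply.
full_params = W.size                 # 4096
lora_params = A.size + B.size        # 512, i.e. 12.5% of the full matrix
```

Because only `A` and `B` receive gradients, the marginal cost of adapting the model to a new coding scheme is a small fraction of full fine-tuning, which is the mechanism behind the claimed cost reduction.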
Operationally, platform designers should monitor dependency-graph structure as a systemic risk indicator for price volatility and provide integrator abstractions to encapsulate cross-cutting complexity.
Practical implication drawn from simulation findings (not a direct empirical test on production systems): hybrid integrator results and topology-dominance results motivate these recommendations; no real-world deployment data presented.
Clinic-aware designs and reliable validation can enable clearer evidence of value, facilitating payer reimbursement, value-based care contracts, and new pricing models for AI-enabled medical devices and services.
Policy and reimbursement implications discussed by clinicians and industry participants during the workshop and summarized in the workshop report (NSF workshop, Sept 26–27, 2024).
Scalable validation ecosystems and continuous objective measures reduce information asymmetries between developers, clinicians, and payers, lowering commercialization and regulatory risk, which raises private returns and speeds adoption.
Economic implications and causal argument set out in the workshop summary based on expert judgement and theory discussed at the NSF workshop (Sept 26–27, 2024).
Procedural material modeling (Perlin noise) is a promising technique for robust policy learning and can reduce the need for extensive real-world data collection.
Implication stated in the paper's discussion: authors suggest procedural variation via Perlin noise aided robust policy learning and improved sim-to-real transfer; empirical quantification of reduced real data needs is not provided in the summary.
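Procedural material variation of this kind amounts to randomizing surface textures per training episode. The sketch below uses value noise with Perlin-style smoothstep interpolation as a simplified stand-in for full Perlin noise (grid resolution and output shape are illustrative):

```python
import numpy as np

def value_noise(shape=(64, 64), grid=8, seed=0):
    """Smoothly interpolated lattice noise: a simplified stand-in for Perlin noise."""
    rng = np.random.default_rng(seed)
    lattice = rng.uniform(size=(grid + 1, grid + 1))   # random values at grid corners
    ys = np.linspace(0, grid, shape[0], endpoint=False)
    xs = np.linspace(0, grid, shape[1], endpoint=False)
    y0, x0 = ys.astype(int), xs.astype(int)
    ty = (ys - y0)[:, None]
    tx = (xs - x0)[None, :]
    # Perlin-style smoothstep fade for C1-continuous interpolation
    fy = ty * ty * (3 - 2 * ty)
    fx = tx * tx * (3 - 2 * tx)
    c00 = lattice[np.ix_(y0, x0)]
    c10 = lattice[np.ix_(y0 + 1, x0)]
    c01 = lattice[np.ix_(y0, x0 + 1)]
    c11 = lattice[np.ix_(y0 + 1, x0 + 1)]
    top = c00 * (1 - fx) + c01 * fx
    bot = c10 * (1 - fx) + c11 * fx
    return top * (1 - fy) + bot * fy

# Domain randomization: a fresh procedural texture per training episode
textures = [value_noise(seed=s) for s in range(3)]
```

Sampling a new texture per episode exposes the policy to wide appearance variation in simulation, which is the mechanism the authors credit for sim-to-real robustness.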
Perception input providing the material's location inside the vial was used to guide the agent.
Paper summary states perception input (material location) was provided to the agent; sensing modality and accuracy/details of perception are not specified.
Privacy-preserving accountability logs can support ex post adjudication, insurance products, and reputational dynamics, reducing moral hazard.
Conceptual claim: protocol includes privacy-minded logs; paper argues potential for post-hoc review and insurance. No empirical tests of adjudication or insurance products provided.
Observable capability and coordination-risk signals enable more granular pricing, risk-based contracts, and differentiated service tiers (e.g., primary-only vs primary+auditor).
Policy/economic implication argued conceptually in the paper; no empirical pricing experiments or market data provided.
High capability profiles for some tasks will shift delegation toward agents (automation) and reallocate human labor toward supervision, auditing, and low-win-rate tasks.
Projection based on capability profiles and economic reasoning in the paper; presented as implications rather than empirically demonstrated. No labor-market empirical data provided.
Better matching of tasks to agent competencies improves allocative efficiency across task markets.
Theoretical/economic claim derived from capability profiles enabling improved matching; no empirical market experiments or measurements reported in the summary (field experiments suggested as future work).
Task-aware signals reduce search and screening costs by acting like quality/reliability metrics in delegation markets.
Economic implication argued conceptually in the paper: task-conditioned capability and coordination-risk signals function as observable quality metrics, reducing transaction costs. This is a theoretical argument; no empirical market-level test reported.
Using CFR avoids the computational and development costs of retraining T2I models to improve color fidelity, providing a lower-cost path to better color authenticity.
Paper emphasizes CFR is training-free and applies at inference, claiming improved color authenticity without model retraining; cost implication is inferred from lack of retraining (quantitative compute savings not provided in the summary).
Once trained, these simulation-trained summary networks are fast to evaluate and can be used as amortized estimators to enable large-scale counterfactuals, sensitivity analyses, and Monte Carlo-based policy evaluation with much lower per-evaluation cost.
Practical implication claim: based on amortization principle (neural network inference is fast at evaluation time) and reported ability to replace repeated runs of iterative algorithms; the summary asserts reduced per-evaluation cost but does not provide quantitative runtime benchmarks or speedup ratios in the provided text.
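The amortization principle can be shown with a toy example: pay the iterative solver's cost once to generate training pairs, fit a cheap estimator, then evaluate it at Monte Carlo scale. The Newton-iteration "simulator" and polynomial surrogate below are illustrative stand-ins, not the paper's summary networks:

```python
import numpy as np

def newton_sqrt(a, iters=30):
    """Stand-in for an expensive iterative algorithm: Newton's method for sqrt(a)."""
    x = np.maximum(a, 1.0)
    for _ in range(iters):
        x = 0.5 * (x + a / x)
    return x

# One-time training cost: run the expensive solver to build a dataset
a_train = np.linspace(0.5, 10.0, 200)
y_train = newton_sqrt(a_train)
coeffs = np.polyfit(a_train, y_train, deg=6)   # the "amortized estimator"

# Per-evaluation cost is now a single polynomial evaluation, so large
# Monte Carlo sweeps no longer re-run the iterative solver per draw.
a_mc = np.random.default_rng(0).uniform(0.5, 10.0, 100_000)
approx = np.polyval(coeffs, a_mc)
max_err = np.max(np.abs(approx - np.sqrt(a_mc)))
```

The upfront fitting cost is amortized over every downstream counterfactual or sensitivity evaluation, which is why per-evaluation cost falls even though total training cost is nonzero.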
Surrogate-accelerated workflows reduce energy consumption and carbon footprint per discovery because they require fewer expensive evaluations.
Stated implication in the paper linking fewer expensive quantum-chemistry/DFT evaluations to lower energy use; no measured energy/emissions data provided in the summary.
Order-of-magnitude reductions in expensive evaluations enable faster R&D cycles and higher throughput for exploration of potential-energy landscapes in materials science, catalysis, and drug design.
Policy/economic implication argued in the paper based on empirical reductions in expensive evaluations; no direct time-to-discovery experiments reported in the summary.
Organizations should consider LLM-generated feedback as a high-return, lower-cost PRF option for low-resource retrieval tasks to reduce expenses tied to corpus annotation or expensive retrieval pipelines.
Implication drawn from the paper's cost-effectiveness results (LLM-generated feedback performing well per LLM invocation cost across the evaluated BEIR tasks).
QCSC capabilities could change the economics of certain AI model classes that rely on expensive scientific simulations for training data by producing richer, cheaper training datasets.
Theoretical link between simulation output quality/cost and training-data generation for physics-informed ML and generative chemistry models; no empirical studies or cost estimates presented.
QCSC-enabled faster, higher-fidelity simulation can compress R&D cycles in chemistry and materials, lowering time-to-discovery and increasing returns to computational investment for firms.
Use-case analysis linking simulation fidelity/turnaround to R&D timelines; relies on assumed speedups and fidelity improvements but provides no measured speedup data.
The proposed approach will increase demand for edge/embedded ML expertise, GNN optimization, and HAPS integration, shifting supplier ecosystems and labor requirements.
Workforce and supply-chain implication stated in the paper's discussion of economic impacts; based on projected capabilities required to implement FL+GNN solutions, not on labor-market measurements.
FL reduces raw-data movement across jurisdictions, easing regulatory compliance for cross-border NTN services and supporting privacy-preserving business models.
Implication derived from the federated approach (local model updates vs. raw-data transfer) noted in the paper; no legal/regulatory case studies or measurements provided.
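The compliance argument follows from the federated averaging pattern: raw data never leaves a client; only locally computed model updates cross the boundary to the aggregator. A minimal FedAvg sketch on synthetic linear-regression data (client count, step sizes, and round counts are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

# Raw data stays on each client (e.g., per-jurisdiction measurements)
clients = []
for _ in range(5):
    X = rng.normal(size=(100, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=100)
    clients.append((X, y))

w_global = np.zeros(2)
for _ in range(50):                              # communication rounds
    local_ws = []
    for X, y in clients:
        w = w_global.copy()
        for _ in range(5):                       # local gradient steps
            grad = 2 * X.T @ (X @ w - y) / len(y)
            w -= 0.05 * grad
        local_ws.append(w)                       # only the model update leaves the client
    w_global = np.mean(local_ws, axis=0)         # FedAvg aggregation
```

Since the aggregator sees only averaged parameters, no raw records are transferred across jurisdictions, which is the basis for the claimed compliance and privacy benefits.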
HAPS-as-aggregator creates a distributed service layer between satellites and terrestrial infrastructure, enabling new roles (HAPS operators, FL orchestration providers) and revenue streams.
Paper's market-structure implications: conceptual argument that HAPS aggregation in an FL architecture yields opportunities for new service roles and monetization; no market or revenue analysis provided.
Lightweight GNNs enable more intelligence on-board or at HAPS without requiring major hardware upgrades, potentially deferring capital expenditures (CapEx).
Economic/operational implication in the paper based on the stated compactness of the GNN model and its suitability for edge/on-board deployment; no quantified hardware or CapEx comparison provided.
Improved predictive beam selection (from the proposed GNN/FL approach) reduces link outages and retransmissions, cutting operational costs and improving user experience.
Economic implication stated in the paper linking better beam prediction/stability (experimentally observed) to reduced outages and retransmissions; no direct measurement of outages/retransmissions or operational cost savings reported in the summary.
Adopting DPS-like efficiencies reduces the marginal compute cost of online prompt-selection workflows (dominated by rollouts), thereby shortening finetuning cycles and increasing developer productivity.
Paper's implications section: logical inference from reported reduction in rollouts and rollout compute; not an empirical market study—no dollar or industry-scale numbers provided.
There is a strong complementarity between AI investments and organizational change: firms with better leadership, cross-functional processes, and data practices capture disproportionate benefits, implying increasing returns to scale and potential winner-take-most dynamics.
Authors' theoretical inference from cross-case patterns and economic reasoning; supported qualitatively by cases showing disproportionate gains in better-managed firms.
Firms that can credibly supply explainability and governance may capture a premium—explainability can be a competitive differentiator and a signal of quality and lower regulatory risk.
Conceptual synthesis and market-structure arguments from the reviewed literature; reviewed studies provide theoretical and some qualitative support but not systematic market-price estimates.
Policy should incentivize transparency, auditability, standards for human–AI interfaces, workforce development, certification of teaming practices, and liability frameworks to ensure accountability and equitable outcomes.
Normative recommendation based on ethical and governance considerations synthesized in the paper; not supported by policy evaluation evidence within the paper.
Orchestrating attention and interrogation through interface and workflow design helps manage what humans and AI focus on and how they challenge/verify each other, thereby reducing errors and misuse.
Prescriptive claim grounded in human factors and HCI literature synthesized by the authors; the paper suggests these mechanisms but does not report empirical trials demonstrating effects.
Design principles (define goals/constraints, partition roles, orchestrate attention/interrogation, build knowledge infrastructures, continuous training/evaluation) are necessary design levers to build high-performing, transparent, trustworthy, and equitable Human–AI teams.
Prescriptive synthesis from reviewed literatures and conceptual modeling; these principles are proposed heuristics rather than empirically validated interventions in the paper.
Embedding AI produces operational gains: automation of routine tasks, fewer errors, faster decision cycles, and continuous model learning/refinement.
Operational claim articulated conceptually with suggested evaluation metrics (forecast accuracy, latency, false positive/negative rates); the paper does not present empirical measurement, sample sizes, or deployment results.
Risk management can accelerate AI adoption by lowering uncertainty for managers and investors, thereby affecting diffusion and productivity gains from AI.
Conceptual implication derived from the review's synthesis and discussion (policy/implication section); not supported by primary empirical testing within the reviewed literature.
Firms that adopt structured risk management for AI projects can reduce model failure, operational losses, and reputational costs—improving risk-adjusted returns on AI investment.
Theoretical and practical extrapolation from general RM frameworks and thematic findings in the literature; no AI-specific primary empirical studies included in the review.