Evidence (3492 claims)

Claim counts by topic filter:

- Adoption: 7395 claims
- Productivity: 6507 claims
- Governance: 5877 claims
- Human-AI Collaboration: 5157 claims
- Innovation: 3492 claims
- Org Design: 3470 claims
- Labor Markets: 3224 claims
- Skills & Training: 2608 claims
- Inequality: 1835 claims
Evidence Matrix
Claim counts by outcome category and direction of finding.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 609 | 159 | 77 | 736 | 1615 |
| Governance & Regulation | 664 | 329 | 160 | 99 | 1273 |
| Organizational Efficiency | 624 | 143 | 105 | 70 | 949 |
| Technology Adoption Rate | 502 | 176 | 98 | 78 | 861 |
| Research Productivity | 348 | 109 | 48 | 322 | 836 |
| Output Quality | 391 | 120 | 44 | 40 | 595 |
| Firm Productivity | 385 | 46 | 85 | 17 | 539 |
| Decision Quality | 275 | 143 | 62 | 34 | 521 |
| AI Safety & Ethics | 183 | 241 | 59 | 30 | 517 |
| Market Structure | 152 | 154 | 109 | 20 | 440 |
| Task Allocation | 158 | 50 | 56 | 26 | 295 |
| Innovation Output | 178 | 23 | 38 | 17 | 257 |
| Skill Acquisition | 137 | 52 | 50 | 13 | 252 |
| Fiscal & Macroeconomic | 120 | 64 | 38 | 23 | 252 |
| Employment Level | 93 | 46 | 96 | 12 | 249 |
| Firm Revenue | 130 | 43 | 26 | 3 | 202 |
| Consumer Welfare | 99 | 51 | 40 | 11 | 201 |
| Inequality Measures | 36 | 105 | 40 | 6 | 187 |
| Task Completion Time | 134 | 18 | 6 | 5 | 163 |
| Worker Satisfaction | 79 | 54 | 16 | 11 | 160 |
| Error Rate | 64 | 78 | 8 | 1 | 151 |
| Regulatory Compliance | 69 | 64 | 14 | 3 | 150 |
| Training Effectiveness | 81 | 15 | 13 | 18 | 129 |
| Wages & Compensation | 70 | 25 | 22 | 6 | 123 |
| Team Performance | 74 | 16 | 21 | 9 | 121 |
| Automation Exposure | 41 | 48 | 19 | 9 | 120 |
| Job Displacement | 11 | 71 | 16 | 1 | 99 |
| Developer Productivity | 71 | 14 | 9 | 3 | 98 |
| Hiring & Recruitment | 49 | 7 | 8 | 3 | 67 |
| Social Protection | 26 | 14 | 8 | 2 | 50 |
| Creative Output | 26 | 14 | 6 | 2 | 49 |
| Skill Obsolescence | 5 | 37 | 5 | 1 | 48 |
| Labor Share of Income | 12 | 13 | 12 | — | 37 |
| Worker Turnover | 11 | 12 | — | 3 | 26 |
| Industry | — | — | — | 1 | 1 |
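The direction counts in the matrix can be compared across categories as shares. A minimal Python sketch using three rows transcribed from the table above (the row values are copied from the matrix; everything else is illustrative):

```python
# Share of positive findings per outcome category, from three rows of
# the evidence matrix above: (positive, negative, mixed, null) counts.
rows = {
    "Task Completion Time": (134, 18, 6, 5),
    "Job Displacement": (11, 71, 16, 1),
    "Inequality Measures": (36, 105, 40, 6),
}

positive_share = {
    category: pos / (pos + neg + mixed + null)
    for category, (pos, neg, mixed, null) in rows.items()
}

for category, share in sorted(positive_share.items(), key=lambda kv: -kv[1]):
    print(f"{category}: {share:.0%} of claims report a positive finding")
```

Sorting by positive share makes the asymmetry visible: time-savings outcomes skew strongly positive, while displacement and inequality outcomes skew negative, echoing the table.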
Innovation
- Voyage routing remains dominated by heuristic methods.
  Evidence: Contextual statement in the paper (literature/practice claim); no specific empirical study or quantitative survey provided in the excerpt.
- Mergers are a barrier to economic growth (negative association between mergers and GDP growth).
  Evidence: The regressions described in the summary report a negative relationship between mergers and GDP growth; however, the summary does not define how 'mergers' is measured, how broadly the relationship holds across countries, or the statistical significance levels.
- Without effective safeguards, the digital world can shift from a space of opportunity to one of harm.
  Evidence: Normative/conditional claim drawing on the book's analysis; not an empirical finding—no method or sample size applicable in the excerpt.
- AI-driven productivity gains may not translate into broad-based demand if income is concentrated among capital owners, which could dampen aggregate profitability over time.
  Evidence: Theoretical argument grounded in Mandel-like distributional mechanics and demand-driven growth literature; speculative without empirical aggregation tests in the paper.
- Concentration of curated datasets and restrictive IP can create monopolistic rents and underprovision of public‑good datasets, implying policy interventions (data sharing incentives/standards) may be required.
  Evidence: Economic reasoning about market formation and data as a scarce asset; no empirical market analysis provided in summary (theoretical implication).
- Commercial structural biology services for routine solved folds may be commoditized, pushing firms toward complex validation, novel targets, or high‑value contract research.
  Evidence: Paper suggests this in 'Disruption of service markets' as a projected industry response; it is a strategic implication rather than an empirically demonstrated trend in the text.
- Job insecurity rises when FDI is short‑term, footloose, or concentrated in capital‑intensive extractive projects.
  Evidence: Conceptual arguments and empirical examples in the review link investment temporariness and capital intensity to higher job instability; the empirical evidence is less comprehensive and context-specific.
- Economic rents and advantages may accrue to agents who control large datasets, computing resources, and organizational processes that effectively integrate AI as a co-pilot, potentially increasing market concentration among AI providers.
  Evidence: Economic theory on scale economies and platform effects combined with observed industry patterns; the reviewed literature provides conceptual arguments and case examples rather than broad empirical market-structure measurement.
- Generative AI poses substitution risk for entry-level or routine cognitive work focused on generation or drafting without evaluative responsibility.
  Evidence: Task-based analyses and case studies indicate automation potential for routine generation tasks, with empirical demonstrations of AI-produced drafts/outputs that could replace such work; longer-run displacement evidence is limited.
- Recommendation algorithms and widespread automated advice can induce herding or increase common exposures across retail investor portfolios, with potential macroprudential implications.
  Evidence: Theoretical discussion supported by examples from retail trading episodes and algorithmic amplification literature referenced in the review (conceptual and anecdotal evidence; limited systematic empirical quantification).
- There are risks that concentration of modeling capability around well-funded actors could create inequality in capture of downstream economic gains despite open data.
  Evidence: Risk analysis in the discussion section; argued qualitatively without empirical testing in the paper.
- Higher compliance and liability costs may be passed to districts, potentially affecting the affordability of EdTech for underfunded schools unless federal guidance or subsidies offset costs — a distributional concern.
  Evidence: Economic distributional reasoning (theoretical), not supported by empirical pricing or budget impact data in the Article.
- Standardized, high-quality data will concentrate competition on modeling, compute, and algorithmic innovation, favoring actors with greater compute resources.
  Evidence: Economic argument presented in the discussion; not evaluated with empirical market data in the paper.
- This research is one of the first large-scale quantitative studies to empirically validate the mediating pathways through which GenAI influences business performance in the UK market.
  Evidence: Positioning/originality claim in the paper's literature review and contribution statement asserting relative novelty and sample size (n = 312) compared to prior studies.
- Signal legitimacy was validated through negative control experiments.
  Evidence: Experimentation claim: the paper asserts that negative control experiments were run to validate that signals are not due to memorized ticker associations. The excerpt does not specify the design, number, or results of these negative controls.
- The PIER architecture (physics-informed state construction, demonstration-augmented offline data, decoupled post‑hoc safety shield) transfers to wildfire evacuation, aircraft trajectory optimization, and autonomous navigation in unmapped terrain.
  Evidence: Claim of transferability stated in the paper; the excerpt does not include experimental details or quantitative results for these domains.
- Hybrid agency implies complementarity between GenAI and managerial/knowledge‑worker skills (curation, evaluation, coordination), potentially increasing returns to those skills while automating routine cognitive tasks—consistent with skill‑biased technological change.
  Evidence: Synthesis of recurring themes linking GenAI capabilities with managerial skill topics in the thematic clusters; positioned as an implication for labour demand and skill composition rather than an empirically tested effect.
- Policy prescriptions for developing countries to mitigate these vulnerabilities include: diversify supply sources, invest in local human capital and mid-stream capabilities, create legal/regulatory flexibility to navigate competing standards, and pursue regional cooperation to build bargaining leverage.
  Evidence: Policy analysis and recommendations grounded in the mechanisms identified via process tracing and comparative cases; intended as prescriptive synthesis rather than empirically demonstrated interventions in the paper. (Based on inferred best-practice interventions; no empirical evaluation/sample size provided.)
- Public investments in standards, verification infrastructure, and public-interest datasets can correct market failures and support trustworthy AI.
  Evidence: Policy recommendation informed by governance and public-good theory and examples from the literature; the claim is prescriptive and not validated by new empirical evidence within the paper.
- By lowering single-GPU resource requirements and improving throughput, SlideFormer can democratize domain adaptation and fine-tuning of large models on commodity single-GPU hardware (reducing the need for multi-GPU clusters).
  Evidence: Argumentative implication based on reported throughput, memory, and capacity improvements (e.g., enabling 123B+ models on a single RTX 4090 and reducing memory usage). This is an extrapolation from experimental results rather than a directly measured socio-economic outcome.
- Collaborative VR features can change team workflows (remote, synchronous inspection sessions), potentially lowering coordination costs across geographically distributed teams.
  Evidence: Paper lists collaborative multi-user sessions as a planned capability and posits organizational effects; no user studies or measurements of coordination cost savings presented.
- Public funding for shared VR-capable data-exploration infrastructure could yield high leverage by improving returns on large observational investments.
  Evidence: Policy recommendation deriving from the platform and ROI arguments in the paper; no cost-benefit analysis or quantified ROI provided.
- Using iDaVIE increases the usable fraction of large observational datasets by improving QC and annotation throughput, thereby raising returns to telescope investments and downstream AI efforts.
  Evidence: This is an inferred implication in the paper (returns-to-scale/platform effects) based on improved QC/annotation throughput; no empirical measurement of usable-fraction increases provided.
- Higher-quality labels produced via immersive inspection can reduce label noise and lower required training-data sizes for a target ML performance level.
  Evidence: Paper presents this as an implication/expected outcome based on improved annotation quality from immersive inspection; no empirical ML training experiments or quantitative reductions reported.
- iDaVIE demonstrably reduces cognitive load for multidimensional-data tasks compared with 2D-slice inspection.
  Evidence: Paper asserts reduced cognitive load and faster, more intuitive exploration as an aim and reported outcome; no formal user-study metrics, sample size, or statistical analysis provided.
- The methodological template (train an ML surrogate of a costly simulator and embed it in an optimizer) generalizes beyond Doherty power amplifiers to other analog/microwave components and broader engineering domains.
  Evidence: Paper proposes generality of approach in implications section; no experimental demonstrations beyond the Doherty PA case are provided in the summary.
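The surrogate-in-the-loop template named in this claim (train an ML surrogate of a costly simulator, then embed it in an optimizer) can be sketched generically. A minimal illustration under stated assumptions: the quadratic-plus-sine `simulate` below is a cheap placeholder standing in for an expensive simulation, not the paper's Doherty PA model:

```python
import numpy as np

# Placeholder for a costly simulator (assumption: the real one might take
# minutes per call, so only a handful of evaluations are affordable).
def simulate(x):
    return (x - 0.7) ** 2 + 0.1 * np.sin(5 * x)

# 1) Evaluate the costly simulator at a small design-of-experiments sample.
X = np.linspace(0.0, 1.0, 8)
y = simulate(X)

# 2) Fit a cheap surrogate (here: a cubic least-squares polynomial).
surrogate = np.poly1d(np.polyfit(X, y, deg=3))

# 3) Embed the surrogate in an optimizer. A dense grid search is now
#    affordable because each surrogate call costs microseconds.
grid = np.linspace(0.0, 1.0, 10_001)
x_best = grid[np.argmin(surrogate(grid))]

# 4) Confirm the candidate with one final call to the true simulator.
print(f"surrogate optimum near x = {x_best:.3f}; simulated value {simulate(x_best):.4f}")
```

In practice the surrogate might be a neural network or Gaussian process and the search a gradient-based or evolutionary optimizer, but the division of labor is the same: spend simulator calls on training data and a final verification, not on the search itself.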
- Design choices and open-weight availability are intended to align with EU AI Act expectations for regional sovereignty and compliance.
  Evidence: Stated intent in the paper: the authors explicitly frame design and release strategy as aiming to align with EU AI Act regulatory expectations. The summary notes this intention but provides no technical compliance proof or audits.
- EngGPT2 requires substantially less inference compute than comparable dense models—reported as roughly 20%–50% of the inference compute used by dense 8B–16B models.
  Evidence: Paper reports relative inference compute reductions (1/5–1/2). The summary states these percentages but no supporting FLOP counts, latency measurements, hardware, batching conditions, or benchmark-query workloads are provided.
- Embedding culturally aligned moderation and multi-layer safety orchestration can reduce regulatory frictions and increase adoption in conservative or tightly regulated markets.
  Evidence: Paper claims regulatory and safety economics implications from their safety/moderation architecture; this is an asserted implication rather than an empirically validated outcome in the summary.
- The methods used (data quality focus, continual pre-training, model merging, modular product stacks) are potentially transferable to other underrepresented/low-resource languages, lowering barriers to regional AI competitiveness.
  Evidence: Paper posits this policy/transferability implication as an argument in the 'Implications for AI Economics' section; no cross-language experimental evidence provided in the summary.
- Fanar 2.0 demonstrates that targeted data curation, continual pre-training, and model-merging can be a viable alternative to the raw-scale pre-training arms race for language-specific competitiveness.
  Evidence: Paper argues this implication based on achieving benchmark gains on Arabic and English using curated data (120B tokens), continual pre-training, model-merging, and a 256 H100 GPU training budget rather than massively larger-scale pre-training.
- Oryx provides Arabic-aware image/video understanding and culturally grounded image generation.
  Evidence: Paper identifies Oryx as the vision component with Arabic-aware understanding and culturally grounded generation; no benchmark metrics are provided in the summary.
- Exchanging generative modules (rather than raw data) and enabling modular unlearning improves auditability and aligns better with privacy/regulatory compliance than raw-data sharing.
  Evidence: Argument in the paper that module exchange and deterministic module deletion are more compatible with data sovereignty and regulatory requirements; no formal legal validation or compliance testing reported in the summary.
- FederatedFactory enables new economic opportunities (module marketplaces, synthetic-data services) and affects incentives by shifting value toward modular generative assets and orchestration rather than raw centralized datasets.
  Evidence: Conceptual and economic discussion in the paper about potential implications; not based on empirical market data—presented as analysis and hypotheses about economic impact.
- The single-round exchange decreases communication rounds and associated coordination/network costs compared to typical iterative federated learning.
  Evidence: Protocol design: single exchange of generative modules vs. typical multi-round weight-aggregation loops in standard FL; paper argues reduced networking/coordination cost. (No quantitative network-cost measurements provided in the summary.)
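The round-count argument in this claim can be made concrete with back-of-envelope arithmetic. A minimal sketch; all numbers (client count, round count, payload sizes) are illustrative assumptions, not measurements from the paper:

```python
# Illustrative communication-cost comparison: iterative federated
# averaging vs. a single exchange of generative modules.
clients = 10
rounds = 100        # assumed FedAvg-style training rounds
update_mb = 50      # assumed model-update size per client per round
module_mb = 200     # assumed size of one trained generative module

# Iterative FL: every client uploads an update and downloads the new
# global model in every round.
iterative_mb = clients * rounds * update_mb * 2

# Single-round protocol: each client uploads its module once.
single_exchange_mb = clients * module_mb

print(f"iterative FL:    {iterative_mb:,} MB")
print(f"single exchange: {single_exchange_mb:,} MB")
```

Even if a module is several times larger than one weight update, the one-shot protocol avoids the multiplicative rounds-times-clients factor that dominates iterative FL traffic.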
- Public data sharing, reproducibility standards, and shared benchmarks could raise the floor of AI utility across the industry.
  Evidence: Policy implication grounded in arguments about data quality, coverage, and generalizability from the narrative review; speculative recommendation rather than evidence-backed empirical claim.
- There is potential for consolidation as firms acquire data, talent, or validated AI-driven assets.
  Evidence: Industry-structure implication drawn from economics of complementary assets and observed M&A activity patterns; presented as a likely trend rather than demonstrated empirically in the paper.
- AI startups that demonstrate validated, reproducible wet-lab outcomes and access to high-quality data are more likely to command premium valuations.
  Evidence: Argument from observed market behavior and economics of complementary assets presented in the narrative; no systematic valuation analysis included.
- Investors should recalibrate expectations: greater value accrues to firms that integrate AI with experimental pipelines and proprietary data assets rather than firms that only possess AI capability.
  Evidence: Economics-focused implications drawn from thematic analysis of heterogeneity in firm outcomes and integration requirements; market-practice inference rather than empirical valuation study.
- AI tools complement sensory expertise and design thinking, shifting skill demand toward interdisciplinary competencies (e.g., computational rheology, psychophysics, cultural analytics).
  Evidence: Reasoned inference from technology literature and skill-complementarity theory; literature synthesis but no labor-market empirical analysis provided.
- The paper provides a Differentiated Path reference for Emerging Economies to cope with Technological Nationalism.
  Evidence: Claim about the paper's contribution; based on the authors' proposed policy framework and recommendations derived from literature review and theoretical analysis; not empirically validated for emerging economies in the excerpt.
- The reduction of the AI Model Performance Gap between China and the United States to single digits highlights the new trend of Technology Competition.
  Evidence: Empirical/observational claim stated in the paper; no information in the excerpt about the benchmark metric used for model performance, measurement methodology, time frame, or data sources; 'single digits' not numerically specified.
- Supportive regulatory frameworks and digital infrastructure development are important for leveraging AI technologies to improve global trade efficiency.
  Evidence: Study recommendation derived from empirical findings and discussion; this is a policy implication rather than a directly tested empirical claim (no policy evaluation data provided in the summary).
- The study provides empirical support for digital transformation theories within financial intermediation.
  Evidence: Authors interpret quantitative results as empirical evidence consistent with digital transformation theories; specific theoretical tests, model fit statistics, and sample information are not included in the summary.
- AI-enhanced compliance systems increased regulatory transparency.
  Evidence: Study reports improvements in regulatory transparency as part of operational efficiency gains attributed to AI-driven compliance systems in the quantitative analysis; precise transparency metrics and sample details not provided.
- AI has increased the accuracy of patient selection to 80–90%.
  Evidence: Stated performance range for AI-enabled patient selection in the review. The excerpt does not specify the datasets, evaluation metrics (e.g., accuracy vs. AUC), clinical contexts, or sample sizes used to obtain these numbers.
- AI-driven ESG analytics strengthened the financial relevance of sustainability integration and supported better-informed investment decision-making.
  Evidence: Study conclusion synthesizing empirical findings (portfolio outperformance and regression results). This is a normative/concluding statement rather than a directly measured outcome; the summary does not quantify decision-making improvements or measure investor behavior.
- AI improved the informational efficiency of ESG assessment by capturing more accurate, forward-looking sustainability risks and opportunities.
  Evidence: Interpretation based on the study's empirical portfolio and regression results (better returns, risk metrics, and stronger associations). The claim is inferential; the summary does not report a direct, separate test of 'informational efficiency' or measures of forecast accuracy.
- The study's implications include policy recommendations to foster responsible AI adoption and data utilization to mitigate economic risks.
  Evidence: Authors extend findings to policy recommendations in the discussion/conclusion of the paper (no specific policy proposals or evaluative evidence provided in the summary).
- The research produced a practical framework to guide businesses in effectively leveraging AI and Big Data to navigate market volatility.
  Evidence: The paper's culmination is described as a practical framework derived from its mixed-methods findings (the summary does not provide the framework's components or empirical validation).