Evidence (2215 claims)
Claims by category:

- Adoption: 5126 claims
- Productivity: 4409 claims
- Governance: 4049 claims
- Human-AI Collaboration: 2954 claims
- Labor Markets: 2432 claims
- Org Design: 2273 claims
- Innovation: 2215 claims
- Skills & Training: 1902 claims
- Inequality: 1286 claims
Evidence Matrix
Claim counts by outcome category and direction of finding. A dash (—) marks cells with no claims; some row totals exceed the sum of the four direction columns, indicating additional claims whose direction falls outside these categories.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 369 | 105 | 58 | 432 | 972 |
| Governance & Regulation | 365 | 171 | 113 | 54 | 713 |
| Research Productivity | 229 | 95 | 33 | 294 | 655 |
| Organizational Efficiency | 354 | 82 | 58 | 34 | 531 |
| Technology Adoption Rate | 277 | 115 | 63 | 27 | 486 |
| Firm Productivity | 273 | 33 | 68 | 10 | 389 |
| AI Safety & Ethics | 112 | 177 | 43 | 24 | 358 |
| Output Quality | 228 | 61 | 23 | 25 | 337 |
| Market Structure | 105 | 118 | 81 | 14 | 323 |
| Decision Quality | 154 | 68 | 33 | 17 | 275 |
| Employment Level | 68 | 32 | 74 | 8 | 184 |
| Fiscal & Macroeconomic | 74 | 52 | 32 | 21 | 183 |
| Skill Acquisition | 85 | 31 | 38 | 9 | 163 |
| Firm Revenue | 96 | 30 | 22 | — | 148 |
| Innovation Output | 100 | 11 | 20 | 11 | 143 |
| Consumer Welfare | 66 | 29 | 35 | 7 | 137 |
| Regulatory Compliance | 51 | 61 | 13 | 3 | 128 |
| Inequality Measures | 24 | 66 | 31 | 4 | 125 |
| Task Allocation | 64 | 6 | 28 | 6 | 104 |
| Error Rate | 42 | 47 | 6 | — | 95 |
| Training Effectiveness | 55 | 12 | 10 | 16 | 93 |
| Worker Satisfaction | 42 | 32 | 11 | 6 | 91 |
| Task Completion Time | 71 | 5 | 3 | 1 | 80 |
| Wages & Compensation | 38 | 13 | 19 | 4 | 74 |
| Team Performance | 41 | 8 | 15 | 7 | 72 |
| Hiring & Recruitment | 39 | 4 | 6 | 3 | 52 |
| Automation Exposure | 17 | 15 | 9 | 5 | 46 |
| Job Displacement | 5 | 28 | 12 | — | 45 |
| Social Protection | 18 | 8 | 6 | 1 | 33 |
| Developer Productivity | 25 | 1 | 2 | 1 | 29 |
| Worker Turnover | 10 | 12 | — | 3 | 25 |
| Creative Output | 15 | 5 | 3 | 1 | 24 |
| Skill Obsolescence | 3 | 18 | 2 | — | 23 |
| Labor Share of Income | 7 | 4 | 9 | — | 20 |
Innovation
Compliance and reporting requirements will impose additional costs on firms, with small providers likely disproportionately affected unless rules are proportionate.
Policy analysis of compliance and transaction costs (qualitative assessment of administrative burden and scale effects).
The facility-level focus and training-phase emphasis of current governance limit regulators' ability to monitor and mitigate the full environmental externalities of modern AI systems.
Synthesis of empirical findings on model/inference impacts combined with regulatory mapping showing gaps between impact locus and regulatory reach.
Transparency about AI environmental impacts has declined even as deployments of generative models have accelerated, creating an information gap for regulators, users, and researchers.
Trend observations from collated operational datasets and cited empirical studies indicating reduced disclosure by providers alongside increased deployments; supported by regulatory mapping noting scant AI-specific reporting outside the EU.
The larger cumulative environmental impacts of these generative models are primarily driven by inference-phase (online serving) energy consumption rather than training-phase emissions.
Evidence synthesis and operational data analysis focusing on deployment/inference patterns and relative contribution of lifecycle phases in examined models.
Generative web-search and reasoning AI models deployed widely in 2025 impose substantially higher cumulative environmental costs than earlier AI generations, largely driven by inference at scale.
Evidence synthesis: collation of empirical studies and operational data comparing energy and emissions profiles of 2025-era model families and deployment patterns (paper-wide comparative accounting).
Differences in access to AI tools and digital infrastructure could exacerbate global and within-country inequalities in research capacity and outputs.
Statement in the paper's Distributional and Competitive Effects section. Motivated by observed heterogeneity in infrastructure and access; the abstract does not provide empirical heterogeneity estimates or samples.
Institutions that adopt and integrate AI effectively may gain disproportionate advantages, increasing stratification in academic prestige and funding.
Presented as a distributional/competitive implication. Based on theory and possibly institutional case studies; no causal evidence or quantitative estimates provided in the abstract.
Conversely, lack of standards or failed validation can create regulatory setbacks, reputational risk, and stranded R&D spending.
Case reports and regulatory analysis in the narrative review describing negative outcomes from failed validation or non-aligned AI tools (qualitative evidence).
There is substitution risk: routine ideation and drafting tasks may be automated, altering task-level labor demand and wage structure.
Task-automation literature and empirical studies of LLMs performing routine drafting/ideation tasks summarized in the review; no long-run labor-market causality established in the paper.
Generative AI lacks reliable situational judgment on ambiguous problems and on ethical trade-offs, making it insufficient for autonomous decision-making in such contexts.
Case examples and experimental studies cited in the synthesis showing inconsistent or inappropriate responses to ambiguous/ethical scenarios; no large-scale causal evidence provided.
LLMs are prone to bias, mediocrity, and factual or logical errors when domain-specific context or experiential knowledge is absent.
Review of empirical evaluations documenting biased outputs, superficial or mediocre suggestions, and factual errors in open-ended tasks and domain-specific prompts; evidence comes from multiple short-term studies and applied examples.
LLMs are predominantly recombinative — they tend to rework and recombine existing material rather than produce deeply novel insights.
Analytical synthesis of output analyses and creativity assessments from multiple empirical studies demonstrating frequent recombination of existing concepts and lower rates of highly original novelty; studies and measures vary.
Proliferation of low-quality or biased AI-generated ideas creates externalities: increased filtering and reputational costs for firms and risks of poor product designs, ethical lapses, or regulatory violations if evaluation is insufficient.
Case studies and qualitative reports documenting filtering burdens and instances of biased/misleading outputs; theoretical reasoning about reputational and regulatory risks; direct quantification of these externalities is limited.
Standard productivity metrics (e.g., TFP) may undercount the value of ideation and creative augmentation provided by generative AI, making attribution between human and AI contributions difficult.
Methodological discussion in the review supported by heterogeneity in outcome measures across studies and challenges in measuring implemented idea quality and long-run impacts.
Generative models exhibit recombination bias: they tend to remix existing patterns rather than produce deeply original, paradigm-shifting insights.
Synthesis of output analyses across studies showing frequent recombination of known patterns and limited evidence of wholly novel, paradigm-changing ideas; claim based on qualitative and comparative analyses in reviewed literature.
Biased training data or objective functions in AI models could perpetuate gender disparities by offering different products or risk scores to men and women.
Review of AI fairness literature and examples of algorithmic disparate impacts summarized in the paper (conceptual and case evidence; not an empirical test tied specifically to fintech products in the review).
Secure infrastructure (including SECaaS-provided tools) affects the availability and trustworthiness of AI training data and models; breaches reduce returns to AI R&D via direct losses and reduced trust.
Conceptual linkage supported by case studies of data/model theft and technical literature on secure enclaves, differential privacy, federated learning; no broad quantitative estimate provided.
Security externalities (one firm's breach raising ecosystem risk) complicate private incentives and may justify policy interventions such as standards or mandatory reporting.
Economic theory on externalities, case studies showing spillovers from breaches, and policy analyses recommending interventions.
Concentration among large cloud/SECaaS providers can create market power, platform dependency, and affect competition in AI markets.
Market-structure theory, observed concentration patterns in industry reports, and qualitative case studies; no causal estimates provided in the chapter.
Latency and integration frictions can limit the suitability of SECaaS for specialized workloads, including some AI pipelines.
Technical evaluations and benchmarks that measure latency/resource overhead; reports and case studies noting integration challenges for high-throughput or low-latency workloads.
Reliance on a small set of major cloud/SECaaS providers creates vendor lock-in, concentration risk, and systemic vulnerability if a major provider is compromised.
Market-structure discussions, observed provider outages and incidents (case studies), and theoretical arguments about concentration; no single causally identified empirical estimate provided.
Divergent governance regimes increase the risk of data localization, interoperability frictions, and regulatory fragmentation — raising costs for multinational AI development and limiting global model generalizability.
Policy‑level comparative inference from contrasting national approaches identified in the document analysis and related literature on cross‑border data governance; no direct measurement of costs or model generalizability in the paper.
State‑led coordination can rapidly mobilize resources and scale national champions, altering competitive dynamics and potentially creating winner‑take‑most outcomes.
Theoretical inference from document evidence of state mobilization and developmentalist goals in Chinese texts, combined with literature on state coordination and industrial scaling (no empirical competition measures in the paper).
Holding schools liable under federal civil‑rights statutes is sometimes possible but often insufficient to prevent or remediate harms caused by EdTech products.
Policy argumentation and doctrinal analysis with hypotheticals and illustrative cases demonstrating enforcement limitations when only schools are targeted (no empirical prevalence data).
Stronger negative sentiment (measured by aggregated VADER scores of complaint narratives) is significantly associated with near-term stock price declines.
VADER sentiment is applied to individual complaint narratives and then aggregated to firm–month sentiment scores; fixed-effects panel models find statistically significant negative relationships between more negative aggregated VADER scores and subsequent abnormal returns across the 261-firm monthly sample (2018–2023).
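The aggregation step in such a pipeline can be sketched as follows. This is a minimal stand-in, not the paper's code: the lexicon values, firm names, and scoring rule are invented for illustration (a real analysis would use the compound score from the `vaderSentiment` package), but the complaint-to-firm-month aggregation mirrors the described approach.

```python
from collections import defaultdict
from statistics import mean

# Toy valence lexicon standing in for VADER's dictionary (illustrative values only).
LEXICON = {"fraud": -3.0, "error": -1.5, "delay": -1.0, "helpful": 2.0, "resolved": 1.5}

def narrative_score(text):
    """Crude compound-style score: mean valence of lexicon hits, 0 if none."""
    hits = [LEXICON[w] for w in text.lower().split() if w in LEXICON]
    return mean(hits) if hits else 0.0

def firm_month_sentiment(complaints):
    """Aggregate complaint-level scores to firm-month means.

    complaints: iterable of (firm, month, narrative) tuples.
    """
    buckets = defaultdict(list)
    for firm, month, text in complaints:
        buckets[(firm, month)].append(narrative_score(text))
    return {key: mean(scores) for key, scores in buckets.items()}

data = [
    ("AcmeBank", "2023-01", "billing error and long delay"),
    ("AcmeBank", "2023-01", "suspected fraud on my account"),
    ("AcmeBank", "2023-02", "issue resolved quickly helpful staff"),
]
panel = firm_month_sentiment(data)
# panel maps (firm, month) to mean narrative sentiment; these firm-month
# scores would then enter a fixed-effects panel regression on returns.
```

The resulting firm-month panel is the unit of observation in the described fixed-effects models.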
Multipolar competition in AI increases risks of fragmented regulations, export control cascades, and inefficient duplication of standards, producing large economic coordination and collective‑action costs.
Theoretical argument and literature synthesis on international political economy of standards and controls; no novel quantitative cost estimates, though the paper recommends empirical research avenues to quantify these costs.
AI‑driven information operations, recommendation systems, and content economies alter market incentives, advertising revenues, and the political economy of attention—creating externalities not priced in markets.
Interpretive synthesis of literature on digital platforms, misinformation, and attention economics; supported by cited secondary studies and policy examples rather than new empirical measurement.
Competition over AI standards, data governance norms, and platform rules is an economic contest with long‑run market structure implications (network effects, winner‑take‑most outcomes).
Theoretical synthesis drawing on platform economics and standards literature; supported by qualitative examples of standard‑setting contests but without new quantitative market structure analysis.
Export controls, sanctions, investment screening, and tech diplomacy function as economic levers of smart power and reshape global AI supply chains, FDI flows, and comparative advantage.
Policy‑focused evidence and examples cited in the literature review and case studies; proposed policy event‑study approaches are suggested but no original empirical event study is presented.
The digital/AI era changes both the tools (new technological instruments of influence) and the targets (information environments, data infrastructures), creating novel governance and collective‑action problems.
Conceptual analysis supported by literature synthesis on digital platforms, AI, surveillance, and information operations; illustrative examples from policy and secondary studies rather than original empirical measurement.
Framing policy as 'Digital Sovereignty' supports data‑localization and stronger cross‑border constraints, which will affect multinational fintechs and cross‑border credit/data services.
Policy-framing and international governance analysis in the compendium; inference about cross‑border regulatory impacts rather than measured effects.
Mandatory white‑box transparency and audit requirements are likely to favor firms that can afford compliance (larger incumbents and certified auditors), potentially raising barriers to entry for small fintechs unless mitigated by proportional rules or sandboxes.
Economic inference and market-structure analysis presented in the "Market structure & competition" section; no empirical panel or field data (theoretical reasoning).
Poorly calibrated rules may unintentionally restrict product offerings or increase costs for low‑income borrowers if compliance expenses are passed through.
Risk analysis and economic reasoning in the compendium; projection based on standard pass‑through and market equilibrium logic (no empirical measurement provided).
Recognition of digital sovereignty and data‑localization pressures can fragment data flows, increasing costs for cross‑border model training and lowering scale economies that benefit high‑quality AI.
Policy and economic analysis in the compendium drawing on comparative examples and theory about data localization and scale economies; no empirical cost accounting provided.
Replacing opaque predictive features with interpretable substitutes could reduce predictive accuracy in some models, creating trade‑offs between fairness/transparency and short‑term efficiency.
Synthesis of technical AI governance literature and normative design discussion in the compendium; no new experimental validation reported.
Mandatory white‑box requirements and audits will raise compliance costs, which can increase barriers to entry for smaller fintechs and favor incumbents unless mitigated by supporting measures.
Economic reasoning and policy analysis in the AI economics section; theoretical projection based on compliance cost effects (no empirical trial reported).
Distributed training introduces novel incentive issues (free-riding, poisoning incentives, misreporting of local metrics) that require contractual and cryptographic solutions and may create demand for trusted intermediaries or certification markets.
Mechanism/incentive analysis within the paper; threat modeling and proposed governance solutions. No experimental evaluation of incentive mechanisms or market responses.
Federated infrastructures redistribute informational power — moving custody away from centralized platforms reduces their exclusive access to behavioral data and can lower their data-based market power.
Economic and institutional analysis (conceptual), discussion of informational rents and bargaining positions. This is a theoretical economic claim without empirical market measurement in the paper.
Fairness constraints (e.g., disparate ad delivery) and monitoring become more challenging to enforce and audit without centralized raw data, requiring new governance and measurement mechanisms.
Policy and governance analysis describing limitations of decentralized data for fairness monitoring; proposed policy-aware governance layer and attestation/audit mechanisms. No empirical validation of governance effectiveness provided.
DPPs raise privacy and surveillance risks if personal data are linked to product use; economic regulation should incentivize privacy-preserving analytics (e.g., federated learning, differential privacy) and data minimality to maintain trust.
Risk assessment and governance recommendation grounded in stakeholder concerns and standard privacy literature; not empirically measured in the surveys.
Automated benchmarks dominate the evaluation of large language models, yet no systematic study has compared user satisfaction, adoption motivations, and frustrations across competing platforms using a consistent instrument.
Statement of the paper's motivation/background; implied literature review and identification of an empirical gap (no systematic, cross-platform user survey reported prior).
Mobile penetration in low-income countries reaches 84%, a statistic used to motivate RSI's potential reach.
Single numeric statistic reported in the paper as background context; source or empirical basis for the statistic not provided within the supplied text.
International shipping produces approximately 3% of global greenhouse gas emissions.
Contextual statement in the paper citing external estimates (specific source not provided in the excerpt).
The risk of endogeneity was addressed with an instrumental-variables approach to obtain causal estimates of the impact of technological diffusion on market opportunities.
Paper reports use of an instrumental variables approach to address endogeneity (instruments and diagnostics not described in the excerpt).
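Since the paper does not describe its instruments, the logic of the approach can be illustrated generically. The sketch below uses synthetic data and the single-instrument Wald/IV estimator: an unobserved confounder biases OLS, while an instrument correlated with the regressor but not the confounder recovers the true slope. All variable names and parameter values are invented for illustration.

```python
import random

def _cov(a, b):
    """Sample covariance of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / (n - 1)

def ols_slope(x, y):
    return _cov(x, y) / _cov(x, x)

def iv_slope(z, x, y):
    # Wald/IV estimator with a single instrument z: cov(z, y) / cov(z, x).
    return _cov(z, y) / _cov(z, x)

random.seed(0)
n = 20000
true_beta = 2.0
z = [random.gauss(0, 1) for _ in range(n)]   # instrument: moves x, unrelated to u
u = [random.gauss(0, 1) for _ in range(n)]   # unobserved confounder
x = [zi + ui for zi, ui in zip(z, u)]        # endogenous regressor
y = [true_beta * xi + ui + random.gauss(0, 0.5) for xi, ui in zip(x, u)]

b_ols = ols_slope(x, y)   # biased upward (confounder raises both x and y)
b_iv = iv_slope(z, x, y)  # consistent for true_beta
```

With this data-generating process, OLS converges to 2.5 while the IV estimate converges to the true slope of 2.0, which is the sense in which instrumenting removes the endogeneity bias.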
A Sankey diagram of thematic evolution shows lexical convergence over time and indicates that a small set of authors has disproportionate influence in structuring the discourse.
Thematic evolution analysis visualized with a Sankey diagram; author influence inferred from performance trends (citations/publication counts) in the bibliometric data.
CID does not significantly mediate the relationship between SCD and strategic green innovation.
Mediation tests showing that while CID is related to substantive innovation, the indirect effect via CID on strategic green innovation was statistically insignificant.
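A null indirect effect of this kind can be made concrete with a regression-based (product-of-coefficients) mediation sketch. This is not the paper's test: the variable names reuse the paper's labels, but the data, coefficients, and estimator are synthetic assumptions chosen so that the SCD-to-CID path exists while CID has no effect on the outcome, making the indirect effect near zero.

```python
import random

def center(v):
    m = sum(v) / len(v)
    return [x - m for x in v]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def ols2(y, x1, x2):
    """OLS slopes of y on (x1, x2); intercept absorbed by centering."""
    y, x1, x2 = center(y), center(x1), center(x2)
    s11, s12, s22 = dot(x1, x1), dot(x1, x2), dot(x2, x2)
    s1y, s2y = dot(x1, y), dot(x2, y)
    det = s11 * s22 - s12 * s12
    return (s22 * s1y - s12 * s2y) / det, (s11 * s2y - s12 * s1y) / det

random.seed(1)
n = 20000
scd = [random.gauss(0, 1) for _ in range(n)]
cid = [0.6 * s + random.gauss(0, 0.5) for s in scd]      # path a: SCD -> CID exists
outcome = [0.5 * s + random.gauss(0, 0.5) for s in scd]  # CID has no effect on outcome

# Path a: slope of CID on SCD.
a = dot(center(scd), center(cid)) / dot(center(scd), center(scd))
# Direct effect of SCD and path b (CID on outcome, controlling for SCD).
direct, b = ols2(outcome, scd, cid)
indirect = a * b  # mediated effect: near zero here, as in the reported result
```

The pattern mirrors the claim: SCD relates to CID (path a is nonzero), but because path b is null, the indirect effect a*b is statistically indistinguishable from zero while the direct effect remains.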
There is a need to develop new trade statistics that capture AI‑enabled services and platform‑mediated cross‑border transactions.
Methodological gap identified across reviewed literature and statistical analyses; recommendation based on descriptive assessment (no development of such statistics in the paper).
Manipulating costs and benefits of observation versus action in experiments can probe the switching behavior driven by System M.
Proposed experimental manipulation; no empirical data presented.
Ablation studies disabling System M or decoupling Systems A and B will help test whether meta-control provides empirical benefits.
Suggested experimental design (ablation study) in the methods section; no results provided.
The authors will publicly release the benchmark, code, and pre-trained models.
Statement in the paper (release/availability section) announcing plans to publish benchmark, code, and pre-trained models.