Evidence (7395 claims)
- Adoption: 7395 claims
- Productivity: 6507 claims
- Governance: 5877 claims
- Human-AI Collaboration: 5157 claims
- Innovation: 3492 claims
- Org Design: 3470 claims
- Labor Markets: 3224 claims
- Skills & Training: 2608 claims
- Inequality: 1835 claims
Evidence Matrix
Claim counts by outcome category and direction of finding.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 609 | 159 | 77 | 736 | 1615 |
| Governance & Regulation | 664 | 329 | 160 | 99 | 1273 |
| Organizational Efficiency | 624 | 143 | 105 | 70 | 949 |
| Technology Adoption Rate | 502 | 176 | 98 | 78 | 861 |
| Research Productivity | 348 | 109 | 48 | 322 | 836 |
| Output Quality | 391 | 120 | 44 | 40 | 595 |
| Firm Productivity | 385 | 46 | 85 | 17 | 539 |
| Decision Quality | 275 | 143 | 62 | 34 | 521 |
| AI Safety & Ethics | 183 | 241 | 59 | 30 | 517 |
| Market Structure | 152 | 154 | 109 | 20 | 440 |
| Task Allocation | 158 | 50 | 56 | 26 | 295 |
| Innovation Output | 178 | 23 | 38 | 17 | 257 |
| Skill Acquisition | 137 | 52 | 50 | 13 | 252 |
| Fiscal & Macroeconomic | 120 | 64 | 38 | 23 | 252 |
| Employment Level | 93 | 46 | 96 | 12 | 249 |
| Firm Revenue | 130 | 43 | 26 | 3 | 202 |
| Consumer Welfare | 99 | 51 | 40 | 11 | 201 |
| Inequality Measures | 36 | 105 | 40 | 6 | 187 |
| Task Completion Time | 134 | 18 | 6 | 5 | 163 |
| Worker Satisfaction | 79 | 54 | 16 | 11 | 160 |
| Error Rate | 64 | 78 | 8 | 1 | 151 |
| Regulatory Compliance | 69 | 64 | 14 | 3 | 150 |
| Training Effectiveness | 81 | 15 | 13 | 18 | 129 |
| Wages & Compensation | 70 | 25 | 22 | 6 | 123 |
| Team Performance | 74 | 16 | 21 | 9 | 121 |
| Automation Exposure | 41 | 48 | 19 | 9 | 120 |
| Job Displacement | 11 | 71 | 16 | 1 | 99 |
| Developer Productivity | 71 | 14 | 9 | 3 | 98 |
| Hiring & Recruitment | 49 | 7 | 8 | 3 | 67 |
| Social Protection | 26 | 14 | 8 | 2 | 50 |
| Creative Output | 26 | 14 | 6 | 2 | 49 |
| Skill Obsolescence | 5 | 37 | 5 | 1 | 48 |
| Labor Share of Income | 12 | 13 | 12 | — | 37 |
| Worker Turnover | 11 | 12 | — | 3 | 26 |
| Industry | — | — | — | 1 | 1 |
Adoption
The paper's findings deepen the understanding of algorithmic aversion in the context of generative AI and offer practical guidance for creators and platforms navigating transparency versus engagement trade-offs.
Authors' interpretation and conclusions summarized in the abstract, based on the two experiments (study 1: n = 325; study 2: n = 371).
The paper's primary contribution is to combine established ingredients—attention scarcity, free-entry dilution, superstar effects, and preferential attachment—into a unified framework directed at claims about AI-enabled entrepreneurship.
Stated contribution and methodological description in the paper (synthesis and applied formalisation); this is a descriptive/methodological claim rather than an empirical result.
The governance risk-mitigation effects of AI operate through increasing financial risk exposure.
Authors' mechanism tests indicate a relationship between AI adoption and changes in financial risk exposure measures, which they interpret as a channel affecting executive behavior.
Organizational culture and technological readiness moderate the effectiveness of generative AI integration in decision-making processes.
The paper reports moderation effects tested in the SEM framework using survey data from senior managers, decision-makers, and AI adoption specialists (SmartPLS). No numeric moderator effect sizes or sample size provided in the excerpt.
The effects of financial digital intelligence on the innovative development of strategic emerging industries vary across regions and sectors: there are differences across central, eastern, and western regions and across capital‑intensive and technology‑intensive sectors, while no significant impact is noted in other regions and industries.
Heterogeneity analysis reported on the panel dataset (5,731 observations, 2015–2022) examining regional and industry subsamples (details of subgroup sizes and statistical tests not provided in excerpt).
Initiatives such as Cassava AI's network of AI factories signal growing interest in adopting AI in Africa, but these projects remain very targeted and continental adoption still requires better coordination between African stakeholders.
Cited example (Cassava AI) in the paper to illustrate nascent initiatives; combined with the authors' qualitative assessment of scope and geographic targeting of such projects.
Small language models offer privacy-preserving alternatives to frontier models, but their specialization is hindered by fragmented development pipelines that separate tool integration, data generation, and training.
Background claim stated in paper/abstract; no experimental data provided for this statement within the abstract.
Extensive synthetic experiments show that policy regularization reshapes the picture of which DRL method performs best for inventory management.
Paper states results from extensive synthetic experiments that change which DRL methods are considered best under policy regularization; abstract does not provide the experimental sample size, specific methods, or quantitative comparisons.
Implementation of human-replacing technologies leads to significant transformations in skill demand: it reduces reliance on low-skilled labour while increasing demand for qualified engineers, system operators and specialists in digital technologies.
Sector-specific analysis and review of international labour-market studies cited in the article documenting skill-biased effects of automation and digitalization; qualitative assessment for Ukraine's mining and metallurgical sector under workforce shortage conditions.
Foreign direct investment (FDI) shows an insignificantly positive direct effect on local TFCP but a significantly negative indirect (spillover) effect, attributed to a 'pollution haven' effect.
Spatial Durbin Model estimates for FDI on panel (30 provinces, 2010–2023): direct coefficient positive but not significant; indirect coefficient significantly negative; interpretation given as pollution-haven mechanism.
Industrial intelligence exhibits regional heterogeneity: a significantly negative direct effect in the east, a significantly positive direct effect in the central region, an insignificant direct effect in the west, and positive indirect (spillover) effects in the east and west.
Regional/subsample Spatial Durbin Model analyses dividing the sample into east, central, and west regions (30 provinces, 2010–2023); reported region-specific direct and indirect coefficients and significance levels.
Industrial intelligence has an insignificantly negative direct effect on local TFCP, but its positive spatial spillover effect is significant at the 1% level, producing a significantly positive total effect.
Spatial Durbin Model results for industrial intelligence on panel (30 provinces, 2010–2023): direct coefficient negative and not statistically significant; indirect coefficient positive and significant at 1%; total effect positive and significant.
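The direct and indirect (spillover) effects reported above follow the standard Spatial Durbin Model decomposition; a generic form (our illustration, not the paper's exact specification) is:

```latex
Y = \rho W Y + X\beta + W X\theta + \varepsilon,
```

with the marginal-effect matrix for regressor $x_k$

```latex
\frac{\partial Y}{\partial x_k} = (I_n - \rho W)^{-1}\left(I_n \beta_k + W \theta_k\right),
```

whose diagonal elements average to the direct effect and whose off-diagonal row sums average to the indirect (spillover) effect, so the two can carry opposite signs, as in the FDI and industrial-intelligence results above.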
China's TFCP rose overall from 2010 to 2023 but exhibited a widening regional gap of 'higher in the east, lower in the west'.
Panel data of 30 Chinese provincial-level regions (2010–2023); TFCP measured using an undesirable-output super-efficiency SBM model and summarized temporal and spatial patterns.
The study found a significant transformation of the employment structure under the influence of artificial intelligence.
Empirical analysis using an envelope model ("input" orientation) applied to a sample of European Union countries; the paper reports modeled changes in employment structure attributable to AI diffusion.
For AI: a cohesive professional vocabulary formed rapidly in early 2024, but the practitioner population never cohered.
Empirical finding from analysis of the 8.2M resume dataset showing a rapid increase in the vocabulary-cohesion metric around early 2024 while the population-cohesion metric did not show a corresponding rise.
The framework implies threshold effects in training and capability acquisition: when the teaching horizon lies below the prerequisite depth of the target, additional instruction cannot produce successful completion of teaching; once that depth is reached, completion becomes feasible.
Model-derived threshold result described in the abstract (mathematical analysis of prerequisite depth vs. teaching horizon).
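The threshold result can be sketched as a toy rule (illustrative only; the function and parameter names are ours, not the paper's):

```python
def teaching_succeeds(teaching_horizon: int, prerequisite_depth: int) -> bool:
    """Toy version of the threshold result: instruction can only complete
    once the teaching horizon reaches the target's prerequisite depth."""
    return teaching_horizon >= prerequisite_depth

# Below the threshold, additional instruction within the same horizon cannot help.
assert not teaching_succeeds(teaching_horizon=3, prerequisite_depth=5)
# Once the horizon reaches the prerequisite depth, completion becomes feasible.
assert teaching_succeeds(teaching_horizon=5, prerequisite_depth=5)
```

The discontinuity is the point: success is not a smooth function of instructional effort but flips at the depth threshold.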
The value of information depends on whether downstream users can absorb and act on it: a signal conveys meaning only to a learner with the structural capacity to decode it (an explanation that clarifies a concept for one user may be indistinguishable from noise to another who lacks the relevant prerequisites).
Conceptual argument motivating the model; theoretical reasoning described in the paper's intro/abstract.
Automation holds significant potential for modernising tax administration, but its success depends on aligning technological innovation with inclusive policy design and institutional capacity.
Overall conclusion of the literature synthesis of 36 peer-reviewed articles; based on patterns of positive impacts conditional on contextual factors and governance highlighted across the studies.
Behavioural responses to automation vary across taxpayer segments: some users embrace automation as a facilitator of compliance while others resist due to perceived opacity and technological anxiety.
Synthesis of behavioural findings from the reviewed literature (36 studies) reporting heterogeneous responses by taxpayer segment, including qualitative reports of resistance and quantitative measures of uptake/adoption.
The effectiveness of automated tax systems is mediated by contingencies including digital literacy, institutional trust, and regulatory clarity.
The review identifies recurring contextual factors across the 36 articles that are reported to moderate or mediate the impact of automation on outcomes (qualitative and quantitative findings cited in the synthesis).
The study identifies the main AI-enabled mechanisms advancing CE principles in smart manufacturing, waste valorisation, supply-chain transparency, and sustainable design.
Bibliometric network analysis of 196 peer-reviewed articles (2023–2024) and systematic review of 104 studies, per the abstract; identification is presented as a product of these analyses.
AI is not an inherent instrument of justice but a malleable socio-technical force whose equitable outcomes depend on policy design and institutional context.
Interpretation and synthesis of empirical results showing conditional and heterogeneous effects of AI; normative conclusion drawn by authors from observed heterogeneity and mediating channels.
Governmental structures, labor supply and demand, and incorporation of financial measures act as key intervening variables affecting achieved ROI from GenAI implementations.
Qualitative synthesis and theoretical analysis reported in the paper identifying contextual/intervening variables.
There is an evident tension between privacy and security in existing AI governance approaches.
Thematic synthesis and co-occurrence network from the reviewed studies identify trade-offs and tensions reported between privacy-preserving approaches and security requirements.
Generative AI serves as an effective 'wingman' for employment lawyers, capable of replacing substantial junior associate work while requiring continued human expertise for client counseling, supervision, and final legal advice preparation.
Authors' synthesis of experimental results showing AI-produced substantive analysis plus discussion about remaining limitations (e.g., citation errors) and required human oversight; qualitative assertion about substitutability for junior associate tasks.
We evaluate 14 LLMs under zero-shot prompting and retrieval-augmented settings and observe a clear performance gap.
Experimental evaluation reported in the paper: authors state they ran experiments on 14 different large language models, under zero-shot and retrieval-augmented configurations, and observed differing performance across models.
Policy implication: smarter, better-coordinated green governance is needed to address the negative local impacts and the crowding-out interaction between AI and environmental regulation.
Policy recommendation drawn in the abstract based on the empirical spatial findings (negative local effects and negative interaction).
Substantial regional gaps persist: leading eastern provinces approach a UCEE value of 1.0 while some northeastern provinces remain below 0.1.
Regional UCEE index estimates from the Super-SBM model across the 30 provinces reported in the abstract.
The systemic implications of AI in finance depend less on model intelligence alone than on how agent architectures are distributed, coupled, and governed across institutions.
Central argumentative claim supported by the AFMM conceptual model and an illustrative empirical application described in the paper (modeling + event-study approach); no full-sample details provided in the excerpt.
The Agentic Financial Market Model (AFMM), a stylised agent-based representation, links agent design parameters (autonomy depth, heterogeneity, execution coupling, infrastructure concentration, supervisory observability) to market-level outcomes including efficiency, liquidity resilience, volatility, and systemic risk.
Presentation of a stylised agent-based model (AFMM) in the paper; conceptual modelling linking specified agent parameters to macro/market outcomes. No empirical sample size reported in the excerpt.
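The five agent design parameters the AFMM names can be grouped as a configuration object; the dataclass fields follow the paper's list, but the risk-flagging heuristic below is entirely our illustrative sketch, not the model itself:

```python
from dataclasses import dataclass

@dataclass
class AgentDesign:
    """AFMM design parameters (names from the paper; class is illustrative),
    each scaled here to [0, 1] for convenience."""
    autonomy_depth: float                # how far agents act without human sign-off
    heterogeneity: float                 # diversity of strategies across agents
    execution_coupling: float            # how tightly agents' actions feed each other
    infrastructure_concentration: float  # dependence on shared platforms/models
    supervisory_observability: float     # what supervisors can see

def systemic_risk_flags(d: AgentDesign) -> list[str]:
    """Illustrative heuristic flagging configurations the AFMM narrative
    associates with fragility (thresholds are our assumptions)."""
    flags = []
    if d.execution_coupling > 0.8 and d.heterogeneity < 0.2:
        flags.append("correlated-behaviour risk")
    if d.infrastructure_concentration > 0.8:
        flags.append("single-point-of-failure risk")
    if d.supervisory_observability < 0.2:
        flags.append("monitoring-gap risk")
    return flags
```

The design choice the model emphasises survives even in this caricature: systemic outcomes depend on the joint configuration of parameters, not on any one of them.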
Financial AI agents can be described by a four-layer architecture covering data perception, reasoning engines, strategy generation, and execution with control.
Conceptual framework proposed by the authors (theoretical/architectural proposal); no empirical testing or sample size provided.
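The four layers named above can be written as a minimal interface (layer names follow the paper; the method signatures and the `step` composition are our assumption):

```python
from abc import ABC, abstractmethod
from typing import Any

class FinancialAIAgent(ABC):
    """Skeleton of the four-layer architecture: data perception ->
    reasoning engine -> strategy generation -> execution with control."""

    @abstractmethod
    def perceive(self, raw_feed: Any) -> dict:
        """Layer 1: ingest and normalise market, news, and account data."""

    @abstractmethod
    def reason(self, state: dict) -> dict:
        """Layer 2: run the reasoning engine over the perceived state."""

    @abstractmethod
    def generate_strategy(self, analysis: dict) -> list:
        """Layer 3: turn the analysis into candidate actions."""

    @abstractmethod
    def execute(self, actions: list) -> list:
        """Layer 4: execute under risk limits and control checks."""

    def step(self, raw_feed: Any) -> list:
        # One full pass through the four layers, in order.
        state = self.perceive(raw_feed)
        analysis = self.reason(state)
        actions = self.generate_strategy(analysis)
        return self.execute(actions)
```

Separating the layers makes the later governance point concrete: observability and control hooks attach at layer boundaries rather than inside a monolithic model.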
These productivity gains are most pronounced for lower-skilled workers, producing a pattern the authors call “skill compression.”
Cross-study pattern reported in the literature review: comparative evidence across worker-skill strata in multiple empirical papers showing larger relative gains for lower-skilled/junior workers; specific underlying studies and sample sizes are not enumerated in the brief.
Financial well-being is not an automatic byproduct of automated credit efficiency but an emergent outcome of architectural alignment among technology, borrower capability, and governance structures.
Theoretical conclusion drawn from empirical results showing mixed effects (positive on repayment and resilience, negative on stress) and significant moderation by human capability and institutional design.
The authors identify ten evaluation practices that teams use, ranging from lightweight interpretive checks to formal organizational processes (examples: qualitative user reviews, red-team testing, A/B experiments, telemetry/log analysis, structured annotation, governance/meta-evaluation).
Thematic coding of 19 interview transcripts produced a taxonomy enumerating ten practices (paper reports the taxonomy as an outcome).
Quantum-driven growth depends critically on adoption rates, infrastructure readiness, complementary investments (digital infrastructure, human capital), and enabling policy/regulatory environments.
Scenario framework that varies (a) technical timelines, (b) sectoral adoption rates (diffusion models), (c) infrastructure readiness, and (d) policy environments; policy counterfactual modeling shows sensitivity of adoption and macro outcomes to these parameters.
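The "sectoral adoption rates (diffusion models)" in the scenario framework can be illustrated with a standard Bass diffusion curve (our example with textbook coefficients; the paper's exact specification is not given in the excerpt):

```python
import math

def bass_adoption(t: float, p: float = 0.03, q: float = 0.38) -> float:
    """Cumulative adoption share F(t) under the Bass diffusion model.
    p: innovation coefficient, q: imitation coefficient (typical textbook values)."""
    e = math.exp(-(p + q) * t)
    return (1.0 - e) / (1.0 + (q / p) * e)

# Adoption starts at zero, rises monotonically, and saturates near 1.
assert bass_adoption(0.0) == 0.0
assert bass_adoption(5.0) < bass_adoption(10.0)
assert bass_adoption(50.0) > 0.99
```

Varying `p` and `q` across sectors is one simple way a scenario ensemble can encode faster or slower diffusion, which is exactly the sensitivity the framework probes.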
The magnitude and timing of macroeconomic impact from quantum computing are highly uncertain.
Monte Carlo / scenario ensemble results showing wide (fat-tailed) outcome distributions driven by uncertainty in technical milestones, adoption rates, and complementarity strengths; use of expert elicitation to parameterize tail risks.
Safeguards such as audit trails, explainability, and human oversight impose additional implementation costs that must be weighed against efficiency benefits.
Normative and economic reasoning based on requirements for compliance and system design; no empirical cost estimates provided.
There is a fundamental tension between AI-driven efficiency and core administrative-law principles—discretion, due process, and accountability.
Doctrinal legal analysis of administrative-law principles in Vietnam and comparative institutional analysis of AI adoption in other systems.
The net educational value of AI-generated feedback depends on alignment with pedagogical goals, quality evaluation, integration with human teaching, and governance to manage equity, privacy, and incentives.
Synthesis statement from the meeting report produced by 50 interdisciplinary scholars; conceptual judgment rather than empirical proof.
LLMs excel at extracting and generating arguments from unstructured text but are opaque and hard to evaluate or trust.
Synthesis of recent LLM literature and observed properties (generation capability vs. opacity); no empirical evaluation within this paper.
The paper is primarily theoretical and historical; empirical validation is needed to quantify the irreducible component of LLM value, and practical degrees of rule‑extractability may exist even if some capabilities remain tacit.
Stated limitations section acknowledging the theoretical nature of the work and the need for empirical follow‑up.
If an LLM's full capability were reducible to an explicit rule set, that rule set would be an expert system; because expert systems are empirically and historically weaker than LLMs, this leads to a contradiction (supporting non‑rule‑encodability).
Logical proof‑by‑contradiction presented in the paper, supported by conceptual mapping between rule sets and expert systems and qualitative historical comparisons.
HindSight has limitations: it depends on citation and venue proxies for impact, uses a finite forward window (30 months), and may undercount delayed-impact research and be domain-specific to AI/ML.
Authors' stated limitations in the paper noting reliance on observable downstream signals (citations/venues), the finite forward window, field heterogeneity, and measurement noise.
Practical caveats: benefits depend on accelerators supporting MXFP formats; despite up to 96% recovery, residual quality gaps may remain for some task-specific or safety-critical cases; integration and tuning cost is required to apply BATQuant.
Discussion/limitation section in the paper outlining hardware dependency, remaining quality gaps despite high recovery percentages, and engineering effort for integration and tuning; these are argumentative caveats rather than results of controlled experiments.
The sign of the Largest Lyapunov Exponent (LLE) gives a precise criterion: a negative LLE (contracting dynamics) permits fast convergence and real speedups for parallel Newton methods, whereas a positive LLE (expanding/chaotic dynamics) precludes fast convergence in general.
Theoretical derivation relating Lyapunov exponents to the stability of parallel-in-time linearizations and convergence of the parallel Newton iterations; supported by empirical observations reported on representative tasks.
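The sign criterion can be illustrated on a one-dimensional map (our toy example, not the thesis's test problems): for the logistic map x -> r·x·(1-x), the LLE is the long-run trajectory average of ln|f'(x)| = ln|r·(1-2x)|.

```python
import math

def logistic_lle(r: float, x0: float = 0.4, burn_in: int = 500, n: int = 5000) -> float:
    """Estimate the largest Lyapunov exponent of the logistic map
    x -> r*x*(1-x) as the trajectory average of ln|f'(x)|."""
    x = x0
    for _ in range(burn_in):                  # discard the transient
        x = r * x * (1.0 - x)
    acc = 0.0
    for _ in range(n):
        # Guard against a log(0) if an iterate lands exactly on x = 0.5.
        acc += math.log(max(abs(r * (1.0 - 2.0 * x)), 1e-12))
        x = r * x * (1.0 - x)
    return acc / n

# Contracting dynamics (stable fixed point at r = 2.5): negative LLE,
# so parallel-in-time speedups are feasible.
assert logistic_lle(2.5) < 0.0
# Chaotic dynamics (r = 4): positive LLE (ln 2 in theory),
# so fast convergence is not generally achievable.
assert logistic_lle(4.0) > 0.0
```

The same sign test, applied to the linearised dynamics of a given problem, is what the criterion above uses to predict whether parallel Newton iterations converge quickly.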
Many fixed-point and iterative schemes (e.g., Picard, Jacobi) are unified as special cases within the parallel Newton framework.
Theoretical analysis and derivations in the thesis that show these classical iterative methods arise from particular choices/approximations in the parallel Newton formulation.
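A standard instance of this unification (our illustration of the general point): applying Newton's method to the fixed-point residual $F(x) = x - g(x)$,

```latex
x_{k+1} = x_k - J_F(x_k)^{-1} F(x_k),
```

and approximating the Jacobian by the identity, $J_F(x_k) \approx I$, gives

```latex
x_{k+1} = x_k - \bigl(x_k - g(x_k)\bigr) = g(x_k),
```

which is exactly the Picard (fixed-point) iteration; other Jacobian approximations recover other classical schemes.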
The core problem is a trade-off between computational latency/resource cost and decision correctness: invoking more LLM reasoning improves correctness but increases latency; invoking less reduces latency but can increase failures.
Paper frames the research problem explicitly as this trade-off in the Introduction/Problem framing sections and motivates the need for adaptive orchestration.
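The trade-off can be made concrete with a minimal escalation policy (entirely our sketch; the function names are hypothetical, not the paper's system): call the expensive model only when a cheap path is not confident enough.

```python
from typing import Callable, Tuple

def orchestrate(
    query: str,
    cheap: Callable[[str], Tuple[str, float]],   # fast path -> (answer, confidence)
    expensive: Callable[[str], str],             # slow LLM call -> answer
    threshold: float = 0.9,
) -> Tuple[str, bool]:
    """Answer via the cheap path when its confidence clears the threshold;
    otherwise escalate to the expensive LLM. Returns (answer, escalated)."""
    answer, confidence = cheap(query)
    if confidence >= threshold:
        return answer, False          # low latency, accepted correctness risk
    return expensive(query), True     # higher latency, higher expected correctness

# Raising the threshold trades latency for correctness; lowering it does the reverse,
# which is the adaptive knob an orchestrator must tune.
```

An adaptive orchestrator generalises this by learning or adjusting `threshold` per query class rather than fixing it globally.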
Demand for labor will shift toward data scientists, ML engineers, and interdisciplinary scientists, while wet-lab expertise and translational teams remain crucial.
Workforce trend analysis and employer hiring patterns summarized in the paper; interviews/case studies indicating changes in team composition.
AI excels at hypothesis generation but cannot replace scientific reasoning and experimental validation; human expertise remains essential.
Argument and case examples in the paper showing AI-generated hypotheses requiring human-led experimental design, interpretation, and validation.
Net gains from AI are not automatic nor evenly distributed; benefits depend on translation rates to clinical success and on addressing non-technical enablers.
Synthesis and conditional argument informed by sector observations; not backed by empirical distributional analysis in the paper.