Evidence (11633 claims)

- Adoption: 7395 claims
- Productivity: 6507 claims
- Governance: 5877 claims
- Human-AI Collaboration: 5157 claims
- Innovation: 3492 claims
- Org Design: 3470 claims
- Labor Markets: 3224 claims
- Skills & Training: 2608 claims
- Inequality: 1835 claims
Evidence Matrix
Claim counts by outcome category and direction of finding.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 609 | 159 | 77 | 736 | 1615 |
| Governance & Regulation | 664 | 329 | 160 | 99 | 1273 |
| Organizational Efficiency | 624 | 143 | 105 | 70 | 949 |
| Technology Adoption Rate | 502 | 176 | 98 | 78 | 861 |
| Research Productivity | 348 | 109 | 48 | 322 | 836 |
| Output Quality | 391 | 120 | 44 | 40 | 595 |
| Firm Productivity | 385 | 46 | 85 | 17 | 539 |
| Decision Quality | 275 | 143 | 62 | 34 | 521 |
| AI Safety & Ethics | 183 | 241 | 59 | 30 | 517 |
| Market Structure | 152 | 154 | 109 | 20 | 440 |
| Task Allocation | 158 | 50 | 56 | 26 | 295 |
| Innovation Output | 178 | 23 | 38 | 17 | 257 |
| Skill Acquisition | 137 | 52 | 50 | 13 | 252 |
| Fiscal & Macroeconomic | 120 | 64 | 38 | 23 | 252 |
| Employment Level | 93 | 46 | 96 | 12 | 249 |
| Firm Revenue | 130 | 43 | 26 | 3 | 202 |
| Consumer Welfare | 99 | 51 | 40 | 11 | 201 |
| Inequality Measures | 36 | 105 | 40 | 6 | 187 |
| Task Completion Time | 134 | 18 | 6 | 5 | 163 |
| Worker Satisfaction | 79 | 54 | 16 | 11 | 160 |
| Error Rate | 64 | 78 | 8 | 1 | 151 |
| Regulatory Compliance | 69 | 64 | 14 | 3 | 150 |
| Training Effectiveness | 81 | 15 | 13 | 18 | 129 |
| Wages & Compensation | 70 | 25 | 22 | 6 | 123 |
| Team Performance | 74 | 16 | 21 | 9 | 121 |
| Automation Exposure | 41 | 48 | 19 | 9 | 120 |
| Job Displacement | 11 | 71 | 16 | 1 | 99 |
| Developer Productivity | 71 | 14 | 9 | 3 | 98 |
| Hiring & Recruitment | 49 | 7 | 8 | 3 | 67 |
| Social Protection | 26 | 14 | 8 | 2 | 50 |
| Creative Output | 26 | 14 | 6 | 2 | 49 |
| Skill Obsolescence | 5 | 37 | 5 | 1 | 48 |
| Labor Share of Income | 12 | 13 | 12 | — | 37 |
| Worker Turnover | 11 | 12 | — | 3 | 26 |
| Industry | — | — | — | 1 | 1 |
The influence of R&D expenditure on value added varies across sectors.
R&D expenditure included as a core explanatory variable in panel MMQR estimations; authors report differing coefficient sizes/signs across sectors/quantiles.
An Evolutionary Game Theory (EGT) framework produces a 'Red Queen' co-evolutionary dynamic between platforms' algorithmic control and worker behavior in which neither side reaches a stable static equilibrium.
Analytical EGT model and numerical simulations of a population-level game between workers (choices: compliance vs. algorithmic gaming) and a platform varying surveillance strictness; model-based result (no empirical sample size).
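The 'Red Queen' dynamic can be illustrated with a minimal two-population replicator model; the payoffs below are assumed for illustration, not taken from the paper. Workers mix between compliance and algorithmic gaming, the platform mixes between strict and lax surveillance, and the matching-pennies payoff structure yields perpetual cycling around the interior point rather than convergence to a stable equilibrium.

```python
def simulate(x0=0.6, y0=0.5, dt=0.01, steps=20000):
    """Two-population replicator dynamics for a matching-pennies-style
    platform-vs-worker game (illustrative payoffs, not from the paper).

    x = share of workers complying; y = probability the platform is strict.
    Compliance pays off under strict surveillance, gaming under lax;
    strictness pays off against gamers, lax monitoring against compliers.
    """
    x, y = x0, y0
    xs, ys = [x], [y]
    for _ in range(steps):
        # Payoff advantage of compliance over gaming: y - (1 - y)
        dx = x * (1 - x) * (2 * y - 1)
        # Payoff advantage of strict over lax: (1 - x) - x
        dy = y * (1 - y) * (1 - 2 * x)
        x += dt * dx
        y += dt * dy
        xs.append(x)
        ys.append(y)
    return xs, ys

xs, ys = simulate()
# The trajectory orbits the interior point (0.5, 0.5) instead of settling:
print(min(xs), max(xs))                        # oscillation band for compliance
print(abs(xs[-1] - 0.5) + abs(ys[-1] - 0.5))   # still far from equilibrium
```

Each side's best response keeps shifting as the other adapts, which is the co-evolutionary "running to stand still" pattern the paper describes.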
Policy enforcement maintains a 52.8% success rate for legitimate requests.
Quantitative result reported from the paper's experiments (52.8% success rate for legitimate requests under policy enforcement).
Economic and labor-market impacts of these AI capability improvements materialize only as organizations adopt AI, a process that could unfold on a substantially longer timeline than the capability gains themselves.
Theoretical implication/interpretation by the authors (economic and labor market impact contingent on organizational adoption; timeline longer than capability improvements).
AI automation is a continuum between (i) crashing waves where AI capabilities surge abruptly over small sets of tasks, and (ii) rising tides where the increase in AI capabilities is more continuous and broad-based.
Conceptual framing proposed by the authors (theoretical proposition).
The inequality-reducing impact of AI is weaker when carbon inequality is measured by the Theil index, implying persistent structural divides between advanced and less developed regions.
Same provincial panel dataset (2003–2021) with the Theil index as the dependent variable; results show a weaker (and impliedly less robust) association between AI development and Theil-measured carbon inequality.
This paper proposes three archetypal AI technology types: AI for effort reduction, AI to increase observability, and mechanism-level incentive change AI.
Conceptual taxonomy introduced by the authors (theoretical classification presented in the paper).
The results generalize to other technologies that feature safety externalities and first-mover advantages.
Authors' argument and model generalization: the mechanisms identified (preemption, externality, policy responses) are argued to apply beyond frontier AI to other technologies with similar strategic features.
Pigouvian safety taxes partially correct the safety externality but cannot eliminate the preemption distortion on their own.
Model policy counterfactuals: introducing a tax on unsafe releases reduces the externality-driven distortion but leaves residual preemption incentives so the first-best is not fully attained by tax alone.
Residual within-task group dynamics dominate the magnitude of the gender wage gap, though task-based employment and wage channels are important for the timing and direction of changes in gender inequality in the formal sector.
Decomposition analysis partitioning the gender wage gap into within-task residuals and task-based employment and wage components, with residuals accounting for the largest share of the gap but task channels explaining temporal shifts.
The analysis focuses on formal wage workers in Indonesia from 2001 to 2019.
Stated sample and timeframe in the study description; analyses use data on formal wage workers in Indonesia covering 2001–2019.
AI-driven conversational coaching is increasingly used to support workplace negotiation, yet prior work assumes uniform effectiveness across users.
Background claim in paper indicating prior literature trends and assumptions (stated in introduction/motivation).
Participants were clustered into three profiles (resilient, overcontrolled, and undercontrolled) based on the Big-Five personality traits and ARC typology.
Paper reports clustering analysis on participants using Big-Five trait measures and ARC typology; clustering result described as three profiles. Total sample reported as N=267.
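The three-profile clustering step can be sketched with a plain k-means pass over synthetic trait scores. The data, the two trait axes, and the initialization below are invented for illustration; the paper's actual analysis used Big-Five measures and the ARC typology on N=267.

```python
import random

random.seed(0)

# Synthetic 2-D "trait" scores for three well-separated groups
# (stand-ins for the resilient / overcontrolled / undercontrolled profiles).
centers = [(0.0, 0.0), (5.0, 5.0), (0.0, 5.0)]
points = [(random.gauss(cx, 0.5), random.gauss(cy, 0.5))
          for cx, cy in centers for _ in range(50)]

def kmeans(points, k, iters=20):
    # Simple deterministic initialization: one seed point per stretch of data.
    cents = [points[i * len(points) // k] for i in range(k)]
    labels = [0] * len(points)
    for _ in range(iters):
        # Assignment step: nearest centroid by squared Euclidean distance.
        labels = [min(range(k),
                      key=lambda j: (p[0] - cents[j][0]) ** 2
                                    + (p[1] - cents[j][1]) ** 2)
                  for p in points]
        # Update step: each centroid moves to the mean of its members.
        for j in range(k):
            members = [p for p, l in zip(points, labels) if l == j]
            if members:
                cents[j] = (sum(p[0] for p in members) / len(members),
                            sum(p[1] for p in members) / len(members))
    return labels

labels = kmeans(points, 3)
```

With well-separated synthetic groups the three clusters are recovered exactly; on real trait data the profile boundaries would of course be fuzzier.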
We conducted a between-subjects experiment (N=267) comparing theory-driven AI (Trucey), general-purpose AI (Control-AI), and a traditional negotiation handbook (Control-NoAI).
Stated experimental design in paper: between-subjects randomized comparison across three conditions with total sample N=267.
We provide empirical evidence for the inverse parametric knowledge effect: ontological grounding value is inversely proportional to LLM training data coverage of the domain.
Empirical claim based on the controlled experiment (pattern linking grounding value to parametric knowledge coverage reported in paper).
AI adoption is positively associated with exports to all destination regions examined except China.
Multivariate probit model of destination-specific export decisions (model accounts for correlation among error terms); result indicates significant associations for AI with exports to all regions except China (sample size not reported).
These findings carry implications for workforce transition policy, regional economic planning, and the temporal dynamics of labor market adjustment.
Paper's discussion/interpretation of modeled ATE results and their policy/economic implications; no empirical test provided for policy outcomes.
AI is reshaping entrepreneurship by enhancing innovation, streamlining operations, and creating new business opportunities, but its impact varies across levels of financial development and economic contexts.
Introductory/motivating statement in the abstract; supported by the cross-country panel analysis (23 countries, 2002–2023) reported in the paper.
Practitioners see the socio-emotional gap not as AI's failure to exhibit SEI traits, but as a functional gap in collaborative capabilities.
Reported interpretation from interview data (10 practitioners) indicating practitioners framed the gap functionally rather than as missing emotional traits.
AI technologies and digital platforms have fundamentally altered the organization of work and modes of value realization.
Synthesis of contemporary literature and theoretical analysis in a conceptual study (no empirical sample reported).
Big Data-based FinTech can contribute to financial stability only when its implementation is strategically justified, ethically grounded and supported by effective regulation, robust data governance and investment in human capital.
Normative conclusion drawn from systemic and structural analysis of literature and synthesis of empirical studies; no empirical test provided within the paper.
The effectiveness of Big Data solutions varies across the financial sphere and depends critically on data quality, regulatory alignment and organisational readiness.
Derived from comparative analysis of sector-specific applications and synthesis of findings in the reviewed literature; no quantified cross-sector sample reported.
AI intensity and employment elasticity are linked by a U-shaped relationship.
Result reported by the paper based on the authors' empirical/econometric analysis of international datasets (OECD/ILO/World Bank).
The paper analyzes AI as a continuous process using data from the OECD, ILO, and the World Bank to study job displacement, creation, and reallocation.
Empirical analysis described in the paper using datasets from OECD, ILO, and World Bank; econometric approach implied.
AI is recognized as a primary change agent influencing economies worldwide, profoundly changing not only the number of jobs but also their quality.
Stated as a high-level conclusion in the paper's introduction/abstract; based on literature synthesis of studies from 2013–2025 and references to international sources (OECD, ILO, World Bank).
AI plays a dual role by enhancing productivity while intensifying energy use in the short run.
Synthesis of empirical findings in the paper: documented short-run increase in electricity growth (energy use) following AI adoption alongside statements/evidence that AI enhances productivity (exact productivity measures and estimates not provided in the summary).
Perceived algorithmic influence varies across users and moderates how personalization translates into opinion outcomes.
Survey measures of perceived algorithmic influence combined with moderation tests (interaction terms) in regression-style analyses on the N = 450 sample; authors report heterogeneity in perceived algorithmic impact and moderation of the selective exposure–polarization association.
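The moderation test, an exposure × perceived-influence interaction term in a regression, can be sketched on simulated data. The variable names, coefficients, and noise level below are assumptions for illustration, not the paper's estimates.

```python
import random

random.seed(1)

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small linear system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def ols(X, y):
    """Least squares via the normal equations X'X b = X'y."""
    k = len(X[0])
    XtX = [[sum(row[i] * row[j] for row in X) for j in range(k)] for i in range(k)]
    Xty = [sum(row[i] * yi for row, yi in zip(X, y)) for i in range(k)]
    return solve(XtX, Xty)

n = 2000
exposure = [random.gauss(0, 1) for _ in range(n)]
perceived = [random.gauss(0, 1) for _ in range(n)]
# Assumed data-generating process: the effect of selective exposure on
# polarization grows with perceived algorithmic influence (moderation).
polar = [0.2 * e + 0.1 * p + 0.5 * e * p + random.gauss(0, 0.5)
         for e, p in zip(exposure, perceived)]
X = [[1.0, e, p, e * p] for e, p in zip(exposure, perceived)]
beta = ols(X, polar)  # [intercept, exposure, perceived, interaction]
```

A nonzero coefficient on the `e * p` column is what a significant interaction term would indicate: the exposure–polarization slope differs across levels of perceived algorithmic influence.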
Leader emotional intelligence (EI) moderates decision quality, delegation, and managerial communication when generative AI tools (Copilot/ChatGPT) are used in corporate management.
Theoretical EI-moderated human–AI model described in the paper and proposal to test it using a randomized online experiment.
Network externalities create an opportunity for win-win industrial policies, but the realisation of such mutually beneficial outcomes depends on market structure (product differentiation/substitutability) and the nature of innovation (product vs process).
Synthesis of model results across parameter regimes in the two-country strategic trade and R&D model showing conditional win-win equilibria; theoretical arguments (no empirical sample).
The welfare consequences of an industrial policy targeting a sector with network externalities are determined by the interaction between the strength of the externality, the type of R&D, and the degree of product differentiation between the home and the imported goods.
Analytical results from a two-country theoretical model with strategic trade and R&D investment; comparative-static analysis of equilibrium outcomes (no empirical sample).
The four-variable account (produced output, underlying understanding, calibration accuracy, self-assessed ability) better explains phenomena like overconfidence, over- and under-reliance on AI, 'crutch' effects, and weak transfer than the simpler claim that generative AI merely amplifies the Dunning–Kruger effect.
Argumentative synthesis in the paper comparing explanatory power of the proposed four-variable framework against the more general Dunning–Kruger metaphor; draws on examples and empirical patterns from the reviewed literature rather than a single empirical test.
A useful working model is 'AI-mediated metacognitive decoupling': LLM use widens the gaps among produced output, underlying understanding, calibration accuracy, and self-assessed ability.
Conceptual synthesis and theoretical proposal grounded in reviewed empirical findings from multiple literatures (human–AI interaction, learning research, model evaluation); presented as the paper's working model rather than as a single empirical estimate.
All models exhibit task-dependent confabulation: they perform well on standardized legislative templates (e.g., EU directive transpositions) but generate plausible yet unfounded reasoning for politically idiosyncratic proposals.
Qualitative and quantitative analysis across the 15 proposals showing high-fidelity outputs for standardized/template-like proposals and instances of fabricated or unsupported rationale for idiosyncratic proposals; based on model outputs compared to official explanatory memoranda using the dual evaluation framework.
There is a fundamental trade-off between operational stability and theoretical deliberation across multi-agent coordination frameworks.
Empirical results from controlled benchmarks comparing agent architectures under fixed computational time budgets, as reported in the paper (no numeric sample size or statistical details provided in the abstract).
As technological progress devalues labor, the welfare benefits of steering first increase but, beyond a critical threshold, decline, and optimal policy shifts toward greater redistribution.
Theoretical model extension analyzing planner's optimal choice as labor's economic value changes; the paper states a non-monotonic relationship with a critical threshold.
Using pre-existing exposure as an instrument for ChatGPT adoption in a long-difference IV design, ChatGPT adoption causes households to spend more time on digital leisure activities while leaving total time spent on productive online activities unchanged.
IV long-difference empirical design: instrumenting household adoption with pre-ChatGPT exposure (2021 browsing); outcome measured as changes in categorized browsing durations (LLM-based classification into 'leisure' vs 'productive' sites); controls include demographic-by-region fixed effects and browsing composition controls.
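The identifying logic of the IV design can be sketched with a single-instrument Wald estimator on simulated data; instrument strength, confounding, and the effect size below are assumptions for illustration. OLS is biased by the unobserved confounder, while the covariance ratio cov(z, y)/cov(z, x) recovers the causal effect.

```python
import random

random.seed(2)

def cov(a, b):
    """Sample covariance of two equal-length lists."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / (len(a) - 1)

n = 20000
z = [random.gauss(0, 1) for _ in range(n)]   # pre-ChatGPT exposure (instrument)
u = [random.gauss(0, 1) for _ in range(n)]   # unobserved taste for browsing
# Adoption depends on the instrument and the confounder.
adopt = [0.8 * zi + ui + random.gauss(0, 0.5) for zi, ui in zip(z, u)]
# Assumed true causal effect of adoption on digital-leisure time: 1.5.
leisure = [1.5 * a + 2.0 * ui + random.gauss(0, 0.5) for a, ui in zip(adopt, u)]

beta_ols = cov(adopt, leisure) / cov(adopt, adopt)  # biased upward by u
beta_iv = cov(z, leisure) / cov(z, adopt)           # Wald/IV estimate
```

Because z shifts adoption but is independent of the confounder, the IV ratio isolates the adoption-driven change in leisure time; the paper's actual design additionally uses long differences and fixed-effects controls.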
Once efficiency is made explicit, the main practical question becomes how many efficiency doublings are required to keep scaling productive despite diminishing returns.
Framing/forecasting claim in the paper presenting an operational research question (conceptual; no empirical sample in excerpt).
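Under an assumed power-law scaling relation L ∝ C^(-α) in effective compute C (the exponent below is illustrative, not from the paper), the question has a direct arithmetic form: cutting the loss by a factor f requires effective compute to grow by f^(1/α), i.e. log2(f)/α doublings, which efficiency gains must supply once physical compute stops growing.

```python
import math

alpha = 0.05  # assumed diminishing-returns exponent (illustrative only)

def doublings_needed(loss_reduction_factor, alpha):
    """Effective-compute doublings needed to cut loss by the given factor,
    assuming L is proportional to C**(-alpha)."""
    compute_multiplier = loss_reduction_factor ** (1.0 / alpha)
    return math.log2(compute_multiplier)

halving = doublings_needed(2.0, alpha)  # doublings per loss halving
```

With α = 0.05 each loss halving costs 20 effective-compute doublings, which makes concrete why the required number of efficiency doublings is the operative quantity.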
The practical burden of scaling depends on how efficiently real resources are converted into that (logical) compute.
Argument in the paper linking conceptual 'logical compute' to real-world conversion efficiency (qualitative claim; no empirical sample in excerpt).
The compute variable is best understood as logical compute, an implementation-agnostic notion of model-side work.
Conceptual argument presented in the paper reframing 'compute' as an abstract, implementation-agnostic quantity (no empirical sample provided).
These patterns are consistent with a reorganization of the scientific production process rather than immediate efficiency gains, in line with theories of general-purpose technologies.
Interpretation linking observed changes in budget allocation, team size, and task breadth (from the proposal dataset and task-level analyses) to theoretical predictions about general-purpose technologies (GPTs); empirical findings show organizational change rather than large average short-run productivity gains.
This paper offers a forward-looking framework that emphasizes the decentralizing potential of AI on labor markets, moving beyond the traditional displacement-versus-creation dichotomy.
Paper's stated contribution; based on conceptual framework and synthesis of historical and contemporary analyses (no empirical validation presented in the abstract).
The emergence of artificial intelligence and robotics is catalyzing a profound transformation in the nature of human labor.
Stated as a central premise in the paper's abstract; supported by the paper's synthesis of economic history, contemporary labor market data, and analysis of digital platform growth (no specific datasets or sample sizes reported in the abstract).
AI agents are approaching an inflection point where the binding constraint shifts from raw capability to how work is delegated, verified, and rewarded at scale.
Conceptual argument presented in the paper's introduction/positioning; no empirical data, experiments, or sample reported.
The resulting AI safety profile is asymmetric: AI is bottlenecked on frontier research (novel tasks) but unbottlenecked on exploiting existing knowledge.
Theoretical implication of the novelty-bottleneck model distinguishing novel (human-judgment) vs. routine (covered by agent prior) components of tasks.
Wall-clock time can be reduced to O(√E) through team parallelism, but total human effort remains O(E).
Model-derived result showing parallelism across humans can speed wall-clock completion time while aggregate human effort does not drop asymptotically.
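The O(√E) result can be reproduced in a minimal cost model; the linear coordination cost c·n is an assumption for illustration, not the paper's specification. With total effort E split across n people, wall-clock time T(n) = E/n + c·n is minimized at n* = √(E/c), giving T* = 2√(cE), while total human effort stays E.

```python
import math

def optimal_team(E, c=1.0):
    """Wall-clock time E/n + c*n (assumed linear coordination cost),
    minimized over team size n; total human effort remains E."""
    n_star = math.sqrt(E / c)          # optimal team size
    t_star = E / n_star + c * n_star   # equals 2 * sqrt(c * E)
    return n_star, t_star

# Scaling check: 100x more effort -> only 10x more wall-clock time,
# but total human effort still grows 100x.
_, t_small = optimal_team(100.0)
_, t_big = optimal_team(10000.0)
```

This is the asymptotic gap the claim describes: parallelism compresses calendar time to O(√E), but the human-effort bill is unchanged at O(E).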
Better agents improve the coefficient on human effort but not the exponent (i.e., they reduce the constant factor but do not change the asymptotic scaling class).
Analytic result from the stylized model under the paper's assumptions about task decomposition and novelty fraction ν.
India's Systematic Investment Plan (SIP) flows provide a high-frequency observable for the model's endogenous participation rate and constitute the natural empirical laboratory for the displacement–participation mechanism.
Empirical suggestion in the paper proposing SIP flows as an observable proxy for the modelled participation rate and recommending India as a lab to test the displacement–participation channel (no empirical test reported in the excerpt).
Three analytical results characterise non-linear financial fragility, regime-contingent risk premium divergence, and the general equilibrium alignment squeeze.
Stated analytical results in the paper derived from the theoretical model describing three named phenomena (non-linear fragility, regime-contingent divergence, alignment squeeze).
Whether AI is equity-bullish or equity-bearish depends on which channel dominates—a condition that differs sharply between deep financial markets, where the ARP is the dominant driver of elevated risk premia (Regime D), and shallow markets, where participation compression dominates (Regime E).
Model regime analysis in the paper distinguishing Regime D (deep markets, ARP-dominated) and Regime E (shallow markets, participation-compression-dominated) and stating comparative dominance determines net bullish/bearish outcome.
The equilibrium equity risk premium decomposes into three additively separable terms corresponding to these three channels (Proposition 1).
Formal proposition (Proposition 1) in the paper deriving an additive decomposition of the equilibrium ERP into the productivity, participation compression, and alignment risk terms.
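Schematically, the additive separability claimed in Proposition 1 has the form below; the symbols are placeholders, not the paper's notation.

```latex
\mathrm{ERP}
  \;=\; \underbrace{\pi_{\mathrm{prod}}}_{\text{productivity channel}}
  \;+\; \underbrace{\pi_{\mathrm{part}}}_{\text{participation compression}}
  \;+\; \underbrace{\pi_{\mathrm{align}}}_{\text{alignment risk}}
```

Additive separability is what lets the regime analysis above compare channels term by term, e.g. asking whether the alignment-risk or participation-compression term dominates in a given market.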