Evidence (2432 claims)
Claim counts by category:

- Adoption: 5126 claims
- Productivity: 4409 claims
- Governance: 4049 claims
- Human-AI Collaboration: 2954 claims
- Labor Markets: 2432 claims
- Org Design: 2273 claims
- Innovation: 2215 claims
- Skills & Training: 1902 claims
- Inequality: 1286 claims
Evidence Matrix
Claim counts by outcome category and direction of finding.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 369 | 105 | 58 | 432 | 972 |
| Governance & Regulation | 365 | 171 | 113 | 54 | 713 |
| Research Productivity | 229 | 95 | 33 | 294 | 655 |
| Organizational Efficiency | 354 | 82 | 58 | 34 | 531 |
| Technology Adoption Rate | 277 | 115 | 63 | 27 | 486 |
| Firm Productivity | 273 | 33 | 68 | 10 | 389 |
| AI Safety & Ethics | 112 | 177 | 43 | 24 | 358 |
| Output Quality | 228 | 61 | 23 | 25 | 337 |
| Market Structure | 105 | 118 | 81 | 14 | 323 |
| Decision Quality | 154 | 68 | 33 | 17 | 275 |
| Employment Level | 68 | 32 | 74 | 8 | 184 |
| Fiscal & Macroeconomic | 74 | 52 | 32 | 21 | 183 |
| Skill Acquisition | 85 | 31 | 38 | 9 | 163 |
| Firm Revenue | 96 | 30 | 22 | — | 148 |
| Innovation Output | 100 | 11 | 20 | 11 | 143 |
| Consumer Welfare | 66 | 29 | 35 | 7 | 137 |
| Regulatory Compliance | 51 | 61 | 13 | 3 | 128 |
| Inequality Measures | 24 | 66 | 31 | 4 | 125 |
| Task Allocation | 64 | 6 | 28 | 6 | 104 |
| Error Rate | 42 | 47 | 6 | — | 95 |
| Training Effectiveness | 55 | 12 | 10 | 16 | 93 |
| Worker Satisfaction | 42 | 32 | 11 | 6 | 91 |
| Task Completion Time | 71 | 5 | 3 | 1 | 80 |
| Wages & Compensation | 38 | 13 | 19 | 4 | 74 |
| Team Performance | 41 | 8 | 15 | 7 | 72 |
| Hiring & Recruitment | 39 | 4 | 6 | 3 | 52 |
| Automation Exposure | 17 | 15 | 9 | 5 | 46 |
| Job Displacement | 5 | 28 | 12 | — | 45 |
| Social Protection | 18 | 8 | 6 | 1 | 33 |
| Developer Productivity | 25 | 1 | 2 | 1 | 29 |
| Worker Turnover | 10 | 12 | — | 3 | 25 |
| Creative Output | 15 | 5 | 3 | 1 | 24 |
| Skill Obsolescence | 3 | 18 | 2 | — | 23 |
| Labor Share of Income | 7 | 4 | 9 | — | 20 |
Labor Markets
- **Claim:** Framing decisions as contestable and revisable (via dialectical challenge and update) increases robustness and trust in AI-supported decision-making.
  *Evidence:* Conceptual claim arguing that contestability/revision improve robustness and trust; no experimental evidence or user studies provided.
- **Claim:** Running formal dialectical/acceptability semantics and dialogue protocols over AFs enables agents that reason with humans through structured debates and revisions.
  *Evidence:* Conceptual integration of formal semantics (Dung-style, bipolar, weighted) and dialogue protocols; no human-subject studies or system evaluations reported.
- **Claim:** Argumentation Framework Synthesis: mined fragments can be combined into coherent formal argumentation frameworks (AFs) with explicit semantics, enabling verification and automated inference.
  *Evidence:* Conceptual algorithmic proposal (graph synthesis, canonicalization, formal semantics); no empirical synthesis results or benchmarks presented.
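The synthesis-and-verification idea above can be made concrete with a minimal sketch. The arguments and attack relation below are hypothetical (the papers propose, but do not implement, such pipelines); the grounded extension of a Dung-style AF is computed as the least fixed point of its characteristic function:

```python
def grounded_extension(arguments, attacks):
    """Grounded semantics for a Dung-style AF: iterate the characteristic
    function, accepting every argument whose attackers are all attacked
    by already-accepted arguments, until a fixed point is reached."""
    accepted = set()
    while True:
        defended = {
            a for a in arguments
            if all(any((d, b) in attacks for d in accepted)
                   for b in arguments if (b, a) in attacks)
        }
        if defended == accepted:
            return accepted
        accepted = defended

# Tiny synthesized AF: a attacks b, b attacks c.
# a is unattacked, so it is accepted; a defends c against b.
print(sorted(grounded_extension({"a", "b", "c"}, {("a", "b"), ("b", "c")})))  # → ['a', 'c']
```

Dialogue protocols and the bipolar/weighted variants mentioned above would layer additional structure on top of this acceptability computation.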
- **Claim:** Argumentation Framework Mining: LLMs and NLP pipelines can be used to extract claims, premises, relations (attack/support), and provenance from text corpora.
  *Evidence:* Proposed methodological pipeline (fine-tuning/prompting LLMs and IE pipelines); conceptual proposal without implementation details or experimental results.
- **Claim:** Combining formal argument structures with LLMs' ability to mine and generate rich, contextual arguments from unstructured text promises human-aware, verifiable, and trustworthy AI for high-stakes domains.
  *Evidence:* Conceptual synthesis of computational argumentation (formal AFs) and LLM capabilities; no empirical validation or quantified metrics provided.
- **Claim:** Integrating computational argumentation with large language models (LLMs) creates a new paradigm, Argumentative Human-AI Decision-Making, in which AI agents participate in dialectical, contestable, and revisable decision processes with humans.
  *Evidence:* Conceptual/design argument presented in the paper; no empirical implementation or sample; draws on prior work in computational argumentation and the capabilities of LLMs.
- **Claim:** There will likely be growth in complementary markets for model verification, provenance tracking, legal-AI audits, and human-in-the-loop workflow services.
  *Evidence:* Market foresight based on identified unmet needs (explainability, verification) and illustrative examples; no market-sizing data.
- **Claim:** Open-source orchestration and evaluation harnesses plus a self-contained evaluation pipeline improve reproducibility for the Speedrunning Track.
  *Evidence:* The paper claims and documents the release of orchestration and evaluation code and describes the self-contained pipeline designed for deterministic, reproducible evaluation.
- **Claim:** There is a need for policies supporting workforce transitions (retraining, portability of skills) and safety/regulation for embodied agents operating in public spaces.
  *Evidence:* Policy recommendation grounded in anticipated labor and safety risks; proposed but not empirically evaluated.
- **Claim:** Benchmarks and tasks that mix observation and intervention (imitation with sparse feedback, active imitation, transfer under domain shift, continual learning streams) are required to evaluate the architecture.
  *Evidence:* Proposal for evaluation tasks and benchmarks; not empirically validated in the paper.
- **Claim:** Embodied robotics experiments are necessary to evaluate real-world constraints such as sample efficiency, physical affordances, and motor learning.
  *Evidence:* Methodological recommendation recognizing simulation-to-real gaps; no experiments reported.
- **Claim:** Simulated environments (procedural, nonstationary), multi-agent social domains, and open-world 3D simulators are appropriate for scalable iteration to test the proposed architecture.
  *Evidence:* Methodological recommendation and suggested experimental approaches; not tested in the paper.
- **Claim:** Neuromodulatory systems and meta-decision circuits in animals provide analogies for implementing meta-control (M) in artificial systems.
  *Evidence:* Neuroscience analogy cited to motivate architectural choices; not empirically instantiated in the paper.
- **Claim:** Developmental trajectories can scaffold gradual competence (from observation to exploratory action) and should be reflected in training curricula.
  *Evidence:* Argument from developmental biology and learning theory; proposed as a design principle rather than empirically tested here.
- **Claim:** Evolution supplies inductive biases and slow structural priors that can be leveraged in artificial learners.
  *Evidence:* Biological analogy and theoretical suggestion; no empirical experiments presented to quantify the effect in AI systems.
- **Claim:** LLMs are more likely to complement human tacit skills than to replace explicit rule-following jobs; value accrues to workers and firms that integrate model outputs with human judgment and tacit expertise.
  *Evidence:* Labor-economics-style argument and theoretical reasoning; no empirical labor market analysis provided.
- **Claim:** Commoditization via rule extraction is limited; firms that can harness and deploy tacit LLM capabilities will retain economic rents.
  *Evidence:* Theoretical economic argument based on non-rule-encodability; no empirical firm-level data included.
- **Claim:** The highest-value attributes of LLMs may be inherently non-decomposable into simple, auditable rules, which increases the value of proprietary, black-box models and strengthens economies of scale and scope for large model providers.
  *Evidence:* Economic reasoning and theoretical implications drawn from the central thesis; no empirical market analyses provided.
- **Claim:** Some LLM capabilities are tacit, practice-derived, or "insight"-like, akin to the Chinese concept of Wu (sudden insight through practiced skill).
  *Evidence:* Philosophical framing and analogy to the concept of tacit knowledge (Wu); argumentative rather than empirical support.
- **Claim:** The economically valuable capabilities of large language models are precisely those that cannot be fully encoded as a complete, human-readable set of discrete rules.
  *Evidence:* Formal, conceptual argument (proof by contradiction) plus qualitative historical case analysis comparing expert systems and LLMs; no new empirical datasets or experiments reported.
- **Claim:** Distilling corrected decision trajectories into the model via supervised fine-tuning produces better recovery behavior than relying solely on reward signals or final-outcome optimization.
  *Evidence:* Comparative training setup in which LEAFE uses supervised fine-tuning on corrected trajectories and is empirically compared to outcome-driven methods (e.g., GRPO) that optimize rewards; improved Pass@k reported.
- **Claim:** LEAFE's gains occur across diverse interactive coding and agentic tasks with a limited interaction budget.
  *Evidence:* Reported evaluation across a suite of long-horizon tasks (examples include multi-step coding problems and agentic tasks with rich feedback channels) with consistent improvements claimed.
- **Claim:** LEAFE uses the same environmental interactions more effectively, improving sample efficiency under fixed interaction budgets.
  *Evidence:* Experimental regime with fixed interaction budgets showing higher Pass@k for LEAFE relative to baselines given the same number of environment interactions; the paper argues LEAFE converts richer feedback into targeted training signals rather than only final rewards.
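Pass@k, the metric these comparisons report, is conventionally computed with an unbiased combinatorial estimator; the sketch below shows that standard estimator (the paper's exact evaluation harness is not described in the summary):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased Pass@k: probability that at least one of k samples drawn
    without replacement from n attempts (of which c are correct) succeeds."""
    if n - c < k:
        return 1.0  # fewer failures than draws: some draw must succeed
    return 1.0 - comb(n - c, k) / comb(n, k)

print(pass_at_k(n=10, c=3, k=1))  # ≈ 0.3
```

With fixed interaction budgets, methods are compared at equal n, so higher Pass@k directly reflects better use of the same interactions.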
- **Claim:** LEAFE converts rich environment feedback into actionable corrective supervision rather than optimizing only final success signals, which drives the performance gains.
  *Evidence:* Algorithmic description: LEAFE summarizes error messages/intermediate observations into experience items, backtracks to causal decision points, explores corrective branches, and distills corrected trajectories via supervised fine-tuning. Empirical comparisons show improved Pass@k relative to reward-only/outcome-driven baselines.
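The loop just described (summarize feedback, backtrack to the causal decision point, explore corrective branches, distill the fix) can be illustrated with a toy, self-contained sketch. The string-matching environment and the lookup-table "policy" standing in for supervised fine-tuning are hypothetical, not LEAFE's actual components:

```python
TARGET = "abca"   # toy task: emit this action sequence
ACTIONS = "abc"

def env_feedback(traj):
    """Rich feedback: index of the first wrong action, or None on success."""
    for i, (a, t) in enumerate(zip(traj, TARGET)):
        if a != t:
            return i
    return None

def rollout(memory):
    # Base policy always plays "a"; distilled corrections override it.
    return [memory.get(i, "a") for i in range(len(TARGET))]

def leafe_episode(memory):
    traj = rollout(memory)                    # interact with the environment
    err = env_feedback(traj)                  # feedback -> causal decision point
    if err is None:
        return True                           # task solved
    for alt in ACTIONS:                       # explore corrective branches
        candidate = traj[:err] + [alt] + traj[err + 1:]
        new_err = env_feedback(candidate)
        if new_err is None or new_err > err:  # branch fixes the failing step
            memory[err] = alt                 # distill correction (SFT stand-in)
            break
    return False

memory = {}
while not leafe_episode(memory):
    pass
print("".join(rollout(memory)))  # → abca
```

Each failed episode converts one piece of feedback into one permanent correction, which is the sample-efficiency intuition the claims above describe.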
- **Claim:** Overall conclusion: forecast-then-execute (anticipatory trajectory reasoning) is an effective principle for building multimodal agents capable of reasoning, planning, and acting in complex environments.
  *Evidence:* The paper's conclusion, as given in the provided summary, asserts this based on the reported experimental comparisons and the two-stage TraceR1 framework.
- **Claim:** The paper reports improvements in planning stability (consistency of multi-step plans), execution robustness (success under environment/tool variability), and generalization (out-of-distribution tasks and unseen tool/environment states).
  *Evidence:* The reported outcomes in the summary explicitly list these three improvement categories; the specific metrics and magnitudes are not provided.
- **Claim:** Compared to reactive agents that optimize actions stepwise without trajectory anticipation, TraceR1 yields better multi-step planning and execution.
  *Evidence:* Baselines and comparisons described in the summary include reactive agents; the paper reports improvements for TraceR1 relative to these baselines across the benchmarks (no numeric values in the provided text).
- **Claim:** Explicit anticipatory (trajectory-level) reasoning is a crucial design principle for reliable multi-step task performance in complex real-world environments.
  *Evidence:* The paper reports comparisons between anticipatory (trajectory-forecasting) agents and reactive/single-stage baselines, concluding that the anticipatory design yields better multi-step reliability; exact experimental details and statistics are not included in the provided summary.
- **Claim:** TraceR1 materially improves planning coherence, execution robustness, and generalization in multimodal, tool-using agents versus reactive or single-stage baselines.
  *Evidence:* Reported evaluation across seven benchmarks (online and offline computer-use, multimodal tool-use reasoning) comparing TraceR1 to reactive agents and single-stage RL baselines; the summary states "substantial gains" but provides no numerical results.
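As a toy illustration of the forecast-then-execute principle, the sketch below forecasts a whole trajectory before acting and replans when execution drifts from it; the planner, drift check, and integer "environment" are invented for exposition and bear no relation to TraceR1's actual training or benchmarks:

```python
def forecast(state, goal):
    """Stage 1: anticipate the full action trajectory before acting."""
    return list(range(state, goal))           # planned states to act from

def execute(state, plan, step):
    """Stage 2: take one step; advance only if reality matches the plan."""
    return state + 1 if state == plan[step] else state

def forecast_then_execute(state, goal, max_replans=5):
    for _ in range(max_replans):
        plan = forecast(state, goal)
        for step in range(len(plan)):
            nxt = execute(state, plan, step)
            if nxt == state:                  # drift detected: replan from here
                break
            state = nxt
        if state == goal:
            return state
    return state

print(forecast_then_execute(0, 4))  # → 4
```

A purely reactive agent has no plan to check against, so it cannot detect this kind of drift; that contrast is the design point the claims above attribute to anticipatory reasoning.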
- **Claim:** Policy instruments that can support shorter workweeks include tax incentives for firms that maintain pay while reducing hours, regulatory transition frameworks, and conditioning AI subsidies or public procurement on job preservation or reduced hours.
  *Evidence:* Policy-analytic argument drawing on standard policy toolkits and selected prior examples; no new policy pilot results presented.
- **Claim:** Shorter workweeks help sustain consumer purchasing power by reducing aggregate labor supply and thereby distributing automation gains more equitably.
  *Evidence:* Theoretical labour-supply reasoning plus historical case studies of work-time reductions; argumentative and normative rather than demonstrated with new macroeconomic empirical tests in AI-rich settings.
- **Claim:** A gradual, policy-driven reduction in the standard workweek can absorb labor displaced by automation, help maintain employment levels, and preserve hourly wages.
  *Evidence:* Synthesis of prior empirical findings on work-hour reductions and historical precedents (e.g., the six-day to five-day transition); no new randomized or large-scale contemporary trials presented.
- **Claim:** Firms use layoffs strategically to signal efficiency and boost short-term stock prices, even when automation is not fully substitutive.
  *Evidence:* Synthesis of the organizational and finance literature on signaling and market reactions to cost-cutting; historical/case examples referenced rather than new econometric estimates.
- **Claim:** Employers are increasingly demanding digital literacy, basic data competencies, and stronger communication and interpersonal skills.
  *Evidence:* Employer survey analysis tracking changes in required skills; descriptive summary of survey frequencies and employer-reported skill priorities. Survey sample size and representativeness not specified in the summary.
- **Claim:** Some occupations experience efficiency and productivity gains where AI complements tasks, implying complementarity effects for those jobs.
  *Evidence:* Qualitative case studies of firms and employer survey reports documenting productivity/efficiency improvements in certain roles following AI adoption; descriptive analysis of sectoral/occupational outcomes. Quantitative magnitude not specified.
- **Claim:** Policymakers should prioritize retraining programs, strengthened social protection, and redistributive policies to mitigate automation-induced unemployment and inequality.
  *Evidence:* Policy recommendation based on the author's synthesis of risks and expert judgment; not based on an empirical intervention study in the paper.
- **Claim:** There has been progress in software import substitution, contributing to partial technological sovereignty in Russia.
  *Evidence:* Use of statistics on software import substitution (the authors reference national statistics but do not report detailed numbers or methodology).
- **Claim:** Digitalization enables management optimization (improved management processes and decision-making) in Russian enterprises and public administration.
  *Evidence:* Qualitative analysis of policy documents and expert assessment by the author; no empirical evaluation or quantified effect sizes provided.
- **Claim:** Digitalization has produced measurable labor productivity growth in segments of the Russian economy.
  *Evidence:* Author's interpretation drawing on national statistics and strategic documents; statistical details (period, sectors, sample sizes) not specified in the paper.
- **Claim:** Policy implication: prioritize large-scale, targeted reskilling and lifelong learning programs to enable workforce adaptability and capture AI complementarity gains.
  *Evidence:* Policy recommendation derived from the paper's findings (association between AI adoption and skill shifts, heterogeneous sectoral impacts) and a literature synthesis linking reskilling interventions to better labor outcomes; the recommendation is prescriptive rather than empirically tested within the study.
- **Claim:** The paper provides empirical support for the complementarity hypothesis: AI tends to reconfigure jobs and create hybrid roles rather than eliminate employment wholesale.
  *Evidence:* Convergence of simulated sectoral employment patterns (some sectors showing net gains and hybrid-role growth), a strong correlation between AI adoption and skill shifts (r = 0.71), and corroborating studies from the literature synthesis emphasizing augmentation and hybridization mechanisms.
- **Claim:** Institutional reskilling programs and governance frameworks markedly moderate labor-market outcomes: better frameworks correlate with more complementarities and lower net job loss.
  *Evidence:* Integration of literature-derived mechanisms with simulated empirical patterns; the paper reports correlations/moderation-style comparisons across simulated sector-year cases incorporating policy/institutional variables (described in the methods), supported by studies in the systematic review linking policy interventions to labor outcomes.
- **Claim:** Healthcare and IT Services experienced net employment gains consistent with AI complementarity (augmented tasks and the creation of new hybrid roles).
  *Evidence:* Simulated sectoral employment trends and net-change metrics for Healthcare and IT Services (2020–2024) presented in the paper, supported by literature-synthesis examples showing human–AI complementarities in these sectors.
- **Claim:** The largest rises in hybrid jobs occurred in IT Services and Healthcare.
  *Evidence:* Sectoral decomposition of hybrid-job-share trends in the simulated dataset across the seven industries (2020–2024) and supporting qualitative/quantitative findings from the literature synthesis focused on IT Services and Healthcare.
- **Claim:** Hybrid human–AI jobs increased substantially across all seven analyzed sectors between 2020 and 2024.
  *Evidence:* Descriptive trend analysis of the simulated dataset's hybrid-job-share metric (fraction of roles reclassified as human–AI hybrid) for the seven industries over 2020–2024, combined with corroborating examples from the literature synthesis (selected ACM/IEEE/Springer studies, 2020–2024).
- **Claim:** A matching/ranking algorithm that scores candidate-job pairs by skill fit and predicted remuneration (and proximity) improves the alignment of workers to short-term gigs.
  *Evidence:* The system incorporates a ranking algorithm combining inferred-skill fit, predicted wages, and proximity constraints; a pilot comparison reported improved matches, but quantitative algorithmic performance metrics are not provided in the summary.
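A plausible shape for such a scoring rule is a weighted combination of skill fit, predicted wage, and proximity; the weights, feature scalings, and distance cutoff below are illustrative assumptions, not the system's published algorithm:

```python
def match_score(skill_fit, predicted_wage, distance_km,
                w_fit=0.5, w_wage=0.3, w_prox=0.2, max_km=25.0):
    """Score a candidate-gig pair from skill fit (0-1), predicted wage
    (normalized to 0-1), and proximity (decays linearly to 0 at max_km)."""
    proximity = max(0.0, 1.0 - distance_km / max_km)
    return w_fit * skill_fit + w_wage * predicted_wage + w_prox * proximity

def rank_gigs(candidate_gigs):
    """Sort (gig_id, skill_fit, wage, distance_km) tuples, best match first."""
    return sorted(candidate_gigs,
                  key=lambda g: match_score(g[1], g[2], g[3]),
                  reverse=True)

gigs = [("delivery", 0.4, 0.9, 2.0), ("tutoring", 0.9, 0.5, 10.0)]
print([g[0] for g in rank_gigs(gigs)])  # → ['tutoring', 'delivery']
```

With these weights, skill fit dominates, so the nearby high-wage gig ranks below the well-matched one; tuning the weights trades off exactly the factors the pilot's ranking combined.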
- **Claim:** ML models can continuously derive available gigs and demand signals from marketplace activity, producing up-to-date opportunity lists and predicted wages.
  *Evidence:* Implemented ML models ingest real-time market activity/platform signals in the pilot to generate opportunity lists and wage predictions; no out-of-sample accuracy or prediction-error metrics are reported in the summary.
- **Claim:** Skills can be inferred from multiple nontraditional inputs (self-reported information, short-term work histories, and community recommendations), creating richer profiles beyond formal work experience.
  *Evidence:* The system design uses NLP to normalize and extract skills from profiles, short-term work records, and community recommendations; the claim is supported by the implemented data-integration approach rather than by quantified external validation in the summary.
- **Claim:** The pilot implementation produced higher reported wages for youth matched through the system relative to baseline informal methods.
  *Evidence:* The pilot comparison reported higher wages for matched youth; the summary lacks sample size, measurement protocol, and statistical inference.
- **Claim:** The pilot implementation led to more correct matches than existing informal search methods.
  *Evidence:* The pilot deployment compared matching accuracy against baseline informal job-search approaches; the paper summary reports a "marked increase" but provides no numerical details, sample size, or significance levels.