Evidence (7953 claims)

Claim counts by topic:

- Adoption: 5539 claims
- Productivity: 4793 claims
- Governance: 4333 claims
- Human-AI Collaboration: 3326 claims
- Labor Markets: 2657 claims
- Innovation: 2510 claims
- Org Design: 2469 claims
- Skills & Training: 2017 claims
- Inequality: 1378 claims
Evidence Matrix
Claim counts by outcome category and direction of finding.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 402 | 112 | 67 | 480 | 1076 |
| Governance & Regulation | 402 | 192 | 122 | 62 | 790 |
| Research Productivity | 249 | 98 | 34 | 311 | 697 |
| Organizational Efficiency | 395 | 95 | 70 | 40 | 603 |
| Technology Adoption Rate | 321 | 126 | 73 | 39 | 564 |
| Firm Productivity | 306 | 39 | 70 | 12 | 432 |
| Output Quality | 256 | 66 | 25 | 28 | 375 |
| AI Safety & Ethics | 116 | 177 | 44 | 24 | 363 |
| Market Structure | 107 | 128 | 85 | 14 | 339 |
| Decision Quality | 177 | 76 | 38 | 20 | 315 |
| Fiscal & Macroeconomic | 89 | 58 | 33 | 22 | 209 |
| Employment Level | 77 | 34 | 80 | 9 | 202 |
| Skill Acquisition | 92 | 33 | 40 | 9 | 174 |
| Innovation Output | 120 | 12 | 23 | 12 | 168 |
| Firm Revenue | 98 | 34 | 22 | — | 154 |
| Consumer Welfare | 73 | 31 | 37 | 7 | 148 |
| Task Allocation | 84 | 16 | 33 | 7 | 140 |
| Inequality Measures | 25 | 77 | 32 | 5 | 139 |
| Regulatory Compliance | 54 | 63 | 13 | 3 | 133 |
| Error Rate | 44 | 51 | 6 | — | 101 |
| Task Completion Time | 88 | 5 | 4 | 3 | 100 |
| Training Effectiveness | 58 | 12 | 12 | 16 | 99 |
| Worker Satisfaction | 47 | 32 | 11 | 7 | 97 |
| Wages & Compensation | 53 | 15 | 20 | 5 | 93 |
| Team Performance | 47 | 12 | 15 | 7 | 82 |
| Automation Exposure | 24 | 22 | 9 | 6 | 62 |
| Job Displacement | 6 | 38 | 13 | — | 57 |
| Hiring & Recruitment | 41 | 4 | 6 | 3 | 54 |
| Developer Productivity | 34 | 4 | 3 | 1 | 42 |
| Social Protection | 22 | 10 | 6 | 2 | 40 |
| Creative Output | 16 | 7 | 5 | 1 | 29 |
| Labor Share of Income | 12 | 5 | 9 | — | 26 |
| Skill Obsolescence | 3 | 20 | 2 | — | 25 |
| Worker Turnover | 10 | 12 | — | 3 | 25 |
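To summarize the matrix above, one useful derived quantity is the share of claims in each outcome that report a positive finding. A minimal sketch, using counts copied from a few rows of the table ("—" cells treated as missing):

```python
# Direction shares from selected rows of the evidence matrix above.
# Tuples are (Positive, Negative, Mixed, Null); None marks a "—" cell.
rows = {
    "Firm Productivity":   (306, 39, 70, 12),
    "Inequality Measures": (25, 77, 32, 5),
    "Job Displacement":    (6, 38, 13, None),
}

def positive_share(counts):
    """Positive claims as a share of all reported cells in the row."""
    reported = [c for c in counts if c is not None]
    return counts[0] / sum(reported)

for outcome, counts in rows.items():
    print(f"{outcome}: {positive_share(counts):.0%} positive")
```

The contrast is stark: most firm-productivity claims are positive, while inequality and job-displacement claims skew negative.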
The planner can raise social welfare by directing technological progress toward making cheaper the goods disproportionately consumed by relatively poorer agents, thereby raising their real income.
Extension of the baseline model to multiple goods showing distributional gains via composition of price changes (real income channel).
When capital and labor are gross complements, a planner concerned with workers' welfare would favor capital-augmenting innovations to raise wages.
Analytical result from the model analyzing factor-augmenting technological progress and complementarity between capital and labor.
A planner with sufficient welfare weight on workers will impose positive robot taxes, with the tax rate increasing in the planner's concern for workers' welfare.
Application of the baseline model to robot taxation; analytical derivation of optimal robot tax under planner preferences.
As labor's economic value diminishes, steering progress focuses increasingly on enhancing human well-being (non-monetary aspects) rather than labor productivity.
Theoretical discussion and model results in the paper showing planner's shifting objective when labor is devalued.
The welfare benefits of steering technology are greater the less efficient social safety nets are.
Analytical result from the paper's theoretical model comparing a planner who can/cannot perform transfers and evaluating steering as second-best when redistribution is costly.
These household-level non-market productivity gains (ChatGPT making productive online tasks more efficient and freeing time for leisure) are economically large and likely constitute a substantial share of the overall economic impact of generative AI.
Combination of empirical IV estimates showing increased leisure alongside unchanged productive time, plus model-implied efficiency gains; authors' interpretation and welfare discussion in the paper.
Mapping the empirical time-reallocation into a quantitative household time-allocation model implies generative AI approximately doubles the efficiency of productive online tasks for adopters; preferred calibration implies efficiency gains of 76%–176%.
Quantitative time-allocation model adapted from Aguiar et al. (2021); model uses empirical IV estimates for time reallocation and Engel curve elasticities estimated via IV (local precipitation shocks). Authors report implied efficiency gains of 76%–176% and state 'approximately doubles' efficiency.
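To see why the authors summarize the reported 76%–176% range as "approximately doubles," it helps to restate the gains as multiplicative efficiency factors. A small sketch of that arithmetic (only the 76%–176% range is from the paper; the midpoint summary is ours):

```python
# Translating the reported efficiency-gain range (76%-176%) into
# multiplicative factors: a gain of g means tasks run at (1 + g)x
# the baseline efficiency.
low_gain, high_gain = 0.76, 1.76     # reported range of efficiency gains

low_factor = 1 + low_gain            # 1.76x baseline efficiency
high_factor = 1 + high_gain          # 2.76x baseline efficiency
midpoint = (low_factor + high_factor) / 2

print(f"Implied efficiency factor: {low_factor:.2f}x-{high_factor:.2f}x "
      f"(midpoint {midpoint:.2f}x, roughly a doubling)")
```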
Households predominantly use ChatGPT during productive online activities (education, job search, informational research) rather than leisure browsing, as inferred from the browsing context around ChatGPT visits.
High-frequency analysis comparing 30-minute browsing intervals around ChatGPT visits to intervals of demographically similar non-users; LLM-based inference of website purpose; observed co-occurrence with productive-site categories.
ChatGPT adoption increases the leisure share of browsing duration by about 30 percentage points.
IV long-difference estimates from Comscore browsing data with LLM-based site classification; authors report a ~30 percentage point increase in leisure share after adoption.
In long-difference IV estimates, ChatGPT adoption raises total leisure browsing time by roughly 150 log points.
IV long-difference estimates using pre-ChatGPT exposure as instrument; reported effect described as 'roughly 150 log points' increase in total leisure browsing time.
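"Log points" can obscure the size of an effect: 100 log points is a change of 1.0 on the natural-log scale, so 150 log points corresponds to a multiplicative factor of exp(1.5), roughly a 4.5x increase. A quick conversion sketch:

```python
import math

# Converting a log-point effect into a multiplicative change.
# One log point = 0.01 units on the natural-log scale, so an effect of
# ~150 log points corresponds to multiplying the outcome by exp(1.5).
def log_points_to_factor(log_points):
    return math.exp(log_points / 100)

print(f"150 log points ≈ {log_points_to_factor(150):.2f}x")
```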
A household's pre-ChatGPT ex-ante exposure (based on 2021 browsing composition) strongly predicts subsequent ChatGPT adoption: a 1 SD higher exposure predicts a 2.5 percentage point higher rate of having used ChatGPT by December 2024.
Constructed 'exposure' measure by aggregating site-level overlap with chatbot capabilities over household 2021 browsing; predictive regression (household-level) linking 1 SD change in exposure to 2.5pp higher adoption by Dec 2024 (statistic reported in paper).
ChatGPT adoption among private households has been rapid following release, but adoption is far from uniform.
Descriptive adoption patterns measured from Comscore browsing data over time (pre- and post-Nov 30, 2022) on the household panel (2021–2024); time-series of observed ChatGPT site visits and adoption rates.
Despite the diminishing returns scaling laws predict, progress in practice has often continued through rapidly improving efficiency, visible for example in falling cost per token.
Observed industry/empirical trend cited in the paper (example: falling cost per token). No numerical samples or sample size given in the excerpt.
Scaling laws are largely empirical and observational, but they appear repeatedly across model families and increasingly across training-adjacent regimes.
Paper asserts repeated empirical appearance across model families and training-adjacent regimes; claim is descriptive/observational without sample size in the excerpt.
Scaling laws make progress predictable, albeit at a declining rate.
Conceptual claim in the paper based on the power-law form of scaling laws (no numerical quantification or sample size provided in the excerpt).
Classical AI scaling laws, especially for pre-training, describe how training loss decreases with compute in a power-law form.
Stated observationally in the paper as established empirical regularity across pre-training runs and prior literature on scaling laws (no sample size or specific experiments reported in the excerpt).
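The power-law form described above can be sketched concretely. The functional form below (an irreducible loss floor plus a power-law term in compute) follows the standard scaling-law literature; the constants are made up for illustration and are not the paper's estimates:

```python
# Illustrative scaling-law sketch: training loss as a power law in
# compute, L(C) = E + A * C**(-alpha). E is an irreducible loss floor;
# A and alpha are fitted constants. All values here are invented.
def training_loss(compute, E=1.7, A=10.0, alpha=0.05):
    return E + A * compute ** (-alpha)

# Diminishing returns: each 10x of compute shaves off a shrinking
# increment of loss, yet the decline remains predictable.
for c in (1e20, 1e21, 1e22):
    print(f"C={c:.0e}: loss={training_loss(c):.3f}")
```

The shrinking per-decade improvement is exactly the "predictable, albeit at a declining rate" pattern the paper describes.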
Task-level analyses show that activities expanded in AI-enabled projects—particularly ideation and experimentation—are increasingly compatible with large language model capabilities, suggesting potential for future productivity gains as these technologies mature.
Task-level classification mapping tasks described in proposals to LLM-relevant capabilities using LLM-based classification; finding that tasks expanded in AI-enabled projects cluster on ideation and experimentation, which align with current LLM strengths.
AI-enabled projects undertake a broader set of tasks.
Task-level analysis of proposal descriptions (task inventories) classifying tasks via keyword extraction and LLMs, showing AI-enabled proposals list a wider variety of activities than non-AI proposals.
AI-enabled projects involve larger teams.
Comparison of team structure in proposals (team size) between AI-enabled and non-AI projects using the same comprehensive proposal dataset and LLM-based classification of AI presence.
AI-enabled projects reallocate resources toward human capital (i.e., shift budget allocations toward labor / human capital).
Analysis of detailed budget allocations in the proposal dataset, comparing projects identified as AI-enabled versus non-AI projects using keyword extraction and LLM classification to identify AI presence and role.
In the short run, AI adoption is associated with modest improvements in scientific outcomes concentrated in the upper tail.
Observational analysis linking identified AI presence in a comprehensive dataset of research proposals (funded and unfunded) to subsequent publication outcomes; AI presence identified via keyword extraction combined with large language model (LLM) classification; publication outcomes measured after proposal submission.
The experience-centered learning mechanism proactively recalls rewarded trajectories at inference time.
Specific technical/design claim about Synergy's learning mechanism; asserted in paper as a mechanism feature rather than demonstrated with quantified results in the provided text.
Synergy grounds collaboration in session-native orchestration, repository-backed workspaces, and social communication; identity in typed memory, notes, agenda, skills, and persistent social relationships; and evolution in an experience-centered learning mechanism that proactively recalls rewarded trajectories at inference time.
Detailed design claims describing Synergy's mechanisms and intended grounding for collaboration, identity, and evolution; presented as architectural description, no experimental evaluation provided in the excerpt.
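The excerpt gives no implementation of the experience-centered mechanism, but the described behavior (store rewarded trajectories, recall the best-matching ones at inference time) can be sketched. Everything below — class names, fields, scoring — is our hypothetical illustration, not Synergy's actual design:

```python
# Hypothetical sketch of "proactively recalls rewarded trajectories at
# inference time": keep only trajectories that earned reward, and at
# inference retrieve the top-k whose task tags overlap the current task.
from dataclasses import dataclass, field

@dataclass
class Trajectory:
    task_tags: set
    steps: list
    reward: float

@dataclass
class ExperienceStore:
    trajectories: list = field(default_factory=list)

    def record(self, traj):
        if traj.reward > 0:              # retain rewarded trajectories only
            self.trajectories.append(traj)

    def recall(self, task_tags, k=2):
        """Top-k rewarded trajectories overlapping the current task."""
        scored = [(len(t.task_tags & task_tags), t.reward, t)
                  for t in self.trajectories]
        scored.sort(key=lambda s: (s[0], s[1]), reverse=True)
        return [t for overlap, _, t in scored[:k] if overlap > 0]

store = ExperienceStore()
store.record(Trajectory({"deploy", "docker"}, ["build", "push"], 1.0))
store.record(Trajectory({"deploy", "k8s"}, ["apply"], 0.5))
store.record(Trajectory({"email"}, ["draft"], 0.9))
hits = store.recall({"deploy"})
print([t.steps for t in hits])
```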
We present Synergy, a general-purpose agent architecture and runtime harness for persistent, collaborative, and evolving agents on the Open Agentic Web.
Paper's contribution statement indicating the authors propose an architecture named Synergy; this is a systems/design claim rather than an empirical result in the provided text.
The next generation of agents must become Agentic Citizens, defined by three requirements: Agentic-Web-Native Collaboration, participation in open collaboration networks rather than only closed internal orchestration; Agent Identity and Personhood, continuity as a social entity rather than a resettable function call; and Lifelong Evolution, improvement across task performance, communication, and collaboration over time.
Normative/design prescription from the authors; conceptual argument for three requirements rather than empirical validation.
As the internet prepares to host billions of such entities, it is shifting toward what we call Open Agentic Web, a decentralized digital ecosystem in which agents from different users, organizations, and runtimes can discover one another, negotiate task boundaries, and delegate work across open technical and social surfaces at scale.
Conceptual claim / framing by the authors describing a projected/ongoing shift; no empirical measurement of 'billions' or of ecosystem properties provided in the excerpt.
Embodied agents are spreading across smartphones, vehicles, and robots.
Author observation/claim in the paper's opening; no empirical study, metrics, or examples quantified in the provided text.
Open-source frameworks such as OpenClaw are putting personal agents in the hands of millions.
Author assertion naming OpenClaw and a numeric adoption claim; no supporting empirical data or citation contained in the provided text.
AI agents are rapidly expanding in both capability and population: they now write code, operate computers across platforms, manage cloud infrastructure, and make purchasing decisions.
Author assertion in paper's introduction / high-level observation; no empirical study, dataset, or experiment reported in the provided text.
IMDPs lower ESG rating uncertainty.
The paper constructs measures of ESG rating uncertainty and finds IMDP participation reduces rating uncertainty.
IMDPs reduce greenwashing.
The paper constructs measures of greenwashing and reports that IMDP participation lowers those greenwashing measures.
The positive effect of IMDP participation on ESG performance is stronger in capital-scarce industries.
Heterogeneity analysis by industry capital-scarcity reported in the paper indicating larger IMDP effects in capital-scarce industries.
The positive effect of IMDP participation on ESG performance is stronger for firms at the growth stage.
Heterogeneity analysis by firm life-cycle stage reported in the paper showing larger effects for growth-stage firms.
The positive effect of IMDP participation on ESG performance is stronger for firms under intense competitive pressure.
Heterogeneity analysis reported in the paper that splits the sample by measures of competitive pressure and finds larger effects for firms facing more intense competition.
The effect of IMDP participation on ESG performance operates through improved cost management, consistent with capability upgrading and resource reallocation toward sustainability-related activities.
Mechanism analyses reported in the paper linking IMDP participation to measures of cost management and interpreting this as capability upgrading/resource reallocation.
The effect of IMDP participation on ESG performance operates through higher innovation efficiency.
Mechanism analyses reported in the paper (mediation/decomposition analyses linking IMDP participation to measures of innovation efficiency).
IMDP participation increases ESG ratings by approximately 0.14 rating levels relative to comparable non-participating firms.
Quasi-natural experiment exploiting staggered rollout of IMDPs; propensity score matching combined with a multi-period difference-in-differences design using panel data on Chinese listed manufacturing firms from 2009 to 2022 (as reported in the paper).
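The logic of the reported design — compare the change in ESG ratings for IMDP participants against matched non-participants — reduces to a difference-in-differences contrast. A minimal sketch on synthetic numbers (these are not the paper's data, and the paper's actual estimator is a multi-period DiD with matching, not this two-period average):

```python
# Two-period difference-in-differences sketch on made-up panel averages:
# (treated change) minus (control change) isolates the participation effect
# under the parallel-trends assumption.
import statistics

treated_pre, treated_post = [3.0, 3.1, 2.9], [3.3, 3.2, 3.4]
control_pre, control_post = [3.0, 3.0, 3.1], [3.1, 3.0, 3.2]

did = ((statistics.mean(treated_post) - statistics.mean(treated_pre))
       - (statistics.mean(control_post) - statistics.mean(control_pre)))
print(f"DiD estimate: {did:.2f} rating levels")
```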
Education and workforce development should shift focus from rote knowledge accumulation to cultivating skills in human-AI collaboration, creative problem-solving, and the design of novel economic domains.
Normative policy recommendation derived from the paper's framework and analysis of anticipated labor market changes (no empirical evaluation or trial data reported in the abstract).
Human-AI co-evolution will significantly increase individual productivity and open new frontiers of economic activity.
Projected outcome based on combined analysis of AI capabilities, historical patterns, and platform growth; the abstract does not report empirical measurement or sample sizes for this projection.
AI-driven productivity augmentation dramatically lowers the barriers to creating economic value, enabling the decentralized generation of employment.
Argument supported by paper's analysis of contemporary labor market dynamics and the growth of digital platforms; no quantified empirical estimates or sample sizes provided in the abstract.
The transition to an AI-civilization will fundamentally restructure the mechanisms of employment creation from a centralized model (few organizations creating jobs for the many) to a decentralized ecosystem where individuals are empowered to generate their own employment opportunities.
Central thesis of the paper, motivated by theoretical argumentation and synthesis of contemporary data on labor markets and digital platforms (no empirical test or sample sizes specified in the abstract).
Historical precedents from past technological revolutions suggest that innovation tends to expand, rather than shrink, the scope of economic activity and employment in the long run.
Paper draws on analysis of economic history (qualitative historical analysis implied; no specific historical datasets or sample sizes provided in the abstract).
The paper studies principal-agent alignment using revealed preference techniques.
Stated methodological approach in the abstract; implies analytical use of revealed-preference methods for identification.
The AI's alignment (similarity of human and AI preferences) can be generically identified in the field setting, where only AI choices are observed.
Analytical/theoretical identification result presented in the paper using revealed preference techniques (as stated in abstract); no empirical sample reported in the abstract.
The AI's alignment (similarity of human and AI preferences) can be generically identified in the laboratory setting, where both human and AI choices are observed.
Analytical/theoretical identification result presented in the paper using revealed preference techniques (as stated in abstract); no empirical sample reported in the abstract.
The paper introduces the Luce Alignment Model, where the AI's choices are a mixture of two Luce rules, one reflecting the human's preferences and the other the AI's.
Paper proposes and defines a new theoretical model (model specification described in abstract).
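The mixture structure described above is easy to state formally: a Luce rule assigns choice probabilities proportional to utilities, and the model mixes the human-utility rule and the AI-utility rule with weight λ. A sketch with made-up utilities (the mixture form is from the abstract; the numbers are ours):

```python
# Luce Alignment Model sketch: the AI's choice probabilities are a
# lambda-mixture of two Luce rules, one from the human's utilities and
# one from the AI's own.
def luce(weights):
    """Luce rule: P(x) proportional to the utility weight of x."""
    total = sum(weights.values())
    return {x: w / total for x, w in weights.items()}

def mixture_choice_probs(human_u, ai_u, lam):
    """P(x) = lam * Luce(human)(x) + (1 - lam) * Luce(AI)(x)."""
    p_h, p_a = luce(human_u), luce(ai_u)
    return {x: lam * p_h[x] + (1 - lam) * p_a[x] for x in human_u}

human_u = {"a": 3.0, "b": 1.0}   # human prefers a
ai_u    = {"a": 1.0, "b": 3.0}   # AI prefers b
print(mixture_choice_probs(human_u, ai_u, lam=0.5))
```

With λ = 1 the AI chooses exactly as the human would; λ thus indexes alignment, which is what the identification results recover from observed choices.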
Human decision makers increasingly delegate choices to AI agents.
Stated as motivation in the abstract; no empirical data or sample described in the provided text.
By formalizing the end-to-end transaction model together with its asset and incentive layers, EpochX reframes agentic AI as an organizational design problem focused on infrastructures where verifiable work leaves persistent, reusable artifacts and value flows support durable human-agent collaboration.
Theoretical framing and normative claim in the paper; no empirical evaluation demonstrating that this reframing yields measurable benefits.
Credits lock task bounties, allow budget delegation, settle rewards upon acceptance, and compensate creators when verified assets are reused.
Functional description of the credit mechanics and settlement rules within the proposed EpochX marketplace; presented as part of system design without empirical settlement or user-behavior data.
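The credit mechanics just described — escrow a bounty, settle on acceptance, pay a royalty on asset reuse — can be sketched as a minimal ledger. All names and method signatures below are our hypothetical illustration, not EpochX's API:

```python
# Hypothetical credit-ledger sketch of the described mechanics: lock a
# task bounty in escrow, release it to the worker on acceptance, and
# route a fee to an asset's creator when a verified asset is reused.
class CreditLedger:
    def __init__(self):
        self.balances = {}
        self.locked = {}           # task_id -> (payer, escrowed amount)

    def deposit(self, who, amount):
        self.balances[who] = self.balances.get(who, 0) + amount

    def lock_bounty(self, task_id, payer, amount):
        assert self.balances.get(payer, 0) >= amount, "insufficient credits"
        self.balances[payer] -= amount
        self.locked[task_id] = (payer, amount)

    def settle(self, task_id, worker):
        _, amount = self.locked.pop(task_id)   # released only on acceptance
        self.deposit(worker, amount)

    def reuse_royalty(self, user, creator, fee):
        assert self.balances.get(user, 0) >= fee, "insufficient credits"
        self.balances[user] -= fee
        self.deposit(creator, fee)

ledger = CreditLedger()
ledger.deposit("requester", 100)
ledger.lock_bounty("t1", "requester", 40)       # bounty escrowed
ledger.settle("t1", "agent")                    # settled on acceptance
ledger.reuse_royalty("requester", "agent", 5)   # creator paid on reuse
print(ledger.balances)
```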
EpochX introduces a native credit mechanism to make participation economically viable under real compute costs.
Proposed economic/incentive mechanism described in the paper; no empirical cost analysis, pricing model validation, or participant economic outcomes reported.