Evidence (2450 claims)
- Adoption: 5187 claims
- Productivity: 4472 claims
- Governance: 4082 claims
- Human-AI Collaboration: 3016 claims
- Labor Markets: 2450 claims
- Org Design: 2305 claims
- Innovation: 2290 claims
- Skills & Training: 1920 claims
- Inequality: 1286 claims
Evidence Matrix
Claim counts by outcome category and direction of finding.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 373 | 105 | 59 | 437 | 982 |
| Governance & Regulation | 366 | 172 | 114 | 55 | 717 |
| Research Productivity | 237 | 95 | 34 | 294 | 664 |
| Organizational Efficiency | 364 | 82 | 62 | 34 | 545 |
| Technology Adoption Rate | 290 | 115 | 66 | 27 | 502 |
| Firm Productivity | 274 | 33 | 68 | 10 | 390 |
| AI Safety & Ethics | 116 | 177 | 44 | 24 | 363 |
| Output Quality | 231 | 61 | 23 | 25 | 340 |
| Market Structure | 107 | 121 | 85 | 14 | 332 |
| Decision Quality | 158 | 68 | 33 | 17 | 279 |
| Employment Level | 70 | 32 | 74 | 8 | 186 |
| Fiscal & Macroeconomic | 74 | 52 | 32 | 21 | 183 |
| Skill Acquisition | 88 | 31 | 38 | 9 | 166 |
| Firm Revenue | 96 | 34 | 22 | — | 152 |
| Innovation Output | 105 | 12 | 21 | 11 | 150 |
| Consumer Welfare | 66 | 29 | 35 | 7 | 137 |
| Regulatory Compliance | 52 | 61 | 13 | 3 | 129 |
| Inequality Measures | 24 | 66 | 31 | 4 | 125 |
| Task Allocation | 68 | 8 | 28 | 6 | 110 |
| Error Rate | 42 | 47 | 6 | — | 95 |
| Training Effectiveness | 55 | 12 | 11 | 16 | 94 |
| Worker Satisfaction | 42 | 32 | 11 | 6 | 91 |
| Task Completion Time | 74 | 5 | 4 | 1 | 84 |
| Team Performance | 44 | 9 | 15 | 7 | 76 |
| Wages & Compensation | 38 | 13 | 19 | 4 | 74 |
| Hiring & Recruitment | 39 | 4 | 6 | 3 | 52 |
| Automation Exposure | 18 | 15 | 9 | 5 | 47 |
| Job Displacement | 5 | 29 | 12 | — | 46 |
| Developer Productivity | 27 | 2 | 3 | 1 | 33 |
| Social Protection | 18 | 8 | 6 | 1 | 33 |
| Worker Turnover | 10 | 12 | — | 3 | 25 |
| Creative Output | 15 | 5 | 3 | 1 | 24 |
| Skill Obsolescence | 3 | 18 | 2 | — | 23 |
| Labor Share of Income | 8 | 4 | 9 | — | 21 |
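The matrix can also be read programmatically. A minimal sketch below computes each outcome's share of positive findings; the counts are transcribed from three rows of the table above, and the field names are chosen here for illustration:

```python
# Share of positive findings per outcome, using the stated row totals.
# Counts transcribed from three rows of the evidence matrix above.
matrix = {
    "Firm Productivity":    {"positive": 274, "negative": 33,  "mixed": 68, "null": 10, "total": 390},
    "AI Safety & Ethics":   {"positive": 116, "negative": 177, "mixed": 44, "null": 24, "total": 363},
    "Task Completion Time": {"positive": 74,  "negative": 5,   "mixed": 4,  "null": 1,  "total": 84},
}

def positive_share(row: dict) -> float:
    """Fraction of a row's claims whose direction of finding is positive."""
    return row["positive"] / row["total"]

for outcome, row in matrix.items():
    print(f"{outcome}: {positive_share(row):.0%} positive")
```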
Labor Markets
Liability for harm from AI remains unresolved; current regulatory frameworks (notably in the EU) continue to emphasize human responsibility and require conformity assessment and clinical validation.
Regulatory and legal analyses, with emphasis on European Union device regulation and liability principles, as reviewed in the paper.
State-level advances in worker-protective AI measures exist but are uneven, and many proposed state bills aimed at strengthening workers' AI-related rights have stalled.
Review of state legislative proposals and enacted laws as compiled in the commentary (state-level policy scan); no systematic quantitative legislative count or sample reported.
Research priorities include causal studies on productivity gains from AI, firm‑level adoption dynamics, sectoral labor reallocation, long‑run general equilibrium effects, and heterogeneous impacts across regions and demographic groups.
Set of empirical research recommendations drawn from gaps identified in the literature review and limitations section; not an empirical claim but a prioritized research agenda based on secondary evidence.
Growth‑accounting frameworks and measurement approaches must be updated to capture AI/robotics as intangible and embodied capital, including quality improvements and spillovers.
Methodological argument grounded in literature on measurement challenges and examples of intangible capital; no new measurement exercise or empirical re‑estimation is provided in the paper.
Backtesting the proposed models against historical technological transitions (e.g., ATMs, robotics) and recent AI adoption episodes can validate model performance.
Recommended validation strategy; paper does not report backtest results but prescribes holdout/pseudo‑counterfactual experiments and calibration with administrative outcomes.
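A minimal holdout backtest of the prescribed kind can be sketched as follows; the series and the linear-trend model are illustrative stand-ins, not the paper's models or data:

```python
# Minimal holdout-backtest sketch: fit on early periods, score on the held-out tail.

def fit_trend(series: list[float]) -> tuple[float, float]:
    """Ordinary least squares for y = a + b*t on the training window."""
    n = len(series)
    t_mean = (n - 1) / 2
    y_mean = sum(series) / n
    b = sum((t - t_mean) * (y - y_mean) for t, y in enumerate(series)) / \
        sum((t - t_mean) ** 2 for t in range(n))
    return y_mean - b * t_mean, b

def backtest(series: list[float], holdout: int) -> float:
    """Mean absolute percentage error on the held-out tail of the series."""
    train, test = series[:-holdout], series[-holdout:]
    a, b = fit_trend(train)
    preds = [a + b * t for t in range(len(train), len(series))]
    return sum(abs(p - y) / y for p, y in zip(preds, test)) / holdout

# Stylized employment index for a historical transition episode; numbers are invented.
series = [100, 101, 103, 104, 106, 107, 109, 110, 108, 107]
print(f"holdout MAPE: {backtest(series, holdout=3):.1%}")
```

The same structure extends to pseudo-counterfactual designs: hold out a post-adoption window, fit on the pre-adoption window, and compare predicted with observed outcomes.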
Scenario modelling in the reviewed literature typically uses counterfactual simulations with different adoption speeds, policy responses, and initial conditions to bound possible employment, wage, and productivity trajectories.
Description and citations of scenario-modelling practices by think tanks and organisations (TBI, IPPR, IMF) and academic work referenced; evidence is methodological and report-based.
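The bounding logic of such scenario exercises can be illustrated with a toy simulation; all parameters (adoption speed, displacement and reinstatement rates) are invented for illustration and are not estimates from the cited reports:

```python
# Stylized scenario simulation: employment path under different AI adoption speeds.

def employment_path(adoption_speed: float, displacement: float = 0.3,
                    reinstatement: float = 0.2, years: int = 10) -> list[float]:
    """Employment index (start = 100) under diminishing-room adoption dynamics."""
    employment, adopted = 100.0, 0.0
    path = [employment]
    for _ in range(years):
        new_adoption = adoption_speed * (1 - adopted)   # diminishing room to adopt
        adopted += new_adoption
        # jobs displaced by newly adopted AI, minus jobs reinstated in new tasks
        employment *= 1 - new_adoption * (displacement - reinstatement)
        path.append(employment)
    return path

slow = employment_path(adoption_speed=0.05)
fast = employment_path(adoption_speed=0.25)
print(f"slow-adoption endpoint: {slow[-1]:.1f}, fast-adoption endpoint: {fast[-1]:.1f}")
```

Running the same model across a grid of adoption speeds and policy parameters yields the kind of trajectory bounds the reviewed reports present.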
NLP/LLM pipelines are used to extract tasks and skills from free-text job ads and to map those tasks to AI capabilities.
Described methods and citations (Xu et al., 2025; Hampole et al., 2025); evidence is methodological application of transformer-based models to job-ad text in recent studies.
Methods increasingly apply advanced NLP and large language models (BERT, LSTM, GPT-4) to parse job descriptions, map skills/tasks, and predict automation risk.
Cited methodological examples in the paper (Xu et al., 2025; Hampole et al., 2025) and discussion of common pipelines using transformer-based models to extract tasks from free-text job ads and to map tasks to AI capabilities; evidence is methodological and based on recent studies rather than a single benchmarked dataset.
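The extract-then-map structure of these pipelines can be sketched with a deliberately simplified keyword matcher; the cited studies use transformer models rather than keyword rules, and the capability map below is hypothetical:

```python
# Toy job-ad pipeline: extract task phrases, then map them to AI capabilities.
# Hypothetical capability map; real pipelines learn this mapping with transformer models.
CAPABILITY_MAP = {
    "data entry": "routine text processing",
    "draft reports": "text generation",
    "schedule meetings": "agentic planning",
    "customer inquiries": "conversational AI",
}

def extract_tasks(ad_text: str) -> list[str]:
    """Extract task phrases matching known keywords from free-text job ads."""
    text = ad_text.lower()
    return [task for task in CAPABILITY_MAP if task in text]

def map_to_capabilities(tasks: list[str]) -> dict[str, str]:
    """Map each extracted task to the AI capability that could perform it."""
    return {task: CAPABILITY_MAP[task] for task in tasks}

ad = "Responsibilities: handle customer inquiries, draft reports, and data entry."
print(map_to_capabilities(extract_tasks(ad)))
```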
A centralized policy engine for access control, data handling rules, and change management is a necessary control point in the reference pattern.
Prescriptive recommendation in the paper supported by best-practice synthesis and case anecdotes; no direct empirical comparison of centralized vs federated policy engines provided.
Realizing AI’s potential for circular-economy and energy-efficiency goals requires coordinated interventions across environmental regulation, digital infrastructure, and workforce skill formation.
Policy interpretation drawn from heterogeneity results (regulation and infrastructure amplify AI effects) and the identified labor-market mechanism (skill composition matters); recommendation rather than direct causal estimate.
The benefits of AI-enabled e-commerce and automated warehousing are conditional on complementary policies (competition policy, data governance, workforce reskilling, automation oversight) to manage concentration, privacy, distributional effects, and safety.
Policy-analysis synthesis supported by sensitivity checks in scenario analyses and discussion of governance risks; recommendations informed by observed distributional and market-concentration patterns in the case material.
AI’s net impact on employment to date is modest — no clear evidence of mass unemployment.
Systematic literature review/meta-synthesis of 17 peer‑reviewed publications (published 2020–2025). Aggregate assessment across those studies found no consistent empirical support for large-scale, economy-wide unemployment attributable to AI to date.
The growth of digital platforms contributes to the decentralization of job creation.
Paper cites contemporary data on the growth of digital platforms as part of its analysis (no specific platform-level datasets or sample sizes cited in the abstract).
Analysis of agentic investment firm operational models demonstrates 50-70% cost reductions while maintaining fiduciary standards.
Internal analysis/modeling of agentic investment firm operational models reported by the authors; paper states the 50–70% cost reduction result but provides no sample size or detailed empirical validation in the provided text.
The model implies that the optimal time to begin taxing AI is when cognitive workers first consider switching to manual jobs.
Analytical result derived from the extended dynamic taxation model and its comparative-static/optimal-policy analysis; the timing rule for introducing an AI tax follows from the model's equilibrium conditions and welfare optimization.
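The source excerpt does not state the timing rule formally, but a stylized version, under the assumption (introduced here, not in the paper) that cognitive workers compare the cognitive wage $w_c(t)$ with the manual wage $w_m(t)$ net of a switching cost $\kappa \ge 0$, would be:

```latex
t^{*} = \inf \left\{\, t \ge 0 \;:\; w_c(t) - w_m(t) \le \kappa \,\right\}
```

with the AI tax introduced at $t^{*}$, the first instant at which switching to manual work becomes attractive for cognitive workers.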
JobMatchAI provides factor-wise explanations through resume-driven search workflows.
Paper states that the system gives factor-wise explanations and ties them to resume-driven workflows; the excerpt references interpretable reranking and demo artifacts but does not include user study or explanation-faithfulness metrics.
JobMatchAI optimizes utility across skill fit, experience, location, salary, and company preferences.
Paper claims the system's objective/utility function includes these factors and that the reranking/optimization accounts for them. No optimization algorithm details, weighting, or empirical utility gains are given in the excerpt.
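A hypothetical weighted-utility scorer over the factors the paper names can illustrate the shape of such an objective; the excerpt does not disclose JobMatchAI's actual objective, weights, or algorithm, so everything below is an assumed illustration:

```python
# Hypothetical weights over the factors named in the claim; not JobMatchAI's actual objective.
WEIGHTS = {"skill_fit": 0.35, "experience": 0.25, "location": 0.15,
           "salary": 0.15, "company_pref": 0.10}

def utility(job_scores: dict[str, float]) -> float:
    """Weighted sum of per-factor scores, each assumed to lie in [0, 1]."""
    return sum(WEIGHTS[f] * job_scores.get(f, 0.0) for f in WEIGHTS)

def rerank(jobs: dict[str, dict[str, float]]) -> list[str]:
    """Order candidate jobs by descending utility."""
    return sorted(jobs, key=lambda j: utility(jobs[j]), reverse=True)

candidates = {
    "job_a": {"skill_fit": 0.9, "experience": 0.8, "location": 0.2,
              "salary": 0.7, "company_pref": 0.5},
    "job_b": {"skill_fit": 0.6, "experience": 0.9, "location": 0.9,
              "salary": 0.9, "company_pref": 0.9},
}
print(rerank(candidates))  # → ['job_b', 'job_a']
```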
JobMatchAI is production-ready.
Paper explicitly describes JobMatchAI as "production-ready" and also claims a hosted website and installable package (artifacts consistent with deployment readiness). No formal certification, deployment metrics, or uptime/performance SLAs are provided in the excerpt.
Main drivers of attrition identified by the model are overtime, business-travel frequency, and promotion opportunities (each having higher influence than salary).
Feature importance analyses using permutation importance and aggregated SHAP values on the fitted logistic-regression model trained on the IBM HR Analytics dataset.
Non-monetary workplace factors (excessive overtime, frequent business travel, limited promotion opportunities) are stronger predictors of individual attrition risk than salary.
Interpretable logistic-regression model trained on the IBM HR Analytics dataset; global importance assessed using aggregated SHAP values and permutation importance to rank predictors. (Exact sample size and numeric importance ranks not provided in the summary.)
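Permutation importance itself is easy to sketch. The coefficients and synthetic data below are illustrative assumptions (the paper's model is fit on the IBM HR Analytics dataset, not reproduced here), chosen so that overtime outweighs salary as in the reported ranking:

```python
import math
import random

random.seed(0)

# Illustrative coefficients for a hand-specified logistic model, not the paper's fit.
COEF = {"overtime": 2.0, "travel": 1.2, "salary": -0.3}

def predict(row: dict) -> int:
    """Logistic classifier: predicts attrition (1) when the sigmoid exceeds 0.5."""
    z = sum(COEF[f] * row[f] for f in COEF)
    return 1 if 1 / (1 + math.exp(-z)) > 0.5 else 0

# Synthetic sample: uniform features, labels from the model with 10% noise flips.
data = []
for _ in range(500):
    row = {f: random.uniform(-1, 1) for f in COEF}
    label = predict(row) if random.random() > 0.1 else 1 - predict(row)
    data.append((row, label))

def accuracy(rows) -> float:
    return sum(predict(r) == y for r, y in rows) / len(rows)

def permutation_importance(feature: str) -> float:
    """Accuracy drop when one feature's values are shuffled across rows."""
    shuffled = [dict(r) for r, _ in data]
    values = [r[feature] for r in shuffled]
    random.shuffle(values)
    for r, v in zip(shuffled, values):
        r[feature] = v
    labels = [y for _, y in data]
    return accuracy(data) - accuracy(list(zip(shuffled, labels)))

for f in COEF:
    print(f, round(permutation_importance(f), 3))
```

Shuffling a feature breaks its link to the outcome while preserving its marginal distribution, so the accuracy drop isolates that feature's contribution; the same logic underlies scikit-learn's `permutation_importance`.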
Generative AI functions as a socio‑technical intermediary that facilitates interpretation, coordination, and decision support rather than merely automating discrete tasks.
Thematic analysis and co‑word linkage between terms related to interpretative work, coordination, and decision‑support and technical GenAI terms within the corpus.
The literature indicates a managerial shift away from hierarchical command‑and‑control toward guide‑and‑collaborate paradigms, where managers curate, guide, and coordinate AI‑augmented teams rather than micro‑manage tasks.
Synthesis of themes from the 212‑paper corpus (co‑word and thematic analyses) showing recurrent managerial/behavioural concepts such as autonomy, coordination, and decision‑support tied to GenAI discussions.
Higher educational attainment is positively associated with greater willingness to keep working before retirement.
Multivariate regression analysis of the cross-sectional survey (n=889) using education level as a key explanatory variable.
Male gender is positively associated with higher willingness to remain employed before retirement.
Multivariate regression on the survey sample (n=889) including gender as an explanatory variable, controlling for demographic and socioeconomic covariates.
Policy responses (active labor-market interventions, reskilling, lifelong learning, social insurance, redistribution) are needed to manage transitional inequality caused by AI-driven structural shifts in labor demand.
Policy implication drawn from reviewed empirical and theoretical literature on labor-market transitions and distributional impacts; presented as a recommendation without new empirical evaluation in this paper.
Economists should refine methods to measure AI adoption and incorporate AI-driven productivity gains into growth accounting while accounting for measurement challenges (quality change, task reallocation).
Methodological recommendation based on the review's identification of measurement difficulties in the existing empirical literature; the paper itself provides conceptual guidance rather than new measurement results.
AI has materially increased operational efficiency and productivity in industry, changing production processes and firm organization.
Qualitative integration of prior empirical studies and firm-level case studies cited in the literature review (industry analyses, adoption case examples); the paper itself does not provide new quantitative estimates or causal identification.
Immediate research priorities for AI economists include: field experiments testing NLP‑driven acquisition/personalization (measuring CAC, LTV, retention, consumer welfare); structural/empirical models of adoption that include data access costs and complementarities; and analyses of privacy regulation impacts on external text data availability and value.
Authors' set of recommended research directions derived from identified gaps in the systematic review and implications for AI economics.
Unit costs for bookkeeping and compliance tasks are likely to fall, potentially affecting professional services pricing and leading to consolidation.
Analytic inference from case advantages and industry literature; no empirical market-wide cost study included.
Generative AI can raise labor productivity in finance and tax, shifting work from routine processing to oversight, exceptions handling, and higher-value analysis.
Analytical framing supported by case observations and literature; presented as an expected economic effect rather than measured across a population.
Successful deployment requires new human capital: finance professionals with AI literacy, data governance, model validation, and control expertise.
Paper's labor and skills implications derived from case examples and analytic framing; recommendation-based observation rather than measured workforce data.
Generative AI provided better decision support via scenario analysis and anomaly prioritization.
Descriptive case examples and literature indicating use of LLMs and RAG systems for drafting scenarios and prioritizing anomalies; evidence is qualitative and illustrative.
Generative AI adoption produced cost savings through labor reallocation and task automation.
Qualitative evidence from Xiaomi and Deloitte case analysis and analytic framing suggesting lower labor requirements for routine tasks; no standardized cost-accounting or sample-wide cost metrics provided.
Using generative AI led to higher consistency and reduced human error in repetitive finance/tax tasks.
Case-driven qualitative observations from the two organizational examples and literature synthesis indicating reduced variability in repetitive processes when AI-assisted.
Generative AI deployment increased processing speed and throughput for routine finance and tax tasks.
Observed improvements reported in case studies (Xiaomi and Deloitte) and corroborating industry/literature sources described in the paper; qualitative descriptions rather than standardized time-motion metrics.
Applying generative AI within corporate financial sharing centers (illustrated by Xiaomi’s Financial Sharing Center) and professional services firms (Deloitte) materially improves the efficiency and accuracy of finance and tax operations.
Qualitative case analysis of two organizations (Xiaomi Financial Sharing Center and Deloitte) supplemented by literature review and analytical mapping; no large-scale quantitative measurement reported.
Phased deployment and regulatory sandboxes can lower barriers for startups to pilot lower-risk applications, thereby shaping innovation trajectories.
Comparative policy analysis of sandboxing and phased deployment approaches in other jurisdictions; prescriptive inference without empirical testing in Vietnam.
Properly governed AI can yield large efficiency gains (reduced processing time and lower per-case costs), but those gains depend on redesigning legal processes to accommodate algorithmic workflows.
Analytic synthesis of administrative-process characteristics and AI capabilities; no primary quantitative evidence or measured effect sizes provided.
Establishing a graduated implementation model and clear regulatory pathways reduces regulatory uncertainty and makes public-sector AI procurement and private-market participation more predictable and attractive.
Normative recommendation informed by comparative institutional analysis and economic reasoning; not empirically tested in the paper.
A graduated implementation model—phased deployment, differentiated safeguards by risk, and mandatory human oversight for high-stakes decisions—can balance innovation with rule-of-law protections.
Normative framework development combining doctrinal findings and comparative lessons; prescriptive recommendation rather than empirical validation.
Comparative analysis of international frameworks reveals a range of institutional responses and regulatory instruments that Vietnam could adapt.
Comparative institutional analysis synthesizing governance approaches from liberal and civil-law jurisdictions (review of secondary sources and policy frameworks).
AI can substantially modernize administrative decision-making in civil-law systems (speed, consistency, scalability).
Qualitative doctrinal and comparative institutional analysis using Vietnam as a focused case study; no primary quantitative field data or sample size.
Adoption of AI feedback could lower marginal costs of delivering high-quality feedback and change fixed vs. variable cost structures for instruction delivery.
Economic implication discussed by workshop participants (50 scholars) as a theoretical possibility; no quantitative cost estimates in the report.
Generative AI can enable new feedback modalities (text, hints, worked examples, formative prompts) adaptable to content and learner needs.
Thematic conclusions from the interdisciplinary meeting of 50 scholars, describing possible modality generation capabilities of current generative models; no empirical modality-comparison data provided.
Immediate AI-generated feedback may sustain learner momentum and improve formative assessment cycles (timeliness & engagement).
Expert-opinion synthesis from structured workshop (50 scholars) identifying timely feedback as a potential pedagogical benefit; no empirical trials reported.
Large language and generative models can tailor explanations, scaffolding, and practice to learners' current states and preferences (personalization).
Workshop expert consensus and thematic synthesis from 50 interdisciplinary scholars; illustrative examples discussed rather than empirical evaluation.
Generative AI can produce real-time, individualized feedback at scale, potentially reducing per-student feedback costs and increasing feedback frequency.
Synthesis of expert perspectives from an interdisciplinary workshop of 50 scholars (educational psychology, computer science, learning sciences); qualitative small-group activities and thematic extraction. No primary experimental or quantitative cost data presented.
Agents learn from one another without curricula (agent-to-agent learning occurs organically in the ecosystem).
Naturalistic daily observations across platforms noting peer-to-peer agent interactions and apparent transfer of behaviors/knowledge; no controlled tests of learning or counterfactuals.
Agents form idea cascades and quality hierarchies without any centrally designed curriculum or intervention (emergent peer learning and spontaneous knowledge diffusion).
Observed interaction patterns across platforms showing cascades, hierarchies, and diffusion among agents in the qualitative dataset; documentation is comparative and observational rather than experimental.
A rapidly growing ecosystem of autonomous AI agents is producing organic, multi-agent learning dynamics that go beyond dyadic human–AI interactions.
Naturalistic, qualitative daily observations over one month across multiple agent platforms (reported platforms: Moltbook, The Colony, 4claw); reported coverage of >167,000 agents interacting as peers; comparative observational documentation rather than controlled experimentation.