Evidence (1286 claims)
Claim counts by topic:
- Adoption: 5126
- Productivity: 4409
- Governance: 4049
- Human-AI Collaboration: 2954
- Labor Markets: 2432
- Org Design: 2273
- Innovation: 2215
- Skills & Training: 1902
- Inequality: 1286
Evidence Matrix
Claim counts by outcome category and direction of finding.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 369 | 105 | 58 | 432 | 972 |
| Governance & Regulation | 365 | 171 | 113 | 54 | 713 |
| Research Productivity | 229 | 95 | 33 | 294 | 655 |
| Organizational Efficiency | 354 | 82 | 58 | 34 | 531 |
| Technology Adoption Rate | 277 | 115 | 63 | 27 | 486 |
| Firm Productivity | 273 | 33 | 68 | 10 | 389 |
| AI Safety & Ethics | 112 | 177 | 43 | 24 | 358 |
| Output Quality | 228 | 61 | 23 | 25 | 337 |
| Market Structure | 105 | 118 | 81 | 14 | 323 |
| Decision Quality | 154 | 68 | 33 | 17 | 275 |
| Employment Level | 68 | 32 | 74 | 8 | 184 |
| Fiscal & Macroeconomic | 74 | 52 | 32 | 21 | 183 |
| Skill Acquisition | 85 | 31 | 38 | 9 | 163 |
| Firm Revenue | 96 | 30 | 22 | — | 148 |
| Innovation Output | 100 | 11 | 20 | 11 | 143 |
| Consumer Welfare | 66 | 29 | 35 | 7 | 137 |
| Regulatory Compliance | 51 | 61 | 13 | 3 | 128 |
| Inequality Measures | 24 | 66 | 31 | 4 | 125 |
| Task Allocation | 64 | 6 | 28 | 6 | 104 |
| Error Rate | 42 | 47 | 6 | — | 95 |
| Training Effectiveness | 55 | 12 | 10 | 16 | 93 |
| Worker Satisfaction | 42 | 32 | 11 | 6 | 91 |
| Task Completion Time | 71 | 5 | 3 | 1 | 80 |
| Wages & Compensation | 38 | 13 | 19 | 4 | 74 |
| Team Performance | 41 | 8 | 15 | 7 | 72 |
| Hiring & Recruitment | 39 | 4 | 6 | 3 | 52 |
| Automation Exposure | 17 | 15 | 9 | 5 | 46 |
| Job Displacement | 5 | 28 | 12 | — | 45 |
| Social Protection | 18 | 8 | 6 | 1 | 33 |
| Developer Productivity | 25 | 1 | 2 | 1 | 29 |
| Worker Turnover | 10 | 12 | — | 3 | 25 |
| Creative Output | 15 | 5 | 3 | 1 | 24 |
| Skill Obsolescence | 3 | 18 | 2 | — | 23 |
| Labor Share of Income | 7 | 4 | 9 | — | 20 |
Inequality
The federal U.S. approach to AI governance combines export controls for key AI hardware/software with a relatively permissive domestic regulatory stance that relies on executive guidance, voluntary standards, and sector-specific measures rather than comprehensive federal worker protections.
Comparative policy and legal review of federal-level instruments (export control lists, executive orders, agency guidance, proposed/final rules) described in the commentary; no primary empirical data or sample size.
The report has limited primary quantitative impact evaluation and relies on policy texts and secondary sources rather than large-scale empirical measurement of AI’s economic effects.
Explicit limitations section in the report describing methods and data constraints.
Methodological needs for AI-era labor models include dynamic skill taxonomies, high-frequency labor data (job postings, firm-level automation measures), and uncertainty quantification.
The paper's 'Research & policy recommendations' and 'Methodological needs' sections (explicit recommendations).
The scenario analysis framework varies economic growth, automation rates, policy interventions, and investment to produce probabilistic demand–supply gaps.
Methods description of scenario analysis components and the variables varied in scenario experiments (explicit in Data & Methods).
Intended users of the Hub include organizations, educational institutions, and policymakers to inform reskilling/education strategies, regional economic policy, and labor-market interventions.
Explicit statement of target users and use cases in the Key Points / Implications sections.
The system produces interpretable outputs for stakeholders: demand–supply trend analysis, geospatial hotspot maps, skill-gap radar charts, and policy simulation dashboards.
Paper's description of outputs and interactive visual analytics (listed output modalities).
The core modeling approach uses probabilistic growth modeling combined with intelligent skill synthesis to estimate future workforce requirements under alternative economic and policy scenarios.
Methods section describing the modeling components: probabilistic growth modeling and intelligent skill synthesis (architectural description).
The platform integrates multiple indicators such as regional economic growth projections, automation velocity, policy intervention strength, investment intensity, and market volatility (macro- and micro-level indicators).
List of input indicators given in the Data & Methods section of the paper (explicit enumeration of macro and micro variables).
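To make the platform's mechanics concrete, here is a minimal Monte Carlo sketch in Python of the scenario analysis and indicator set described above; every functional form, parameter name, and value is an illustrative assumption for exposition, not the Hub's actual model.

```python
import numpy as np

rng = np.random.default_rng(42)

def simulate_gap(n_draws=10_000,
                 growth_mean=0.025, growth_sd=0.01,   # regional growth projection
                 automation_rate=0.04,                 # automation velocity
                 policy_strength=0.5,                  # policy intervention strength
                 investment_intensity=0.3,             # investment intensity
                 volatility=0.02):                     # market volatility
    """Illustrative probabilistic demand-supply gap for one skill cluster.

    Hypothetical stand-in for the platform's 'probabilistic growth
    modeling' component; all forms and parameters are invented.
    """
    growth = rng.normal(growth_mean, growth_sd, n_draws)
    shock = rng.normal(0.0, volatility, n_draws)
    # Demand rises with growth and falls as automation substitutes for the skill.
    demand = 1.0 + growth - automation_rate + shock
    # Supply responds to policy interventions and training investment.
    supply = 1.0 + 0.5 * policy_strength * investment_intensity
    return demand - supply  # positive = projected shortage

gaps = simulate_gap()
print(f"mean gap: {gaps.mean():+.3f}, P(shortage): {(gaps > 0).mean():.1%}")
```

Re-running the simulation over a grid of automation_rate and policy_strength values mirrors the scenario experiments described above.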
The paper's conclusions are limited by reliance on secondary sources, heterogeneous cross‑study comparisons, limited causal identification of long‑run macro effects, and measurement challenges for AI‑driven intangible capital.
Authors' stated limitations section summarizing the nature of evidence used (qualitative literature review, secondary macro indicators, sectoral examples); this is an explicit self‑reported methodological limitation rather than an external empirical finding.
Priority research areas include evaluating long‑run distributional impacts of AI diffusion in agriculture, interactions between digital technologies and labor markets, inclusive financing models for adoption, and macroeconomic effects on food prices and trade.
Stated research agenda and gap analysis in the paper’s conclusions, derived from the review of existing literature and identified gaps.
The current evidence base has gaps: more rigorous impact evaluations, long‑term soil and emissions accounting, and studies on distributional outcomes are needed.
Meta‑assessment within the paper noting limitations of existing literature (many short‑term pilots, limited long‑run soil/emissions data, few studies on who captures value); the claim is based on the review's appraisal of methods used in cited studies.
Economists and policymakers should fund long‑run evaluations (RCTs, quasi‑experimental designs) to estimate causal effects of AI interventions on productivity, welfare, and environmental outcomes.
Evidence‑gap analysis and policy recommendations in the paper; explicit call for rigorous impact evaluation methods given current paucity of long‑run causal evidence.
There are limited long‑run randomized controlled trials (RCTs) on AI/IoT impacts for smallholders and scarce cross‑country data on distributional effects.
Literature review and evidence‑gap identification within the study; explicit statement that long‑run RCTs and cross‑country distributional data are scarce.
Heterogeneous contexts mean impacts vary; careful piloting, monitoring, and adaptive policy are necessary to manage uncertainty in outcomes.
Synthesis and explicit discussion of uncertainties; evidence gaps section noting variable results across regions and interventions.
This paper is a narrative review synthesizing heterogeneous studies and case reports rather than providing meta-analytic estimates of effect sizes.
Methods statement in the paper describing review type as narrative synthesis and noting limitations (no meta-analysis).
Measurement and research gaps (data scarcity, informality) complicate robust economic assessment of AI impacts; improved metrics, granular labour and firm‑level data, and mixed‑methods evaluation are required.
Methodological critique based on reviewed literature and identified gaps; no new data collection in the paper.
There is a need for causal, longitudinal studies on how AI‑enabled fintech affects women's portfolio outcomes and on algorithmic interventions designed to reduce gender gaps.
Explicit statement in the paper noting limitations of existing literature (heterogeneity, limited longitudinal causal evidence, possible platform sample selection).
Child-specific surveillance across human, animal, and environmental domains is sparse, limiting understanding of pediatric One Health risks.
Authors' methodological assessment based on literature search and review; explicit limitation stated that standardized child-focused surveillance data are lacking and heterogeneous across sectors.
The legal arguments create some uncertainty about scope and enforcement timelines; economic actors will respond to expected enforcement probabilities and expected sanctions, so clarity from regulators or courts will shape the ultimate economic effects.
Doctrinal acknowledgement of legal uncertainty combined with standard economic modeling of regulatory expectations; no empirical modeling in the Article.
The paper is primarily legal/policy scholarship rather than an empirical assessment of the prevalence or magnitude of discrimination in EdTech; it does not provide econometric estimates of harm.
Explicit limitation noted in the Article (self‑reported).
The Article's evidence consists of illustrative case law and statutory text rather than empirical datasets; it builds doctrinal chains, hypotheticals, and applications of statutory language to modern procurement and EdTech deployment models.
Explicit description of evidence and limits in the Article (self‑reported).
Methodologically, the paper uses doctrinal legal analysis and policy argumentation — close reading of federal civil‑rights statutes, administrative guidance, and judicial decisions interpreting 'recipient' and 'federal financial assistance.'
Explicit methodological statement in the Article (self‑reported).
The legal argument is grounded in statutory interpretation and precedent about the scope of 'recipient' and how federal financial assistance flows and influence should be understood.
Doctrinal analysis of statutes, administrative guidance, and judicial decisions cited and discussed in the Article.
Empirical validation of the book’s proposals would require complementary case studies, model documentation, and outcome measurements.
Author/reviewer recommendation in the blurb about methodological limitations and next steps; not an empirical finding.
The book is predominantly conceptual and policy-analytic and uses illustrative case vignettes rather than presenting a single empirical study.
Explicit methodological description in the Data & Methods blurb: synthesis of technical ideas, governance requirements, and illustrative vignettes; no empirical sample or experimental protocol described.
The research program is grounded in 12 years of forensic legal research spanning 2014–2026.
Author-stated research timeline and methodology (2014–2026 forensic legal research).
The protocol is underpinned by a forensic audit of approximately 4,200 specialized texts (legal doctrine, regulation, standards, technical literature).
Stated corpus and audit in the Methods section: ~4,200 texts reviewed as part of the forensic audit.
The protocol systematizes arguments for 16 projected rulings at Mexico’s Supreme Court (SCJN) to anchor the proposed rights and rules in constitutional practice.
Doctrinal projection and constitutional strategy section of the compendium describing 16 projected SCJN rulings (method: legal projection/modeling).
The compendium’s findings and recommendations are based on a forensic audit of approximately 4,200 specialized texts covering doctrine, jurisprudence, regulation and technical literature.
Stated methodological claim in the compendium: forensic corpus audit of ~4,200 texts (sample size reported).
Limitations of the review include the small sample of studies, uneven geographic coverage, heterogeneity in methods across studies, and limited long‑run evidence (especially on generative AI), which complicate causal aggregation.
Author-reported limitations based on the meta-assessment of the 17 included studies (variation in methods, contexts, and time horizons).
This work is designed as a systematic literature review and meta‑synthesis of empirical findings from peer‑reviewed journals (2020–2025), based on 17 publications.
Stated methods and inclusion criteria of the paper: systematic review of peer‑reviewed literature (sample = 17).
Long-term evidence on generative AI’s structural labor‑market effects is scarce; few longitudinal studies exist.
Assessment of study horizons and methods among the 17 papers indicates limited long-run and longitudinal analyses specifically on generative AI impacts.
Empirical coverage is limited for low‑income countries; evidence from such settings is scarce.
Geographic distribution of the 17 reviewed studies shows concentration in advanced economies with few or no studies focused on low-income countries.
The literature shows a surge in research activity on AI and labor markets in 2023–2025 and a concentration of studies in advanced economies.
Meta-analytic summary of the publication years and geographic focus among the 17 selected publications (temporal and geographic count of included studies).
Results depend on accurate skill extraction from vacancy texts and valid measures of occupational exposure/complementarity; causal interpretation of diffusion effects may be limited by endogeneity (e.g., technology adoption responding to labor-market conditions).
Authors' stated methodological limitations: reliance on text-analysis identification of skills and on constructed measures of exposure/complementarity; acknowledgement of endogeneity concerns limiting causal claims.
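As a rough illustration of the text-analysis step these limitations concern, a lexicon-matching skill extractor might look like the Python sketch below; the skill list and exact-match rule are hypothetical, and real pipelines typically use curated taxonomies and learned matchers rather than this toy lexicon.

```python
import re

# Hypothetical mini skill lexicon; production systems use curated
# taxonomies with thousands of entries and learned (NER/embedding) matchers.
SKILL_LEXICON = {"python", "sql", "machine learning", "data analysis",
                 "project management", "cloud computing"}

def extract_skills(vacancy_text: str) -> set[str]:
    """Return lexicon skills found in a job-posting text (exact match)."""
    text = vacancy_text.lower()
    return {s for s in SKILL_LEXICON
            if re.search(r"\b" + re.escape(s) + r"\b", text)}

posting = ("We seek an analyst with strong SQL and Python skills "
           "and experience building machine learning pipelines.")
print(extract_skills(posting))  # -> {'sql', 'python', 'machine learning'} (set order varies)
```

Exposure and complementarity measures built downstream of such extraction inherit its noise, which is the measurement concern flagged above.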
The paper proposes two conceptual models (AI/ML‑Driven Labor Market Transformation Model and Sectoral Impact and Resilience Model) to organize heterogeneous findings and generate testable hypotheses about how AI reshapes labor across sectors and skill levels.
Conceptual synthesis integrating Technological Determinism, Socio‑Technical Systems Theory (STS), and Skill‑Biased Technological Change (SBTC); the models are theoretical outputs of the review used to map mechanisms and heterogeneity rather than empirical findings.
There are substantial measurement and identification gaps in the literature: heterogeneity in measuring 'AI adoption', limited long‑run causal evidence, and geographic bias toward advanced economies.
Methodological assessment within the review noting variability across studies in AI measures (patents, investment, task exposure proxies), paucity of long‑run causal designs, and concentration of empirical studies in advanced economies; this is a meta‑evidence limitation statement.
Quasi-experimental designs (difference-in-differences, instrumental variables, event studies) and panel regressions are useful methods for identifying causal effects of AI adoption where plausibly exogenous variation exists.
Methodological summary in the paper listing common empirical strategies used in the literature to estimate causal impacts of technology adoption.
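A minimal difference-in-differences computation of the kind this summary lists is sketched below on synthetic data; the group means, the 0.8 'adoption effect', and the parallel-trends setup are invented for illustration, not drawn from any study in the review.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic firm outcomes: 'treated' firms adopt AI between periods.
pre_control  = rng.normal(10.0, 1.0, 500)
post_control = rng.normal(10.5, 1.0, 500)   # common time trend only
pre_treated  = rng.normal(10.0, 1.0, 500)
post_treated = rng.normal(11.3, 1.0, 500)   # trend + 0.8 adoption effect

# DiD: (treated post - treated pre) - (control post - control pre).
# Under parallel trends this differences out the common time shock.
did = ((post_treated.mean() - pre_treated.mean())
       - (post_control.mean() - pre_control.mean()))
print(f"DiD estimate of adoption effect: {did:.2f} (true effect: 0.80)")
```

The same logic extends to regression form with firm and period fixed effects, which is how panel studies in this literature typically implement it.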
Current research is limited by measurement challenges in capturing AI capabilities and firm-level adoption, and by a lack of longitudinal worker-firm data and causal identification in many settings.
Explicit limitations noted by the paper: gaps in task measures, scarce longitudinal linked datasets, and methodological challenges in causal inference.
This paper's approach is qualitative and based on secondary literature synthesis; it does not collect primary survey, experimental, or administrative data.
Explicit statement in the Data & Methods section of the paper.
Key empirical gaps remain: better measurement of K_T (AI/software capital), more granular matched employer‑employee and wealth data, and improved estimates of task-substitution elasticities are required to precisely quantify incidence and policy impacts.
Authors’ stated research agenda and limitations section, including sensitivity analyses showing outcome variation with parameter choices and measurement uncertainty.
Endogenous structural break analysis identifies 2007 as the break year for AI introduction in India.
Empirical analysis reported in the paper using an endogenous structural break test applied to relevant time-series data (paper states 2007 was identified as the break year).
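To illustrate what an endogenous break search does (without reproducing the paper's actual test or data), here is a simplified Quandt-style sketch in Python on an invented series with a level shift at 2007:

```python
import numpy as np

# Invented annual series, 1991-2020, with a level shift at 2007.
rng = np.random.default_rng(1)
years = np.arange(1991, 2021)
y = 2.0 + 0.5 * (years >= 2007) + rng.normal(0, 0.1, years.size)

def best_break(years, y, trim=3):
    """Endogenous break search: choose the candidate year that most
    reduces the sum of squared residuals of a two-mean fit."""
    best = None
    for k in range(trim, years.size - trim):
        ssr = (((y[:k] - y[:k].mean()) ** 2).sum()
               + ((y[k:] - y[k:].mean()) ** 2).sum())
        if best is None or ssr < best[1]:
            best = (years[k], ssr)
    return best[0]

print("estimated break year:", best_break(years, y))  # expected: 2007 (shift is large vs. noise)
```

Formal tests (e.g., Zivot-Andrews or Bai-Perron style procedures) add trend terms and critical values, but the core idea of letting the data choose the break date is the same.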
A shift in preference towards non-traded AI services exacerbates income inequality among previously homogeneous workers in the non-traded sector (model finding).
Results from the paper's Finite Change General Equilibrium (theoretical) model which introduces AI as a shock in the non-traded sector and analyzes effects via price adjustments.
Artificial intelligence (AI)-induced services are a reality in India and other developing countries.
Statement in paper citing existence/emergence of AI-powered services (examples given: Windows Live, AI ride-hailing apps such as Ola and Uber); descriptive assertion rather than quantified empirical analysis in the paper.
EcoThink offers a scalable path toward a sustainable, inclusive, and energy-efficient generative AI agent.
Concluding claim in the paper asserting broader impact and scalability of the proposed method (position/interpretive claim based on reported results).
Extensive evaluations were performed across 9 diverse benchmarks.
Statement in the paper that evaluations were run on 9 benchmarks (as stated in the abstract).
EcoThink employs a lightweight, distillation-based router to dynamically assess query complexity, skipping unnecessary reasoning for factoid retrieval while reserving deep computation for complex logic.
Methodological description of the proposed framework in the paper (design/architecture claim).
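An illustrative shape for such a router is sketched below in Python; the feature set, weights, threshold, and the two handler paths are assumptions for exposition, since the actual EcoThink router is a distilled learned model rather than a hand-written scorer.

```python
import math

def complexity_score(query: str) -> float:
    """Toy stand-in for a distilled complexity router: a tiny logistic
    scorer over surface features (all weights are made up)."""
    length = len(query.split()) / 30.0
    reasoning_cue = any(w in query.lower()
                        for w in ("why", "prove", "derive", "compare"))
    multi_part = query.count("?") > 1
    z = -1.5 + 2.0 * length + 1.8 * reasoning_cue + 1.2 * multi_part
    return 1.0 / (1.0 + math.exp(-z))

def answer(query: str) -> str:
    # Cheap retrieval path for factoid queries; deep reasoning otherwise.
    if complexity_score(query) < 0.5:
        return f"[fast retrieval] {query}"
    return f"[deep reasoning] {query}"

print(answer("Capital of France?"))
print(answer("Why does raising interest rates usually reduce inflation?"))
```

The energy savings reported below come precisely from the fast path handling the bulk of simple queries.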
EcoThink reduces inference energy by up to 81.9% for web knowledge retrieval.
Experimental result reported in the paper (maximum observed reduction for the web knowledge retrieval benchmark, as stated in the abstract).
EcoThink reduces inference energy by 40.4% on average across 9 diverse benchmarks.
Experimental evaluations reported in the paper across 9 benchmarks comparing inference energy of EcoThink versus baseline (as stated in the abstract).
Integrating AI into financial ecosystems can strengthen both economic and climate resilience, provided that regulatory frameworks, ethical AI practices, and capacity-building measures are simultaneously addressed.
Paper's concluding recommendation based on combined qualitative and quantitative findings from the three case studies and the 1,500 interviews; framed as conditional policy guidance in the abstract.