Evidence (5267 claims)
Adoption: 5267 claims
Productivity: 4560 claims
Governance: 4137 claims
Human-AI Collaboration: 3103 claims
Labor Markets: 2506 claims
Innovation: 2354 claims
Org Design: 2340 claims
Skills & Training: 1945 claims
Inequality: 1322 claims
Evidence Matrix
Claim counts by outcome category and direction of finding.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 378 | 106 | 59 | 455 | 1007 |
| Governance & Regulation | 379 | 176 | 116 | 58 | 739 |
| Research Productivity | 240 | 96 | 34 | 294 | 668 |
| Organizational Efficiency | 370 | 82 | 63 | 35 | 553 |
| Technology Adoption Rate | 296 | 118 | 66 | 29 | 513 |
| Firm Productivity | 277 | 34 | 68 | 10 | 394 |
| AI Safety & Ethics | 117 | 177 | 44 | 24 | 364 |
| Output Quality | 244 | 61 | 23 | 26 | 354 |
| Market Structure | 107 | 123 | 85 | 14 | 334 |
| Decision Quality | 168 | 74 | 37 | 19 | 301 |
| Fiscal & Macroeconomic | 75 | 52 | 32 | 21 | 187 |
| Employment Level | 70 | 32 | 74 | 8 | 186 |
| Skill Acquisition | 89 | 32 | 39 | 9 | 169 |
| Firm Revenue | 96 | 34 | 22 | — | 152 |
| Innovation Output | 106 | 12 | 21 | 11 | 151 |
| Consumer Welfare | 70 | 30 | 37 | 7 | 144 |
| Regulatory Compliance | 52 | 61 | 13 | 3 | 129 |
| Inequality Measures | 24 | 68 | 31 | 4 | 127 |
| Task Allocation | 75 | 11 | 29 | 6 | 121 |
| Training Effectiveness | 55 | 12 | 12 | 16 | 96 |
| Error Rate | 42 | 48 | 6 | — | 96 |
| Worker Satisfaction | 45 | 32 | 11 | 6 | 94 |
| Task Completion Time | 78 | 5 | 4 | 2 | 89 |
| Wages & Compensation | 46 | 13 | 19 | 5 | 83 |
| Team Performance | 44 | 9 | 15 | 7 | 76 |
| Hiring & Recruitment | 39 | 4 | 6 | 3 | 52 |
| Automation Exposure | 18 | 17 | 9 | 5 | 50 |
| Job Displacement | 5 | 31 | 12 | — | 48 |
| Social Protection | 21 | 10 | 6 | 2 | 39 |
| Developer Productivity | 29 | 3 | 3 | 1 | 36 |
| Worker Turnover | 10 | 12 | — | 3 | 25 |
| Skill Obsolescence | 3 | 19 | 2 | — | 24 |
| Creative Output | 15 | 5 | 3 | 1 | 24 |
| Labor Share of Income | 10 | 4 | 9 | — | 23 |
Adoption
The paper's empirical approach is primarily qualitative and interpretive: a systematic literature review plus comparative qualitative case studies, using policy documents, public diplomacy examples, development initiatives, technology export and standards behaviour, and secondary empirical studies as evidence.
The Methods section of the paper explicitly states the approach and evidence types; a sample of four comparative cases (US, China, EU, Russia) is specified.
The paper demonstrates different mixes and institutional practices of smart power in practice by applying the framework to the United States, China, the European Union, and Russia.
Explicit comparative qualitative case studies of four major international actors (sample size: four cases) using policy documents, public diplomacy examples, and development/technology initiatives as illustrative evidence.
Empirical validation of the book’s proposals would require complementary case studies, model documentation, and outcome measurements.
Author/reviewer recommendation in the blurb about methodological limitations and next steps; not an empirical finding.
The book is predominantly conceptual and policy-analytic and uses illustrative case vignettes rather than presenting a single empirical study.
Explicit methodological description in the Data & Methods blurb: synthesis of technical ideas, governance requirements, and illustrative vignettes; no empirical sample or experimental protocol described.
The research program is grounded in 12 years of forensic legal research spanning 2014–2026.
Author-stated research timeline and methodology (2014–2026 forensic legal research).
The protocol is underpinned by a forensic audit of approximately 4,200 specialized texts (legal doctrine, regulation, standards, technical literature).
Stated corpus and audit in the Methods section: ~4,200 texts reviewed as part of the forensic audit.
The protocol systematizes arguments for 16 projected rulings at Mexico’s Supreme Court (SCJN) to anchor the proposed rights and rules in constitutional practice.
Doctrinal projection and constitutional strategy section of the compendium describing 16 projected SCJN rulings (method: legal projection/modeling).
The compendium’s findings and recommendations are based on a forensic audit of approximately 4,200 specialized texts covering doctrine, jurisprudence, regulation and technical literature.
Stated methodological claim in the compendium: forensic corpus audit of ~4,200 texts (sample size reported).
The evidence base is qualitative: the study uses conceptual framework synthesis, comparative analysis of multi-sector implementations, and case examples rather than randomized or large-sample empirical evaluation.
Methods and limitations section of the paper explicitly describing the evidence base and methods (qualitative synthesis, pattern extraction, cross-case lessons).
The paper presents a deployment pattern intended to be adapted by sector and regulatory context rather than a one-size-fits-all blueprint.
Explicit statement in the paper and the described pattern design; based on qualitative pattern extraction and prescriptive guidance.
Partial least squares structural equation modeling (PLS-SEM) was used to test hypothesized direct, mediated, and moderated paths.
Methods/analysis section states PLS-SEM was the statistical approach to estimate paths, mediation, and moderation effects.
The study employed a 2 × 2 between-subjects experimental design manipulating (1) identity disclosure (transparent vs. nondisclosed) and (2) conversational tone (empathetic/personalized vs. generic).
Explicit description of experimental factors and design in the methods (2 × 2 between-subjects).
Stimuli (chatbot dialogues) were standardized and pretested using a large-language-model (LLM) workflow to ensure consistent experimental stimuli across conditions.
Methods section describing stimuli creation: LLM-generated dialogues were produced and pretested to standardize messages across the 2 × 2 conditions.
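The four cells of the described 2 × 2 between-subjects design can be enumerated directly. A minimal sketch, assuming round-robin assignment (the paper's actual randomization procedure is not reported; factor labels follow the description above):

```python
from itertools import product

# The two manipulated factors described in the methods.
disclosure = ["transparent", "nondisclosed"]
tone = ["empathetic", "generic"]

# Four between-subjects cells: each participant sees exactly one combination.
cells = list(product(disclosure, tone))

def assign(participant_id: int) -> tuple[str, str]:
    """Round-robin assignment of a participant to one of the four cells."""
    return cells[participant_id % len(cells)]

for pid in range(4):
    print(pid, assign(pid))
```

In practice assignment would be randomized rather than round-robin; the point is that between-subjects means each participant is mapped to exactly one of the four disclosure-by-tone cells.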
Methodological claim: combining fixed-effects panel estimation, mediation analysis, and panel threshold models is an effective multi-method approach to (a) estimate average effects, (b) unpack causal channels, and (c) detect nonlinear stage-dependent impacts.
The paper's applied methodology: fixed-effects panel regressions, mediation framework, and panel threshold modeling on the 2012–2022 provincial panel.
The paper constructs a multidimensional digitalization index composed of digital infrastructure, digital service capacity, and the digital development environment.
Index construction described in data/methods: composite indicator combining measures of connectivity/broadband (infrastructure), e-commerce/digital finance (service capacity), and policy/institutional/human capital indicators (development environment).
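One common way such a composite index is built is min-max normalization of each pillar followed by weighted aggregation. A minimal sketch with made-up province scores and equal weights (the paper's actual weighting scheme is not reproduced here):

```python
# Hypothetical raw pillar scores for three provinces (units differ by pillar).
provinces = {
    "A": {"infrastructure": 80.0, "service_capacity": 30.0, "environment": 0.6},
    "B": {"infrastructure": 50.0, "service_capacity": 70.0, "environment": 0.4},
    "C": {"infrastructure": 20.0, "service_capacity": 50.0, "environment": 0.9},
}
WEIGHTS = {"infrastructure": 1 / 3, "service_capacity": 1 / 3, "environment": 1 / 3}

def minmax(values):
    """Rescale a list of raw scores to [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

# Normalize each pillar across provinces, then aggregate with the weights.
pillars = list(WEIGHTS)
names = list(provinces)
normalized = {
    p: dict(zip(names, minmax([provinces[n][p] for n in names]))) for p in pillars
}
index = {n: sum(WEIGHTS[p] * normalized[p][n] for p in pillars) for n in names}

for n, v in index.items():
    print(f"province {n}: digitalization index = {v:.3f}")
```

Normalization matters because the three pillars are measured in incommensurable units (broadband penetration, e-commerce volume, institutional indicators).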
The regional average minimum cost of salaried labor (MCSL) was 43.1% of GDP per worker in 2023.
Computed for the same 19-country sample (baseline 2023) using country statutory employer obligations and reporting MCSL relative to GDP per worker following the updated IDB approach.
The regional average non-wage cost of salaried labor (NWC) in Latin America and the Caribbean was 51.1% of formal wages in 2023.
Calculated for a sample of 19 Latin American and Caribbean countries for baseline year 2023 by compiling country-specific statutory employer obligations (payroll taxes, social contributions, mandated benefits, severance, etc.) and expressing employer non-wage costs relative to formal wages using the updated IDB methodology.
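The two ratios above (NWC relative to formal wages, MCSL relative to GDP per worker) can be sketched arithmetically. All figures below are invented for illustration, and the wage base stands in for whichever base the updated IDB methodology specifies:

```python
# Illustrative only: hypothetical figures for one country, per worker-year.
formal_wage = 10_000          # wage base (per the IDB methodology)
payroll_taxes = 1_800         # statutory employer payroll taxes
social_contributions = 2_200  # mandated social contributions
mandated_benefits = 700       # bonuses, vacation, severance accruals, etc.
gdp_per_worker = 30_000

non_wage_costs = payroll_taxes + social_contributions + mandated_benefits

# NWC: employer non-wage costs relative to formal wages.
nwc = non_wage_costs / formal_wage

# MCSL: total cost of salaried labor (wage + non-wage costs),
# expressed relative to GDP per worker.
mcsl = (formal_wage + non_wage_costs) / gdp_per_worker

print(f"NWC = {nwc:.1%}, MCSL = {mcsl:.1%}")
```

The regional figures reported above (NWC 51.1%, MCSL 43.1%) are averages of such country-level ratios across the 19-country sample.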
Limitations of the review include the small sample of studies, uneven geographic coverage, heterogeneity in methods across studies, and limited long‑run evidence (especially on generative AI), which complicate causal aggregation.
Author-reported limitations based on the meta-assessment of the 17 included studies (variation in methods, contexts, and time horizons).
Design of this work: a systematic literature review and meta‑synthesis of empirical findings from peer‑reviewed journals (2020–2025), based on 17 publications.
Stated methods and inclusion criteria of the paper: systematic review of peer‑reviewed literature (sample = 17).
Long-term evidence on generative AI’s structural labor‑market effects is scarce; few longitudinal studies exist.
Assessment of study horizons and methods among the 17 papers indicates limited long-run and longitudinal analyses specifically on generative AI impacts.
Empirical coverage is limited for low‑income countries; evidence from such settings is scarce.
Geographic distribution of the 17 reviewed studies shows concentration in advanced economies with few or no studies focused on low-income countries.
The literature shows a surge in research activity on AI and labor markets in 2023–2025 and a concentration of studies in advanced economies.
Meta-analytic summary of the publication years and geographic focus among the 17 selected publications (temporal and geographic count of included studies).
Results depend on accurate skill extraction from vacancy texts and valid measures of occupational exposure/complementarity; causal interpretation of diffusion effects may be limited by endogeneity (e.g., technology adoption responding to labor-market conditions).
Authors' stated methodological limitations: reliance on text-analysis identification of skills and on constructed measures of exposure/complementarity; acknowledgement of endogeneity concerns limiting causal claims.
The paper proposes two conceptual models (AI/ML‑Driven Labor Market Transformation Model and Sectoral Impact and Resilience Model) to organize heterogeneous findings and generate testable hypotheses about how AI reshapes labor across sectors and skill levels.
Conceptual synthesis integrating Technological Determinism, Socio‑Technical Systems Theory (STS), and Skill‑Biased Technological Change (SBTC); the models are theoretical outputs of the review used to map mechanisms and heterogeneity rather than empirical findings.
There are substantial measurement and identification gaps in the literature: heterogeneity in measuring 'AI adoption', limited long‑run causal evidence, and geographic bias toward advanced economies.
Methodological assessment within the review noting variability across studies in AI measures (patents, investment, task exposure proxies), paucity of long‑run causal designs, and concentration of empirical studies in advanced economies; this is a meta‑evidence limitation statement.
The study maps employment channels for AI-competent graduates and documents the most frequent job titles/roles and associated wage levels.
Descriptive analysis of employer channels, occupational role frequencies, and wage data compiled in the monitoring dataset covering graduates and alternative-route entrants.
Quasi-experimental designs (difference-in-differences, instrumental variables, event studies) and panel regressions are useful methods for identifying causal effects of AI adoption where plausibly exogenous variation exists.
Methodological summary in the paper listing common empirical strategies used in the literature to estimate causal impacts of technology adoption.
Current research is limited by measurement challenges in capturing AI capabilities and firm-level adoption, and by a lack of longitudinal worker-firm data and causal identification in many settings.
Explicit limitations noted by the paper: gaps in task measures, scarce longitudinal linked datasets, and methodological challenges in causal inference.
This paper's approach is qualitative and based on secondary literature synthesis; it does not collect primary survey, experimental, or administrative data.
Explicit statement in the Data & Methods section of the paper.
Key empirical gaps remain: better measurement of K_T (AI/software capital), more granular matched employer‑employee and wealth data, and improved estimates of task-substitution elasticities are required to precisely quantify incidence and policy impacts.
Authors’ stated research agenda and limitations section, including sensitivity analyses showing outcome variation with parameter choices and measurement uncertainty.
The study points to the need for longitudinal, experimental, or platform-log-based designs to establish causality and measure heterogeneity across platforms.
Authors' methodological recommendations and proposed empirical agenda built on limitations of their cross-sectional survey (N = 450) and literature gaps.
Policy and practice interventions (media literacy, platform design changes, mandated diversity, etc.) are recommended to increase informational diversity and mitigate polarization.
Policy recommendations derived from study findings and literature discussion; not evaluated experimentally in the paper (authors propose interventions as implications).
Algorithmic recommendation (structural) and user selective consumption (behavioural) jointly reinforce ideological positions in digital spaces.
Interpretation based on observed associations between selective exposure and polarization plus reported heterogeneity in perceived algorithmic influence from the N = 450 survey; authors frame results as indicating interacting structural and behavioural mechanisms.
Higher levels of selective exposure are positively associated with increased ideological polarization.
Correlational analyses (reported associations / regression-style tests) using survey measures of selective exposure and measures of opinion/political polarization in the same cross-sectional sample (N = 450).
A large majority of respondents reported frequent exposure to content aligned with their preexisting views (widespread echo chambers / filter bubbles).
Quantitative cross-sectional survey of N = 450 active social media users; self-reported measures of content consumption and indicators of selective exposure; descriptive statistics showing most respondents frequently encounter ideologically consonant content.
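The correlational test described here amounts to computing an association between two survey scales. A minimal sketch on synthetic data (the real study uses N = 450; these ten observations are invented):

```python
import math

# Synthetic stand-ins for two survey scales.
selective_exposure = [2, 3, 3, 4, 5, 5, 6, 7, 8, 9]
polarization       = [1, 2, 3, 3, 4, 6, 5, 7, 7, 9]

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

r = pearson_r(selective_exposure, polarization)
print(f"r = {r:.3f}")
```

As the claim above notes, a positive r in a cross-sectional sample establishes association, not causal direction.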
An AI agent given revealed-preference data predicts subjects' choices more accurately than an AI agent given stated-preference prompts.
Online experiment in which subjects provided written instructions (prompts) and revealed preferences via choices in a series of binary lottery questions; AI agents were given either the revealed-preference data or the stated-preference prompts and their prediction accuracy on subjects' choices was compared.
Under economy-wide deployment, the share of computer-vision-exposed labor compensation that is cost-effectively automatable rises sharply (relative to the firm-level 11% estimate).
Model counterfactuals or calibration scenarios comparing firm-level deployment vs economy-wide deployment; qualitative statement that share increases substantially.
At the firm level, cost-effective automation captures approximately 11% of computer-vision-exposed labor compensation.
Calibration and implementation in computer vision; reported firm-level estimate from the framework.
Scale of deployment is a key determinant: AI-as-a-Service and AI agents spread fixed costs across users, sharply expanding economically viable tasks.
Modeling and calibration arguments showing fixed-cost spreading effects increase set of tasks for which automation is cost-effective; qualitative and quantitative comparisons in implementation.
Because higher accuracy is disproportionately costly (convex cost), full automation is often not cost-minimizing; partial automation, where firms retain human workers for residual tasks, frequently emerges as the equilibrium.
Theoretical model combined with calibration (scaling laws + task mappings); equilibrium outcomes reported from the framework implementation.
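The interior-optimum logic can be illustrated with a toy version of the cost-minimization problem: human cost falls linearly in the automation share while AI cost rises convexly in it (accuracy is disproportionately costly). The functional forms and parameters below are my assumptions, not the paper's calibration:

```python
# Toy sketch: the firm chooses an automation share a in [0, 1].
WAGE_BILL = 100.0   # cost of doing everything with humans (assumed)
AI_SCALE = 60.0     # scale of AI cost (assumed)
CONVEXITY = 3.0     # exponent > 1 captures convex accuracy cost (assumed)

def total_cost(a: float) -> float:
    """Total production cost at automation share a."""
    human = (1 - a) * WAGE_BILL
    ai = AI_SCALE * a ** CONVEXITY
    return human + ai

# Grid search over automation intensities.
grid = [i / 1000 for i in range(1001)]
a_star = min(grid, key=total_cost)

print(f"optimal automation share: {a_star:.3f}")
print(f"full automation cost {total_cost(1.0):.1f} vs optimum {total_cost(a_star):.1f}")
```

With any convexity exponent above 1 and a sufficiently steep AI cost, the optimum lands strictly inside (0, 1): the firm automates most tasks but keeps humans for the residual, matching the partial-automation equilibrium described above.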
We model automation intensity as a continuous choice in which firms minimize costs by selecting an AI accuracy level, from no automation through partial human-AI collaboration to full automation.
The paper develops a theoretical framework / model that treats automation intensity as a continuous decision variable; described as the central modeling approach.
The findings demonstrate that technological innovation strategies, when effectively implemented, provide measurable competitive advantages for banks and offer evidence-based insights for policymakers and practitioners.
Authors' interpretation/conclusion drawing on the reported statistically significant relationships between innovation (product and technological) and competitiveness.
Technological innovation is positively and statistically significantly related to bank competitiveness (simple linear regression result reported).
Simple linear regression reported in the paper testing the hypothesis that technological innovation influences competitiveness; data collected from innovation-focused executives across licensed banks (paper states data from 39 licensed banks).
Product innovation strategy has a positive and statistically significant effect on competitiveness (F(1,134) = 74.983, p < .001).
Bivariate regression analysis reported in the paper with F(1,134)=74.983, p < .001; based on survey data from innovation-focused executives (regression degrees of freedom indicate n≈136 observations).
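The reported F(1,134) is what a one-predictor OLS on n = 136 observations produces (df = 1 for the regression, n − 2 = 134 for the residual). A minimal sketch of that computation on synthetic data (the coefficients and noise here are invented, only the df structure matches the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in: n = 136 observations, matching df = (1, 134).
n = 136
x = rng.normal(size=n)                  # e.g., product-innovation score
y = 2.0 + 1.5 * x + rng.normal(size=n)  # e.g., competitiveness score

# Bivariate OLS via least squares (intercept + one predictor).
X = np.column_stack([np.ones(n), x])
beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
y_hat = X @ beta

ss_res = np.sum((y - y_hat) ** 2)       # residual sum of squares
ss_reg = np.sum((y_hat - y.mean()) ** 2)  # regression sum of squares
df_reg, df_res = 1, n - 2               # (1, 134), as reported

F = (ss_reg / df_reg) / (ss_res / df_res)
print(f"F({df_reg},{df_res}) = {F:.1f}")
```

In a bivariate regression this F equals the square of the slope's t statistic, so the reported F = 74.983 corresponds to a highly significant slope.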
Law is expected to function not only as an instrument of protection but also as a strategic instrument for managing the transition toward a more inclusive, fair, and sustainable future of work in the era of artificial intelligence.
The authors' normative conclusion and recommendation, based on their analysis of the statutes and literature reviewed.
Recognition of a 'right to continuous skills development' (right to lifelong learning) is important and should be incorporated as an integral part of worker protection in the digital era.
A normative claim and policy recommendation arising from the conceptual study and comparative literature review.
More progressive and adaptive legal reform is needed, including strengthening the social security system and updating fiscal policy to address the impacts of AI.
A policy recommendation drawn from the study's normative and comparative analysis and literature review.
A legal basis is needed for innovative compensation models such as Universal Basic Income (UBI), automation taxes, and schemes for distributing AI productivity gains.
A policy recommendation put forward by the authors on the basis of their normative and comparative analysis of the literature.
In the user study, AI-expanded 5W3H prompts increase user satisfaction from 3.16 to 4.04.
Reported pre/post or baseline vs AI-expanded satisfaction scores in the N=50 user study with numeric scores 3.16 and 4.04.
In the user study, AI-expanded 5W3H prompts reduce interaction rounds by 60 percent.
Reported comparison in the N=50 user study between baseline interaction rounds and rounds after AI-assisted 5W3H expansion; percentage reduction reported as 60%.
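The two reported improvements can be sanity-checked with simple arithmetic. The satisfaction gain follows from the reported scores; the absolute baseline for interaction rounds is not given, so the 60% reduction is applied to an assumed illustrative baseline:

```python
# Satisfaction: reported scores from the N = 50 user study.
before, after = 3.16, 4.04
relative_gain = (after - before) / before
print(f"satisfaction: {before} -> {after} (+{relative_gain:.1%})")

# Interaction rounds: the paper reports a 60% reduction; the absolute
# baseline is not given, so 10 rounds here is purely illustrative.
baseline_rounds = 10
reduced_rounds = baseline_rounds * (1 - 0.60)
print(f"rounds: {baseline_rounds} -> {reduced_rounds:.0f}")
```

The 3.16 → 4.04 change is a gain of about 28% relative to the baseline score.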