Evidence (1902 claims)
Claims by category: Adoption (5126) · Productivity (4409) · Governance (4049) · Human-AI Collaboration (2954) · Labor Markets (2432) · Org Design (2273) · Innovation (2215) · Skills & Training (1902) · Inequality (1286)
Evidence Matrix
Claim counts by outcome category and direction of finding.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 369 | 105 | 58 | 432 | 972 |
| Governance & Regulation | 365 | 171 | 113 | 54 | 713 |
| Research Productivity | 229 | 95 | 33 | 294 | 655 |
| Organizational Efficiency | 354 | 82 | 58 | 34 | 531 |
| Technology Adoption Rate | 277 | 115 | 63 | 27 | 486 |
| Firm Productivity | 273 | 33 | 68 | 10 | 389 |
| AI Safety & Ethics | 112 | 177 | 43 | 24 | 358 |
| Output Quality | 228 | 61 | 23 | 25 | 337 |
| Market Structure | 105 | 118 | 81 | 14 | 323 |
| Decision Quality | 154 | 68 | 33 | 17 | 275 |
| Employment Level | 68 | 32 | 74 | 8 | 184 |
| Fiscal & Macroeconomic | 74 | 52 | 32 | 21 | 183 |
| Skill Acquisition | 85 | 31 | 38 | 9 | 163 |
| Firm Revenue | 96 | 30 | 22 | — | 148 |
| Innovation Output | 100 | 11 | 20 | 11 | 143 |
| Consumer Welfare | 66 | 29 | 35 | 7 | 137 |
| Regulatory Compliance | 51 | 61 | 13 | 3 | 128 |
| Inequality Measures | 24 | 66 | 31 | 4 | 125 |
| Task Allocation | 64 | 6 | 28 | 6 | 104 |
| Error Rate | 42 | 47 | 6 | — | 95 |
| Training Effectiveness | 55 | 12 | 10 | 16 | 93 |
| Worker Satisfaction | 42 | 32 | 11 | 6 | 91 |
| Task Completion Time | 71 | 5 | 3 | 1 | 80 |
| Wages & Compensation | 38 | 13 | 19 | 4 | 74 |
| Team Performance | 41 | 8 | 15 | 7 | 72 |
| Hiring & Recruitment | 39 | 4 | 6 | 3 | 52 |
| Automation Exposure | 17 | 15 | 9 | 5 | 46 |
| Job Displacement | 5 | 28 | 12 | — | 45 |
| Social Protection | 18 | 8 | 6 | 1 | 33 |
| Developer Productivity | 25 | 1 | 2 | 1 | 29 |
| Worker Turnover | 10 | 12 | — | 3 | 25 |
| Creative Output | 15 | 5 | 3 | 1 | 24 |
| Skill Obsolescence | 3 | 18 | 2 | — | 23 |
| Labor Share of Income | 7 | 4 | 9 | — | 20 |
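The matrix above can be queried programmatically. A minimal sketch (row values copied from the table; note that the four direction columns do not always sum exactly to the printed totals, so shares here are computed over the direction columns only):

```python
# Directional shares from selected rows of the evidence matrix.
# Counts are (positive, negative, mixed, null) as reported in the table;
# "—" entries are treated as zero.
matrix = {
    "Firm Productivity":    (273, 33, 68, 10),
    "AI Safety & Ethics":   (112, 177, 43, 24),
    "Task Completion Time": (71, 5, 3, 1),
}

def positive_share(counts):
    """Share of claims with a positive direction, over the four direction columns."""
    return counts[0] / sum(counts)

for outcome, counts in matrix.items():
    print(f"{outcome}: {positive_share(counts):.1%} positive")
```

Computed this way, "AI Safety & Ethics" is the only row of the three where positive findings are a minority.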
Skills & Training
These findings contribute to the literature by providing empirical insights from a developing economy, where unique socioeconomic and institutional factors shape the impact of AI.
Scope/claim of contribution based on the study's context (Cambodia) and its dataset (n = 351).
This study employed PLS‐SEM analysis on data from 351 respondents, revealing significant workforce reshaping.
PLS-SEM analysis conducted on survey data (n = 351) as reported in the paper.
The rapid adoption of artificial intelligence (AI) is fundamentally transforming labor markets worldwide, presenting both opportunities and challenges.
Statement made in the paper as background/justification; not based on the study's empirical data.
Implementation of human-replacing technologies leads to significant transformations in skill demand: it reduces reliance on low-skilled labour while increasing demand for qualified engineers, system operators and specialists in digital technologies.
Sector-specific analysis and review of international labour-market studies cited in the article documenting skill-biased effects of automation and digitalization; qualitative assessment for Ukraine's mining and metallurgical sector under workforce shortage conditions.
The framework implies threshold effects in training and capability acquisition: when the teaching horizon lies below the prerequisite depth of the target, additional instruction cannot produce successful completion of teaching; once that depth is reached, completion becomes feasible.
Model-derived threshold result described in the abstract (mathematical analysis of prerequisite depth vs. teaching horizon).
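The threshold logic can be sketched as a simple predicate; `teaching_horizon` and `prerequisite_depth` are illustrative stand-ins for the paper's model quantities, not its actual formalism:

```python
def teaching_succeeds(teaching_horizon: int, prerequisite_depth: int) -> bool:
    """Sketch of the threshold effect: instruction whose horizon lies below the
    target's prerequisite depth cannot succeed; at or beyond that depth,
    successful completion becomes feasible."""
    return teaching_horizon >= prerequisite_depth

# Below the threshold, additional instruction within the same horizon does not
# help; crossing the threshold flips feasibility.
assert not teaching_succeeds(teaching_horizon=3, prerequisite_depth=5)
assert teaching_succeeds(teaching_horizon=5, prerequisite_depth=5)
```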
The value of information depends on whether downstream users can absorb and act on it: a signal conveys meaning only to a learner with the structural capacity to decode it (an explanation that clarifies a concept for one user may be indistinguishable from noise to another who lacks the relevant prerequisites).
Conceptual argument motivating the model; theoretical reasoning described in the paper's intro/abstract.
Generative AI serves as an effective 'wingman' for employment lawyers, capable of replacing substantial junior associate work while requiring continued human expertise for client counseling, supervision, and final legal advice preparation.
Authors' synthesis of experimental results showing AI-produced substantive analysis plus discussion about remaining limitations (e.g., citation errors) and required human oversight; qualitative assertion about substitutability for junior associate tasks.
Artificial intelligence embedded in human decision-making can either enhance human reasoning or induce excessive cognitive dependence.
Stated as a conceptual claim in the paper's introduction/abstract; supported by the paper's conceptual framing (theoretical argument), no empirical sample or experimental data reported here.
These productivity gains are most pronounced for lower-skilled workers, producing a pattern the authors call “skill compression.”
Cross-study pattern reported in the literature review: comparative evidence across worker-skill strata in multiple empirical papers showing larger relative gains for lower-skilled/junior workers; specific underlying studies and sample sizes are not enumerated in the brief.
The net educational value of AI-generated feedback depends on alignment with pedagogical goals, quality evaluation, integration with human teaching, and governance to manage equity, privacy, and incentives.
Synthesis statement from the meeting report produced by 50 interdisciplinary scholars; conceptual judgment rather than empirical proof.
There are potential measurement gaps in the data, particularly in capturing informal employment and rapid technology diffusion.
Authors' stated limitations noting data coverage issues: official statistics and surveys may not fully capture informal sector dynamics or fast-moving tech adoption. Specific metrics of missingness not provided.
The evidence presented in the study is largely correlational, with limited causal identification of AI causing job changes.
Study design and methods statement: reliance on descriptive analyses, occupation-vulnerability mapping, employer surveys, and case studies without quasi-experimental causal identification strategies.
Net gains from AI are neither automatic nor evenly distributed; benefits depend on translation rates to clinical success and on addressing non-technical enablers.
Synthesis and conditional argument informed by sector observations; not backed by empirical distributional analysis in the paper.
Alignment with evolving regulatory expectations (evidence standards, auditing, liability) is necessary to translate AI capabilities into products and reduce adoption risk.
Policy-focused argument referencing regulatory uncertainty; no empirical measures of regulatory impact included.
Realized, sustained impact ('democratized discovery') from AI depends on non-technological enablers: high-quality interoperable data, rigorous validation, transparency/auditability, workforce upskilling, ethical oversight, and regulatory alignment.
Synthesis and prescriptive argument in editorial grounded in observed constraints; no empirical testing of causal dependence provided.
The research methodology combines systemic analysis, comparative assessment of international practices, and analytical generalization of organizational learning models, enabling it to capture both structural trends and concrete institutional responses to technological change.
Methodological statement from the paper describing its approach; this is a factual claim about methods used rather than an empirical finding.
The impact of Generative AI on labor markets is heterogeneous across occupations and tasks.
Synthesis of recent empirical studies drawing on population-level data, online job postings, and systematic reviews as described in the paper.
The study investigates the benefits and drawbacks of incorporating innovative artificial intelligence technologies into industrial policies.
Author-stated research objective reported in the text; evidence claimed to come from literature review (novel studies and existing literature), but no specific studies, sample sizes, or empirical measures are provided in the excerpt.
The paper constructs three policy-contingent labor market scenarios for 2025–2035: (1) an Augmented Services Economy with inclusive productivity gains, (2) a Dual-Speed Labor Market characterized by polarization and uneven adjustment, and (3) a Disruptive Automation Shock involving significant displacement and social strain.
Prognostic, scenario-based approach integrating the three evidence bases (task-level capability mapping, occupational exposure/complementarity analysis, and firm- and worker-level adoption evidence). The scenarios are developed and described in the paper for the 2025–2035 horizon.
The validity of human–AI decision-making studies hinges on participants' behaviours; incentive design can shape those behaviours.
Conclusion from the authors' thematic review and theoretical rationale linking incentive design to participant behaviour and study validity (no quantitative effect sizes provided in excerpt).
The study's counterfactual analytical model links HR indicators (training intensity, absenteeism, labor productivity, turnover rates, workforce allocation) to organizational performance outcomes using regression-based simulations and predictive estimation.
Methodological claim explicitly stated: model construction from an industrial firm dataset using regression-based simulations and predictive techniques. (Specific sample size, variable operationalizations, and time frame not reported in the description.)
The review synthesizes findings across five thematic areas: AI‑driven task automation and decision support; digital literacy and capacity building; gender‑sensitive employment patterns; infrastructural and policy challenges; and sustainable development outcomes.
Thematic synthesis of the 55 included articles as described in the paper; themes explicitly listed by the authors.
Effects of curated Skills are highly heterogeneous across domains (e.g., +4.5 pp in Software Engineering vs. +51.9 pp in Healthcare).
Per-domain pass-rate deltas reported in the paper (SkillsBench per-domain analysis). The example domain deltas (+4.5 pp and +51.9 pp) are taken from the reported per-domain results.
Institutional factors (education systems, active labor market policies, mobility, industrial policy, social protection) shape net employment outcomes from AI.
Theoretical and policy-focused synthesis; cross-country comparisons in literature highlight institutional mediation though no single new cross-country empirical estimate is provided.
Net employment effects depend on the balance of substitution and complementarity, sectoral exposure, and institutional responses.
Conceptual labor-economics framework (task-based, skill-biased change) and comparative review of cross-country/sectoral evidence emphasizing institutional mediation.
AI will substantially restructure labor markets.
Task-based theoretical approach and cross-sectoral synthesis of empirical studies showing task substitution and complementarity effects across occupations and sectors.
The pandemic produced a 1.5% increase in people identifying as potential entrepreneurs but a 2.3% contraction in emerging entrepreneurs, indicating a breakdown in converting aspiration into formal entrepreneurial activity (pipeline disruption).
Reported percentage changes in pipeline stages (potential entrepreneurs and emerging entrepreneurs) measured in the survey before/after (or during) the pandemic within the >27,000 respondent sample; comparison of identification and transition rates along the entrepreneurial pipeline.
Whether AI increases or decreases overall inequality depends on AI’s technology structure (proprietary vs. commodity) and on labor-market institutions (rent‑sharing elasticity ξ and asset concentration).
Comparative statics and regime analysis within the calibrated model that varies the technological-form parameter (η1 vs. η0) and the rent‑sharing elasticity ξ, as well as measures of asset concentration.
AI can equalize individual task performance while increasing aggregate inequality because rents accrue to owners of complementary assets rather than to workers.
Analytical model and calibrated simulations demonstrating that within-task compression (reduced worker dispersion) can coexist with rising aggregate inequality (ΔGini) owing to rent concentration at the firm/asset-owner level.
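A toy illustration of this mechanism, with invented figures rather than the paper's calibration: worker earnings compress (within-task dispersion falls) while rents concentrate with an asset owner, and the aggregate Gini nonetheless rises.

```python
def gini(incomes):
    """Gini coefficient via the mean-absolute-difference formula."""
    n = len(incomes)
    mean = sum(incomes) / n
    mad = sum(abs(x - y) for x in incomes for y in incomes) / (n * n)
    return mad / (2 * mean)

# Hypothetical economy: 9 workers plus 1 asset owner (all numbers invented).
before = [40, 50, 60, 70, 80, 90, 100, 110, 120] + [200]
# With AI: worker earnings compress toward the middle, while rents accrue
# to the owner of the complementary assets.
after = [75, 78, 80, 80, 80, 80, 80, 82, 85] + [600]

workers_before, workers_after = before[:9], after[:9]
assert gini(workers_after) < gini(workers_before)  # within-task compression
assert gini(after) > gini(before)                  # aggregate inequality rises
```

The two assertions hold simultaneously, which is the paper's point: equalizing task performance and rising aggregate inequality are not contradictory once rent concentration is modeled.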
The study's qualitative and exploratory design limits generalizability; the proposed framework requires quantitative testing and broader samples (practicing architects, firms, cross-cultural contexts).
Explicit limitations stated by authors; study is based on semi-structured interviews with architecture students (N unspecified) and inductive thematic analysis.
Participant targeting: 44% of programs targeted doctors and 44% targeted medical students (with possible overlap), and 56% targeted entry‑to‑practice career stages.
Participant audience and career-stage data extracted from the 27 included programs; proportions reported in the review.
Most programs were delivered in academic settings: 56% of evaluated programs reported an academic setting.
Setting information extracted from the 27 included programs, with 56% reported as delivered in academic settings.
A plurality of programs were short in duration: 44% of programs were categorized as short courses.
Extraction of program length from the 27 included studies; 44% were classified as short courses per the review's categorization.
Most programs were introductory in content: 67% of included programs taught introductory AI concepts rather than advanced/technical AI skills.
Program content extraction across the 27 included studies yielded that 67% were classified as teaching introductory AI.
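The review's percentages for the four entries above can be mapped back to approximate counts out of the 27 included programs (counts inferred by rounding; the excerpt reports percentages only):

```python
# Approximate program counts implied by the review's reported percentages.
# The review reports percentages; counts here are rounded inferences.
total_programs = 27
shares = {
    "targeted doctors": 44,
    "academic setting": 56,
    "short courses": 44,
    "introductory content": 67,
}
approx_counts = {label: round(total_programs * pct / 100)
                 for label, pct in shares.items()}
```

So roughly 12 of 27 programs were short courses and about 18 of 27 taught introductory content, consistent with the percentages reported.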
The methodological landscape of the evidence base is heterogeneous, consisting of cross-sectional surveys, case studies, quasi-experimental designs, and a limited number of longitudinal analyses.
Study design information was extracted from the 145 included studies revealing a mix of designs and relatively few longitudinal or experimental studies.
The United States' decentralized education system produces tensions between local innovation and federal accountability, with active debates over data and privacy laws shaping responses to AI in assessment.
Case study of U.S. policy and secondary literature documenting federal-state-local governance dynamics and ongoing legal/policy debates; descriptive evidence from public documents.
China's centralized control enables rapid piloting of AI-supported assessment but raises concerns over surveillance and data governance.
Country case study using Chinese policy texts and secondary analyses describing centralized education governance and data-governance practices; illustrative rather than empirical.
India faces pressure to maintain high-stakes exams amid uneven digital access and is experimenting with blended formative tools.
Country-specific case study based on policy documents and secondary literature describing India's exam system and early technology initiatives; no primary survey/sample size.
Four national case studies (India, China, the United States, Canada) illustrate diverse national responses to AI in assessment shaped by governance structures, resource constraints, cultural attitudes, and political pressures.
Cross-national comparative analysis using publicly available policy texts, recent reforms, and secondary literature for each country; descriptive, illustrative cases rather than exhaustive or representative samples.
Explanations change workflows, shift responsibilities between humans and machines, and can reshape power dynamics—creating both opportunities (better oversight) and risks (over-reliance, gaming).
Qualitative and conceptual studies synthesized in the review, including socio-technical analyses and case studies reporting observed or theorized workflow and responsibility shifts; no meta-analytic causal estimate.
Explanations increase user trust principally when they are understandable, actionable, and aligned with users’ domain knowledge; opaque or overly technical explanations can fail to build trust or even decrease it.
Thematic synthesis of empirical and conceptual studies in the reviewed literature reporting conditional effects of explanation form and comprehensibility on trust; review notes heterogeneity in study designs and contexts.
Explainability improves perceived legitimacy, user trust, and organizational accountability only when technical transparency is paired with human-centered explanation design and governance mechanisms.
Synthesis of studies from the reviewed literature showing conditional effects of algorithmic interpretability combined with explanation design and governance; derived via thematic coding across technical and social-science sources (no new primary experimental data reported).
Explainability is a necessary but not sufficient condition for trustworthy AI in high-stakes domains.
Systematic literature review (thematic coding and synthesis) of interdisciplinary scholarship (peer-reviewed research, technical reports, policy documents); the paper synthesizes conceptual and empirical studies rather than presenting new primary data. Emphasis on high-stakes domains (healthcare, finance, public sector).
The benefits of FDI (jobs, productivity, skills) are uneven and often conditional on institutional quality, labor regulation, and sectoral composition of investments.
Mechanism mapping and thematic synthesis linking heterogeneous empirical findings to contextual moderators (governance, regulation, sector); review emphasizes consistent role of these moderators across studies.
FDI’s effects on employment, wages, and income distribution in Sub‑Saharan Africa are mixed and highly context‑dependent.
Conceptual literature review synthesizing theoretical frameworks and empirical findings across micro, firm, sectoral, and macro studies; no new primary data. Review notes heterogeneous identification strategies and results across studies and contexts.
Data‑driven policies can either amplify or mitigate inequalities depending on data representativeness, model design, and deployment governance.
Multiple empirical examples and theoretical analyses in the review highlighting cases of both harm (bias amplification) and mitigation, identified across the 103 items.
Citizen acceptance, transparency, and perceived fairness strongly shape adoption trajectories and the political feasibility of AI tools in government.
Repeated empirical findings in the reviewed literature linking public trust, transparency measures, and fairness perceptions to successful or failed deployments (drawn from multiple case studies in the 103 items).
Adoption of AI and data-driven governance is highly uneven across jurisdictions and sectors, driven by institutional capacity, governance frameworks, and public trust.
Cross‑regional and cross‑sector comparisons in the review corpus (103 items) showing varying maturity levels and repeated identification of institutional capacity, governance arrangements, and trust factors as determinants.
Productivity gains from generative AI depend on task mix, integration design, and the availability of complementary human skills.
Theoretical evaluation and synthesis of heterogeneous empirical findings; authors highlight variation across firms, sectors, and tasks.
Existing evidence is time-sensitive and heterogeneous: rapidly evolving models, heterogeneous study designs, and many short-term lab/microtask studies limit direct comparability and long-run inference.
Meta-observation from the review: documented methodological limitations across the literature (variation in models, tasks, metrics; prevalence of short-term studies).