Evidence (1920 claims)

Claim counts by category:

- Adoption: 5227 claims
- Productivity: 4503 claims
- Governance: 4100 claims
- Human-AI Collaboration: 3062 claims
- Labor Markets: 2480 claims
- Innovation: 2320 claims
- Org Design: 2305 claims
- Skills & Training: 1920 claims
- Inequality: 1311 claims
Evidence Matrix
Claim counts by outcome category and direction of finding.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 373 | 105 | 59 | 439 | 984 |
| Governance & Regulation | 366 | 172 | 115 | 55 | 718 |
| Research Productivity | 237 | 95 | 34 | 294 | 664 |
| Organizational Efficiency | 364 | 82 | 62 | 34 | 545 |
| Technology Adoption Rate | 293 | 118 | 66 | 30 | 511 |
| Firm Productivity | 274 | 33 | 68 | 10 | 390 |
| AI Safety & Ethics | 117 | 178 | 44 | 24 | 365 |
| Output Quality | 231 | 61 | 23 | 25 | 340 |
| Market Structure | 107 | 123 | 85 | 14 | 334 |
| Decision Quality | 158 | 68 | 33 | 17 | 279 |
| Fiscal & Macroeconomic | 75 | 52 | 32 | 21 | 187 |
| Employment Level | 70 | 32 | 74 | 8 | 186 |
| Skill Acquisition | 88 | 31 | 38 | 9 | 166 |
| Firm Revenue | 96 | 34 | 22 | — | 152 |
| Innovation Output | 105 | 12 | 21 | 11 | 150 |
| Consumer Welfare | 68 | 29 | 35 | 7 | 139 |
| Regulatory Compliance | 52 | 61 | 13 | 3 | 129 |
| Inequality Measures | 24 | 68 | 31 | 4 | 127 |
| Task Allocation | 71 | 10 | 29 | 6 | 116 |
| Worker Satisfaction | 46 | 38 | 12 | 9 | 105 |
| Error Rate | 42 | 47 | 6 | — | 95 |
| Training Effectiveness | 55 | 12 | 11 | 16 | 94 |
| Task Completion Time | 76 | 5 | 4 | 2 | 87 |
| Wages & Compensation | 46 | 13 | 19 | 5 | 83 |
| Team Performance | 44 | 9 | 15 | 7 | 76 |
| Hiring & Recruitment | 39 | 4 | 6 | 3 | 52 |
| Automation Exposure | 18 | 16 | 9 | 5 | 48 |
| Job Displacement | 5 | 29 | 12 | — | 46 |
| Social Protection | 19 | 8 | 6 | 1 | 34 |
| Developer Productivity | 27 | 2 | 3 | 1 | 33 |
| Worker Turnover | 10 | 12 | — | 3 | 25 |
| Creative Output | 15 | 5 | 3 | 1 | 24 |
| Skill Obsolescence | 3 | 18 | 2 | — | 23 |
| Labor Share of Income | 8 | 4 | 9 | — | 21 |
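A matrix like the one above can be assembled by tallying tagged claim records. A minimal Python sketch, assuming each claim carries an outcome category and a direction of finding (the field names and sample records here are hypothetical, not the site's actual schema):

```python
from collections import Counter

# Hypothetical claim records: (outcome_category, direction).
claims = [
    ("Developer Productivity", "Positive"),
    ("Developer Productivity", "Positive"),
    ("Developer Productivity", "Negative"),
    ("Error Rate", "Mixed"),
    ("Error Rate", "Null"),
]

# Tally claims by (outcome, direction) cell, then per-outcome totals.
counts = Counter(claims)
totals = Counter(outcome for outcome, _ in claims)

directions = ["Positive", "Negative", "Mixed", "Null"]
print("| Outcome | " + " | ".join(directions) + " | Total |")
for outcome in sorted(totals, key=totals.get, reverse=True):
    row = [str(counts[(outcome, d)]) for d in directions]
    print("| " + outcome + " | " + " | ".join(row) + f" | {totals[outcome]} |")
```

Sorting by total reproduces the descending order used in the matrix above.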
Active filter: Skills & Training
Net value from generative AI is contingent: gains are largest where breadth of ideas and rapid iteration matter, and smaller or riskier where deep domain expertise, tacit knowledge, or high-stakes judgments are required.
Synthesis of heterogeneous empirical results showing task-dependent benefits; argument grounded in observed differences across lab and field contexts and documented limitations in domain-specific performance.
Data-driven HRM reinforces skill-biased technological change: routine HR tasks are being substituted by automation while demand rises for analytical and interpersonal skills.
Theoretical implication and synthesis across studies in the review noting automation of routine tasks and increased demand for analytic/interpersonal skills.
Blockchain and decentralized fintech tools could increase transparency and access to alternative assets for women, but practical adoption barriers remain.
Qualitative assessment of blockchain capabilities and uptake surveys / case studies cited in the article (product analyses and early adoption data; no large‑scale causal evidence).
AI-enabled macro and fiscal models can improve policy testing and contingency planning but require transparency, validation, and safeguards against overreliance.
Conceptual argument and illustrative examples; no empirical trials or model performance metrics reported.
AI shifts the locus of economic governance from static rules to living systems that anticipate shocks and adapt in real time.
Policy-analytic framing and scenario-based reasoning within the book; supported by illustrative examples rather than empirical measurement.
International spillovers of AI-driven productivity depend on trade linkages and cross-border data flows; they are weaker when such linkages are limited.
Cross-country comparisons using trade flow data and measures of cross-border data policy/infrastructure; heterogeneous treatment effects in firm-level panels and country aggregates conditional on trade openness and data flow indices.
Emerging and low- and middle-income economies show smaller productivity gains (roughly 2–6%) and larger short-run job losses in routine occupations after AI adoption.
Estimates from worker-level microdata and firm panels in emerging economy samples, event studies of employment by occupation, and occupational task classification (ISCO/ISCO-08) to identify routine jobs.
Automation reshapes job tasks — reducing demand for some routine manual roles while increasing demand for technical, supervisory, logistics-planning, and service roles — implying substantial reskilling needs rather than outright net job collapse.
Labor-market analysis using occupational employment and job-posting data (task content), supplemented by qualitative interviews and surveys tracing task changes and reskilling needs; scenario sensitivity checks on net employment under alternative adoption paths.
Broader conclusion: AI has the potential to raise productivity and create value, but without proactive policy the benefits risk being concentrated among skilled workers and firms, exacerbating inequality and regional disparities.
Integrative interpretation drawing on productivity and distributional findings from the 17 studies and theoretical considerations about differential complementarities and adoption patterns.
Whether AI is net job‑creating depends on context (sector, country, policy environment, and workforce skill composition).
Observed heterogeneity across the 17 studies by sectoral setting, country context, and policy environment; studies report differing net employment outcomes depending on these factors.
AI contributes to labor‑market polarization: growth in high‑skill opportunities alongside contraction in many middle- and low‑skill roles.
Comparative synthesis of occupational and wage-composition findings across the 17 studies shows recurring patterns of expansion at the high-skill end and reductions in middle/low-skill employment.
Cross-country variation in demand versus supply of new skills is large, and this variation is captured by a Skill Imbalance Index.
Construction of a Skill Imbalance Index at the country level that compares skill demand (vacancies requesting new skills) to proxies for skill supply (worker skill endowments or related measures); country-level comparisons show wide variation in the index.
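The index construction described above can be sketched as a demand-to-supply comparison per country. An illustrative toy version, assuming the index is a simple ratio of vacancy-based demand to a supply proxy (the paper's actual formula, weighting, and data sources are not specified here, so every number and field name below is hypothetical):

```python
# Hypothetical country-level inputs: share of vacancies requesting
# new skills (demand) vs. share of workers holding them (supply proxy).
demand = {"A": 0.30, "B": 0.12, "C": 0.25}
supply = {"A": 0.10, "B": 0.11, "C": 0.05}

# Toy Skill Imbalance Index: demand relative to supply.
# Values above 1 indicate demand outrunning supply.
sii = {country: demand[country] / supply[country] for country in demand}

for country, value in sorted(sii.items(), key=lambda kv: -kv[1]):
    print(f"Country {country}: SII = {value:.2f}")
```

Even on invented inputs, the spread across countries (here roughly 1x to 5x) mirrors the "wide variation" the country-level comparisons report.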
Labor-market polarization intensifies: gains are concentrated among high-skilled workers.
Occupation-level analyses of employment and wage changes showing larger positive effects for high-skilled occupations following adoption of new skills.
Overall employment and wages rise where new skills are adopted, but these gains are uneven across workers and occupations.
Cross-sectional and panel analyses relating diffusion of new skills (measured from vacancies) to changes in employment and wages across occupations and demographic groups.
Expected differential wage pressure: wages are likely to fall for routine/low‑skill occupations and rise or remain stable for high‑skill workers who possess complementary AI skills.
Econometric studies summarized in the review (cross‑sectional and panel regressions) and theoretical consistency with SBTC; the review highlights heterogeneity in findings and limited long‑run causal certainty.
AI contributes to skills polarization: demand rises for advanced cognitive, digital, and socio‑emotional skills while routine cognitive and manual task demand declines.
Theoretical integration (SBTC), task decomposition studies showing shifts in task demand by skill content, and labour‑market analyses reporting changes in occupational skill mixes; evidence comes from cross‑sectional and panel studies summarized in the review.
AI/ML has a dual, sector- and skill-dependent effect on labor: widespread displacement of routine and lower-skilled tasks coexists with augmentation of professional and cognitive work and the creation of new labor forms (gig, platform-mediated, and human–AI hybrid roles).
Systematic synthesis of peer‑reviewed empirical studies, industry and policy reports, task‑based analyses, and firm/establishment case studies across cross‑country and sectoral analyses; empirical approaches include econometric (cross‑sectional and panel) studies linking automation/AI adoption to employment and wages, task decomposition analyses, and surveys of firm adoption and restructuring. The review notes heterogeneity across studies and limited long‑run causal evidence.
The paper presents hypothesis tests assessing whether university status (and Alliance ranking) and the presence of specialized AI programs affect graduate employment effectiveness, and reports identification of key/high-performing universities.
Statement of empirical approach: hypothesis testing on effects of university status/Alliance ranking and specialized programs using the monitoring dataset; results and significance levels are reported in the full article.
Heterogeneity across universities implies that targeting high-performing institutions and diffusing their practices could be more effective than uniform expansion of AI training.
Observed variation in employment effectiveness, placement outcomes, and wages across the 191 universities; policy implication drawn from comparative performance patterns.
Labor market institutions (unions, collective bargaining), education and training systems, social safety nets, and regulations substantially mediate distributional and aggregate outcomes of AI adoption.
Comparative institutional analysis and equilibrium models linking institutional settings to wage-setting and reallocation dynamics, supported by empirical cross-jurisdiction comparisons where available.
Developing economies face different trade-offs from AI adoption than advanced economies, due to different occupational structures and complementarities.
Comparative analyses and sectoral studies drawing on cross-country microdata and institutional comparisons; theoretical models highlighting differences in task composition and absorptive capacity.
Occupational reallocation occurs: declines in some routine occupations alongside growth in AI-complementary roles (e.g., AI maintenance, oversight, and creative tasks).
Administrative and household employment data analyzed with occupational breakdowns, supplemented by task-mapping methods and panel/event-study approaches documenting shifting occupational shares over time.
Lower-skill roles experience mixed outcomes: some see adverse effects from automation while others benefit where AI is complementary to their tasks.
Microdata analyses and case studies showing heterogeneous effects by task complementarity; task-based exposure measures that differentiate which low-skill tasks are automatable versus augmentable.
AI contributes to wage polarization: earnings grow at the top of the distribution and stagnate or fall for middle occupations.
Wage distribution decompositions and panel regression studies that examine percentile-level wage changes, combined with task-based exposure measures linking AI adoption to differential impacts across the wage distribution.
The employment impact of automation depends crucially on labour-market structure (formal vs informal), availability of alternative employment, and social protections.
Theoretical framing supported by secondary literature comparing institutional contexts and their mediating effects on automation outcomes; no primary causal estimates in this paper.
Standard policy responses focused on retraining and active labor-market programs are necessary but insufficient to fully offset structural job losses where K_T substitutes broadly for tasks.
Model simulations and policy experiments in the calibrated dynamic model comparing scenarios with aggressive retraining versus structural fiscal/interventionist reforms; discussion of empirical limits from case studies and historical reskilling outcomes.
The paper argues that technological transformation in agentic finance is economically inevitable and that proactive intervention is critically urgent.
Author claim synthesizing the paper's argument and modeling results (normative conclusion based on earlier analysis and assertions, not a validated empirical finding).
AI raises managerial cognitive complexity and creates recurring tensions between algorithmic optimisation and systemic, ethical reasoning.
Theoretical synthesis highlighting emergent tensions from integrating computational optimisation with systems thinking and ethical considerations; conceptual, no empirical tests.
Underprovision of verification is likely if left to market forces because information quality has positive externalities and misinformation imposes negative externalities, justifying public funding, subsidies, or regulation.
Economic reasoning and policy implications drawn from the study's findings and the literature on public goods/externalities.
Censorship, restricted data flows, and government interference fragment markets, limit economies of scale, and favor well-resourced, internationally connected actors—widening capacity gaps.
Interpretive economic analysis grounded in observed access constraints and comparative case material across the three platforms.
Limited data access and censorship reduce the efficacy of AI tools by creating training and validation gaps; legal risks complicate use of proprietary platforms and cloud services.
Interviews describing constraints on data availability and legal/operational barriers to using some platforms and cloud services; interpretive analysis of implications for AI training/validation.
Generative AI increases the volume and sophistication of misinformation (deepfakes, fabricated documents), raises false-positive risks, and can be weaponized by state or nonstate actors.
Interview accounts and qualitative analysis noting observed or anticipated misuse of generative models and associated verification challenges.
Resource constraints—limited staff time, funding, and technical capacity—are recurring operational challenges for these platforms.
Staff and stakeholder interviews plus analysis of organizational reports indicating staffing, funding, and technical limitations.
Platforms experience difficulty building and retaining audience trust and engagement, especially in contexts of high public skepticism or polarization.
Interview data from platform staff describing audience engagement challenges, supported by analysis of audience-focused platform formats and community-reporting strategies.
Platforms face limited or asymmetric access to primary data sources such as platform APIs, state data, and archives.
Interview accounts and document analysis noting restricted API access and barriers to state-held data and archives across the three cases.
Censorship and legal risks constrain reporting and distribution for these fact-checking platforms.
Consistent reports from interview subjects and corroborating document analysis indicating legal/censorship-related limitations on publishing and distribution.
Political instability, legal pressure, and censorship strongly shape what platforms can investigate, publish, and access in the region.
Thematic findings from semi-structured interviews with platform staff and document analysis of public reports and policy statements across the three country cases.
AI adoption is skill-biased and spatially uneven, increasing risks of labor-market exclusion among low-educated, middle-aged workers in high-AI regions.
Inference from observed negative associations between AI-rich regions and employment intention for low-educated respondents in the survey of 889; supported by region-level AI adoption proxies used in regressions.
Regional heterogeneity: eastern and northern areas with greater AI penetration intensify displacement pressure on low-skilled, pre-retirement workers.
Subsample/interaction results in the regression analysis separating regions (Beijing, Guangzhou, Lanzhou and broader eastern/northern regional classification) and linking regional AI penetration proxies to employment intention outcomes among low-skilled workers.
Low-educated workers—especially in eastern and northern regions with greater AI adoption—experience increased displacement pressure and lower employment intent.
Interaction/heterogeneity analysis from multivariate regressions on the sample of 889 respondents, using region-level AI adoption intensity (proxied by region) to identify differential associations by education level; stronger negative associations for low-educated respondents in eastern and northern areas.
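The interaction analysis described above amounts to asking whether the education gradient in employment intention is steeper in high-AI regions than in low-AI ones. A stripped-down difference-in-means sketch on synthetic data (all numbers and group labels are invented for illustration and are not drawn from the n=889 survey):

```python
# Synthetic mean employment-intention scores (0-1 scale) by region AI
# intensity and respondent education; invented for illustration only.
intent = {
    ("high_ai", "low_edu"): 0.42,
    ("high_ai", "high_edu"): 0.71,
    ("low_ai", "low_edu"): 0.58,
    ("low_ai", "high_edu"): 0.69,
}

# Education gap in employment intention within each region type.
gap_high_ai = intent[("high_ai", "high_edu")] - intent[("high_ai", "low_edu")]
gap_low_ai = intent[("low_ai", "high_edu")] - intent[("low_ai", "low_edu")]

# The interaction term is the difference of those gaps: how much
# further the low-educated fall behind where AI penetration is higher.
interaction = gap_high_ai - gap_low_ai
print(f"Extra education gap in high-AI regions: {interaction:.2f}")
```

In a regression this is the coefficient on the region-by-education interaction term; the study's multivariate version additionally conditions on controls such as household economic pressure.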
Higher household economic pressure is negatively associated with willingness to remain employed pre-retirement.
Regression controls included household economic pressure measured in the cross-sectional survey (n=889); coefficient on economic pressure indicated a negative association with employment intention.
Capabilities and data advantages for certain vendors could lead to market concentration and platform dominance in AI-driven educational feedback.
Expert concern synthesized from the workshop of 50 scholars about market dynamics; theoretical warning without empirical market-structure analysis in the report.
Differential access to high-quality AI feedback systems and bias in training data can exacerbate educational inequalities and harm marginalized groups.
Expert consensus and thematic analysis from the 50-scholar workshop, raising equity and bias risks; no empirical subgroup effectiveness estimates included.
Learners may over-rely on AI feedback or game systems to obtain desirable responses, reducing effortful learning.
Workshop participant concerns synthesized qualitatively; cited as risk and an open empirical question—no experimental data provided.
Top-performing community submissions (including baselines and competition entries) still leave a performance gap relative to elite human play on battling tasks.
Paper reports comparative evaluation results showing win-rate and other metrics for heuristic, RL, LLM baselines and community submissions versus human (elite) benchmarks; analysis highlights a remaining gap.
Attribution (labeling responses as AI) can alter perceived empathy and therefore matters for product design, branding, and disclosure policy decisions.
Findings from the attribution effect experiment showing reduced feelings of being heard/validated when replies are labeled AI despite identical content; authors discuss implications for product design and disclosure.
Distributional impacts of AI are uneven: younger workers and individuals with lower formal education face greater disruption.
Descriptive breakdowns of occupational vulnerability and employment changes by demographic groups (age and education) derived from labor statistics and vulnerability mapping; supported by qualitative case observations. Exact subgroup sample sizes not given.
Routine service and administrative occupations show the highest vulnerability to automation and displacement from AI.
Occupational vulnerability mapping using task/routine exposure methods and descriptive employment trend analysis across occupations; supported by employer survey responses and case-study observations. Sample sizes for surveys/mapping not provided in summary.
Manufacturing and Retail experienced net employment contractions attributable mainly to task automation and substitution.
Simulated employment-level series and net change calculations by sector (Manufacturing, Retail) across 2020–2024 in the paper's dataset, together with literature-derived mechanisms emphasizing automation/substitution in these sectors (systematic review of selected publishers 2020–2024).
Explainability, trust, and demonstrated real-world effectiveness are key demand-side frictions; small-scale laboratory gains rarely translate into broad clinical uptake without workflow fit.
Adoption studies, qualitative interviews with clinicians and purchasers, and observations that many high-performing lab models see limited clinical use due to workflow and trust issues.