Evidence (1920 claims; Skills & Training filter active)
Claims by category:
- Adoption: 5227 claims
- Productivity: 4503 claims
- Governance: 4100 claims
- Human-AI Collaboration: 3062 claims
- Labor Markets: 2480 claims
- Innovation: 2320 claims
- Org Design: 2305 claims
- Skills & Training: 1920 claims
- Inequality: 1311 claims
Evidence Matrix
Claim counts by outcome category and direction of finding.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 373 | 105 | 59 | 439 | 984 |
| Governance & Regulation | 366 | 172 | 115 | 55 | 718 |
| Research Productivity | 237 | 95 | 34 | 294 | 664 |
| Organizational Efficiency | 364 | 82 | 62 | 34 | 545 |
| Technology Adoption Rate | 293 | 118 | 66 | 30 | 511 |
| Firm Productivity | 274 | 33 | 68 | 10 | 390 |
| AI Safety & Ethics | 117 | 178 | 44 | 24 | 365 |
| Output Quality | 231 | 61 | 23 | 25 | 340 |
| Market Structure | 107 | 123 | 85 | 14 | 334 |
| Decision Quality | 158 | 68 | 33 | 17 | 279 |
| Fiscal & Macroeconomic | 75 | 52 | 32 | 21 | 187 |
| Employment Level | 70 | 32 | 74 | 8 | 186 |
| Skill Acquisition | 88 | 31 | 38 | 9 | 166 |
| Firm Revenue | 96 | 34 | 22 | — | 152 |
| Innovation Output | 105 | 12 | 21 | 11 | 150 |
| Consumer Welfare | 68 | 29 | 35 | 7 | 139 |
| Regulatory Compliance | 52 | 61 | 13 | 3 | 129 |
| Inequality Measures | 24 | 68 | 31 | 4 | 127 |
| Task Allocation | 71 | 10 | 29 | 6 | 116 |
| Worker Satisfaction | 46 | 38 | 12 | 9 | 105 |
| Error Rate | 42 | 47 | 6 | — | 95 |
| Training Effectiveness | 55 | 12 | 11 | 16 | 94 |
| Task Completion Time | 76 | 5 | 4 | 2 | 87 |
| Wages & Compensation | 46 | 13 | 19 | 5 | 83 |
| Team Performance | 44 | 9 | 15 | 7 | 76 |
| Hiring & Recruitment | 39 | 4 | 6 | 3 | 52 |
| Automation Exposure | 18 | 16 | 9 | 5 | 48 |
| Job Displacement | 5 | 29 | 12 | — | 46 |
| Social Protection | 19 | 8 | 6 | 1 | 34 |
| Developer Productivity | 27 | 2 | 3 | 1 | 33 |
| Worker Turnover | 10 | 12 | — | 3 | 25 |
| Creative Output | 15 | 5 | 3 | 1 | 24 |
| Skill Obsolescence | 3 | 18 | 2 | — | 23 |
| Labor Share of Income | 8 | 4 | 9 | — | 21 |
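The matrix can be tallied programmatically. A minimal sketch follows, using a few rows transcribed from the table above, with `—` read as zero; note that some stated row totals exceed the four-column sum, presumably because additional finding directions are not displayed, so the sketch computes shares over the four listed columns only.

```python
# A few outcome rows transcribed from the evidence matrix above,
# as (positive, negative, mixed, null) claim counts; "—" read as 0.
rows = {
    "Firm Productivity":  (274, 33, 68, 10),
    "AI Safety & Ethics": (117, 178, 44, 24),
    "Job Displacement":   (5, 29, 12, 0),
}

def positive_share(counts):
    """Share of positive findings among the four displayed directions."""
    pos, neg, mixed, null = counts
    return pos / (pos + neg + mixed + null)

for outcome, counts in rows.items():
    print(f"{outcome}: {positive_share(counts):.0%} positive")
```

On these rows the positive shares are roughly 71%, 32%, and 11%, mirroring the qualitative pattern in the matrix: productivity outcomes skew positive while safety and displacement outcomes skew negative.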
Skills & Training
Education systems, training/reskilling, labor market institutions, industrial policy, and social safety nets mediate the net employment outcomes of AI adoption.
Policy and institutional analysis grounded in labor economics theory; presented as a mediating mechanism in the synthesis rather than demonstrated with empirical causal estimates or sample-based intervention studies.
Knowledge industries exhibit significant complementarities as AI augments cognitive tasks, although some research and analytical roles may be automated.
Theory-based assessment of cognitive-task complementarity and substitution; synthesis rather than empirical occupational-level measurement or causal estimates provided in the paper.
In services, routine service tasks are vulnerable to AI, while high-contact and creative services are less vulnerable; digital platform services are likely to expand.
Task-level sectoral reasoning and qualitative examples in services; no empirical sectoral employment dataset or quantified vulnerability scores reported in the paper.
Manufacturing has strong automation potential but also opportunities in advanced manufacturing and maintenance/engineering roles.
Sector-specific analysis combining task vulnerability to automation with emergence of advanced manufacturing tasks; presented as theoretical/qualitative assessment rather than measured manufacturing employment trajectories from a stated sample.
Distributional effects will include wage polarization (rising returns to high-skill labor and pressure on middle-skill wages) and uneven regional impacts.
Application of SBTC and task-based wage theory to AI adoption; sectoral and regional heterogeneity discussed qualitatively. No new wage-distribution panel or cross-country regression evidence reported in the paper.
Short- to medium-run transitional unemployment, wage polarization, and sector- and country-level heterogeneity are likely.
Temporal-mismatch argument from task-based substitution and SBTC frameworks; sectoral assessment across manufacturing, services, knowledge industries. Evidence is theoretical/synthesized rather than from a stated empirical panel or cross-sectional dataset.
Net employment outcomes depend more on institutions and policy than on technology alone.
Comparative treatment of advanced versus developing economies and policy/institutional analysis; grounded in economic theory rather than primary empirical causal estimates (no sample sizes or identification strategies reported).
AI will substantially restructure labor markets.
Theory-driven sectoral analysis and task-based arguments (synthesis of labor economics frameworks). No primary empirical dataset or quantified cross-country sample reported in the paper.
Knowledge industries exhibit strong complementarities with AI but also face task-level automation (e.g., routine analysis) that changes job content.
Literature synthesis on AI adoption in knowledge sectors and task-based mapping showing both complementarities and partial task substitution.
Services show mixed effects: routine clerical and customer-service tasks are vulnerable, while personalized, creative, and relational services are less so.
Task-level synthesis of service-sector automation exposure studies and conceptual analysis of task complementarities in relational services.
Manufacturing faces high automation potential for routine production tasks but also opportunities in advanced manufacturing and robotics maintenance.
Cross-sectoral analysis and literature on automation in manufacturing; theoretical task mapping indicating routine task exposure and emergence of maintenance/advanced roles.
Wage polarization is likely: middle-skill wages will be compressed while high-skill wages rise; some low-skill service roles may persist or expand.
Synthesis of skill-biased technological change literature and task substitution/complementarity arguments; paper references empirical patterns of polarization in prior studies.
Firms with better data infrastructure and higher initial IT investment will adopt AI faster, potentially widening performance gaps across firms and industries.
Theory-informed assertion and literature synthesis; no empirical heterogeneity analysis is specified in the abstract.
Complementarity between AI and skilled accountants may raise wages for analytical roles while compressing demand for routine clerical roles, contributing to wage polarization.
Prediction grounded in economic theory and prior literature; the paper does not report direct wage-change estimates in the abstract.
AI will automate routine accounting tasks, reducing demand for low-skill bookkeeping work while increasing demand for higher-skilled roles (data interpretation, advising, oversight), creating occupational reallocation and upskilling needs.
Projection based on task-based labor economics literature and the paper's synthesis; not supported by specific longitudinal labor-market estimates in the abstract.
Generative AI can play a bounded, auditable role as multilingual, low‑bandwidth learning support, but must be governed to avoid digital gatekeeping and should be excluded from eligibility screening, risk scoring, or automated decision‑making.
Analytical assessment of AI's potential roles and risks in training delivery; governance prescriptions based on policy and risk reasoning rather than empirical AI evaluations in the corridor.
Proposition 3: Rights‑based effectiveness requires measurable capability outcomes and institutional follow‑through (beyond information transfer).
Normative and governance analysis based on gap mapping and the paper's empirical agenda; not tested with outcome data in this study.
Training can be treated as migration-governance infrastructure that functions simultaneously as a capability intervention (actionable navigation, contract comprehension, safe help‑seeking), a labour‑market signal when aligned with TVET/human-capital planning, and a potential gatekeeping node if access, assessment, and accountability are weak.
Conceptual reframing supported by policy analysis and governance gap mapping; no empirical validation provided in the paper.
The technological-form parameter (η1 vs. η0, i.e., proprietary vs. commodity) can independently flip the model across the inequality-increase/decrease boundary.
Model counterfactuals varying η1 versus η0 show that changing the degree of proprietary control over AI can move the calibrated model from one regime to the other.
At the calibrated baseline, the sign of the change in inequality (ΔGini) is determined mainly by one empirical moment (m6) together with the rent‑sharing elasticity ξ.
Results of the sensitivity decomposition and calibration reported in the paper indicating m6 and ξ primarily drive the sign of ΔGini in the baseline parameterization.
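The counterfactual exercise described above (sweeping the technological-form parameter between η0 and η1 and watching the sign of ΔGini) can be illustrated with a toy reduced form. Everything in this sketch is hypothetical: the function `delta_gini`, its linear shape, and the values of `m6` and `xi` are stand-ins rather than the paper's calibration; only the mechanics of the sweep are shown.

```python
# Hypothetical reduced form: eta (degree of proprietary control,
# eta0 = 0 commodity .. eta1 = 1 proprietary) pushes inequality up,
# while the empirical moment m6, scaled by the rent-sharing
# elasticity xi, pulls it down. Illustrative stand-in only.
def delta_gini(eta, m6=0.47, xi=0.5):
    return eta - m6 * xi

# Counterfactual sweep over eta: find where the model crosses the
# inequality-increase/decrease boundary (sign flip of delta Gini).
eta_grid = [i / 100 for i in range(101)]
flip = next(e for e in eta_grid if delta_gini(e) > 0)
print(f"Delta Gini turns positive near eta = {flip:.2f}")
```

Running the same sweep on the actual structural model would simply replace `delta_gini` with the calibrated mapping from (η, m6, ξ) to the change in the Gini coefficient.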
Students use GenAI as a co-designer and idea generator, which modifies workflow, decision points, and evaluative practices in their design process.
Qualitative interview data from architecture students; thematic analysis surfaced accounts of GenAI being used for ideation, variant generation, and as a collaborative partner (N unspecified).
Collaboration between architecture students and generative AI reshapes creative cognition in the architectural design process through algorithmic thinking strategies.
Semi-structured interviews with architecture students (interview sample size not specified) analyzed via inductive thematic analysis; authors synthesize recurring themes linking GenAI use to changes in cognitive strategies.
Heterogeneous program design and outcome measurement limit purchasers' ability to identify high‑value AI education offerings, creating a market opportunity but also risk.
Observed variability in program length, setting, content focus, target audience, and evaluation methods across the 27 included programs as reported in the review.
The predominant focus on entry‑level trainees suggests future workforce increases in basic AI literacy but leaves current mid‑career clinicians undertrained, potentially slowing adoption and creating heterogeneous skill premiums.
Distribution of target audiences and career stages in the 27 programs (56% entry‑to‑practice; many targeted students/early practitioners) and interpretation in the paper about labor market implications.
Compliance costs and audit requirements create regulatory barriers to entry but also incentives for standardized metadata and interoperable systems; policy can encourage open standards to reduce lock-in.
Policy analysis and recommendation in paper (theoretical); no regulatory cost quantification provided.
Algorithmic lesson planning, automated audits, and data-driven competency mapping are natural targets for AI augmentation and can reduce recurring resource burdens but require quality-labelled data, strong governance, and transparency.
Paper's discussion of AI complementarity (conceptual); no implementation trials or performance metrics presented.
The taxonomy clarifies where substitution versus complementarity are likely: AI-assisted tasks imply partial substitution of routine work; AI-augmented applications generate complementarities that increase demand for higher cognitive skills; AI-automated systems shift labor toward monitoring, exception handling, and governance.
Inference from mapping the three interaction levels to observed case features (n=4) and application of the Bolton et al. framework in cross-case synthesis.
AI-augmented systems support real-time medical tasks (e.g., decision support during procedures), amplifying human judgment and speed but raising required cognitive skills and changing training and coordination practices.
Findings from the case(s) labeled AI-augmented in the four-case qualitative sample and cross-case interpretive analysis using the service-innovation framework.
Returns to AI and digital investments are heterogeneous across firms and industries, implying adoption barriers and varied productivity impacts.
Across the 145 studies, reported effect sizes and qualitative findings vary by firm characteristics, industry sector, and technology readiness, as summarized in the review.
Impacts of digital transformation on productivity vary substantially by moderators such as digital competencies, organizational culture, leadership, and technology readiness.
Multiple included studies identified these factors as moderators/mediators in their empirical analyses; moderator effects were synthesized in the review.
Levels of familiarity and use of AI tools vary widely by role, discipline, and region.
Quantitative survey items (Likert-scale, multiple-choice) measuring familiarity and use of AI tools; subgroup comparisons (role, discipline, region) using descriptive statistics; thematic support from open-ended responses.
There are large disparities in AI engagement and preparedness across roles (students vs. educators), academic disciplines, and world regions.
Descriptive statistics from the survey comparing subgroups by role, discipline, and region; sample of >600 respondents; measures include self-reported awareness, familiarity, use, and confidence mapped to UNESCO competency frameworks.
Evidence of labour reallocation within rural economies following AI-driven productivity changes was observed in the reviewed literature.
Reported findings across several reviewed studies noting shifts in labour allocation and task composition on farms and in related value-chain activities.
Paper‑based regulatory environments slow DT diffusion; digitised compliance and standardised data schemas can accelerate adoption and enable AI‑driven oversight.
Findings in the review noting regulatory friction and proposed solutions; supported by case evidence where digitisation of compliance facilitated digital workflows.
DT adoption is a socio‑technical transformation that requires governance, standards, collaborative delivery models, and workforce capability building — not just technology deployment.
Conceptual synthesis and cross‑study recommendations in the reviewed literature emphasizing organizational, contractual, and governance changes alongside technology.
AI transforms learning conditions by enabling on-demand problem-solving help for students.
Review of recent literature on AI tutoring/assistive tools and policy documents describing technology adoption; illustrated in comparative case studies (secondary sources).
Effectiveness of ChatGPT varied by discipline; not all course contexts showed significant gains from allowing its use.
Heterogeneous treatment effects observed across the six courses; GLM and non-parametric tests indicated variation in effect sizes and statistical significance by course/discipline.
AI adoption acts as a site of power reconfiguration: roles, relationships, and accountability structures shift as AI is integrated into workflows.
Qualitative workshop data from 15 UX designers describing anticipated or observed shifts in accountability and role boundaries; cross-scale thematic synthesis.
Discourses of efficiency carry ethical and social dimensions—responsibility, trust, and autonomy become central concerns when tools shift who does what and who is accountable.
Recurring themes from the 15 UX designers' discussions and design choices during workshops; thematic coding emphasized responsibility, trust, autonomy linked to efficiency claims.
At the team scale, adoption triggers negotiations over collaboration patterns, division of responsibility, and maintaining design rigor.
Group workshop activities and discussions among UX designers (n=15) where participants described team negotiation scenarios; team-level themes identified in analysis.
At the individual scale, designers expressed trade-offs among efficiency gains, opportunities for skill development, and feelings of professional value.
Individual- and small-group reflections in the 15-person workshop study; thematic coding highlighted these three recurring themes at the individual level.
Organizations frame AI adoption around competitiveness and efficiency, while workers (UX designers) weigh those efficiency framings against professional worth, learning, and autonomy.
Participants' reports during the qualitative design workshops (n=15) showing differences between organizational rhetoric and worker concerns.
Adoption outcomes depend on interactions among individual, team, and organizational incentives and norms (three analytic scales).
Cross-scale coding and synthesis of workshop data from 15 UX designers; analyses grouped themes into individual, team, and organizational scales.
Designers’ decisions about integrating AI reflect trade-offs between efficiency and social/ethical concerns (skill development, autonomy, accountability).
Workshop prompts and group discussions with 15 UX designers; thematic analysis identified recurring trade-off narratives between efficiency and professional/ethical considerations.
AI adoption reconfigures roles, responsibilities, trust, and power within organizations.
Qualitative data from design workshops with 15 UX designers; participants' reflections and group discussions coded using cross-scale thematic analysis (individual, team, organizational).
AI functions like a capital-augmenting technology that substitutes routine tasks while complementing creative and coordination tasks, altering the capital–labor mix and returns to different human capital types.
Conceptual framing and synthesis of literature and survey impressions; not directly tested empirically in the paper.
AI-driven automation will shift labor demand away from routine coding toward higher-order tasks (architecture, design, systems thinking, tool supervision), consistent with skill-biased technological change.
Theoretical implications drawn from observed substitution of routine tasks in literature and practitioner expectations in the survey; no labor-market causal analysis presented.
Benefits and uptake of AI tools are heterogeneous: they vary by team size, application domain (e.g., safety-critical vs. consumer software), and organizational process maturity.
Subgroup comparisons implied from survey (e.g., by role or domain) and literature examples; explicit subgroup sample sizes and statistical tests not provided in the summary.
AI augments developers rather than fully replacing them for complex, creative tasks; automation mainly substitutes routine work and complements higher-skill activities.
Synthesis of literature and survey responses indicating tool usage patterns and practitioner expectations about role changes; no experimental displacement studies reported.
Effective ISP depends on high-quality internal data and sometimes external data sharing across partners, raising issues around data ownership, incentives to share, and the design of contracting/market mechanisms to internalize coordination gains.
Case evidence on importance of data quality and authors' policy/contractual discussion; conceptual argument informed by interviews about data-sharing frictions.