Evidence (2608 claims)
Claim counts by category:
- Adoption: 7395 claims
- Productivity: 6507 claims
- Governance: 5877 claims
- Human-AI Collaboration: 5157 claims
- Innovation: 3492 claims
- Org Design: 3470 claims
- Labor Markets: 3224 claims
- Skills & Training: 2608 claims
- Inequality: 1835 claims
Evidence Matrix
Claim counts by outcome category and direction of finding.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 609 | 159 | 77 | 736 | 1615 |
| Governance & Regulation | 664 | 329 | 160 | 99 | 1273 |
| Organizational Efficiency | 624 | 143 | 105 | 70 | 949 |
| Technology Adoption Rate | 502 | 176 | 98 | 78 | 861 |
| Research Productivity | 348 | 109 | 48 | 322 | 836 |
| Output Quality | 391 | 120 | 44 | 40 | 595 |
| Firm Productivity | 385 | 46 | 85 | 17 | 539 |
| Decision Quality | 275 | 143 | 62 | 34 | 521 |
| AI Safety & Ethics | 183 | 241 | 59 | 30 | 517 |
| Market Structure | 152 | 154 | 109 | 20 | 440 |
| Task Allocation | 158 | 50 | 56 | 26 | 295 |
| Innovation Output | 178 | 23 | 38 | 17 | 257 |
| Skill Acquisition | 137 | 52 | 50 | 13 | 252 |
| Fiscal & Macroeconomic | 120 | 64 | 38 | 23 | 252 |
| Employment Level | 93 | 46 | 96 | 12 | 249 |
| Firm Revenue | 130 | 43 | 26 | 3 | 202 |
| Consumer Welfare | 99 | 51 | 40 | 11 | 201 |
| Inequality Measures | 36 | 105 | 40 | 6 | 187 |
| Task Completion Time | 134 | 18 | 6 | 5 | 163 |
| Worker Satisfaction | 79 | 54 | 16 | 11 | 160 |
| Error Rate | 64 | 78 | 8 | 1 | 151 |
| Regulatory Compliance | 69 | 64 | 14 | 3 | 150 |
| Training Effectiveness | 81 | 15 | 13 | 18 | 129 |
| Wages & Compensation | 70 | 25 | 22 | 6 | 123 |
| Team Performance | 74 | 16 | 21 | 9 | 121 |
| Automation Exposure | 41 | 48 | 19 | 9 | 120 |
| Job Displacement | 11 | 71 | 16 | 1 | 99 |
| Developer Productivity | 71 | 14 | 9 | 3 | 98 |
| Hiring & Recruitment | 49 | 7 | 8 | 3 | 67 |
| Social Protection | 26 | 14 | 8 | 2 | 50 |
| Creative Output | 26 | 14 | 6 | 2 | 49 |
| Skill Obsolescence | 5 | 37 | 5 | 1 | 48 |
| Labor Share of Income | 12 | 13 | 12 | — | 37 |
| Worker Turnover | 11 | 12 | — | 3 | 26 |
| Industry | — | — | — | 1 | 1 |
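A matrix like the one above can be produced from a flat list of (outcome, direction) claim records with a simple cross-tabulation. A minimal pure-Python sketch, using made-up illustrative records rather than the database's actual data:

```python
from collections import Counter

# Illustrative claim records: (outcome category, direction of finding).
claims = [
    ("Firm Productivity", "Positive"),
    ("Firm Productivity", "Mixed"),
    ("Error Rate", "Negative"),
    ("Firm Productivity", "Positive"),
]

directions = ["Positive", "Negative", "Mixed", "Null"]

# Count each (outcome, direction) pair, then lay out one row per outcome.
counts = Counter(claims)
outcomes = {outcome for outcome, _ in counts}
matrix = {o: {d: counts.get((o, d), 0) for d in directions} for o in outcomes}
totals = {o: sum(row.values()) for o, row in matrix.items()}
```

Sorting `totals` descending reproduces the row order used in the table.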
Skills Training
We propose an AI Impact Matrix that positions skills into four quadrants: High Displacement Risk, Upskilling Required, AI-Augmented, and Lower Displacement Risk.
Conceptual/interpretive framework introduced by the authors; described in text as proposed by the paper.
Using a strictly algorithmic baseline (mathematical bottleneck aggregation), we calculate Relative Occupational Automation Indices (OAI) for the U.S. labor market based on the DWA-level scores.
Method and calculation claim: algorithmic baseline aggregation applied across the 923 occupations / 2,087 DWAs to produce OAIs mapped to the U.S. labor market. Specific aggregation formula referenced but not numerically detailed in the excerpt.
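The aggregation formula is not detailed in the excerpt, so the following is only one plausible reading of "bottleneck aggregation": an occupation is only as automatable as its least-automatable Detailed Work Activity, i.e. the occupation-level index is the minimum over DWA-level scores, normalized to make indices relative. All names and scores below are illustrative assumptions, not the paper's data.

```python
def occupational_automation_index(dwa_scores):
    """Bottleneck aggregation (assumed): the occupation is only as
    automatable as its least-automatable Detailed Work Activity."""
    return min(dwa_scores)

# Toy example: occupations mapped to DWA-level automatability scores in [0, 1].
occupation_dwas = {
    "Data Entry Keyer": [0.9, 0.8, 0.85],
    "Registered Nurse": [0.7, 0.2, 0.6],
    "Software Developer": [0.5, 0.4, 0.3],
}

oai = {occ: occupational_automation_index(s) for occ, s in occupation_dwas.items()}

# Relative indices: normalize by the maximum so occupations are comparable.
max_oai = max(oai.values())
relative_oai = {occ: v / max_oai for occ, v in oai.items()}
```

In the paper this mapping would run over all 923 occupations and 2,087 DWAs; the min-based bottleneck is an assumption standing in for the unreported formula.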
We deconstructed 923 occupations into 2,087 Detailed Work Activities (DWAs).
Explicit data processing claim in the paper: mapping of 923 occupations to 2,087 DWAs for analysis.
Through a thematic review of existing research, the authors identified recurring themes about incentive schemes: their components, how researchers manipulate them, and their impact on research outcomes.
Authors' stated method and findings: thematic review (the scope/number of reviewed papers not specified in excerpt).
A critical aspect of conducting human–AI decision-making studies is the role of participants, often recruited through crowdsourcing platforms.
Claim based on the authors' thematic literature review noting participant sourcing practices (specific studies and counts not given in excerpt).
Researchers conduct empirical studies investigating how humans use AI assistance for decision-making and how this collaboration impacts results.
Statement summarizing the research landscape; supported implicitly by the authors' thematic review of existing empirical studies (number of studies not specified in excerpt).
Returns to AI are heterogeneous across firms; estimating treatment effects requires attention to selection, complementarities, and dynamic adoption pipelines.
Methodological argument referencing treatment-effect literature and observed firm heterogeneity; supported by conceptual examples rather than a single empirical treatment-effect estimate.
The study used a qualitative interpretivist research design drawing on semi-structured interviews with 28 managers and professionals from 12 organizations across technology, finance, and knowledge-intensive service sectors in Europe and Asia, using thematic and interpretive analysis supported by organizational document review.
Methodology statement from the paper (explicit description of sample, sectors, regions and analytic approach).
AI should be conceptualized as a co-evolving organizational capability rather than a deterministic technology.
Argument developed from interpretive analysis of interview data (n=28), literature engagement and organizational document review.
The study develops an emergent framework of AI–human co-adaptation comprising three interrelated dimensions: technological alignment, cognitive calibration and ethical anchoring.
Framework derived from thematic/interpretive analysis of interview data (n=28) and supporting organizational documents.
The paper introduces the concept of 'augmented work agency' as a multi-level, interpretive form of human agency in algorithmically mediated environments.
Conceptual development within the paper grounded in literature review and qualitative interview data (28 participants) and organizational document review.
The study includes Natural Language Processing (NLP) analysis of 5 million consumer contacts.
Methodological statement in the paper specifying the NLP data volume.
The study includes surveys of 800 marketers.
Methodological statement in the paper specifying the survey sample size.
The study includes AI adoption audits from 120 organizations.
Methodological statement in the paper specifying the audits sample size.
LLM-generated solutions contain roughly the same number of ideas as participant-generated solutions.
Comparative analysis of idea counts within solutions reported in the paper; phrased as 'roughly the same number of ideas' (no numeric effect size provided in the abstract).
The findings are consolidated via the AI Engineering Integration Framework and the Skills Transition Risk Matrix, which provide guidelines for strategically harnessing AI while safeguarding the Engineering profession.
Paper reports development of two conceptual/practical tools (framework and matrix) as outputs of the study; no validation details provided in abstract.
Case studies were performed covering five major industries.
Paper's reported methodology (number of case studies stated in abstract).
A Delphi study was conducted with 40 global experts.
Paper's reported methodology (Delphi sample explicitly stated in abstract).
A comprehensive mixed-methods study was conducted, incorporating a survey of 320 organizations.
Paper's reported methodology (survey sample explicitly stated in abstract).
Persistent data gaps—especially concerning worker-level outcomes, informal labor, and non-Anglophone markets—warrant urgent research investment.
Authors' assessment based on scope of included studies and acknowledged limitations in observation windows and geographic/labor-form coverage.
Following PRISMA 2020 guidelines, we systematically searched six academic databases (Scopus, Web of Science, EconLit, SSRN, IEEE Xplore, Google Scholar) for empirical studies documenting observed—not predicted—labor market changes since 2020; from 1,847 initial records, 94 studies meeting inclusion criteria were retained for qualitative synthesis and 42 for quantitative data extraction.
Methods: systematic literature search following PRISMA 2020 across six named databases; initial records = 1,847; retained = 94 for qualitative synthesis, 42 for quantitative extraction.
We thematically analysed twelve semi-structured interviews with SME owners and managers conducted in early 2025 using Atlas.ti, yielding 19 codes grouped into six categories.
Methods statement in the paper describing qualitative sample and analysis procedures.
We examine the interplay between AI adoption, social capital formation, workforce dynamics, and sustainable development in Eastern Macedonia and Thrace (EMT), one of the EU's least developed regions.
Study context and scope as stated in the paper; empirical work conducted in EMT.
Research has concentrated on advanced urban economies, leaving the implications of AI for peripheral small and medium-sized enterprises (SMEs) operating under weak human capital, thin digital infrastructure, and constrained social capital underexplored.
Statement in the paper contrasting existing research focus (advanced urban economies) with a lack of attention to peripheral SMEs; no empirical sample size for this bibliographic claim reported in the excerpt.
AI-assisted decision-making paradigms do not have a significant direct effect on task performance.
Experimental study of 59 pre-service teachers using a two-factor mixed design (between-subjects: AI-assisted decision-making paradigms; within-subjects: human-AI consistency). Data analyzed with Bayesian cumulative link mixed model and structural equation modeling; authors report no significant direct effect.
The model is not designed to forecast labour market outcomes or to conduct counterfactual tests.
Explicit methodological limitation stated in the abstract regarding scope of the simulation/model.
Using data from the Occupational Information Network (O*NET), integrated with two exposure measures—routine task automation and AI-driven cognitive automation—we simulate how the removal of 332 tasks alters skill requirements across 736 occupations.
Simulation study using O*NET data combined with two task-exposure measures (routine task automation and AI-driven cognitive automation); simulated removal of 332 tasks affecting 736 occupations (method described in abstract).
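The mechanics of such a task-removal simulation can be sketched as follows: model an occupation as a bag of tasks, each contributing weights to skills; remove the automation-exposed tasks and re-normalize to see how the skill profile shifts. Everything below (task names, weights, the `skill_profile` helper) is an illustrative assumption, not the paper's O*NET data or method.

```python
def skill_profile(tasks, task_skills):
    """Aggregate and normalize skill weights over an occupation's
    remaining tasks (illustrative helper, not the paper's procedure)."""
    profile = {}
    for t in tasks:
        for skill, w in task_skills[t].items():
            profile[skill] = profile.get(skill, 0.0) + w
    total = sum(profile.values())
    return {s: w / total for s, w in profile.items()} if total else {}

# Toy task-to-skill weights for one occupation.
task_skills = {
    "enter data": {"routine-cognitive": 1.0},
    "draft reports": {"writing": 0.7, "analysis": 0.3},
    "advise clients": {"social": 0.8, "analysis": 0.2},
}
occupation = ["enter data", "draft reports", "advise clients"]
exposed = {"enter data"}  # tasks flagged by the exposure measures

before = skill_profile(occupation, task_skills)
after = skill_profile([t for t in occupation if t not in exposed], task_skills)
```

Comparing `before` and `after` shows the requirement shift: the routine-cognitive component disappears and the remaining skills gain relative weight, which is the kind of change the paper simulates across 736 occupations.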
In educational settings, the use of GenAI does not consistently translate into improved learning or skill development, highlighting the need for careful integration of GenAI into computer science education.
Meta-analytic finding of a non-significant pooled effect on learning (g = 0.14, 95% CI [-0.18, 0.47]) combined with interpretation and recommendation in the paper's discussion.
Risk of bias in included studies was assessed using RoB2 and ROBINS-I.
Methods statement in the paper indicating the use of RoB2 (for randomized studies) and ROBINS-I (for non-randomized studies) to evaluate risk of bias.
Studies were required to compare GenAI-assisted with unassisted programming using quantitative measures of productivity (task completion time, commits, lines of code) and learning (exam performance).
Inclusion criteria reported in the Methods section of the paper specifying the required comparators and quantitative outcomes.
GenAI assistance has no statistically significant effect on learning outcomes (Hedges' g = 0.14, 95% CI [-0.18, 0.47]).
Meta-analysis pooling studies that reported learning outcomes (exam performance) and computing a pooled Hedges' g with 95% CI; reported estimate crosses zero.
We conducted a meta-analysis of n = 23 studies reporting k = 27 effect sizes on GenAI-powered coding assistants.
Systematic literature search across ACM, arXiv, Scopus, and Web of Science for studies published between 2019 and 2025; inclusion criteria required comparison of GenAI-assisted vs unassisted programming and quantitative outcomes. The paper reports n=23 studies and k=27 effect sizes.
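A pooled Hedges' g with a 95% CI, as reported above, is conventionally computed by inverse-variance weighting; a common choice is the DerSimonian-Laird random-effects estimator. The sketch below implements that standard estimator with made-up effect sizes for illustration; it is not the paper's code or data, and the paper may have used a different estimator.

```python
import math

def pool_random_effects(effects, variances):
    """DerSimonian-Laird random-effects pooling of effect sizes."""
    w = [1 / v for v in variances]
    fixed = sum(wi * g for wi, g in zip(w, effects)) / sum(w)
    # Cochran's Q heterogeneity statistic around the fixed-effect mean.
    q = sum(wi * (g - fixed) ** 2 for wi, g in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)  # between-study variance estimate
    w_star = [1 / (v + tau2) for v in variances]
    pooled = sum(wi * g for wi, g in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1 / sum(w_star))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Illustrative inputs: three studies' Hedges' g values and their variances.
g, ci = pool_random_effects([0.3, -0.1, 0.2], [0.04, 0.05, 0.06])
```

A CI that crosses zero, as with the paper's learning-outcome estimate (g = 0.14, 95% CI [-0.18, 0.47]), is what makes the pooled effect statistically non-significant.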
The analysis uses over 23 million WIOA participation records from 2017–2023.
Statement in the paper about the data coverage: administrative records of WIOA participants totaling >23 million records across 2017–2023.
The paper introduces the 'Retrainability Index' to measure program outcomes using post-intervention wage recovery and shifts in Routine Task Intensity (RTI).
Methodological contribution described in the paper: formulation of a composite index (Retrainability Index) combining wage recovery and occupation RTI change to evaluate WIOA outcomes.
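The paper's exact formula for the Retrainability Index is not given in the excerpt; a minimal sketch of a composite in that spirit, combining a wage-recovery ratio with the drop in Routine Task Intensity (RTI), might look like the following. The weighting, scaling, and variable names are all assumptions for illustration.

```python
def retrainability_index(wage_before, wage_after, rti_before, rti_after,
                         w_wage=0.5, w_rti=0.5):
    """Hypothetical composite: weighted sum of post-intervention wage
    recovery and the shift away from routine-intensive work."""
    wage_recovery = wage_after / wage_before  # > 1 means full recovery
    rti_shift = rti_before - rti_after        # > 0 means less routine work
    return w_wage * wage_recovery + w_rti * rti_shift

# Toy participant: wages recover past baseline, RTI falls after retraining.
ri = retrainability_index(wage_before=40_000, wage_after=44_000,
                          rti_before=0.6, rti_after=0.3)
```

Applied over the >23 million WIOA records, an index of this shape would let programs be ranked on both dimensions at once, which appears to be the point of the composite.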
The review uses a collection of qualitative and quantitative approaches (i.e., it synthesizes both qualitative and quantitative studies).
Explicit methodological description in the abstract indicating mixed-methods literature synthesis.
The synthesis of qualitative and quantitative approaches reveals predictors of technological integration, including organisational preparedness, economic factors, policies, and human capital.
Statement about the review's synthesized findings from multiple qualitative and quantitative studies identifying these predictors; method = mixed-methods literature synthesis.
The primary technologies covered in this review are Electronic Health Records (EHR), telemedicine, artificial intelligence (AI), and the Internet of Things (IoT).
Explicit topical scope statement in the paper (description of review subjects); based on the paper's own selection of topics for review.
There is little empirical exploration of how professionals making high-stakes decisions perceive their agency and level of control when working with genAI systems.
Statement about a gap in the existing literature made by the authors (literature review / framing); no sample size (gap claim).
At this stage, AI adoption in Israel does not result in widespread layoffs; its primary impact lies in restructuring the labor market through a slowdown in recruitment, changes in job composition, and the emergence of new AI-related roles.
Empirical claim reported in the paper; the excerpt does not specify datasets, time periods, or sample sizes supporting this observation.
There is no decrease in coding skills among new hires associated with GHC adoption.
Comparison of coding-skill indicators on LinkedIn profiles for new hires at GHC-adopting firms versus non-adopting firms; finding of no measurable decline in coding-skill measures.
The study is based on a repeated cross-sectional survey of licensed employees at a non-university research institution.
Authors' statement in the abstract: repeated cross-sectional survey among licensed employees of the research institution under study; methodological description given in the abstract.
The paper proposes a conceptual framework linking AI adoption to employability and role transformation, mediated by skill adaptation, continuous learning, and organizational readiness.
Author-proposed conceptual framework presented in the review paper (theoretical linkage based on literature synthesis).
Future research should strengthen cross-national comparisons, longitudinal tracking, and interdisciplinary collaboration to support development of a technology governance framework that balances efficiency with equity.
Author recommendation based on identified research gaps in the literature review (prescriptive/recommendation).
Existing research has clear gaps: limited evidence from developing-country contexts, insufficient attention to within-occupation heterogeneity, incomplete accounts of psychological mechanisms underlying AI anxiety, and a shortage of rigorous evaluations of reskilling policy effectiveness.
Author's assessment based on the reviewed literature identifying thematic gaps and methodological limitations (critical literature review).
The paper synthesizes sector-specific insights across manufacturing, information technology, healthcare, and finance to examine AI's influence on task automation, job augmentation, and skill requirements.
Descriptive claim about the scope of the review (sectors named in the abstract); no breakdown of sectoral evidence or counts provided in the abstract.
There is a lack of comparative sectoral assessments and standardized risk evaluation frameworks in the literature.
Identified research gap reported by the authors from their systematic review (no counts or formal gap-analysis metrics provided in the abstract).
A structured methodology (systematic review) was adopted to identify literature on AI-driven job transformation and associated employment risks using major academic databases.
Methodological statement in the paper claiming a systematic review approach (specific databases, search terms, inclusion/exclusion criteria and number of studies are not reported in the abstract).
We evaluate structural validity, semantic alignment, reproducibility, and refinement effort to characterize authoring scalability.
Reported evaluation dimensions in the paper; implies empirical assessments were performed along these axes (details not provided in the abstract).
Hierarchical regression analysis and bootstrapping methods were employed for empirical testing.
Methods section explicitly states use of hierarchical regression and bootstrapping for empirical tests on the survey data.
The study used a three-wave longitudinal survey design collecting matched data from 497 employees.
Methods section states a three-wave longitudinal survey and reports matched data from 497 employees.