Evidence (2608 claims)
Claims by topic:
- Adoption: 7395 claims
- Productivity: 6507 claims
- Governance: 5877 claims
- Human-AI Collaboration: 5157 claims
- Innovation: 3492 claims
- Org Design: 3470 claims
- Labor Markets: 3224 claims
- Skills & Training: 2608 claims
- Inequality: 1835 claims
Evidence Matrix
Claim counts by outcome category and direction of finding.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 609 | 159 | 77 | 736 | 1615 |
| Governance & Regulation | 664 | 329 | 160 | 99 | 1273 |
| Organizational Efficiency | 624 | 143 | 105 | 70 | 949 |
| Technology Adoption Rate | 502 | 176 | 98 | 78 | 861 |
| Research Productivity | 348 | 109 | 48 | 322 | 836 |
| Output Quality | 391 | 120 | 44 | 40 | 595 |
| Firm Productivity | 385 | 46 | 85 | 17 | 539 |
| Decision Quality | 275 | 143 | 62 | 34 | 521 |
| AI Safety & Ethics | 183 | 241 | 59 | 30 | 517 |
| Market Structure | 152 | 154 | 109 | 20 | 440 |
| Task Allocation | 158 | 50 | 56 | 26 | 295 |
| Innovation Output | 178 | 23 | 38 | 17 | 257 |
| Skill Acquisition | 137 | 52 | 50 | 13 | 252 |
| Fiscal & Macroeconomic | 120 | 64 | 38 | 23 | 252 |
| Employment Level | 93 | 46 | 96 | 12 | 249 |
| Firm Revenue | 130 | 43 | 26 | 3 | 202 |
| Consumer Welfare | 99 | 51 | 40 | 11 | 201 |
| Inequality Measures | 36 | 105 | 40 | 6 | 187 |
| Task Completion Time | 134 | 18 | 6 | 5 | 163 |
| Worker Satisfaction | 79 | 54 | 16 | 11 | 160 |
| Error Rate | 64 | 78 | 8 | 1 | 151 |
| Regulatory Compliance | 69 | 64 | 14 | 3 | 150 |
| Training Effectiveness | 81 | 15 | 13 | 18 | 129 |
| Wages & Compensation | 70 | 25 | 22 | 6 | 123 |
| Team Performance | 74 | 16 | 21 | 9 | 121 |
| Automation Exposure | 41 | 48 | 19 | 9 | 120 |
| Job Displacement | 11 | 71 | 16 | 1 | 99 |
| Developer Productivity | 71 | 14 | 9 | 3 | 98 |
| Hiring & Recruitment | 49 | 7 | 8 | 3 | 67 |
| Social Protection | 26 | 14 | 8 | 2 | 50 |
| Creative Output | 26 | 14 | 6 | 2 | 49 |
| Skill Obsolescence | 5 | 37 | 5 | 1 | 48 |
| Labor Share of Income | 12 | 13 | 12 | — | 37 |
| Worker Turnover | 11 | 12 | — | 3 | 26 |
| Industry | — | — | — | 1 | 1 |
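The matrix is easiest to read as direction shares within each outcome. A minimal Python sketch of that reading, with counts copied from two rows above; note that the printed Total column sometimes exceeds the sum of the four shown directions, so this sketch uses the sum of the shown counts as its denominator:

```python
# Illustrative reading of the Evidence Matrix: convert a row's raw counts
# into direction shares. The denominator is the sum of the four shown
# directions, which can differ slightly from the printed Total column.
def direction_shares(positive, negative, mixed, null):
    total = positive + negative + mixed + null
    return {
        "positive": positive / total,
        "negative": negative / total,
        "mixed": mixed / total,
        "null": null / total,
    }

# "Firm Productivity" row: 385 / 46 / 85 / 17 -- mostly positive findings
firm = direction_shares(385, 46, 85, 17)

# "Inequality Measures" row: 36 / 105 / 40 / 6 -- mostly negative findings
ineq = direction_shares(36, 105, 40, 6)
```

Comparing shares rather than raw counts makes outcomes with very different claim volumes (e.g., Firm Productivity vs. Inequality Measures) directly comparable.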
Active filter: Skills & Training
A small number of AI corporations have unprecedented power.
The report's introductory chapter highlights the theme of concentrated corporate power in AI; this is asserted as an observational framing claim rather than derived from an empirical sample presented in the introduction.
WIOA is not well-equipped to support large-scale, cross-industry labor transitions.
Low observed incidence of cross-industry occupational transitions and limited shifts into less automation-exposed occupations in the WIOA data (2017-2023) lead authors to conclude the program is poorly suited for large-scale cross-industry reallocation.
A substantial portion of WIOA participants simply return to their prior field after program participation.
Descriptive and outcome analyses on the WIOA participation records (2017-2023) showing many participants re-enter the same occupation/industry rather than transitioning to different occupations.
WIOA rarely shifts workers into less automation-exposed work.
Analysis of WIOA administrative records (2017-2023) using a newly introduced 'Retrainability Index' that decomposes outcomes into post-intervention wage recovery and shifts in routine task intensity (RTI). The paper reports low incidence of downward RTI (movement into less automation-exposed occupations) among participants.
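The excerpt describes the Retrainability Index only as a decomposition into post-intervention wage recovery and shifts in routine task intensity (RTI), without giving the paper's formula. A minimal sketch of how such a decomposition might be operationalized — the function names, inputs, and the wage-ratio scoring below are illustrative assumptions, not the paper's actual method:

```python
# Hypothetical sketch of a 'Retrainability Index'-style decomposition.
# The paper's actual formula is not given in the excerpt; the field
# names and the scoring rules below are illustrative assumptions.
def retrainability_components(wage_before, wage_after, rti_before, rti_after):
    """Return the two components the text describes: wage recovery
    (post/pre earnings ratio) and the RTI shift, where a negative
    shift means movement into less automation-exposed work."""
    wage_recovery = wage_after / wage_before
    rti_shift = rti_after - rti_before  # < 0 means lower routine intensity
    return wage_recovery, rti_shift

def moved_to_less_exposed(rti_before, rti_after):
    # "Downward RTI": participant ends in a less routine-intensive occupation
    return rti_after < rti_before

# Example: a participant who recovers 95% of prior wages but returns to
# an occupation with the same routine task intensity (no downward RTI)
rec, shift = retrainability_components(50_000, 47_500, 0.8, 0.8)
```

Under this framing, the paper's reported pattern — many participants recover wages but show no downward RTI shift — corresponds to the second component staying at or above zero for most of the sample.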
Frontier software engineering agents have saturated short-horizon benchmarks while regressing on the work that constitutes senior engineering: long-horizon, multi-engineer, ambiguous-specification deliverables.
Position asserted in the paper based on literature/benchmark trends and authors' field observations; no original empirical dataset or quantified analysis provided in the paper text excerpt.
Disparities may lead to AI bias and governance challenges that potentially leave the poorest communities excluded from the Fourth Industrial Revolution.
Paper lists AI bias and governance challenges as potential consequences of uneven AI development; presented as conceptual/ethical/political risks without empirical quantification in the excerpt.
These disparities risk causing economic isolation and social inequality.
Qualitative claim in the paper listing potential socio-economic risks of uneven AI adoption; no supporting empirical estimates in the excerpt.
These disparities carry the risk of a deepening digital divide.
Stated as a consequence/risk in the paper; presented qualitatively without empirical quantification in the excerpt.
Projections indicate that without additional measures, these disparities are likely to increase.
Paper reports forward-looking projections or scenario analysis (methods, assumptions, and quantitative projection details not given in the excerpt).
Low-income regions (in particular parts of Africa and South Asia) lag significantly behind in both education and access to digital technologies.
Statement in the paper based on comparative assessment of education levels and digital access across regions; the excerpt provides no numeric data or described sample.
Novices more often experience invisible failures: conversations that appear to end successfully but in fact miss the mark.
Annotation-based comparison in the 27K WildChat transcript sample indicating higher rates of 'invisible' failures (apparent successes that are actually incorrect or insufficient) among novice users.
Fluent users experience more failures than novices.
Quantitative comparison of failure occurrences across user-fluency strata in the 27K annotated transcript sample from WildChat-4.8M.
Workers acquire skills through generative AI tools but lack credible ways to signal or validate these skills in competitive freelance markets (a structural challenge the paper terms 'invisible competencies').
Reported finding and conceptual contribution based on the paper's mixed-methods study (survey + semi-structured interviews).
There is a shift from learning as growth to learning as survival, where upskilling is oriented toward immediate market viability rather than long-term development.
Reported thematic finding from the paper's interviews and survey of freelance knowledge workers.
Freelancers do not treat generative AI as their primary learning resource due to inconsistency, lack of contextual relevance, and verification overhead.
Reported finding from the paper's mixed-methods study (survey + semi-structured interviews with freelance knowledge workers).
Freelance workers must continually acquire new skills to remain competitive in online labor markets, yet they lack the organizational training, mentorship, and infrastructure available to traditional employees.
Framing statement in the paper's introduction / literature review (not reported as an empirical result from this study).
Suppression bias is the systematic suppression of correct-but-difficult recommendations when clinician capability falls below the execution threshold.
Definition and characterization of a proposed failure mode provided in the paper (conceptual/theoretical).
Obstacles exist for healthcare workers in rural areas that limit the benefits of technology.
Review conclusion noting persistent obstacles for rural healthcare workers drawn from the literature; synthesis of qualitative/quantitative sources (no sample size in excerpt).
Indian healthcare faces barriers to technological integration such as financial issues, poor infrastructure, and regulatory problems.
Review-identified barriers drawn from the literature (qualitative and quantitative studies summarized by the authors); no aggregate sample size reported in the excerpt.
The marginal gains from genAI came at the high cost of recruiter deskilling, a trend that jeopardizes meaningful oversight of decision-making.
Qualitative interview evidence (n=22) where participants described loss of skills/deskilling associated with genAI use and concerns about oversight.
The decision of whether or not to adopt genAI was often outside recruiters' control, with many feeling compelled to adopt due to directives from higher-ups in their business.
Reports from interviewed recruiters (n=22) indicating organizational pressure and top-down calls to integrate AI.
Recruiters believe they have final authority across the recruiting pipeline, but genAI has become an invisible architect shaping the foundational information used for evaluation (e.g., defining a job, determining what counts as a good interview performance).
Qualitative findings from interviews with 22 recruiting professionals describing perceived authority versus the influence of genAI on informational inputs.
GenAI subtly influences control over everyday recruiting workflows and individual hiring decisions.
Qualitative evidence from semi-structured interviews with 22 recruiting professionals (n=22).
The research also identifies policy loopholes and unequal AI preparedness on the continent.
Findings from the paper's systematic review highlighting gaps in policy frameworks and uneven preparedness across Sub‑Saharan African countries; no country‑level counts or indices provided in the summary.
Results indicate rising job displacement, industrial change, and inequality.
Aggregate findings reported from the systematic review pointing to increases in job displacement, structural industrial change, and inequality across studies; no aggregated numerical magnitudes provided in the summary.
They are a threat to semi- and unskilled jobs, particularly in manufacturing.
Conclusion from the systematic review synthesizing studies on automation risk to semi- and unskilled positions, especially in manufacturing; no numerical risk estimate provided in the summary.
Vulnerable populations—including low-skill workers, aging labour forces, and developing economies—are especially affected by AI-driven changes.
Abstract highlights special attention to vulnerable populations in the review and asserts differential impacts; no specific empirical estimates or sample sizes provided in abstract.
AI displaces routine cognitive and manual tasks.
Explicit finding reported in abstract based on the paper's systematic review of empirical studies (no individual study sample sizes or quantitative estimates provided in abstract).
The 2026 Amazon outages illustrate how 'mechanized convergence' (homogenization of code/engineering practices via AI) leads to systemic fragility.
Case study analysis using the 2026 Amazon outages as a single illustrative example; implies qualitative examination of that event.
Recursive training on synthetic code threatens to homogenize the global software reservoir, diminishing the variance required for robust engineering.
Theoretical claim about dataset/model feedback loops; no empirical quantification provided in the text excerpt (argumentative risk assessment).
This epistemological debt erodes the mental models essential for root-cause analysis, widening the gap between system complexity and human comprehension.
Argumentative/theoretical claim supported by reasoning in the paper; no quantified measurement of mental-model erosion reported.
Substituting logical derivation with passive AI verification creates an 'Epistemological Debt' — a hidden carrying cost incurred by engineers.
Theoretical/conceptual assertion within the paper; argued qualitatively rather than demonstrated with controlled empirical data.
The integration of Large Language Models (LLMs) into the software development lifecycle (SDLC) masks a critical socio-technical failure the authors term 'Cognitive-Systemic Collapse.'
Conceptual/theoretical claim presented in the paper's argumentation; no empirical sample or quantitative study reported for this specific naming claim.
There is limited but suggestive early evidence of labor market disruption from AI/LLMs.
Paper summarizes emerging empirical research indicating early signs of disruption; the abstract characterizes the evidence as limited and suggestive without presenting numeric estimates or sample sizes.
Certain occupations face the greatest risk from AI-driven automation; the article examines which occupations those are.
Paper claims to examine occupation-level risk using synthesized empirical studies; the abstract does not list which occupations or quantitative risk estimates.
There is a gap between theoretical automation potential and observed real-world implementation of AI/LLMs.
Synthesis of recent empirical studies that compare task-level exposure metrics with employment and usage data; no specific sample sizes or numeric estimates provided in the abstract.
Privacy law encounters difficulties in addressing large-scale data processing and meaningful consent within employment relationships; anti-discrimination law faces evidentiary challenges in identifying algorithmic bias; doctrines of responsibility are expanding to encompass duties of oversight, verification, and explainability.
Legal analysis highlighting specific doctrinal challenges and emergent duties; no empirical tests or quantified measures included in the excerpt.
Traditional legal categories (privacy, consent, non-discrimination, employer responsibility) continue to apply formally but are increasingly strained in substance by the scale of data processing, opacity of AI systems, and their degree of autonomy.
Doctrinal critique and conceptual analysis provided in the paper; no empirical quantification of the degree of strain is supplied in the excerpt.
The decentralized and sector-specific regulatory approach reflects technological neutrality but exposes significant regulatory gaps, particularly with respect to transparency, accountability, and the protection of workers' rights.
Normative/legal analysis in the paper identifying gaps in a decentralized regulatory regime; specific case studies or empirical measures of gaps not provided in the excerpt.
Israel has not enacted a comprehensive statutory framework specifically governing the use of AI in the field of employment; regulation is implemented through a hybrid model of indirect application of existing legal doctrines (primarily privacy and labor law), soft-law instruments, collective bargaining agreements, and internal organizational and professional regulation.
Doctrinal and regulatory analysis reported in the paper describing Israel's legal/regulatory landscape; no legislative text counts or timeline analysis provided in the excerpt.
At the structural and macroeconomic level, artificial intelligence is reshaping the balance of power within the labor market and contributes to a gradual shift toward employer-driven dynamics.
Author's macroeconomic and structural analysis as presented in the paper; no specific datasets, methods, or sample sizes are reported in the excerpt.
The supply of AI-literate workers attenuates wage inequality effects.
Presented in the article as a distributional mechanism informed by synthesized theoretical and empirical findings; no concrete empirical methods or sample sizes are provided in the excerpt.
Ethical concerns—such as transparency, explainability, psychological effects, and responsible AI governance—are critical factors influencing employability outcomes.
Review synthesis highlighting ethical issues from empirical and industry literature as influential on employability outcomes.
There are significant AI adoption challenges in education and industry that affect employability and role transformation.
Synthesized evidence from industry reports and empirical studies discussed in the review highlighting barriers to adoption in education and industry.
Algorithmic management and monitoring have reduced employees’ autonomy and perceived work meaningfulness, contributing to 'AI anxiety' characterised by concerns about job loss, skill obsolescence, and diminished control.
Qualitative studies, survey evidence, and theoretical literature reviewed that document impacts of algorithmic management on autonomy, meaningfulness, and worker anxiety (mixed-methods literature).
Automation has intensified income inequality between high-skilled and low-skilled workers.
Synthesis of empirical literature linking automation adoption to widening wage and income gaps across skill groups (literature review).
Displacement effects have extended from manufacturing into cognitive roles such as clerical work and customer service.
Review of empirical studies documenting automation/substitution effects in cognitive, clerical, and customer-service roles (literature synthesis).
Automation has put downward pressure on wages.
Cited empirical studies and wage analyses in the reviewed literature indicating wage suppression associated with automation adoption (literature review).
AI and robotics have led to contractions in low-skilled occupations.
Synthesis of empirical literature reporting occupational contractions in low-skilled jobs following automation adoption (literature review).
Extensive empirical evidence shows that AI and robotics can substitute for rule-based, codifiable routine tasks.
Review cites extensive empirical studies demonstrating substitution of rule-based, codifiable routine tasks by AI/robotics (literature synthesis).