Evidence (2480 claims)
- Adoption: 5227 claims
- Productivity: 4503 claims
- Governance: 4100 claims
- Human-AI Collaboration: 3062 claims
- Labor Markets: 2480 claims
- Innovation: 2320 claims
- Org Design: 2305 claims
- Skills & Training: 1920 claims
- Inequality: 1311 claims
Evidence Matrix
Claim counts by outcome category and direction of finding. (For some outcomes the four listed directions sum to less than the Total column, which suggests additional claims with an unlisted or unspecified direction.)
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 373 | 105 | 59 | 439 | 984 |
| Governance & Regulation | 366 | 172 | 115 | 55 | 718 |
| Research Productivity | 237 | 95 | 34 | 294 | 664 |
| Organizational Efficiency | 364 | 82 | 62 | 34 | 545 |
| Technology Adoption Rate | 293 | 118 | 66 | 30 | 511 |
| Firm Productivity | 274 | 33 | 68 | 10 | 390 |
| AI Safety & Ethics | 117 | 178 | 44 | 24 | 365 |
| Output Quality | 231 | 61 | 23 | 25 | 340 |
| Market Structure | 107 | 123 | 85 | 14 | 334 |
| Decision Quality | 158 | 68 | 33 | 17 | 279 |
| Fiscal & Macroeconomic | 75 | 52 | 32 | 21 | 187 |
| Employment Level | 70 | 32 | 74 | 8 | 186 |
| Skill Acquisition | 88 | 31 | 38 | 9 | 166 |
| Firm Revenue | 96 | 34 | 22 | — | 152 |
| Innovation Output | 105 | 12 | 21 | 11 | 150 |
| Consumer Welfare | 68 | 29 | 35 | 7 | 139 |
| Regulatory Compliance | 52 | 61 | 13 | 3 | 129 |
| Inequality Measures | 24 | 68 | 31 | 4 | 127 |
| Task Allocation | 71 | 10 | 29 | 6 | 116 |
| Worker Satisfaction | 46 | 38 | 12 | 9 | 105 |
| Error Rate | 42 | 47 | 6 | — | 95 |
| Training Effectiveness | 55 | 12 | 11 | 16 | 94 |
| Task Completion Time | 76 | 5 | 4 | 2 | 87 |
| Wages & Compensation | 46 | 13 | 19 | 5 | 83 |
| Team Performance | 44 | 9 | 15 | 7 | 76 |
| Hiring & Recruitment | 39 | 4 | 6 | 3 | 52 |
| Automation Exposure | 18 | 16 | 9 | 5 | 48 |
| Job Displacement | 5 | 29 | 12 | — | 46 |
| Social Protection | 19 | 8 | 6 | 1 | 34 |
| Developer Productivity | 27 | 2 | 3 | 1 | 33 |
| Worker Turnover | 10 | 12 | — | 3 | 25 |
| Creative Output | 15 | 5 | 3 | 1 | 24 |
| Skill Obsolescence | 3 | 18 | 2 | — | 23 |
| Labor Share of Income | 8 | 4 | 9 | — | 21 |
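For readers who want direction shares rather than raw counts, each row of the matrix can be summarized directly. A minimal Python sketch, with a few values transcribed from the table above and "—" cells read as zero:

```python
# Direction shares for selected outcomes, transcribed from the matrix above.
# "—" cells are treated as 0; shares use the reported Total column.
rows = {
    "Firm Productivity":    (274, 33, 68, 10, 390),
    "Job Displacement":     (5, 29, 12, 0, 46),
    "Task Completion Time": (76, 5, 4, 2, 87),
}

for outcome, (pos, neg, mixed, null, total) in rows.items():
    print(f"{outcome}: {pos / total:.0%} positive, {neg / total:.0%} negative")
```

Run on the rows above, this makes the contrast visible at a glance: Task Completion Time findings are overwhelmingly positive, while Job Displacement findings skew heavily negative.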
Labor Markets
Current national and regional approaches to AI governance are often fragmented, focusing narrowly on industrial competition, piecemeal regulation, or abstract ethical principles.
Asserted in abstract; implies a review/comparison of existing policies but the abstract does not detail methods or sample beyond later comparative analysis.
AI deepens inequality.
Asserted in abstract; the abstract does not state empirical methods or data backing this claim.
AI's current trajectory exacerbates labor market polarization.
Asserted in abstract; no study design or empirical sample specified in the abstract.
AI adoption increases psychosocial pressure on workers.
Themes surfaced via content analysis of recent peer-reviewed literature on AI and workforce wellbeing within the qualitative library research (specific studies not listed).
AI adoption contributes to inequality (uneven distribution of benefits and opportunities).
Synthesis of arguments and empirical findings from accredited journals included in the literature-based study (sources not enumerated).
AI leads to skill mismatch between workers and emerging job requirements.
Identified through thematic analysis of recent literature on workforce dynamics and skills in the qualitative review (specific article count not reported).
AI causes job displacement.
Recurring finding across reviewed accredited journal articles summarized via thematic content analysis in the library research (no quantitative sample provided).
Employers that recognize their own size, and hence their wage-setting power, may act strategically when hiring and setting wages, generating misallocation and harming workers.
Theoretical argument made by the authors; no micro-econometric estimates, experiments, or sample descriptions are provided in the excerpt to substantiate degree or prevalence of strategic behavior.
This micro approach is at odds with the reality of the labor markets in which monopsony potentially matters most.
Interpretive claim by the authors contrasting model assumptions with observed market structure; no empirical data, sample size, or specific markets cited in the excerpt.
Discussions among faculty on major higher-education subreddits enact negotiations over surveillance regimes, accountability structures, and academic precarity in real time.
Interpretive finding from thematic analysis of Reddit threads: posts and replies about AI-related classroom issues (e.g., cheating, assessment, policy) show active contention over surveillance and accountability practices and concerns about job security and precarity. (Specific thread counts, timestamps, and coder reliability are not provided in the excerpt.)
Findings reveal that discussions of student cheating, AI policies, writing practices, and faculty labor are not merely technical debates but sites where surveillance regimes, accountability structures, and academic precarity are negotiated in real time.
Empirical claim based on thematic content analysis of Reddit discussions that flagged threads about student cheating, AI policy, writing practices, and faculty labor and interpreted them as spaces where concerns about surveillance, accountability, and precarity are articulated and contested. (Specific examples, counts, and illustrative quotes not included in the excerpt.)
AI intensifies asymmetries of power and creates 'algorithmic hierarchies' that reinforce digital dependence, especially in the Global South.
Analytic finding derived from document review and comparative analysis; no quantitative measures or empirical case sample reported in the text to substantiate scale or prevalence.
Reductions or cuts to governmental translation services intensify employment gaps, increase dependence on informal translation, and exacerbate systemic injustices for LEP immigrants.
Mixed-methods evidence from survey responses (n=150) indicating outcomes after policy reductions, and thematic findings from employer (n=50) and provider (n=20) interviews documenting increased informal translation reliance and adverse labor outcomes.
AI integration into resort-to-force decision-making organizations raises important concerns.
Conceptual claim discussed by the author; the paper does not present empirical data, incident analyses, or quantified risk assessments supporting this claim within the provided excerpt.
Governing the complexity introduced by military AI integration is urgent but currently lacks clear precedents.
Author's claim grounded in argumentation and review-style reasoning; no systematic review or empirical mapping of precedents is provided in the text.
We can expect increased organizational complexity in military decision-making institutions as AI proliferates.
Theoretical inference presented by the author; no empirical methods or measurements (e.g., complexity metrics, case studies, or sample sizes) are reported.
These findings challenge optimistic narratives of seamless workforce adaptation and demonstrate that emerging economies require active pathway creation, not passive skill matching.
Synthesis and interpretation of the quantitative results from the knowledge graph analysis (percent at risk, percent with viable pathways, number of feasible transitions, skill-leverage findings) used to draw policy implications about workforce adaptation strategies.
The remaining 75.6% of at-risk workers face a structural mobility barrier requiring comprehensive reskilling rather than incremental upskilling.
Complement of the 24.4% with viable pathways (i.e., 100% - 24.4% = 75.6%) derived from the knowledge-graph transition analysis; interpretation that lacking the viability thresholds implies need for comprehensive reskilling.
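The 75.6% figure is a simple complement of the reported 24.4%; as a one-line check (percentages as reported in the claim above):

```python
# Complement arithmetic behind the "structural mobility barrier" share.
viable = 24.4                      # % of at-risk workers with viable pathways, as reported
barrier = round(100 - viable, 1)   # remaining share facing the barrier
print(barrier)                     # 75.6
```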
Raising fertility actually worsens the fiscal picture in the medium term, since it takes decades for newborns to grow up and join the workforce.
Model scenario simulations that raise fertility rates and project fiscal outcomes over time, showing medium-term deterioration due to added dependents before working-age entry.
These demographic trends squeeze public finances from both sides—fewer people paying taxes and more people drawing on pensions and healthcare.
Conceptual linkage implemented in the integrated system dynamics model that couples demographic cohorts to tax revenue and age-linked public spending (pensions, healthcare).
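The mechanism these two claims share (extra dependents arrive decades before extra taxpayers) can be illustrated with a toy cohort sketch. All parameters below are hypothetical stand-ins, not the reviewed model's calibration:

```python
# Toy overlapping-cohorts sketch: a one-time fertility boost adds dependents
# immediately but adds workers only after ENTRY_AGE years, so the net fiscal
# balance worsens in the medium term before improving.
# All parameters are illustrative, not taken from the reviewed model.
ENTRY_AGE = 20        # hypothetical: years until a newborn cohort starts working
TAX_PER_WORKER = 1.0  # stylized annual tax contribution per worker
COST_PER_CHILD = 0.6  # stylized annual schooling/health cost per dependent child

boost = 100  # extra births in year 0
for year in (5, 15, 25, 35):
    if year < ENTRY_AGE:
        net = -boost * COST_PER_CHILD  # cohort still dependent: pure fiscal cost
    else:
        net = boost * TAX_PER_WORKER   # cohort now working: net contributors
    print(f"year {year:2d}: net fiscal effect of the boost = {net:+.1f}")
```

The sign flip at `ENTRY_AGE` is the whole point: for the first two decades the boost only adds spending, which is why the simulations report medium-term fiscal deterioration.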
Current research in this area has a primary focus on methodology and computer science rather than applied occupational health questions.
Authors' synthesis from the review of existing studies (the paper reports that reviewed studies emphasize methodological and computer science aspects; exact counts or proportions not provided in the excerpt).
The application of machine learning in occupational mental health research remains in its preliminary stages.
Claim stated by the paper based on the authors' literature review of the field (review methodology referenced in the paper; number of studies or specific inclusion criteria not provided in the provided excerpt).
The shadow digital economy poses risks to national security.
Argumentative discussion and reviewed examples linking SDE activities to national security risks (method: conceptual/legal/institutional analysis; no national-security incident count or quantified risk assessment provided).
SDE activity extends beyond direct financial loss, eroding consumer trust and damaging brand reputation through data breaches, fraud, and counterfeiting.
Claim is supported by literature review and illustrative examples/case discussions in the paper (methods: qualitative synthesis; no aggregated empirical measurement of trust or reputational loss reported).
Institutional traps that sustain shadow employment exist and the SDE perpetuates informal and illicit labor arrangements.
Analytic argument and institutional analysis presented in the paper identifying mechanisms ('institutional traps'); the evidence appears to be conceptual, drawn from reviewed literature and examples rather than from empirical longitudinal data.
The shadow digital economy (SDE) is a growing phenomenon amid digital transformation and rising information costs.
Framing and literature review presented in the paper; descriptive synthesis of prior definitions and trends (no empirical sample size reported).
Many core university functions can now be achieved through AI-powered alternatives, potentially rendering conventional models obsolete for many learners.
Analytical assessment by the authors, without reported empirical testing or quantified methodology; based on review of AI capabilities and extrapolation.
Universities' core value proposition is challenged and potentially displaced by AI technologies as they alter how knowledge is accessed, created, and validated.
Authors' analytical argument drawing on technological, economic, and social drivers; presented as synthesis rather than empirical proof (no sample size or empirical method reported).
Robotics reduce labor dependence in greenhouse operations.
Study conclusions drawn from modeled impacts on employment composition and labor requirements when comparing robotics investments to traditional greenhouse investment scenarios (input–output (I–O) modeling, IMPLAN 2022).
Traditional IT service hiring will be displaced by expansion of product-focused roles and Global Capability Centres (GCCs).
Synthesis of industry reports and workforce data indicating shifts in hiring patterns; the abstract does not report sample sizes or exact metrics.
The scalability of the Photo Big 5 enables new academic insights into the role of personality in labor markets, but its growing use in industry screening raises important ethical concerns regarding statistical discrimination and individual autonomy.
Argument in the paper based on the methodological scalability (AI + large LinkedIn microdata) and observed predictive links to labor-market outcomes; authors raise normative concerns about industry adoption and implications for discrimination and autonomy.
What remains needed is rigorous advice to policymakers concerned about rapid increases in labor churn, scientific development, labor–capital shifts, or existential risk.
Normative conclusion drawn by the author from gaps identified in the seven-book review (qualitative assessment of unmet policy-relevant analysis); sample = 7 books.
The reviewed works offer little guidance regarding the transformative scenarios considered plausible by many AI researchers.
Author's evaluative judgment based on the content and emphases of the seven books (qualitative gap analysis); sample = 7 books.
AI heightens job insecurity, particularly in organisations lacking structured reskilling programs.
Stated finding derived from the mixed-method study and Scopus database analysis; framed with a conditional modifier pointing to organisations without structured reskilling programs. (Summary does not provide sample size, effect sizes, or statistical significance.)
Reliance on H-2A has limitations, including requirements to provide housing and training and higher mandated wages compared with local seasonal help.
Paper's qualitative assessment of H-2A program constraints; no empirical measures or comparative wage data provided in the excerpt.
Declining US birth rates may not alleviate the nursery labor problem in the coming decades.
Projection/interpretation based on demographic trend (declining birth rates) noted in the paper; no demographic model or quantitative projection provided in the excerpt.
Despite high overall employment (80% for ages 25–54), nurseries reported being unable to hire new workers because of high wage demands and a shortage of qualified applicants.
Reported responses from nurseries (survey/industry responses) referenced in the paper; sample size and survey details not provided in the excerpt.
The US nursery industry faces a labor deficit.
Statement in the paper based on industry reporting; specific methodology or sample size not provided in the excerpt.
Selection of a human-LLM archetype brings important risks and considerations for the designers of human-AI decision-making systems.
Analytic discussion and synthesis of evaluation results and literature review; tradeoffs surfaced in the paper (e.g., decision control, social hierarchies, cognitive forcing strategies, information requirements).
Gendered perceptions of AI's social and ethical consequences, rather than access or capability, are the primary drivers of unequal GenAI adoption.
Comparative model results from the 2023–2024 nationally representative UK survey showing perceptions (societal-risk index) have greater explanatory/predictive power than measures of access (e.g., device/internet access) or capability (digital literacy, education).
Intersectional analyses show the largest gender disparities in GenAI use arise among younger, digitally fluent individuals with high societal risk concerns, where gender gaps in personal use exceed 45 percentage points.
Subgroup (intersectional) analysis of the nationally representative 2023–2024 UK survey data stratified by age, digital fluency, and societal-risk concern levels; reported gender gap >45 percentage points in specified subgroup.
The societal-risk concerns index ranks among the strongest predictors of GenAI adoption for women across all age groups, surpassing digital literacy and education for young women.
Multivariable models and predictor ranking using the 2023–2024 UK survey data showing relative predictive strength of the concerns index versus measures of digital literacy and education, with subgroup (age × gender) comparisons.
The societal-risk concerns index explains between 9 and 18 percent of the variation in GenAI adoption.
Regression/statistical models using the composite concerns index as a predictor of GenAI adoption in the nationally representative 2023–2024 UK survey; reported explained variation (9–18%).
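As a refresher on what "explains 9 to 18 percent of the variation" means, here is a self-contained sketch of R² for a single predictor, using synthetic stand-in data (the survey microdata are not reproduced here, so every number below is made up):

```python
# Illustration of "share of variation explained" (R^2) with synthetic data;
# the concerns index and adoption outcome below are fabricated stand-ins.
import random

random.seed(0)
n = 1000
concerns = [random.gauss(0, 1) for _ in range(n)]              # stand-in concerns index
adoption = [-0.35 * c + random.gauss(0, 1) for c in concerns]  # higher concern, lower adoption

# For a single predictor, R^2 equals the squared Pearson correlation.
mx, my = sum(concerns) / n, sum(adoption) / n
cov = sum((x - mx) * (y - my) for x, y in zip(concerns, adoption))
vx = sum((x - mx) ** 2 for x in concerns)
vy = sum((y - my) ** 2 for y in adoption)
r2 = cov * cov / (vx * vy)
print(f"R^2 = {r2:.2f}")  # in expectation ~ 0.35**2 / (1 + 0.35**2), about 0.11
```

With the assumed slope of -0.35 and unit noise, the population R² is about 0.11, i.e. squarely inside the 9–18% range the study reports for its concerns index.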
Women adopt GenAI less often than men because they perceive its societal risks differently.
Statistical analysis linking a constructed composite societal-risk concerns index (mental health, privacy, climate impact, labor market disruption) to GenAI adoption, using the UK 2023–2024 survey; models compare explanatory power of perceptions versus access/capability variables.
Women adopt GenAI substantially less often than men.
Analysis of the 2023–2024 nationally representative UK survey data comparing personal use/adoption rates by gender.
Across survey and experimental evidence, perceptions that AI will replace labor—regardless of actual labor-market outcomes—may decrease democratic legitimacy and public engagement in shaping AI's future.
Synthesis of correlational findings from the large European survey (N = 37,079) and causal evidence from two preregistered experiments (UK N = 1,202; US N = 1,200).
Controlling for technology-related, political, and sociodemographic factors, perceiving AI as labor-replacing (vs. labor-creating) is associated with lower political engagement with technology.
Multivariable regression analyses on the large European survey (N = 37,079) with controls for technology-related, political, and sociodemographic factors.
Controlling for technology-related, political, and sociodemographic factors, perceiving AI as labor-replacing (vs. labor-creating) is associated with lower satisfaction with democracy.
Multivariable regression analyses on the same large survey (N = 37,079) including controls for technology-related attitudes, political variables, and sociodemographic covariates.
There are ethical concerns surrounding AI and automation including algorithmic decision-making, workforce exclusion, and inequality in access to reskilling opportunities.
Raised as an ethical analysis within the paper's conceptual framework; no empirical study, surveys, or quantified measures of these ethical issues are reported in this paper.
AI is eliminating repetitive (routine) jobs.
Stated as part of the paper's argument about AI's dual impact; supported by conceptual analysis rather than new empirical evidence in this manuscript (no sample size or empirical method reported).