Evidence (1835 claims)

Claim counts by topic filter:

| Topic | Claims |
|---|---|
| Adoption | 7395 |
| Productivity | 6507 |
| Governance | 5877 |
| Human-AI Collaboration | 5157 |
| Innovation | 3492 |
| Org Design | 3470 |
| Labor Markets | 3224 |
| Skills & Training | 2608 |
| Inequality | 1835 |
Evidence Matrix
Claim counts by outcome category and direction of finding.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 609 | 159 | 77 | 736 | 1615 |
| Governance & Regulation | 664 | 329 | 160 | 99 | 1273 |
| Organizational Efficiency | 624 | 143 | 105 | 70 | 949 |
| Technology Adoption Rate | 502 | 176 | 98 | 78 | 861 |
| Research Productivity | 348 | 109 | 48 | 322 | 836 |
| Output Quality | 391 | 120 | 44 | 40 | 595 |
| Firm Productivity | 385 | 46 | 85 | 17 | 539 |
| Decision Quality | 275 | 143 | 62 | 34 | 521 |
| AI Safety & Ethics | 183 | 241 | 59 | 30 | 517 |
| Market Structure | 152 | 154 | 109 | 20 | 440 |
| Task Allocation | 158 | 50 | 56 | 26 | 295 |
| Innovation Output | 178 | 23 | 38 | 17 | 257 |
| Skill Acquisition | 137 | 52 | 50 | 13 | 252 |
| Fiscal & Macroeconomic | 120 | 64 | 38 | 23 | 252 |
| Employment Level | 93 | 46 | 96 | 12 | 249 |
| Firm Revenue | 130 | 43 | 26 | 3 | 202 |
| Consumer Welfare | 99 | 51 | 40 | 11 | 201 |
| Inequality Measures | 36 | 105 | 40 | 6 | 187 |
| Task Completion Time | 134 | 18 | 6 | 5 | 163 |
| Worker Satisfaction | 79 | 54 | 16 | 11 | 160 |
| Error Rate | 64 | 78 | 8 | 1 | 151 |
| Regulatory Compliance | 69 | 64 | 14 | 3 | 150 |
| Training Effectiveness | 81 | 15 | 13 | 18 | 129 |
| Wages & Compensation | 70 | 25 | 22 | 6 | 123 |
| Team Performance | 74 | 16 | 21 | 9 | 121 |
| Automation Exposure | 41 | 48 | 19 | 9 | 120 |
| Job Displacement | 11 | 71 | 16 | 1 | 99 |
| Developer Productivity | 71 | 14 | 9 | 3 | 98 |
| Hiring & Recruitment | 49 | 7 | 8 | 3 | 67 |
| Social Protection | 26 | 14 | 8 | 2 | 50 |
| Creative Output | 26 | 14 | 6 | 2 | 49 |
| Skill Obsolescence | 5 | 37 | 5 | 1 | 48 |
| Labor Share of Income | 12 | 13 | 12 | — | 37 |
| Worker Turnover | 11 | 12 | — | 3 | 26 |
| Industry | — | — | — | 1 | 1 |
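The matrix above is essentially a pivot of per-claim records over outcome category and direction of finding. A minimal sketch of how such a matrix could be rebuilt from a tidy claims list, assuming hypothetical column names `outcome` and `direction` and a few toy rows (not the actual underlying data):

```python
import pandas as pd

# Hypothetical tidy claim records: one row per extracted claim.
claims = pd.DataFrame([
    {"outcome": "Task Completion Time", "direction": "Positive"},
    {"outcome": "Task Completion Time", "direction": "Negative"},
    {"outcome": "Error Rate", "direction": "Negative"},
    {"outcome": "Error Rate", "direction": "Mixed"},
    {"outcome": "Error Rate", "direction": "Negative"},
])

# Pivot to an outcome x direction count matrix, then add row totals
# and sort by total claim count, as in the table above.
matrix = pd.pivot_table(
    claims, index="outcome", columns="direction",
    aggfunc="size", fill_value=0,
)
matrix["Total"] = matrix.sum(axis=1)
matrix = matrix.sort_values("Total", ascending=False)
print(matrix)
```

Any direction absent from the data simply yields no column, so a production version would reindex the columns against the fixed list (Positive, Negative, Mixed, Null) before printing.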
Inequality
**Enforcing static fairness constraints may exacerbate long-run disparities.**
Statement referencing recent prior theoretical results and motivating literature; framed as background/motivation in the paper.

**With strong exposure of low-wealth, high-MPC households and concentrated ownership, privately chosen automation can be excessive even though it raises high-skilled labor income.**
Theoretical welfare/comparison analyses in the model with heterogeneous households (differing in wealth and marginal propensities to consume) and ownership concentration; shows private incentives lead to automation choices that are suboptimal from a social perspective under these parameter constellations.

**Automation reduces paid human labor.**
Model comparative statics in the same equilibrium framework showing substitution away from paid human labor as firms choose automation; result reported in the paper's static benchmark and general-equilibrium analysis.

**These dynamics may produce an asymmetric barbell-shaped structure of value capture in advanced economies: high-volume synthetic production controlled by owners of AI infrastructure at one pole, and scarce, high-status human labor valued for verified human presence at the other.**
Conceptual projection and economic argument in the paper (no empirical decomposition, distributional statistics, or sample reported in the excerpt).

**AI compresses the value of standardized middle-tier labor by making good-enough synthetic substitutes scalable at low marginal cost, hollowing out the middle of the skill distribution currently occupied by knowledge work.**
Conceptual/theoretical argument presented in the paper (no reported empirical sample, statistical analysis, or quantified experiment in the excerpt).

**AI development may reduce firms' labor income share.**
Further analysis reported in the paper linking firm-level AI development to reductions in the labor income share within firms.

**AI increases the firm-level skill premium by substituting for low-skilled labor.**
Mechanism analysis reported in the paper (firm-level regressions investigating labor composition / substitution effects following AI development).

**Disparities may lead to AI bias and governance challenges that leave the poorest communities excluded from the Fourth Industrial Revolution.**
Paper lists AI bias and governance challenges as potential consequences of uneven AI development; presented as conceptual/ethical/political risks without empirical quantification in the excerpt.

**These disparities risk causing economic isolation and social inequality.**
Qualitative claim in the paper listing potential socio-economic risks of uneven AI adoption; no supporting empirical estimates in the excerpt.

**These disparities carry the risk of a deepening digital divide.**
Stated as a consequence/risk in the paper; presented qualitatively without empirical quantification in the excerpt.

**Projections indicate that without additional measures, these disparities are likely to increase.**
Paper reports forward-looking projections or scenario analysis (methods, assumptions, and quantitative projection details not given in the excerpt).
**Low-income regions (in particular parts of Africa and South Asia) lag significantly behind in both education and access to digital technologies.**
Statement in the paper based on comparative assessment of education levels and digital access across regions; the excerpt provides no numeric data or described sample.

**Workers acquire skills through generative AI tools but lack credible ways to signal or validate these skills in competitive freelance markets (a structural challenge the paper terms 'invisible competencies').**
Reported finding and conceptual contribution based on the paper's mixed-methods study (survey + semi-structured interviews).

**There is a shift from learning as growth to learning as survival, where upskilling is oriented toward immediate market viability rather than long-term development.**
Reported thematic finding from the paper's interviews and survey of freelance knowledge workers.

**Freelancers do not treat generative AI as their primary learning resource due to inconsistency, lack of contextual relevance, and verification overhead.**
Reported finding from the paper's mixed-methods study (survey + semi-structured interviews with freelance knowledge workers).

**Freelance workers must continually acquire new skills to remain competitive in online labor markets, yet they lack the organizational training, mentorship, and infrastructure available to traditional employees.**
Framing statement in the paper's introduction / literature review (not reported as an empirical result from this study).

**Obstacles exist for healthcare workers in rural areas that limit the benefits of technology.**
Review conclusion noting persistent obstacles for rural healthcare workers drawn from the literature; synthesis of qualitative/quantitative sources (no sample size in excerpt).

**Indian healthcare faces barriers to technological integration such as financial issues, poor infrastructure, and regulatory problems.**
Review-identified barriers drawn from the literature (qualitative and quantitative studies summarized by the authors); no aggregate sample size reported in the excerpt.

**Algorithmic collusion is a new form of market failure arising from the agentic economy.**
Theoretical claim and analysis of market failure mechanisms; no empirical antitrust cases or simulation evidence included in the provided text.
**The research also identifies policy loopholes and unequal AI preparedness on the continent.**
Findings from the paper's systematic review highlighting gaps in policy frameworks and uneven preparedness across Sub-Saharan African countries; no country-level counts or indices provided in the summary.

**Results indicate rising job displacement, industrial change, and inequality.**
Aggregate findings reported from the systematic review pointing to increases in job displacement, structural industrial change, and inequality across studies; no aggregated numerical magnitudes provided in the summary.

**They are a threat to semi- and unskilled jobs, particularly in manufacturing.**
Conclusion from the systematic review synthesizing studies on automation risk to semi- and unskilled positions, especially in manufacturing; no numerical risk estimate provided in the summary.

**Vulnerable populations—including low-skill workers, aging labour forces, and developing economies—are especially affected by AI-driven changes.**
Abstract highlights special attention to vulnerable populations in the review and asserts differential impacts; no specific empirical estimates or sample sizes provided in abstract.

**AI displaces routine cognitive and manual tasks.**
Explicit finding reported in abstract based on the paper's systematic review of empirical studies (no individual study sample sizes or quantitative estimates provided in abstract).

**This stratification produces trust-based inequality in who can leverage AI while sustaining credibility, voice, and liveness.**
Analytical claim based on patterns in 16 interviews indicating differential capacities to conceal/humanize AI lead to unequal ability to both use AI and maintain audience trust and perceived authenticity.

**Passing capacity is stratified by educational and professional capital, economic resources and team support, and platform position.**
Interview evidence (n=16) showing creators with higher education/professional capital, more economic resources, team support, or advantageous platform positions report greater ability to conceal and perform AI-assisted content.

**These invisible authenticity practices reallocate work from generation to downstream repair and performance, complicating claims that AI simply improves efficiency.**
Derived from creators' accounts in 16 interviews describing extra downstream editing, verification, and performance labor required after AI generation.

**Creators associate legible AI assistance with intertwined trust vulnerabilities, including epistemic unreliability, anticipated relational penalties, and platform authenticity regimes.**
Thematic findings from 16 interviews in which creators express concerns about AI-generated content being epistemically unreliable, damaging relationships with audiences, and conflicting with platform authenticity norms.

**On authenticity-oriented platforms, visible use of AI can be discrediting for creators.**
Reported by creators across 16 in-depth interviews on Xiaohongshu and Douyin; qualitative thematic analysis identifying platform-specific authenticity norms and reputational consequences.
**Each stakeholder in the supply chain may believe they are compliant; nevertheless, the integrated system may produce biased outcomes.**
Conceptual argument based on literature synthesis and analysis of responsibility fragmentation (no empirical sample reported).

**Information asymmetries mean deploying organizations bear legal responsibility without technical visibility into vendor-supplied algorithms, while vendors control implementations without meaningful disclosure requirements.**
Regulatory analysis and literature review identifying mismatches in legal liability and technical visibility (no empirical sample reported).

**A resume parser may function without bias independently but contribute to discrimination when integrated with specific ranking algorithms and filtering thresholds (illustrative example of interaction effects).**
Illustrative example presented in conceptual analysis (no empirical test or sample reported).

**Fragmented responsibilities create a critical problem: bias can emerge from interactions among components rather than from isolated elements, yet proprietary configurations prevent integrated evaluation of the full hiring system.**
Argument and examples drawn from literature review and regulatory analysis; no empirical sample size reported.

**Existing research examines bias through technical or regulatory lenses, but both perspectives overlook a fundamental challenge: modern AI hiring systems operate within complex supply chains where responsibility fragments across data vendors, model developers, platform providers, and deploying organizations.**
Synthesis from literature review and conceptual analysis of AI hiring supply chains (no empirical sample reported).

**The increasing adoption of AI systems in hiring has raised concerns about algorithmic bias and accountability, prompting regulatory responses including the EU AI Act, NYC Local Law 144, and Colorado's AI Act.**
Literature review and regulatory analysis; cites existence of named laws/regulations as examples of regulatory responses (no sample size required).

**These AI-driven systems create significant algorithmic bias risks, typically exacerbated by poor corporate governance and a lack of transparency in model development.**
Synthesis claim based on the systematic literature review (SLR) of 45 peer-reviewed publications (2022-2025) conducted as part of the study; presented as an analytical conclusion from that SLR.

**There is a persistent female disadvantage in work intensity.**
Analysis of EWCTS 2021 with IFR robot exposure measures using weighted logit models controlling for individual and job covariates and fixed effects; gender-specific patterns examined via interaction terms.

**Ungoverned coupling between humans and AI can produce fragility, lock-in, polarization, and domination basins.**
Theoretical/modeling analysis showing destabilizing dynamics and multiple basins of attraction when governance regularization is absent or weak; no empirical sample.

**Classical robot ethics framed around obedience (e.g. Asimov's laws) is too narrow for contemporary AI systems.**
Literature synthesis and conceptual argument drawing on developments in adaptive, generative, embodied, and embedded AI; no empirical sample reported.

**Algorithmic management and monitoring have reduced employees' autonomy and perceived work meaningfulness, contributing to 'AI anxiety' characterised by concerns about job loss, skill obsolescence, and diminished control.**
Qualitative studies, survey evidence, and theoretical literature reviewed that document impacts of algorithmic management on autonomy, meaningfulness, and worker anxiety (mixed-methods literature).
**Automation has intensified income inequality between high-skilled and low-skilled workers.**
Synthesis of empirical literature linking automation adoption to widening wage and income gaps across skill groups (literature review).

**Displacement effects have extended from manufacturing into cognitive roles such as clerical work and customer service.**
Review of empirical studies documenting automation/substitution effects in cognitive, clerical, and customer-service roles (literature synthesis).

**Automation has put downward pressure on wages.**
Cited empirical studies and wage analyses in the reviewed literature indicating wage suppression associated with automation adoption (literature review).

**AI and robotics have led to contractions in low-skilled occupations.**
Synthesis of empirical literature reporting occupational contractions in low-skilled jobs following automation adoption (literature review).

**Extensive empirical evidence shows that AI and robotics can substitute for rule-based, codifiable routine tasks.**
Review cites extensive empirical studies demonstrating substitution of rule-based, codifiable routine tasks by AI/robotics (literature synthesis).

**Artificial intelligence and robotic technologies are fundamentally reshaping labour markets and pose multifaceted challenges to workers engaged in routine and low-skilled tasks.**
Narrative review of domestic and international scholarly literature over the past decade (literature review / synthesis).

**Structural barriers, workforce biases, and digital skill gaps affect women's participation in AI-enabled sectors.**
Claim derived from the paper's synthesis of literature (peer-reviewed studies, policy analyses, preprints) identifying common barriers; the abstract does not report quantitative meta-analysis or specific sample sizes.

**There is a stark geopolitical divide between 'AI Core' nations and the Global South; the Global South faces acute risks of 'Digital Dependency' and eroded digital sovereignty.**
Cross-study synthesis in the systematic review (2018-2026) identifying geopolitical patterns and risks; abstract does not quantify the number of studies or present empirical effect sizes.

**The 'black box' nature of automated systems undermines the democratic social contract and principles of procedural justice, epitomised by the Australian 'Robo-debt' scandal.**
Case study material and literature synthesized in the systematic review referencing the Australian Robo-debt case as an exemplar; abstract does not provide primary data or sample sizes.

**Consolidation of corporate control of critical technologies (driven by AI industrial strategies that do not center democratic economic governance) threatens key democratic and societal objectives.**
Stated implication in the paper's opening argument; supported by the paper's conceptual framing and (as indicated) review of how past and emerging tech/AI industrial strategies interact with democratic objectives. No quantitative sample size provided in the excerpt.