Evidence (3224 claims)
- Adoption: 7395 claims
- Productivity: 6507 claims
- Governance: 5877 claims
- Human-AI Collaboration: 5157 claims
- Innovation: 3492 claims
- Org Design: 3470 claims
- Labor Markets: 3224 claims
- Skills & Training: 2608 claims
- Inequality: 1835 claims
Evidence Matrix
Claim counts by outcome category and direction of finding.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 609 | 159 | 77 | 736 | 1615 |
| Governance & Regulation | 664 | 329 | 160 | 99 | 1273 |
| Organizational Efficiency | 624 | 143 | 105 | 70 | 949 |
| Technology Adoption Rate | 502 | 176 | 98 | 78 | 861 |
| Research Productivity | 348 | 109 | 48 | 322 | 836 |
| Output Quality | 391 | 120 | 44 | 40 | 595 |
| Firm Productivity | 385 | 46 | 85 | 17 | 539 |
| Decision Quality | 275 | 143 | 62 | 34 | 521 |
| AI Safety & Ethics | 183 | 241 | 59 | 30 | 517 |
| Market Structure | 152 | 154 | 109 | 20 | 440 |
| Task Allocation | 158 | 50 | 56 | 26 | 295 |
| Innovation Output | 178 | 23 | 38 | 17 | 257 |
| Skill Acquisition | 137 | 52 | 50 | 13 | 252 |
| Fiscal & Macroeconomic | 120 | 64 | 38 | 23 | 252 |
| Employment Level | 93 | 46 | 96 | 12 | 249 |
| Firm Revenue | 130 | 43 | 26 | 3 | 202 |
| Consumer Welfare | 99 | 51 | 40 | 11 | 201 |
| Inequality Measures | 36 | 105 | 40 | 6 | 187 |
| Task Completion Time | 134 | 18 | 6 | 5 | 163 |
| Worker Satisfaction | 79 | 54 | 16 | 11 | 160 |
| Error Rate | 64 | 78 | 8 | 1 | 151 |
| Regulatory Compliance | 69 | 64 | 14 | 3 | 150 |
| Training Effectiveness | 81 | 15 | 13 | 18 | 129 |
| Wages & Compensation | 70 | 25 | 22 | 6 | 123 |
| Team Performance | 74 | 16 | 21 | 9 | 121 |
| Automation Exposure | 41 | 48 | 19 | 9 | 120 |
| Job Displacement | 11 | 71 | 16 | 1 | 99 |
| Developer Productivity | 71 | 14 | 9 | 3 | 98 |
| Hiring & Recruitment | 49 | 7 | 8 | 3 | 67 |
| Social Protection | 26 | 14 | 8 | 2 | 50 |
| Creative Output | 26 | 14 | 6 | 2 | 49 |
| Skill Obsolescence | 5 | 37 | 5 | 1 | 48 |
| Labor Share of Income | 12 | 13 | 12 | — | 37 |
| Worker Turnover | 11 | 12 | — | 3 | 26 |
| Industry | — | — | — | 1 | 1 |
Labor Markets
At the structural and macroeconomic level, artificial intelligence is reshaping the balance of power in the labor market and contributing to a gradual shift toward employer-driven dynamics.
Author's macroeconomic and structural analysis as presented in the paper; no specific datasets, methods, or sample sizes are reported in the excerpt.
There is a persistent female disadvantage in work intensity.
Analysis of EWCTS 2021 with IFR robot exposure measures using weighted logit models controlling for individual and job covariates and fixed effects; gender-specific patterns examined via interaction terms.
Ethical concerns—such as transparency, explainability, psychological effects, and responsible AI governance—are critical factors influencing employability outcomes.
Review synthesis highlighting ethical issues from empirical and industry literature as influential on employability outcomes.
There are significant AI adoption challenges in education and industry that affect employability and role transformation.
Synthesized evidence from industry reports and empirical studies discussed in the review highlighting barriers to adoption in education and industry.
From the perspectives of 'personal subordination' and 'economic subordination', AIGC deeply and implicitly controls the labor process through mechanisms such as dynamic path planning, blurring the boundaries used to determine labor relations.
Analytical/legal argument in the paper linking conceptual standards of subordination to specific algorithmic mechanisms (e.g., dynamic path planning); supported by mechanistic discussion but no reported empirical measurement or sample.
AIGC constantly challenges traditional standards for determining labor relations.
Paper's analytic claim based on conceptual/legal argument that algorithmic features of AIGC complicate application of existing labor-relation tests; no quantitative validation or sample size provided.
The transformation toward algorithmic enterprises raises critical concerns regarding agency, accountability, data monopolization, and algorithmic bias.
Presented as a principal concern in the paper's conceptual discussion and interdisciplinary critique; based on analysis of governance and ethical literature rather than new empirical evidence in the abstract.
Algorithmic management and monitoring have reduced employees’ autonomy and perceived work meaningfulness, contributing to 'AI anxiety' characterised by concerns about job loss, skill obsolescence, and diminished control.
Qualitative studies, survey evidence, and theoretical literature reviewed that document impacts of algorithmic management on autonomy, meaningfulness, and worker anxiety (mixed-methods literature).
Automation has intensified income inequality between high-skilled and low-skilled workers.
Synthesis of empirical literature linking automation adoption to widening wage and income gaps across skill groups (literature review).
Displacement effects have extended from manufacturing into cognitive roles such as clerical work and customer service.
Review of empirical studies documenting automation/substitution effects in cognitive, clerical, and customer-service roles (literature synthesis).
Automation has put downward pressure on wages.
Cited empirical studies and wage analyses in the reviewed literature indicating wage suppression associated with automation adoption (literature review).
AI and robotics have led to contractions in low-skilled occupations.
Synthesis of empirical literature reporting occupational contractions in low-skilled jobs following automation adoption (literature review).
Extensive empirical evidence shows that AI and robotics can substitute for rule-based, codifiable routine tasks.
Review cites extensive empirical studies demonstrating substitution of rule-based, codifiable routine tasks by AI/robotics (literature synthesis).
Artificial intelligence and robotic technologies are fundamentally reshaping labour markets and pose multifaceted challenges to workers engaged in routine and low-skilled tasks.
Narrative review of domestic and international scholarly literature over the past decade (literature review / synthesis).
Structural barriers, workforce biases, and digital skill gaps affect women’s participation in AI-enabled sectors.
Claim derived from the paper's synthesis of literature (peer-reviewed studies, policy analyses, preprints) identifying common barriers; the abstract does not report quantitative meta-analysis or specific sample sizes.
Routine-intensive sectors exhibit higher susceptibility to automation.
Synthesis result reported in the paper based on the systematic review of sector-specific literature (no numeric aggregation or sample size provided in the abstract).
The policy and research challenge posed by platform-mediated automation is not merely job quantity (technological unemployment) but institutional continuity — how societies reproduce practical competence when platforms optimize for efficiency rather than formation.
Normative and conceptual claim developed through literature synthesis (institutional economics, platform governance, workforce development); presented as an analytical reframing rather than an empirically tested hypothesis.
Entry-level roles have historically functioned as apprenticeships in which workers acquire tacit knowledge and critical judgment; if platforms curtail these formative occupational layers, organizations may lack future workers capable of exercising contextual reasoning required to manage complex systems.
Institutional economics and workforce development literature cited in the paper; conceptual synthesis without original empirical measurement reported.
Platform-mediated automation risks hollowing out labor structures from both directions: eroding repetitive, junior roles from below and automating supervisory coordination functions from above.
Theoretical argument synthesizing institutional economics and platform literature; articulated as a conceptual risk rather than demonstrated with original empirical data.
Algorithmic systems are displacing routine tasks across both low-wage entry-level work and middle-management functions.
Stated in paper's argumentation; supported by a literature-based review drawing on platform governance literature and recent research on AI-enhanced automation (no original empirical sample or quantitative study reported).
An alternative specification that makes different choices about the timing of the pervasiveness of AI yields less robust results, though it also suggests that AI is labor saving.
Reported sensitivity analysis / alternative empirical specification in the paper; authors state the alternative yields less robust results but still indicates labor-saving effects.
Our baseline model finds evidence that AI is input saving.
Outcome reported from the baseline empirical specification indicating reductions in inputs associated with AI (authors' baseline model results).
Thick subjectivist theories of meaning in life and meaningful work—those theories that emphasize that meaning-conferring activities are historically formed—enable us to appreciate how some losses cannot be made up, even if there are in principle ample alternative sources of meaning to be found elsewhere.
Theoretical claim about the explanatory power of 'thick subjectivist' normative theories; argued via conceptual philosophical analysis in the paper (no empirical testing reported).
Even if there are rich non-work sources of meaning, this does not entail that there is not a significant and multi-faceted loss of meaning, one that cannot be compensated for or offset elsewhere.
Normative/philosophical argument presented in the paper (conceptual reasoning rather than empirical measurement; no sample size).
The argument that non-work goods can replace work-derived meaning fails to consider the embeddedness and thickness of meaning in human lives.
Philosophical/theoretical critique based on conceptual analysis (author's argument invoking the notions of embeddedness and thickness of meaning; no empirical study reported).
Platforms can exploit workers' uncertainty about the cost of labor to effectively suppress wages.
Interpretation / implication drawn from the theoretical model and the result that a platform can achieve coverage while paying only O(log(M)/M) fraction of total labor cost under assumptions about workers' cost estimates.
There exists a simple pricing strategy for the platform that covers all M tasks with wait time O(M) while paying only an O(log(M)/M) fraction of the total cost of labor.
Theoretical result from the paper's posted-price procurement model under stated assumptions on workers' estimated costs; formal analysis/proof showing existence of such a pricing strategy for general M (no empirical sample).
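The paper's O(log(M)/M) bound can be illustrated numerically. The helper below is a hypothetical sketch (constant factors suppressed; `wage_fraction_bound` is not from the paper) showing how the platform's share of total labor cost vanishes as the task count M grows.

```python
import math

def wage_fraction_bound(M: int) -> float:
    """Fraction of total labor cost paid by the platform, up to
    constant factors, under the paper's O(log(M)/M) result.
    Illustrative only: the actual bound carries unstated constants."""
    return math.log(M) / M

# The platform's cost share shrinks toward zero as M grows:
fractions = {M: wage_fraction_bound(M) for M in (10, 100, 10_000, 1_000_000)}
```

For M on the order of one million tasks, the fraction is on the order of 10^-5: under the model's assumptions about workers' cost estimates, the platform covers all tasks while paying a vanishing share of the full cost of labor.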
Because the technical threshold for this transition has already been crossed at modest engineering effort, the window for protective frameworks covering disclosure, consent, compensation, and deployment restriction is now, while deployment remains optional rather than infrastructural.
Authors' normative claim based on their implementation (distillation and deployment) and interpretation that modest engineering sufficed; used to argue policy urgency for disclosure/consent/compensation frameworks.
We term this the Relic condition: when publication systems make stable reasoning architectures legible, extractable and cheaply deployable, the public record of intellectual labor becomes raw material for its own functional replacement.
Conceptual framing introduced by the authors as an interpretation of the observed results and their implications; not an empirical measurement but a named condition/argument.
AI can exacerbate occupational polarization, digital exclusion, and discriminatory outcomes when models are trained on biased data or deployed without transparency and accountability.
Thematic synthesis across included studies identifying mechanisms (biased training data, lack of transparency/accountability) linked to negative distributional outcomes (occupational polarization, digital exclusion, discrimination).
Inherent algorithmic opacity and historical data biases tend to produce pronounced group biases based on gender, educational background, age, and regional origin, thereby further exacerbating existing structural inequalities in the employment market.
Claim made in abstract referencing known sources of algorithmic bias (opacity, historical data bias) and listing affected group attributes; presented as a problem motivating the study, without specific empirical statistics in the abstract.
The opacity, fluency, and low-friction interaction patterns of LLMs obscure the boundary between human and machine contribution, leading users to infer competence from outputs rather than from the processes that generate them.
Theoretical argument grounded in prior literature on automation bias and cognitive offloading; presented as explanatory mechanism in the paper rather than an empirically tested causal estimate.
The paper introduces the 'LLM fallacy,' a cognitive attribution error in which individuals misinterpret LLM-assisted outputs as evidence of their own independent competence, producing a systematic divergence between perceived and actual capability.
Conceptual/theoretical claim and formal definition offered in the paper; no empirical validation reported in the abstract.
Low-skill roles in packaging, sorting, and basic assembly face a high risk of automation.
Paper's findings/prediction derived from task-level classification (routine/repetitive tasks) applied to jobs in Nagpur's medium enterprises; no reported sample size or quantified risk metrics in the excerpt.
Regulatory and labor friction is scored per sector using actual compliance frameworks (Basel III, FDA AI guidance, HIPAA) and BLS union density data, and is applied as a haircut to base adoption rates via an S-curve ramp.
Paper description of friction scoring method referencing specific regulatory frameworks and BLS union density; applied in the model as a haircut and S-curve adoption ramp.
Restricting AI productivity gains to the labor-generated portion of each sector's gross value added reduces the naive addressable base by approximately 72 percent.
Bottom-up sectoral model described in the paper that applies labor share to gross value added across 21 NAICS industries; the paper explicitly states the labor-generated restriction reduces the naive addressable base by ~72%.
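The model mechanics described above can be sketched as follows. All function names, parameter values, and the example labor share are hypothetical: the paper states only that the labor share is applied to gross value added, and that the friction score acts as a haircut to base adoption rates phased in along an S-curve.

```python
import math

def addressable_base(gva: float, labor_share: float) -> float:
    # Restrict AI productivity gains to the labor-generated portion
    # of a sector's gross value added.
    return gva * labor_share

def s_curve_ramp(t: float, midpoint: float, steepness: float) -> float:
    # Logistic ramp in (0, 1) phasing adoption in over time.
    return 1.0 / (1.0 + math.exp(-steepness * (t - midpoint)))

def effective_adoption(base_rate: float, friction: float,
                       t: float, midpoint: float, steepness: float) -> float:
    # Apply the sector's regulatory/labor friction score as a haircut
    # to the base adoption rate, ramped in along the S-curve.
    return base_rate * (1.0 - friction) * s_curve_ramp(t, midpoint, steepness)

# Hypothetical sector: a labor share of 0.28 shrinks the naive addressable
# base by 72%, the order of the aggregate reduction the paper reports.
sector_base = addressable_base(gva=100.0, labor_share=0.28)
```

Summing `addressable_base` over the 21 NAICS industries, each with its own labor share and friction score, would reproduce the bottom-up structure the paper describes.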
These advancements have raised concerns regarding workforce redundancy, particularly for routine and low-skilled jobs.
Synthesis of concerns documented in the reviewed literature and observed sectoral trends (literature review; qualitative synthesis).
Coder employment has continued to grow in recent years, though much more slowly than it did pre-2022.
Time-series comparison of coder employment levels/growth rates from CPS before and after 2022.
The deceleration in coder employment is not attributable to coders' exposure to slowing industries, implying an occupation-specific shock around the introduction of ChatGPT.
Regression/controlled analysis using a novel industry-level control variable for industry shocks to separate industry-level from occupation-specific effects.
Aggregate employment of coders has decelerated sharply since the introduction of ChatGPT.
Empirical analysis linking O*NET to CPS employment data showing a sharp slowdown in coder employment growth coinciding with ChatGPT's introduction.
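The pre/post-2022 comparison behind the coder-employment claims reduces to comparing annualized growth rates across the two windows. The sketch below uses invented placeholder levels, not CPS figures, purely to illustrate the shape of the finding.

```python
def annualized_growth(level_start: float, level_end: float, years: float) -> float:
    # Compound annual growth rate between two employment levels.
    return (level_end / level_start) ** (1.0 / years) - 1.0

# Invented levels (thousands of coders) purely to illustrate the comparison:
pre_2022 = annualized_growth(1600.0, 1900.0, 4.0)   # hypothetical 2018 -> 2022
post_2022 = annualized_growth(1900.0, 1940.0, 2.0)  # hypothetical 2022 -> 2024

# Employment still grows (post_2022 > 0) but much more slowly
# (post_2022 < pre_2022): the pattern the paper reports around
# ChatGPT's introduction.
```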
Job insecurity emerges as a critical mediating factor influencing employee attitudes and behavioural responses to generative AI, including upskilling intentions and resistance to technological change.
Review-level synthesis identifying job insecurity reported in included studies as mediating relationships between AI adoption and employee attitudes/behaviours (e.g., upskilling, resistance).
Employees express concerns about role displacement (job loss or role changes) associated with generative AI adoption.
Reported across multiple studies included in the review; the review summarises these concerns as part of mixed employee perceptions.
These positive perceptions coexist with employee concerns about skill obsolescence related to generative AI.
Synthesis of studies included in the review documenting worker concerns about skills becoming obsolete due to AI-driven changes.
AI infrastructure owners may command more wealth and capability than most governments, threatening the future viability or authority of the nation-state.
Futuristic projection based on the paper's modeling and synthesis of wealth/capability concentration under AI; no empirical measures or comparative data versus governments provided in the excerpt.
Universal Basic Income (UBI), evaluated through an incentive-structure lens, will default to a pacification mechanism rather than a genuine solution in the absence of the revolutionary threat that historically forced redistribution.
Normative and theoretical analysis of incentive structures and historical mechanisms of redistribution; the excerpt presents this as an argument rather than reporting empirical trials or quantified outcomes.
Unlike previous feudal orders, this one may prove uniquely resistant to revolution because the mechanisms of enforcement (autonomous weapons, AI surveillance, algorithmic propaganda) do not require human cooperation and therefore cannot be undermined by human dissent.
Logical and theoretical claim based on characteristics of AI-enabled enforcement technologies; presented as an argument rather than an empirically tested finding in the excerpt.
Under this emerging order, the vast majority of humanity will lose their political leverage.
Theoretical and historical argument linking concentration of infrastructure control to political disempowerment; no empirical metrics or sample size provided in the excerpt.
Under this emerging order, the vast majority of humanity will lose their labor value.
Claim made via theoretical argument about automation and AI replacing labor value; no quantitative empirical evidence or sample detailed in the excerpt.
This structural transformation could stabilize into a neo-feudal equilibrium in which a vanishingly small class of infrastructure owners wields power comparable to pre-Enlightenment monarchs.
Futuristic projection and normative/historical analogy based on conceptual modeling of class structure under AGI; the excerpt gives no empirical data or formal model outputs.
The convergence of geopolitical fragmentation (democratic decline) and AI-driven economic concentration is producing a structural transformation unprecedented in human history.
Theoretical synthesis and historical comparison; the paper presents this as an argument based on conceptual modeling and historical analogy; no specific empirical test or sample noted in the excerpt.