Evidence (4793 claims)

Claim counts by topic filter:
- Adoption: 5539 claims
- Productivity: 4793 claims
- Governance: 4333 claims
- Human-AI Collaboration: 3326 claims
- Labor Markets: 2657 claims
- Innovation: 2510 claims
- Org Design: 2469 claims
- Skills & Training: 2017 claims
- Inequality: 1378 claims
Evidence Matrix
Claim counts by outcome category and direction of finding.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 402 | 112 | 67 | 480 | 1076 |
| Governance & Regulation | 402 | 192 | 122 | 62 | 790 |
| Research Productivity | 249 | 98 | 34 | 311 | 697 |
| Organizational Efficiency | 395 | 95 | 70 | 40 | 603 |
| Technology Adoption Rate | 321 | 126 | 73 | 39 | 564 |
| Firm Productivity | 306 | 39 | 70 | 12 | 432 |
| Output Quality | 256 | 66 | 25 | 28 | 375 |
| AI Safety & Ethics | 116 | 177 | 44 | 24 | 363 |
| Market Structure | 107 | 128 | 85 | 14 | 339 |
| Decision Quality | 177 | 76 | 38 | 20 | 315 |
| Fiscal & Macroeconomic | 89 | 58 | 33 | 22 | 209 |
| Employment Level | 77 | 34 | 80 | 9 | 202 |
| Skill Acquisition | 92 | 33 | 40 | 9 | 174 |
| Innovation Output | 120 | 12 | 23 | 12 | 168 |
| Firm Revenue | 98 | 34 | 22 | — | 154 |
| Consumer Welfare | 73 | 31 | 37 | 7 | 148 |
| Task Allocation | 84 | 16 | 33 | 7 | 140 |
| Inequality Measures | 25 | 77 | 32 | 5 | 139 |
| Regulatory Compliance | 54 | 63 | 13 | 3 | 133 |
| Error Rate | 44 | 51 | 6 | — | 101 |
| Task Completion Time | 88 | 5 | 4 | 3 | 100 |
| Training Effectiveness | 58 | 12 | 12 | 16 | 99 |
| Worker Satisfaction | 47 | 32 | 11 | 7 | 97 |
| Wages & Compensation | 53 | 15 | 20 | 5 | 93 |
| Team Performance | 47 | 12 | 15 | 7 | 82 |
| Automation Exposure | 24 | 22 | 9 | 6 | 62 |
| Job Displacement | 6 | 38 | 13 | — | 57 |
| Hiring & Recruitment | 41 | 4 | 6 | 3 | 54 |
| Developer Productivity | 34 | 4 | 3 | 1 | 42 |
| Social Protection | 22 | 10 | 6 | 2 | 40 |
| Creative Output | 16 | 7 | 5 | 1 | 29 |
| Labor Share of Income | 12 | 5 | 9 | — | 26 |
| Skill Obsolescence | 3 | 20 | 2 | — | 25 |
| Worker Turnover | 10 | 12 | — | 3 | 25 |
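For readers working with these counts programmatically, here is a minimal sketch of deriving direction shares from a few rows of the matrix above. The counts are copied from the table, the `positive_share` helper is an illustrative assumption, and published row totals may include directions not broken out here, so shares are computed over the four listed columns only ('—' treated as 0).

```python
# A few rows copied from the evidence matrix above ('—' treated as 0).
rows = {
    "Firm Productivity":  {"pos": 306, "neg": 39,  "mixed": 70, "null": 12},
    "AI Safety & Ethics": {"pos": 116, "neg": 177, "mixed": 44, "null": 24},
    "Job Displacement":   {"pos": 6,   "neg": 38,  "mixed": 13, "null": 0},
}

def positive_share(counts):
    # Share of claims with a positive direction, over the listed directions.
    total = sum(counts.values())
    return counts["pos"] / total if total else 0.0

for outcome, counts in rows.items():
    print(f"{outcome}: {positive_share(counts):.0%} positive")
```

The asymmetry is visible immediately: roughly seven in ten Firm Productivity claims are positive, while Job Displacement claims skew heavily negative.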
Productivity
Simulations project measurable reductions in defect rates under AI-HRM scenarios.
Regression-based simulations of the counterfactual model include defect reduction as an organizational outcome and project decreases in defect rates when HR processes are AI-supported.
Simulations show notable reductions in absenteeism under the AI-HRM scenario.
Predictive estimation and regression-based simulations projecting absenteeism rates under counterfactual AI-supported HR processes using the industrial firm dataset.
The number of granted AI-related patents is negatively associated with GDP growth in the model.
Panel econometric analysis using OLS, Fixed Effects, Difference GMM and System GMM estimators; AI innovation proxied by the number of granted AI-related patents; reported negative association across the applied estimators (sample of countries and time span not specified in the provided summary).
Discussions among faculty on major higher-education subreddits enact negotiations over surveillance regimes, accountability structures, and academic precarity in real time.
Interpretive finding from thematic analysis of Reddit threads: posts and replies about AI-related classroom issues (e.g., cheating, assessment, policy) show active contention over surveillance and accountability practices and concerns about job security and precarity. (Specific thread counts, timestamps, and coder reliability are not provided in the excerpt.)
Findings reveal that discussions of student cheating, AI policies, writing practices, and faculty labor are not merely technical debates but sites where surveillance regimes, accountability structures, and academic precarity are negotiated in real time.
Empirical claim based on thematic content analysis of Reddit discussions that flagged threads about student cheating, AI policy, writing practices, and faculty labor and interpreted them as spaces where concerns about surveillance, accountability, and precarity are articulated and contested. (Specific examples, counts, and illustrative quotes not included in the excerpt.)
Reductions or cuts to governmental translation services intensify employment gaps, increase dependence on informal translation, and exacerbate systemic injustices for LEP immigrants.
Mixed-methods evidence from survey responses (n=150) indicating outcomes after policy reductions, and thematic findings from employer (n=50) and provider (n=20) interviews documenting increased informal translation reliance and adverse labor outcomes.
Technological variations contribute to limiting sustainability efforts.
Highlighted in the paper's analysis of governance challenges (listed alongside corruption and administrative inefficiencies) and referenced in international examples; no specific empirical measurement or sample size is provided in the summary.
Deep-rooted governance issues — specifically corruption, administrative inefficiencies, policy gaps, and technological variations — restrict sustainability efforts, particularly in developing and transition economies.
Analytical emphasis in the paper drawing on global governance frameworks and case illustrations from international instances; the summary does not report empirical sample sizes or quantitative measures.
Raising fertility actually worsens the fiscal picture in the medium term, since it takes decades for newborns to grow up and join the workforce.
Model scenario simulations that raise fertility rates and project fiscal outcomes over time, showing medium-term deterioration due to added dependents before working-age entry.
These demographic trends squeeze public finances from both sides—fewer people paying taxes and more people drawing on pensions and healthcare.
Conceptual linkage implemented in the integrated system dynamics model that couples demographic cohorts to tax revenue and age-linked public spending (pensions, healthcare).
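The mechanism behind the two claims above (new dependents arrive immediately, new workers only after a maturation lag) can be sketched with a toy cohort model. All numbers and the 20-year workforce-entry age are invented for illustration; this is not the paper's calibrated system dynamics model.

```python
# Toy model: a sustained fertility boost adds extra_births_per_year
# dependents immediately, but those cohorts only become workers after
# 20 years, so the dependency ratio worsens before it improves.
def dependency_ratio(years_since_boost, extra_births_per_year=1.0,
                     base_workers=100.0, base_dependents=60.0):
    # Cohorts born within the last 20 years are still dependents.
    extra_children = extra_births_per_year * min(years_since_boost, 20)
    # Cohorts born more than 20 years ago have entered the workforce.
    extra_workers = extra_births_per_year * max(years_since_boost - 20, 0)
    return (base_dependents + extra_children) / (base_workers + extra_workers)

print(dependency_ratio(0))   # baseline ratio
print(dependency_ratio(10))  # medium term: ratio rises (fiscal picture worsens)
print(dependency_ratio(60))  # long run: ratio falls below baseline
```

The sign pattern, not the magnitudes, is the point: the medium-term rise mirrors the projected fiscal deterioration before the larger birth cohorts reach working age.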
Current research in this area has a primary focus on methodology and computer science rather than applied occupational health questions.
Authors' synthesis from the review of existing studies (the paper reports that reviewed studies emphasize methodological and computer science aspects; exact counts or proportions not provided in the excerpt).
The application of machine learning in occupational mental health research remains in its preliminary stages.
Claim stated by the paper based on the authors' literature review of the field (review methodology referenced in the paper; number of studies or specific inclusion criteria not provided in the provided excerpt).
Many core university functions can now be achieved through AI-powered alternatives, potentially rendering conventional models obsolete for many learners.
Analytical assessment by the authors, without reported empirical testing or quantified methodology; based on review of AI capabilities and extrapolation.
Universities' core value proposition is challenged and potentially displaced by AI technologies as they alter how knowledge is accessed, created, and validated.
Authors' analytical argument drawing on technological, economic, and social drivers; presented as synthesis rather than empirical proof (no sample size or empirical method reported).
Robotics reduce labor dependence in greenhouse operations.
Study conclusions drawn from modeled impacts on employment composition and labor requirements when comparing robotics investments to traditional greenhouse investment scenarios (I–O modeling, IMPLAN 2022).
Traditional IT service hiring will be displaced by expansion of product-focused roles and Global Capability Centres (GCCs).
Synthesis of industry reports and workforce data indicating shifts in hiring patterns; the abstract does not report sample sizes or exact metrics.
Psychological barriers — specifically algorithm aversion, AI-induced job insecurity, technostress, and diminished occupational identity — impede effective AI integration across U.S. industries.
Literature synthesis of empirical and theoretical work in AI–HRM and organizational psychology cited in the paper (summary does not report primary-study sample sizes).
Workforce psychological readiness, rather than technological capability alone, constitutes the critical bottleneck in organizational AI adoption.
Synthesis of emerging empirical AI–HRM research and theoretical integration (paper reports 'findings' from this synthesis; no primary-sample-size details provided in the summary).
The integration of AI into U.S. workplaces represents a profound organizational psychology challenge that extends well beyond mere technology adoption.
Conceptual/theoretical argument based on literature synthesis; draws on established theories (Technology Acceptance Model, Human–AI Symbiosis Theory, Job Demands–Resources Model, Organizational Trust Theory) and cited empirical AI–HRM studies (no specific sample sizes or primary data reported in the summary).
What remains needed is rigorous advice to policymakers concerned about rapid increases in labor churn, scientific development, labor–capital shifts, or existential risk.
Normative conclusion drawn by the author from gaps identified in the seven-book review (qualitative assessment of unmet policy-relevant analysis); sample = 7 books.
The reviewed works offer little guidance regarding the transformative scenarios considered plausible by many AI researchers.
Author's evaluative judgment based on the content and emphases of the seven books (qualitative gap analysis); sample = 7 books.
There are significant implementation challenges for Material Passports, particularly for existing buildings.
Aggregate findings from included studies highlighting technical, data-collection, legacy-information, and workflow barriers when applying MPs to existing building stock.
Circular economy (CE) adoption in the Architecture, Engineering, and Construction (AEC) industry is hampered by data scarcity.
Synthesis of included literature and authors' framing in the introduction and analysis sections indicating repeated identification of data scarcity as a barrier to CE adoption in AEC.
Selection of a human-LLM archetype brings important risks and considerations for the designers of human-AI decision-making systems.
Analytic discussion and synthesis of evaluation results and literature review; tradeoffs surfaced in the paper (e.g., decision control, social hierarchies, cognitive forcing strategies, information requirements).
Gendered perceptions of AI's social and ethical consequences, rather than access or capability, are the primary drivers of unequal GenAI adoption.
Comparative model results from the 2023–2024 nationally representative UK survey showing perceptions (societal-risk index) have greater explanatory/predictive power than measures of access (e.g., device/internet access) or capability (digital literacy, education).
Intersectional analyses show the largest gender disparities in GenAI use arise among younger, digitally fluent individuals with high societal risk concerns, where gender gaps in personal use exceed 45 percentage points.
Subgroup (intersectional) analysis of the nationally representative 2023–2024 UK survey data stratified by age, digital fluency, and societal-risk concern levels; reported gender gap >45 percentage points in specified subgroup.
The societal-risk concerns index ranks among the strongest predictors of GenAI adoption for women across all age groups, surpassing digital literacy and education for young women.
Multivariable models and predictor ranking using the 2023–2024 UK survey data showing relative predictive strength of the concerns index versus measures of digital literacy and education, with subgroup (age × gender) comparisons.
The societal-risk concerns index explains between 9 and 18 percent of the variation in GenAI adoption.
Regression/statistical models using the composite concerns index as a predictor of GenAI adoption in the nationally representative 2023–2024 UK survey; reported explained variation (9–18%).
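As a reminder of what "explains 9 to 18 percent of the variation" means operationally, here is a minimal R² computation on invented toy data. The `y`/`y_hat` values are fabricated and chosen so the result lands in the reported band; they are not the survey data, and for a binary adoption outcome the paper may report a pseudo-R² rather than the plain R² used here for simplicity.

```python
# Coefficient of determination: share of outcome variance captured by
# the model's fitted values.
def r_squared(y, y_hat):
    mean_y = sum(y) / len(y)
    ss_res = sum((yi - fi) ** 2 for yi, fi in zip(y, y_hat))
    ss_tot = sum((yi - mean_y) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

# Toy data: adoption (0/1) and fitted probabilities from a hypothetical
# model using a concerns index; numbers invented for illustration only.
y     = [1, 0, 0, 1, 0, 1, 0, 0]
y_hat = [0.45, 0.35, 0.30, 0.45, 0.35, 0.40, 0.40, 0.30]
print(f"R^2 = {r_squared(y, y_hat):.2f}")  # falls in the reported 9-18% band
```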
Women adopt GenAI less often than men because they perceive its societal risks differently.
Statistical analysis linking a constructed composite societal-risk concerns index (mental health, privacy, climate impact, labor market disruption) to GenAI adoption, using the UK 2023–2024 survey; models compare explanatory power of perceptions versus access/capability variables.
Women adopt GenAI substantially less often than men.
Analysis of the 2023–2024 nationally representative UK survey data comparing personal use/adoption rates by gender.
In abundant-resource conditions, emergent tribe formation slightly increases system overload (i.e., makes the near-zero overload slightly worse).
Empirical observations reported in the paper indicating a modest increase in overload when tribes form under abundant resources.
When resources are scarce, AI model diversity and reinforcement learning increase dangerous system overload.
Empirical results from the paper's AI-agent population experiments (simulations/real-agent trials) combined with mathematical analysis indicating increased overload under scarcity when model diversity and individual RL are present.
There are ethical concerns surrounding AI and automation including algorithmic decision-making, workforce exclusion, and inequality in access to reskilling opportunities.
Raised as an ethical analysis within the paper's conceptual framework; no empirical study, surveys, or quantified measures of these ethical issues are reported in this paper.
AI is eliminating repetitive (routine) jobs.
Stated as part of the paper's argument about AI's dual impact; supported by conceptual analysis rather than new empirical evidence in this manuscript (no sample size or empirical method reported).
Artificial intelligence and automation are reshaping jobs, transforming them from a steady source of income to a dynamic process highly influenced by technology, flexibility, and uncertainty.
Central analytical claim made in the paper based on conceptual reasoning; the paper does not report empirical measures, datasets, or sample sizes to support the transformation quantitatively.
AI and automation pose significant challenges to employment stability, skill relevance, and human dignity.
Claim presented within the paper's conceptual and analytical discussion of AI's dual impacts; no empirical study, sample size, or quantitative measures provided in this paper.
Information processing constraints hinder managers' ability to effectively integrate tax planning and core business strategies (i.e., processing constraints hinder effective tax planning).
The paper reports novel empirical evidence consistent with this theoretical claim based on observed associations and tests linking AI, information quality, capital management, and tax effectiveness in the 2010–2018 sample.
There are challenges to adopting AI in HRM within IT firms.
Identified through the literature review and the empirical study involving HR professionals; the summary notes challenges but does not enumerate or quantify them.
Performance expectancy is negatively associated with firms' decisions to adopt AI (attributed to initial implementation challenges reducing perceived ease of use).
PLS-SEM analysis of survey data from 207 firms; the paper reports a negative association between performance expectancy and AI Adoption and offers a rationale about 'reality check' and initial implementation difficulties.
LLM explanations foster inappropriate reliance on, and trust in, the data-extraction AI: participants were less likely to detect errors when provided with LLM explanations.
User study measuring error-detection rates and trust/reliance indicators across conditions (full text, passage retrieval, LLM explanations). The LLM-explanation condition showed lower error-detection and greater reliance/trust compared to other conditions.
Governance quality becomes negative and statistically significant at the 0.90 quantile (τ = 0.90), which the paper interprets as evidence of institutional rigidity in advanced financial systems.
MMQR results showing a negative, significant coefficient for governance quality at τ = 0.90; interpretation provided by the authors linking this sign to institutional rigidity.
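For intuition on quantile-specific coefficients like the τ = 0.90 result above: quantile estimators (including MMQR) minimise the asymmetric "check" (pinball) loss, so each τ targets a different part of the conditional distribution. A minimal sketch on toy data, not the paper's MMQR estimator:

```python
def pinball_loss(residual, tau):
    # Check function rho_tau(u): asymmetric absolute loss whose minimiser
    # over a constant is the empirical tau-quantile.
    return tau * residual if residual >= 0 else (tau - 1) * residual

data = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

def avg_loss(c, tau):
    return sum(pinball_loss(x - c, tau) for x in data) / len(data)

# The constant minimising average pinball loss at tau = 0.9 sits near the
# 90th percentile of the data, illustrating how tau targets the upper tail.
best = min(data, key=lambda c: avg_loss(c, 0.9))
print(best)
```

A coefficient that is positive at the median but negative at τ = 0.90 therefore describes behaviour only among the highest-outcome observations, which is why the authors can read the sign flip as a feature of advanced financial systems specifically.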
AI use also poses risks, including systemic discrimination, privacy invasion, and commodification of talent.
Qualitative synthesis and documented instances in the reviewed literature (n=85) reporting discriminatory outcomes, privacy concerns, and labor commodification effects associated with algorithmic HR tools.
Qualitative synthesis reveals a 'gray zone' in labor relations and a 'black box' in algorithmic data processing, both exposing businesses to procedural injustice risks.
Thematic/qualitative synthesis of findings from the reviewed literature (n=85) highlighting issues of labor relations and algorithmic opacity leading to procedural fairness concerns.
Digital transformation raises challenges related to privacy, inequality, and regulatory scrutiny.
Identified as a key challenge in the paper; the abstract provides no details on how privacy concerns, inequality measures, or regulatory incidents were documented or quantified.
We lack frameworks for articulating how cultural outputs might be actively beneficial.
Authors' identification of a gap in evaluation theory and practice (conceptual analysis); no systematic literature review details provided in the excerpt.
Current AI evaluation practices show a critical asymmetry: evaluations of AI intelligence rigorously measure both benefits and harms, while evaluations of cultural outputs focus almost exclusively on harms.
Authors' review/critique of existing evaluation frameworks and metrics (qualitative analysis in the paper); the excerpt does not list the reviewed studies or their number.
The field of AI is unprepared to measure or respond to how the proliferation of entertaining AI-generated content will impact society.
Authors' assessment of current evaluation practices and frameworks (qualitative analysis presented in the paper); no empirical metrics or sample sizes provided in the excerpt.
Interpreting the literature through a socio-technical lens reveals a persistent misalignment between GenAI's fast-evolving technical subsystem and the slower-adapting social subsystem.
Authors' conceptual interpretation of the reviewed studies (28 papers) using socio-technical theory to integrate technical and social themes from the literature.
Evidence strength is inversely correlated with intervention complexity.
Cross-domain synthesis reported in the paper that formalises an inverse evidence–complexity relationship based on the reviewed literature. The abstract does not quantify the correlation or list the domains/intervention types used to derive it.
Per-capita elderly care costs run 3–5 times those of working-age cohorts.
Cost comparisons reported in sources included in the 81-paper review. The abstract reports a 3–5x multiple but does not specify which cost categories, countries, or methodological adjustments were used.