Evidence (4333 claims)
Adoption: 5539 claims
Productivity: 4793 claims
Governance: 4333 claims
Human-AI Collaboration: 3326 claims
Labor Markets: 2657 claims
Innovation: 2510 claims
Org Design: 2469 claims
Skills & Training: 2017 claims
Inequality: 1378 claims
Evidence Matrix
Claim counts by outcome category and direction of finding.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 402 | 112 | 67 | 480 | 1076 |
| Governance & Regulation | 402 | 192 | 122 | 62 | 790 |
| Research Productivity | 249 | 98 | 34 | 311 | 697 |
| Organizational Efficiency | 395 | 95 | 70 | 40 | 603 |
| Technology Adoption Rate | 321 | 126 | 73 | 39 | 564 |
| Firm Productivity | 306 | 39 | 70 | 12 | 432 |
| Output Quality | 256 | 66 | 25 | 28 | 375 |
| AI Safety & Ethics | 116 | 177 | 44 | 24 | 363 |
| Market Structure | 107 | 128 | 85 | 14 | 339 |
| Decision Quality | 177 | 76 | 38 | 20 | 315 |
| Fiscal & Macroeconomic | 89 | 58 | 33 | 22 | 209 |
| Employment Level | 77 | 34 | 80 | 9 | 202 |
| Skill Acquisition | 92 | 33 | 40 | 9 | 174 |
| Innovation Output | 120 | 12 | 23 | 12 | 168 |
| Firm Revenue | 98 | 34 | 22 | — | 154 |
| Consumer Welfare | 73 | 31 | 37 | 7 | 148 |
| Task Allocation | 84 | 16 | 33 | 7 | 140 |
| Inequality Measures | 25 | 77 | 32 | 5 | 139 |
| Regulatory Compliance | 54 | 63 | 13 | 3 | 133 |
| Error Rate | 44 | 51 | 6 | — | 101 |
| Task Completion Time | 88 | 5 | 4 | 3 | 100 |
| Training Effectiveness | 58 | 12 | 12 | 16 | 99 |
| Worker Satisfaction | 47 | 32 | 11 | 7 | 97 |
| Wages & Compensation | 53 | 15 | 20 | 5 | 93 |
| Team Performance | 47 | 12 | 15 | 7 | 82 |
| Automation Exposure | 24 | 22 | 9 | 6 | 62 |
| Job Displacement | 6 | 38 | 13 | — | 57 |
| Hiring & Recruitment | 41 | 4 | 6 | 3 | 54 |
| Developer Productivity | 34 | 4 | 3 | 1 | 42 |
| Social Protection | 22 | 10 | 6 | 2 | 40 |
| Creative Output | 16 | 7 | 5 | 1 | 29 |
| Labor Share of Income | 12 | 5 | 9 | — | 26 |
| Skill Obsolescence | 3 | 20 | 2 | — | 25 |
| Worker Turnover | 10 | 12 | — | 3 | 25 |
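A matrix of this kind can be rebuilt from raw claim records; a minimal pure-Python sketch, assuming a hypothetical list of (outcome, direction) pairs rather than the actual claims database:

```python
from collections import Counter

# Hypothetical claim records: (outcome_category, direction_of_finding).
claims = [
    ("Firm Productivity", "Positive"),
    ("Firm Productivity", "Mixed"),
    ("Error Rate", "Negative"),
    ("Firm Productivity", "Positive"),
]

# Cross-tabulate counts by (outcome, direction), as in the matrix above.
matrix = Counter(claims)

directions = ["Positive", "Negative", "Mixed", "Null"]
for outcome in sorted({o for o, _ in claims}):
    row = [matrix.get((outcome, d), 0) for d in directions]
    print(outcome, row, "Total:", sum(row))
```

Row totals here are the sum of the four direction columns; in the published matrix some row totals exceed the visible columns, suggesting additional direction codes not shown.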
Governance
Technology companies, service providers, and civil society share responsibility for protecting children online, but current measures by these actors are insufficient.
Argument in the book summary based on evaluation of stakeholder roles; likely supported by case studies or policy analysis in the full text, but no specific methods, cases, or sample sizes are provided in the excerpt.
Current regulations fall short in effectively protecting children in an evolving digital landscape; there are persistent gaps and a growing need for internationally coordinated approaches.
Conclusion presented in the book's comparative legal analysis; implies review of EU (and US) legal frameworks and identification of gaps, but the excerpt does not list the analytical method, jurisdictions reviewed in detail, or specific legal provisions examined.
Europe has emerged as a major hub for hosting child sexual abuse material (CSAM), including newer forms such as deepfake abuse content and AI-generated 'DeepNudes.'
Asserted in the summary; would be supported by law-enforcement takedown data, hosting statistics, or forensic analyses of seized material, but the excerpt provides no specific datasets, agencies, or sample sizes.
Violations of privacy, exposure to disturbing content, unwanted sexual approaches, and cyberbullying are becoming more common.
Trend claim made in the book summary; would be supported by longitudinal or comparative prevalence data on online harms, but no specific studies, methods, or sample sizes are cited in the provided text.
Nearly one in three young people reports feeling unsafe online.
Specific prevalence statement included in the summary; implies self-report survey data on perceived safety among youth, but the excerpt does not identify the survey instrument, population, timeframe, or sample size.
Psychological barriers — specifically algorithm aversion, AI-induced job insecurity, technostress, and diminished occupational identity — impede effective AI integration across U.S. industries.
Literature synthesis of empirical and theoretical work in AI–HRM and organizational psychology cited in the paper (summary does not report primary-study sample sizes).
Workforce psychological readiness, rather than technological capability alone, constitutes the critical bottleneck in organizational AI adoption.
Synthesis of emerging empirical AI–HRM research and theoretical integration (paper reports 'findings' from this synthesis; no primary-sample-size details provided in the summary).
The integration of AI into U.S. workplaces represents a profound organizational psychology challenge that extends well beyond mere technology adoption.
Conceptual/theoretical argument based on literature synthesis; draws on established theories (Technology Acceptance Model, Human–AI Symbiosis Theory, Job Demands–Resources Model, Organizational Trust Theory) and cited empirical AI–HRM studies (no specific sample sizes or primary data reported in the summary).
What remains needed is rigorous advice to policymakers concerned about rapid increases in labor churn, scientific development, labor–capital shifts, or existential risk.
Normative conclusion drawn by the author from gaps identified in the seven-book review (qualitative assessment of unmet policy-relevant analysis); sample = 7 books.
The reviewed works offer little guidance regarding the transformative scenarios considered plausible by many AI researchers.
Author's evaluative judgment based on the content and emphases of the seven books (qualitative gap analysis); sample = 7 books.
There are significant implementation challenges for Material Passports, particularly for existing buildings.
Aggregate findings from included studies highlighting technical, data-collection, legacy-information, and workflow barriers when applying MPs to existing building stock.
Circular economy (CE) adoption in the Architecture, Engineering, and Construction (AEC) industry is hampered by data scarcity.
Synthesis of included literature and authors' framing in the introduction and analysis sections indicating repeated identification of data scarcity as a barrier to CE adoption in AEC.
Selection of a human-LLM archetype brings important risks and considerations for the designers of human-AI decision-making systems.
Analytic discussion and synthesis of evaluation results and literature review; tradeoffs surfaced in the paper (e.g., decision control, social hierarchies, cognitive forcing strategies, information requirements).
The stability and patience that define long-term investors can breed strategic inertia.
Introductory assertion in the paper (conceptual observation). The paper does not present empirical data or sample analysis to substantiate this causal claim in the provided excerpt.
Conventional thinking often frames AI uncritically as just a tool for efficiency, which is a narrow perspective that overlooks AI's transformative role.
Critical/theoretical argument presented in the paper (conceptual observation). No empirical data, sample, or statistical analysis reported to support this claim.
Across survey and experimental evidence, perceptions that AI will replace labor—regardless of actual labor-market outcomes—may decrease democratic legitimacy and public engagement in shaping AI's future.
Synthesis of correlational findings from the large European survey (N = 37,079) and causal evidence from two preregistered experiments (UK N = 1,202; US N = 1,200).
Controlling for technology-related, political, and sociodemographic factors, perceiving AI as labor-replacing (vs. labor-creating) is associated with lower political engagement with technology.
Multivariable regression analyses on the large European survey (N = 37,079) with controls for technology-related, political, and sociodemographic factors.
Controlling for technology-related, political, and sociodemographic factors, perceiving AI as labor-replacing (vs. labor-creating) is associated with lower satisfaction with democracy.
Multivariable regression analyses on the same large survey (N = 37,079) including controls for technology-related attitudes, political variables, and sociodemographic covariates.
There are ethical concerns surrounding AI and automation including algorithmic decision-making, workforce exclusion, and inequality in access to reskilling opportunities.
Raised as an ethical analysis within the paper's conceptual framework; no empirical study, surveys, or quantified measures of these ethical issues are reported in this paper.
AI is eliminating repetitive (routine) jobs.
Stated as part of the paper's argument about AI's dual impact; supported by conceptual analysis rather than new empirical evidence in this manuscript (no sample size or empirical method reported).
Artificial intelligence and automation are reshaping jobs, transforming them from a steady source of income to a dynamic process highly influenced by technology, flexibility, and uncertainty.
Central analytical claim made in the paper based on conceptual reasoning; the paper does not report empirical measures, datasets, or sample sizes to support the transformation quantitatively.
AI and automation pose significant challenges to employment stability, skill relevance, and human dignity.
Claim presented within the paper's conceptual and analytical discussion of AI's dual impacts; no empirical study, sample size, or quantitative measures provided in this paper.
Jurisdictions that implemented employee classification requirements experienced an 18% reduction in platform labor supply.
Comparative policy analysis across jurisdictions in the 24-country dataset, comparing platform labor supply before and after employee-classification reforms using administrative and platform transaction records.
Median gig-worker hourly pay ($14.20) is approximately 22% below comparable traditional employment wages.
Comparison of adjusted median hourly gig earnings (platform records) to comparable hourly wages in traditional employment from labor force and administrative wage data for the same populations across the 24 countries.
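The two reported figures can be cross-checked; a quick sketch, assuming the 22% gap is expressed relative to the traditional-employment wage:

```python
gig_median = 14.20  # reported median gig hourly pay (USD)
gap = 0.22          # reported shortfall vs. traditional employment

# Implied comparable traditional wage if gig pay sits 22% below it.
implied_traditional = gig_median / (1 - gap)
print(round(implied_traditional, 2))  # 18.21
```

If the gap were instead expressed relative to gig pay, the implied traditional wage would be $14.20 × 1.22 ≈ $17.32; the source does not say which convention it uses.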
Governance quality becomes negative and statistically significant at the 0.90 quantile (τ = 0.90), which the paper interprets as evidence of institutional rigidity in advanced financial systems.
MMQR results showing a negative, significant coefficient for governance quality at τ = 0.90; interpretation provided by the authors linking this sign to institutional rigidity.
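Quantile-specific estimates such as the τ = 0.90 coefficient rest on the "check" (pinball) loss. A minimal pure-Python illustration of that loss, and of recovering an empirical 0.90-quantile by minimizing it (a sketch of the underlying idea, not the authors' MMQR estimator):

```python
def pinball(residual: float, tau: float) -> float:
    # Check loss: weights under-predictions by tau, over-predictions by 1 - tau.
    return tau * residual if residual >= 0 else (tau - 1) * residual

data = list(range(11))  # 0, 1, ..., 10
tau = 0.90

# The constant minimizing total pinball loss is the empirical tau-quantile;
# searching over the data points suffices, since a minimizer lies among them.
best = min(data, key=lambda c: sum(pinball(y - c, tau) for y in data))
print(best)  # 9: the empirical 0.90-quantile of this data
```

In quantile regression the constant is replaced by a linear predictor, so the coefficient on governance quality can legitimately change sign across τ, which is what the τ = 0.90 result reflects.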
AI use also poses risks, including systemic discrimination, privacy invasion, and commodification of talent.
Qualitative synthesis and documented instances in the reviewed literature (n=85) reporting discriminatory outcomes, privacy concerns, and labor commodification effects associated with algorithmic HR tools.
Qualitative synthesis reveals a 'gray zone' in labor relations and a 'black box' in algorithmic data processing, both exposing businesses to procedural injustice risks.
Thematic/qualitative synthesis of findings from the reviewed literature (n=85) highlighting issues of labor relations and algorithmic opacity leading to procedural fairness concerns.
Digital transformation raises challenges related to privacy, inequality, and regulatory scrutiny.
Identified as a key challenge in the paper; the abstract provides no details on how privacy concerns, inequality measures, or regulatory incidents were documented or quantified.
We lack frameworks for articulating how cultural outputs might be actively beneficial.
Authors' identification of a gap in evaluation theory and practice (conceptual analysis); no systematic literature review details provided in the excerpt.
Current AI evaluation practices show a critical asymmetry: while AI assessments rigorously measure both the benefits and harms of intelligence, cultural outputs are assessed almost exclusively for harms.
Authors' review/critique of existing evaluation frameworks and metrics (qualitative analysis in the paper); the excerpt does not list the reviewed studies or their number.
The field of AI is unprepared to measure or respond to how the proliferation of entertaining AI-generated content will impact society.
Authors' assessment of current evaluation practices and frameworks (qualitative analysis presented in the paper); no empirical metrics or sample sizes provided in the excerpt.
Interpreting the literature through a socio-technical lens reveals a persistent misalignment between GenAI's fast-evolving technical subsystem and the slower-adapting social subsystem.
Authors' conceptual interpretation of the reviewed studies (28 papers) using socio-technical theory to integrate technical and social themes from the literature.
Evidence strength is inversely correlated with intervention complexity.
Cross-domain synthesis reported in the paper that formalises an inverse evidence–complexity relationship based on the reviewed literature. The abstract does not quantify the correlation or list the domains/intervention types used to derive it.
Per-capita elderly care costs run 3–5 times those of working-age cohorts.
Cost comparisons reported in sources included in the 81-paper review. The abstract reports a 3–5x multiple but does not specify which cost categories, countries, or methodological adjustments were used.
Conventional policy instruments have failed to resolve pressures that include severe long-term care workforce shortfalls across leading ageing economies.
Synthesis of findings from the structured narrative review of 81 sources (2020–2025) indicating persistent workforce shortfalls. The abstract does not provide quantitative workforce shortfall magnitudes or country-specific data.
Demographic ageing is projected to reduce annual GDP growth by 0.3–1.2 percentage points by 2035.
Projection estimates referenced in the review literature (2020–2025). The abstract reports the 0.3–1.2 p.p. range but does not specify which models or studies generated these projections.
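As a rough back-of-envelope (our assumption, not the reviewed projections' method): if the reported annual drag persisted over the roughly ten years to 2035, the compounded effect on the GDP level would be:

```python
# Back-of-envelope compounding (assumption: a constant annual growth drag
# applied over a ten-year horizon; the reviewed models may differ).
def level_loss(drag_pp: float, years: int = 10) -> float:
    """Percent reduction in the GDP level after compounding an annual
    growth drag of drag_pp percentage points for the given years."""
    return (1 - (1 - drag_pp / 100) ** years) * 100

for drag in (0.3, 1.2):
    print(f"{drag} pp/yr -> GDP level ~{level_loss(drag):.1f}% lower by 2035")
```

Under these assumptions the 0.3–1.2 p.p. annual range compounds to a GDP level roughly 3–11% below baseline.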
Ageing-related expenditure already absorbs up to 18% of GDP in the most affected economies.
Spending estimates drawn from the reviewed literature (2020–2025). The paper states 'up to 18% of GDP' for the most affected economies but does not list which economies or the original data sources in the abstract.
Advanced economies face a compounding demographic crisis: the share of the population aged 65 and over will reach 30–40% in several nations by 2050.
Demographic projection claims cited in the paper's background literature (sources from the structured narrative review). No specific datasets or country-by-country breakdown provided in the abstract.
Current literature has primarily focused on automation-based views of decision support and lacks insight into systematic human–AI coordination aided by analytics.
Literature review and conceptual critique within the paper. No systematic mapping study or bibliometric counts reported.
Most organizations have difficulty converting algorithmic results into sustainable managerial decisions due to low trust, lack of explainability, and poor integration between AI systems and human judgment.
Synthesis of existing literature presented in the conceptual paper (literature review). No empirical study or sample provided to quantify 'most organizations.'
AI adoption has augmented complexity, uncertainty in decision-making, and accountability stresses for managers.
Claim supported by conceptual argument and literature integration (qualitative synthesis). No empirical sample size or quantitative testing reported.
Traditional methods for assessing and developing employees' skills often fail to provide real-time feedback.
Statement supported by literature review cited by the authors; the abstract does not provide empirical comparisons, metrics, or sample sizes.
Existing research on AI-driven decision-making remains fragmented and often framed through substitution-oriented narratives that position AI as a replacement for human judgment.
Assessment based on the author's interdisciplinary literature synthesis (conceptual meta-analysis); descriptive evaluation of research framing rather than new empirical testing.
Skills mismatch and SME adoption constraints constitute a binding bottleneck for inclusive digital–green upgrading.
Synthesis of studies on skills, firm capabilities, and SME adoption of digital and green technologies (review-level evidence; no single dataset or sample size provided).
Absent complementary institutions and infrastructure, digitalization may increase electricity demand, widen inequality, and incentivize strategic disclosure (greenwashing).
Literature review drawing on empirical studies of energy consumption from digital systems, labor-market studies, and analyses of ESG disclosure practices (review-level synthesis; no single sample size reported).
The review identifies highly heterogeneous modeling approaches with limited convergence toward shared benchmark tasks.
Comparative assessment across the 42 studies indicating a wide variety of modeling choices and an absence of commonly adopted benchmark tasks for direct comparison.
The literature reveals constraints, including challenges in processing long financial documents, limited availability of labeled datasets, and strong geographic and linguistic concentration.
Synthesis of methodological limitations and practical constraints reported across the reviewed studies (issues repeatedly mentioned in the corpus of 42 studies).
Embedding-based representations and end-to-end deep learning architectures appear only sporadically.
Review observations that only a small subset of the 42 studies used embedding representations or end-to-end deep learning models, i.e., these approaches are uncommon in the sample.
Less attention has been given to how sentiment-based textual features obtained from corporate reports are integrated into machine learning pipelines to predict firms' financial outcomes.
Synthesis from the systematic review of 42 studies indicating relatively few studies use corporate report–derived sentiment or explicitly address integration of such textual features into ML pipelines for firm-level financial predictions.
AI causes job loss due to the automation of repetitive tasks.
Narrative literature review and synthesis of recent economic studies presented in the paper; no original empirical sample or primary data collection reported.