Evidence (5586 claims)

Claim counts by topic:
- Adoption: 5586
- Productivity: 4857
- Governance: 4381
- Human-AI Collaboration: 3417
- Labor Markets: 2685
- Innovation: 2581
- Org Design: 2499
- Skills & Training: 2031
- Inequality: 1382
Evidence Matrix
Claim counts by outcome category and direction of finding.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 417 | 113 | 67 | 480 | 1091 |
| Governance & Regulation | 419 | 202 | 124 | 64 | 823 |
| Research Productivity | 261 | 100 | 34 | 303 | 703 |
| Organizational Efficiency | 406 | 96 | 71 | 40 | 616 |
| Technology Adoption Rate | 323 | 128 | 74 | 38 | 568 |
| Firm Productivity | 307 | 38 | 70 | 12 | 432 |
| Output Quality | 260 | 71 | 27 | 29 | 387 |
| AI Safety & Ethics | 118 | 179 | 45 | 24 | 368 |
| Market Structure | 107 | 128 | 85 | 14 | 339 |
| Decision Quality | 177 | 75 | 37 | 19 | 312 |
| Fiscal & Macroeconomic | 89 | 58 | 33 | 22 | 209 |
| Employment Level | 74 | 34 | 78 | 9 | 197 |
| Skill Acquisition | 98 | 36 | 40 | 9 | 183 |
| Innovation Output | 121 | 12 | 24 | 13 | 171 |
| Firm Revenue | 98 | 35 | 24 | — | 157 |
| Consumer Welfare | 73 | 31 | 37 | 7 | 148 |
| Task Allocation | 87 | 16 | 34 | 7 | 144 |
| Inequality Measures | 25 | 76 | 32 | 5 | 138 |
| Regulatory Compliance | 54 | 61 | 13 | 3 | 131 |
| Task Completion Time | 89 | 7 | 4 | 3 | 103 |
| Error Rate | 44 | 51 | 6 | — | 101 |
| Training Effectiveness | 58 | 12 | 12 | 16 | 99 |
| Worker Satisfaction | 47 | 33 | 11 | 7 | 98 |
| Wages & Compensation | 54 | 15 | 20 | 5 | 94 |
| Team Performance | 47 | 12 | 15 | 7 | 82 |
| Automation Exposure | 27 | 26 | 10 | 6 | 72 |
| Job Displacement | 6 | 39 | 13 | — | 58 |
| Hiring & Recruitment | 40 | 4 | 6 | 3 | 53 |
| Developer Productivity | 34 | 4 | 3 | 1 | 42 |
| Social Protection | 22 | 11 | 6 | 2 | 41 |
| Creative Output | 16 | 7 | 5 | 1 | 29 |
| Labor Share of Income | 12 | 6 | 9 | — | 27 |
| Skill Obsolescence | 3 | 20 | 2 | — | 25 |
| Worker Turnover | 10 | 12 | — | 3 | 25 |
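The four direction columns do not always sum to the printed totals (e.g., Other: 417 + 113 + 67 + 480 = 1077 against a total of 1091), which suggests some claims carry a direction not shown in the matrix. A minimal sketch, hard-coding a few rows from the table above, surfaces these residuals; the "unreported direction" reading is an assumption, not something the matrix states:

```python
# A few rows of the evidence matrix: (positive, negative, mixed, null, printed_total).
matrix = {
    "Other": (417, 113, 67, 480, 1091),
    "Governance & Regulation": (419, 202, 124, 64, 823),
    "Research Productivity": (261, 100, 34, 303, 703),
    "Output Quality": (260, 71, 27, 29, 387),
}

for outcome, (pos, neg, mixed, null, total) in matrix.items():
    # Residual = claims counted in the total but not in any of the four directions.
    residual = total - (pos + neg + mixed + null)
    share_pos = pos / total  # share of all claims with a positive finding
    print(f"{outcome}: residual={residual}, positive share={share_pos:.1%}")
```

Running this over all 33 rows would show which outcome categories have exact sums (e.g., Output Quality) and which do not.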
Adoption (filtered view)
These efficiency and cost gains are achieved while maintaining accuracy parity with the matched hierarchical baseline.
Paper states accuracy parity was maintained in the empirical evaluation comparing the proposed framework to the matched hierarchical baseline on the 2,847-query testbed.
Logistics efficiency does not play the anticipated mediating role in transmitting AI's effects to supply chain stability.
Mechanism/mediation tests in the DML analysis on the 45 Chinese listed SEs (2012–2023) indicate no significant mediation via logistics efficiency.
The Photo Big 5 is only weakly correlated with cognitive measures such as test scores.
Correlation/associational analysis between Photo Big 5 trait scores and cognitive measures (e.g., test scores) reported for the MBA graduate sample.
The short‑term effect of AI on labor‑intensive industries is weak.
Short‑run/dynamic subgroup analysis in the China 2003–2017 panel indicating minimal or weak immediate growth effects for labor‑intensive sectors.
The article clarifies theoretical relationships and gaps between Material Passports, Digital Product Passports, and Digital Building Logbooks.
Theoretical analysis and synthesis section of the SLR where the authors compare concepts and identify overlaps and gaps among MPs, DPPs, and DBLs.
Correlation and illustrative regression results confirm the absence of an immediate statistical relationship between AI adoption and productivity at the aggregate level.
Both correlation analysis and an illustrative regression model applied to Eurostat aggregate data for 2021–2024; regression presented as illustrative (not necessarily causal); model specification details and robustness checks not given in the summary.
Labour productivity did not show a stable association with AI diffusion in Slovakia over the analysed period.
Correlation analysis between AI adoption indicators and labour productivity measures for Slovakia using harmonised Eurostat data (2021–2024); detailed coefficient estimates and significance levels not provided in the summary.
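The Eurostat correlation exercises described in the two entries above can be reproduced in outline. The sketch below uses invented annual values standing in for the AI-adoption and labour-productivity series (2021–2024); the study's actual coefficients are not given in the summary, so nothing here reflects its results:

```python
import numpy as np

# Hypothetical stand-ins for the harmonised Eurostat series; values are illustrative only.
years = np.array([2021, 2022, 2023, 2024])
ai_adoption = np.array([3.1, 3.6, 4.4, 5.0])           # % of enterprises using AI (assumed)
productivity = np.array([101.2, 100.8, 101.9, 101.5])  # labour productivity index (assumed)

# Pearson correlation between the two annual series.
r = np.corrcoef(ai_adoption, productivity)[0, 1]
print(f"Pearson r = {r:.2f} over n = {len(years)} annual observations")
# With only four observations, even a sizeable r is statistically fragile,
# which is consistent with finding no stable association at this horizon.
```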
The study presents an advanced systematic ranking of I4.0 adoption barriers in the Thai automotive industry.
Paper outputs a ranked list of barriers produced by the integrated Fuzzy BWM-PROMETHEE II-DEMATEL framework; full ranked list and quantitative ranks not included in the supplied summary.
The study explores the influence of AI on HRM practice specifically within top IT companies.
Scope statement in the paper: empirical study involved HR professionals from various (described as top) IT firms. The summary does not supply the list of companies or sampling criteria.
Top management support does not have a direct influence on AI Adoption in the sampled firms.
PLS-SEM results from the 207-firm survey showing a non-significant direct path from top management support to AI Adoption (as reported in the paper).
Effort expectancy does not have a direct influence on AI Adoption in the sampled firms.
PLS-SEM results from the 207-firm survey showing a non-significant direct path from effort expectancy to AI Adoption (as reported in the paper).
This study developed a unified framework that integrates technology acceptance and trust-based perspectives.
Conceptual/methodological claim in the paper: authors report constructing an integrated framework based on literature and their empirical testing.
The paper contributes to both theory and policy by reconceptualizing procurement value and offering an actionable roadmap for embedding ESG principles in public healthcare procurement.
Scholarly contribution claimed via literature synthesis and framework/roadmap creation; contribution is normative and conceptual rather than empirically validated.
We conducted a systematic review and meta-analysis of the literature on AI/HR analytics and organizational decision making, covering 85 publications and grounding the work in Adaptive Structuration Theory (AST) and socio-technical systems (STS) theory.
Paper's methods: systematic review and meta-analysis; sample = 85 publications; theoretical framing explicitly stated as AST and STS.
Macroeconomic fiscal moderation remains empirically unvalidated.
Synthesis conclusion from the review noting an absence of empirical evidence that Agentic AI produces macroeconomic fiscal moderation; i.e., no validated studies showing broad fiscal relief effects were identified in the reviewed literature.
Self-generated (model-authored) Skills provide no average benefit.
Comparison of three evaluation conditions (no Skills, curated Skills, self-authored Skills) across SkillsBench. Averaged pass-rate deltas show that model-authored Skills do not increase average pass rate relative to baseline; analysis used 7,308 trajectories over 86 tasks and 7 agent–model configurations.
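The averaged pass-rate-delta comparison can be sketched as follows. The per-task pass rates are invented to mirror the reported pattern (curated Skills help, self-authored Skills do not), and the aggregation order (mean per-task pass rate, then delta against the no-Skills baseline) is an assumption about how the paper averages:

```python
from statistics import mean

# Hypothetical per-task pass rates (fraction of trajectories passing) per condition.
pass_rates = {
    "no_skills":     [0.40, 0.55, 0.30],
    "curated":       [0.52, 0.63, 0.41],
    "self_authored": [0.41, 0.54, 0.31],
}

baseline = mean(pass_rates["no_skills"])
for condition in ("curated", "self_authored"):
    # Averaged pass-rate delta relative to the no-Skills baseline.
    delta = mean(pass_rates[condition]) - baseline
    print(f"{condition}: delta = {delta:+.3f}")
```

A delta near zero for the self-authored condition corresponds to the "no average benefit" finding.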
Empirical evaluation is needed on how AI-induced productivity gains translate into aggregate demand and labor absorption.
Identified research priority in the paper, based on theoretical uncertainty about demand-side labor absorption and lack of conclusive empirical evidence.
AI will not mechanically cause permanent mass unemployment at the aggregate level.
Theoretical framing and synthesis of existing empirical findings across task-based and macro studies; no single new dataset provided (paper draws on literature and conceptual models).
The paper introduces a novel taxonomy that separates patenting into three domains: core AI, traditional robotics, and AI-enhanced robotics.
Methodological contribution of the paper: construction and application of a classification scheme that assigns patent filings (1980–2019) into three domains (core AI, traditional robotics, AI-enhanced robotics). Data source: patent filings 1980–2019 (aggregate counts by domain and country). Exact number of patents not provided in the summary.
The proposed uncertainty measure connects to classical value-of-information concepts, bridging security mechanism analysis and economic theories of information, signaling, and screening.
Analytical comparison and discussion in the paper linking the entropy-style residual uncertainty metric to value-of-information literature (theoretical linkage).
Use of AI raises needs for traceability, explainability, and continuous validation to maintain compliance and avoid error propagation in curricular decisions.
Paper's AI governance recommendations (prescriptive), referencing general AI risk principles rather than empirical study.
There is no accepted integrative digital model that maps measured or perceived value to algorithmic pricing.
Absence of such a model in the SLR sample of 30 articles and thematic coding that identified this gap explicitly.
There is no evidence of nonlinearities in the relationship between digital trade and urban house prices (the effect is linear across the sample).
Explicit tests for nonlinearity reported in the econometric analysis (details of test specification not provided in the summary).
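The summary does not give the nonlinearity test specification, but one common check is to add a squared regressor and inspect its t-statistic. A generic sketch with simulated data (the data-generating process is linear by construction, so this is purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
digital_trade = rng.uniform(0, 10, n)                      # hypothetical digital-trade index
house_prices = 2.0 * digital_trade + rng.normal(0, 1, n)   # linear DGP by construction

# Regress prices on a constant, the index, and its square.
X = np.column_stack([np.ones(n), digital_trade, digital_trade**2])
beta, _, _, _ = np.linalg.lstsq(X, house_prices, rcond=None)
resid = house_prices - X @ beta
sigma2 = resid @ resid / (n - X.shape[1])
cov = sigma2 * np.linalg.inv(X.T @ X)
t_sq = beta[2] / np.sqrt(cov[2, 2])  # t-statistic on the squared term
print(f"t-statistic on squared term: {t_sq:.2f}")
# |t| below roughly 2 gives no evidence against linearity.
```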
When green-technology innovation is low (below the threshold), the main measurable effect of DE is on improving carbon emission efficiency (CEE), but DE does not yet reduce per capita emissions (PCE).
Results from the threshold-regression models on the 278-city panel (2011–2022) show that in the low-green-innovation regime DE coefficients are significant for CEE but not for PCE; mediating-effect models corroborate the efficiency channel in low-innovation contexts.
Realising DT value requires upfront investment in sensors, integration, standards, and skills; economic viability depends on contract structures and how gains are allocated between investors, owners, contractors, and operators.
Synthesis of cost/benefit discussions and case descriptions in the reviewed literature; policy and procurement examples referenced.
HCI has explored usable consent, but there is no systematic framework for consent in the AI era.
Literature synthesis and gap identification from workshop participants and solicited position papers; no systematic review or meta-analysis with counted studies reported in the summary.
Privacy-leak framing (risk vs ambiguity or privacy-threatening vs neutral) did not change participants' subsequent bargaining behavior with pricing algorithms.
The experiment measured downstream bargaining behavior with algorithms after the adoption/label tasks (N = 610) and reports no detectable effect of the privacy/leak framing on those bargaining outcomes.
Under truthful bidding, the decentralised price-based market matches a centralised value-optimal benchmark (i.e., decentralised allocation equals centralised value-optimal allocation).
Paper presents both a theoretical argument (mechanism properties under quasilinear utilities and discrete slices) and empirical validation in simulation by comparing decentralised outcomes to a centralised value-optimal baseline across configurations in the ablation study.
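The equivalence claim can be illustrated with a toy single-resource market: under truthful unit-demand bids, a uniform clearing price selects exactly the bidders a centralized value-maximizing planner would. This is a simplified stand-in, not the paper's mechanism, and it assumes distinct bid values (ties would need a tie-breaking rule):

```python
# Truthful unit-demand bids (value per slice) and a fixed supply of slices.
bids = {"a": 9.0, "b": 4.0, "c": 7.0, "d": 2.0, "e": 6.0}
supply = 3

# Centralized value-optimal allocation: slices go to the highest-value bidders.
central = set(sorted(bids, key=bids.get, reverse=True)[:supply])

# Decentralized price-based market: the price rises until demand equals supply;
# each bidder truthfully demands a slice iff its value is at least the price.
price = sorted(bids.values(), reverse=True)[supply - 1]  # lowest winning bid clears
decentral = {i for i, v in bids.items() if v >= price}

assert central == decentral  # allocations coincide under truthful bidding
print(sorted(central), "clearing price:", price)
```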
Experiments used realistic channel and beamforming datasets reflecting varying elevation angles and dynamic LEO link conditions.
Dataset description in the paper states use of realistic channel and beamforming data including varying elevation angles and dynamic links; no dataset size or public dataset identifiers provided in the summary.
There is a need for causal studies (randomized pilots, phased rollouts) to quantify net welfare effects including patient trust, equity, legal risk, and long-run labor impacts.
Authors' recommendation based on gaps identified in the mixed-methods evidence and acknowledged limitations around causal identification and long-term measurement.
Results are robust across alternative AI index specifications, occupational classifications, and standard controls (country and year fixed effects, macroeconomic covariates).
Paper reports robustness checks across different index constructions and occupational taxonomies, with standard controls included in regressions.
Liability for harm from AI remains unresolved; current regulatory frameworks (notably in the EU) continue to emphasize human responsibility and require conformity and clinical validation.
Regulatory and legal analyses, with emphasis on European Union device regulation and liability principles, as reviewed in the paper.
On-Premise RAG matches commercial (cloud) RAG on standard quantitative retrieval and generation metrics.
Empirical comparative analysis using standard retrieval/generation benchmarks comparing three systems (zero-shot baseline, GPT RAG cloud, Open-source On-Prem RAG) under representative SME workloads; specific metric names and sample sizes not reported in the summary.
State-level advances in worker-protective AI measures exist but are uneven, and many proposed state bills aimed at strengthening workers’ rights related to AI have stalled.
Review of state legislative proposals and enacted laws as compiled in the commentary (state-level policy scan); no systematic quantitative legislative count or sample reported.
Domain adaptation techniques (transfer learning, fine-tuning on local data) are underutilized in low-resource African contexts despite their potential to improve generalization to local populations and care processes.
Thematic coding of methodological sections across the reviewed literature showed relatively few studies employing transfer learning or local fine-tuning approaches in African or other low-resource settings; evidence comes from counts/qualitative summaries within the literature review rather than a formal meta-analysis.
Research priorities include causal studies on productivity gains from AI, firm‑level adoption dynamics, sectoral labor reallocation, long‑run general equilibrium effects, and heterogeneous impacts across regions and demographic groups.
Set of empirical research recommendations drawn from gaps identified in the literature review and limitations section; not an empirical claim but a prioritized research agenda based on secondary evidence.
Growth‑accounting frameworks and measurement approaches must be updated to capture AI/robotics as intangible and embodied capital, including quality improvements and spillovers.
Methodological argument grounded in literature on measurement challenges and examples of intangible capital; no new measurement exercise or empirical re‑estimation is provided in the paper.
Backtesting the proposed models against historical technological transitions (e.g., ATMs, robotics) and recent AI adoption episodes can validate model performance.
Recommended validation strategy; paper does not report backtest results but prescribes holdout/pseudo‑counterfactual experiments and calibration with administrative outcomes.
Scenario modelling in the reviewed literature typically uses counterfactual simulations with different adoption speeds, policy responses, and initial conditions to bound possible employment, wage, and productivity trajectories.
Description and citations of scenario-modelling practices by think tanks and organisations (TBI, IPPR, IMF) and academic work referenced; evidence is methodological and report-based.
NLP/LLM pipelines are used to extract tasks and skills from free-text job ads and to map those tasks to AI capabilities.
Described methods and citations (Xu et al., 2025; Hampole et al., 2025); evidence is methodological application of transformer-based models to job-ad text in recent studies.
Methods increasingly apply advanced NLP and large language models (BERT, LSTM, GPT-4) to parse job descriptions, map skills/tasks, and predict automation risk.
Cited methodological examples in the paper (Xu et al., 2025; Hampole et al., 2025) and discussion of common pipelines using transformer-based models to extract tasks from free-text job ads and to map tasks to AI capabilities; evidence is methodological and based on recent studies rather than a single benchmarked dataset.
A centralized policy engine for access control, data handling rules, and change management is a necessary control point in the reference pattern.
Prescriptive recommendation in the paper supported by best-practice synthesis and case anecdotes; no direct empirical comparison of centralized vs federated policy engines provided.
Research gaps include the need for standardized evaluation metrics, robustness- and consistency-focused XAI methods, domain-informed explanation frameworks, and longitudinal/clinical impact studies.
Recommendations section of the review synthesizing recurring deficits across papers and proposing priorities.
Recommendation for research and modeling: economic models of AI markets should incorporate institutional regime types (centralized vs decentralized), enforcement uncertainty, and legitimacy effects as parameters affecting data access costs, R&D productivity, and market concentration.
Normative recommendation based on the comparative typology and inferred mechanisms from the document analysis; not empirically validated within the study.
Theoretical contribution: the paper extends modular coordination theory by treating openness–security trade‑offs as layered, adaptive institutional processes embedded in political regimes and 'legitimacy economies.'
Argumentative/theoretical development in the paper grounded in document analysis and literature on coordination and legitimacy.
Providing optional LLM access without training did not increase average exam scores versus no LLM access.
Intent-to-treat comparisons across randomized arms reported in the study: comparison of optional-access-without-training arm to no-access arm showed no average score improvement (sample n = 164).
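An intent-to-treat comparison of this kind reduces to comparing mean exam scores by randomized arm, regardless of whether assigned students actually used the LLM. A sketch with invented scores and a hand-computed Welch t-statistic (the study's actual scores and test are not in the summary):

```python
import math

# Hypothetical exam scores by randomized arm (ITT: grouped by assignment, not usage).
no_access = [72, 65, 80, 74, 69, 77, 71, 68]
optional_llm = [73, 66, 79, 75, 70, 76, 72, 67]

def mean(xs):
    return sum(xs) / len(xs)

def var(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)  # sample variance

# Welch t-statistic for the difference in means (unequal variances).
diff = mean(optional_llm) - mean(no_access)
se = math.sqrt(var(no_access) / len(no_access) + var(optional_llm) / len(optional_llm))
t = diff / se
print(f"mean difference = {diff:.2f}, Welch t = {t:.2f}")
```

A small |t| corresponds to the reported null effect of optional access without training.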
The benefits of AI-enabled e-commerce and automated warehousing are conditional on complementary policies (competition policy, data governance, workforce reskilling, automation oversight) to manage concentration, privacy, distributional effects, and safety.
Policy-analysis synthesis supported by sensitivity checks in scenario analyses and discussion of governance risks; recommendations informed by observed distributional and market-concentration patterns in the case material.
AI’s net impact on employment to date is modest — no clear evidence of mass unemployment.
Systematic literature review/meta-synthesis of 17 peer‑reviewed publications (published 2020–2025). Aggregate assessment across those studies found no consistent empirical support for large-scale, economy-wide unemployment attributable to AI to date.
The positive effect of AIRC on productivity is mediated through improvements in reproducibility.
Structural equation modeling (SEM) reports mediation through reproducibility metrics in the OECD panel analysis.
The positive effect of AIRC on productivity is mediated through improvements in review efficiency.
Structural equation modeling (SEM) indicates mediation paths from AIRC to productivity via measures of review efficiency in the panel data.
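Full SEM is beyond a short sketch, but the mediation logic in the two entries above (an indirect a×b path from AIRC through a mediator such as review efficiency to productivity) can be illustrated product-of-coefficients style with two regressions. All variables and data are hypothetical; this is not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 300
airc = rng.normal(size=n)                         # AI-in-research exposure (assumed proxy)
review_eff = 0.6 * airc + rng.normal(size=n)      # mediator: review efficiency
productivity = 0.5 * review_eff + 0.1 * airc + rng.normal(size=n)

def slope(x, y):
    """OLS slope of y on x (with intercept)."""
    X = np.column_stack([np.ones(len(x)), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

a = slope(airc, review_eff)                       # path a: AIRC -> mediator
# Path b: mediator -> productivity, controlling for AIRC (direct effect).
Xb = np.column_stack([np.ones(n), review_eff, airc])
b = np.linalg.lstsq(Xb, productivity, rcond=None)[0][1]
indirect = a * b                                  # product-of-coefficients indirect effect
print(f"a = {a:.2f}, b = {b:.2f}, indirect effect = {indirect:.2f}")
```

A non-trivial positive indirect effect is what the SEM mediation paths in the panel analysis assert.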