Evidence (5126 claims)

- Adoption: 5126 claims
- Productivity: 4409 claims
- Governance: 4049 claims
- Human-AI Collaboration: 2954 claims
- Labor Markets: 2432 claims
- Org Design: 2273 claims
- Innovation: 2215 claims
- Skills & Training: 1902 claims
- Inequality: 1286 claims
Evidence Matrix
Claim counts by outcome category and direction of finding. Some row totals exceed the sum of the listed directions, suggesting claims with an unclassified direction are counted in the total but not shown as a column.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 369 | 105 | 58 | 432 | 972 |
| Governance & Regulation | 365 | 171 | 113 | 54 | 713 |
| Research Productivity | 229 | 95 | 33 | 294 | 655 |
| Organizational Efficiency | 354 | 82 | 58 | 34 | 531 |
| Technology Adoption Rate | 277 | 115 | 63 | 27 | 486 |
| Firm Productivity | 273 | 33 | 68 | 10 | 389 |
| AI Safety & Ethics | 112 | 177 | 43 | 24 | 358 |
| Output Quality | 228 | 61 | 23 | 25 | 337 |
| Market Structure | 105 | 118 | 81 | 14 | 323 |
| Decision Quality | 154 | 68 | 33 | 17 | 275 |
| Employment Level | 68 | 32 | 74 | 8 | 184 |
| Fiscal & Macroeconomic | 74 | 52 | 32 | 21 | 183 |
| Skill Acquisition | 85 | 31 | 38 | 9 | 163 |
| Firm Revenue | 96 | 30 | 22 | — | 148 |
| Innovation Output | 100 | 11 | 20 | 11 | 143 |
| Consumer Welfare | 66 | 29 | 35 | 7 | 137 |
| Regulatory Compliance | 51 | 61 | 13 | 3 | 128 |
| Inequality Measures | 24 | 66 | 31 | 4 | 125 |
| Task Allocation | 64 | 6 | 28 | 6 | 104 |
| Error Rate | 42 | 47 | 6 | — | 95 |
| Training Effectiveness | 55 | 12 | 10 | 16 | 93 |
| Worker Satisfaction | 42 | 32 | 11 | 6 | 91 |
| Task Completion Time | 71 | 5 | 3 | 1 | 80 |
| Wages & Compensation | 38 | 13 | 19 | 4 | 74 |
| Team Performance | 41 | 8 | 15 | 7 | 72 |
| Hiring & Recruitment | 39 | 4 | 6 | 3 | 52 |
| Automation Exposure | 17 | 15 | 9 | 5 | 46 |
| Job Displacement | 5 | 28 | 12 | — | 45 |
| Social Protection | 18 | 8 | 6 | 1 | 33 |
| Developer Productivity | 25 | 1 | 2 | 1 | 29 |
| Worker Turnover | 10 | 12 | — | 3 | 25 |
| Creative Output | 15 | 5 | 3 | 1 | 24 |
| Skill Obsolescence | 3 | 18 | 2 | — | 23 |
| Labor Share of Income | 7 | 4 | 9 | — | 20 |
Adoption
Oryx provides Arabic-aware image/video understanding and culturally grounded image generation.
Paper identifies Oryx as the vision component with Arabic-aware understanding and culturally grounded generation; no benchmark metrics are provided in the summary.
Exchanging generative modules (rather than raw data) and enabling modular unlearning improves auditability and aligns better with privacy/regulatory compliance than raw-data sharing.
Argument in the paper that module exchange and deterministic module deletion are more compatible with data sovereignty and regulatory requirements; no formal legal validation or compliance testing reported in the summary.
FederatedFactory enables new economic opportunities (module marketplaces, synthetic-data services) and affects incentives by shifting value toward modular generative assets and orchestration rather than raw centralized datasets.
Conceptual and economic discussion in the paper about potential implications; not based on empirical market data—presented as analysis and hypotheses about economic impact.
The single-round exchange decreases communication rounds and associated coordination/network costs compared to typical iterative federated learning.
Protocol design: single exchange of generative modules vs. typical multi-round weight-aggregation loops in standard FL; paper argues reduced networking/coordination cost. (No quantitative network-cost measurements provided in the summary.)
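The networking-cost argument above can be made concrete with a back-of-the-envelope message count. This is a hypothetical sketch, not the paper's protocol: the function names and the two-messages-per-client-per-round assumption (one upload, one download) are illustrative.

```python
# Illustrative comparison of communication volume: standard iterative
# federated learning vs. a single-round exchange of generative modules.
# Assumes each interaction costs one upload and one download per client.

def iterative_fl_messages(num_clients: int, num_rounds: int) -> int:
    """Standard FL: every round, each client uploads local weights and
    downloads the aggregated model -> 2 messages per client per round."""
    return 2 * num_clients * num_rounds

def single_exchange_messages(num_clients: int) -> int:
    """Single-round protocol: each client uploads its generative module
    once and downloads the pooled module set once."""
    return 2 * num_clients

if __name__ == "__main__":
    clients, rounds = 10, 100  # typical FL trainings run for many rounds
    print(iterative_fl_messages(clients, rounds))  # 2000
    print(single_exchange_messages(clients))       # 20
```

Under these assumptions, coordination cost scales with the number of training rounds in standard FL but is constant (per client) in the single-exchange design, which is the shape of the cost reduction the paper argues for.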
Investment in data quality and feature engineering yields tangible predictive gains for workforce performance models.
Paper emphasizes use of engineered features capturing engagement dynamics and learning trends and reports better model performance relative to baseline; however, no isolated ablation study quantifying the sole contribution of data-quality investments is reported in the summary.
Tools that improve detection or quantification may reduce downstream costs from missed diagnoses or unnecessary follow-ups, improving cost-effectiveness in some scenarios.
Economic modeling and limited observational analyses that extrapolate diagnostic improvements to downstream resource use; direct empirical cost-effectiveness studies are scarce.
The metacognitive reliability metric can reduce adoption risk for purchasers by providing transparent error-risk assessments and enabling performance-based autonomy thresholds.
Conceptual claim supported by the existence of an empirical confidence metric from the recursive meta-model and discussion of procurement/decision-making implications; not empirically tested with purchasers or procurement outcomes.
HACL/CS supports human trust and situational awareness.
Human factors measured with trust and situational awareness questionnaires in the simulation; summary reports supportive effects on trust and situational awareness but lacks sample-size/statistical detail.
Intelligent turn-level assignment can reduce costly human attention to only high-value moments, improving overall system productivity.
Conceptual implication from the assignment-layer design and empirical trade-offs reported; presented as an advantage in the paper rather than a directly measured economic productivity study.
HADT demonstrates a concrete way to substitute expensive human diagnostic labor with AI assistance while preserving high accuracy, implying reductions in marginal cost per consultation.
Inference drawn in the paper's implications section based on reported reductions in required human effort and maintained diagnostic accuracy (economic claim extrapolating from experimental results; not directly measured as cost in experiments).
Organizational norms and UX influence adoption rates and diffusion of AI: social calibration processes at the team level matter for adoption beyond individual cost–benefit calculations.
Reported by interviewees (N=40) as factors shaping whether and how teams incorporated AI into routines; integrated into theoretical implications for diffusion modeling.
Well-calibrated trust tends to encourage AI being used as a complement to human labor (augmentation), increasing effective productivity; miscalibration (over- or under-trust) can lead to productivity losses.
Inferential claim drawn from interviewees' accounts of when teams appropriately relied on AI (augmentation) versus when inappropriate reliance or avoidance occurred; supported by thematic interpretation rather than quantitative measurement.
Policymakers should support standards for auditability, human‑in‑the‑loop thresholds and training subsidies to reduce coordination failures and make the social benefits of AI adoption more widely shared.
Normative policy recommendation derived from the paper’s analysis of risks, governance needs and distributional concerns; not empirically validated within the paper.
Organisations will invest more in training for AI‑related sensemaking, trust calibration and governance competencies; returns to such training should be evaluated relative to investments in model quality.
Prescriptive inference from the framework and human‑capital theory; supported by referenced literature but not empirically tested in this paper.
Explicit comparative‑advantage allocation will shift the composition of tasks across humans and AI, altering demand for routine versus non‑routine skills and potentially increasing demand for high‑level judgement, oversight and sensemaking skills.
Projected labour‑market implication based on theoretical reasoning and prior literature on task‑based skill demand; not empirically estimated in the paper.
Operationalising the four symbiarchic practices through updated HR systems lets firms capture AI‑enabled productivity gains without eroding trust, ethics or employee well‑being.
Normative claim based on theoretical synthesis and managerial prescription; no empirical testing or field data presented in the paper.
Public data sharing, reproducibility standards, and shared benchmarks could raise the floor of AI utility across the industry.
Policy implication grounded in arguments about data quality, coverage, and generalizability from the narrative review; speculative recommendation rather than evidence-backed empirical claim.
There is potential for consolidation as firms acquire data, talent, or validated AI-driven assets.
Industry-structure implication drawn from economics of complementary assets and observed M&A activity patterns; presented as a likely trend rather than demonstrated empirically in the paper.
AI startups that demonstrate validated, reproducible wet-lab outcomes and access to high-quality data are more likely to command premium valuations.
Argument from observed market behavior and economics of complementary assets presented in the narrative; no systematic valuation analysis included.
Investors should recalibrate expectations: greater value accrues to firms that integrate AI with experimental pipelines and proprietary data assets rather than firms that only possess AI capability.
Economics-focused implications drawn from thematic analysis of heterogeneity in firm outcomes and integration requirements; market-practice inference rather than empirical valuation study.
AI tools complement sensory expertise and design thinking, shifting skill demand toward interdisciplinary competencies (e.g., computational rheology, psychophysics, cultural analytics).
Reasoned inference from technology literature and skill-complementarity theory; literature synthesis but no labor-market empirical analysis provided.
The paper provides a differentiated-path reference for emerging economies coping with technological nationalism.
Claim about the paper's contribution; based on authors' proposed policy framework and recommendations derived from literature review and theoretical analysis; not empirically validated for emerging economies in the excerpt.
The narrowing of the AI model performance gap between China and the United States to single digits highlights a new trend in technology competition.
Empirical/observational claim stated in the paper; no information in the excerpt about the benchmark metric used for model performance, measurement methodology, time frame, or data sources; 'single digits' not numerically specified.
By integrating psychological trust factors with cognitive capability optimisation, this model offers actionable insights for knowledge management practitioners implementing AI‑augmented decision systems while advancing theoretical understanding of human–AI collaboration effectiveness.
Integrative theoretical claim based on combining constructs from psychological trust research and cognitive/capability literature via systematic synthesis; no empirical evaluation reported in the abstract.
The framework provides practical guidance for executives designing human–AI teams, developing trust calibration training, and establishing performance metrics.
Prescriptive recommendations derived from the proposed model and literature synthesis; the abstract does not report empirical testing of the recommended interventions or their effects.
Supportive regulatory frameworks and digital infrastructure development are important for leveraging AI technologies to improve global trade efficiency.
Study recommendation derived from empirical findings and discussion; this is a policy implication rather than a directly tested empirical claim (no policy evaluation data provided in the summary).
The study provides empirical support for digital transformation theories within financial intermediation.
Authors interpret quantitative results as empirical evidence consistent with digital transformation theories; specific theoretical tests, model fit statistics, and sample information are not included in the summary.
AI-enhanced compliance systems increased regulatory transparency.
Study reports improvements in regulatory transparency as part of operational efficiency gains attributed to AI-driven compliance systems in the quantitative analysis; precise transparency metrics and sample details not provided.
The system demonstrates 100% alignment with GAAP/IFRS regulatory requirements.
Reported regulatory compliance assessment or stakeholder validation claiming full alignment with GAAP/IFRS. (Summary lacks details on the compliance assessment method, criteria, or independent verification; sample/coverage not specified.)
AI has increased the accuracy of patient selection to 80–90%.
Stated performance range for AI-enabled patient selection in the review. The excerpt does not specify the datasets, evaluation metrics (e.g., accuracy vs. AUC), clinical contexts, or sample sizes used to obtain these numbers.
AI-driven ESG analytics strengthened the financial relevance of sustainability integration and supported better-informed investment decision-making.
Study conclusion synthesizing empirical findings (portfolio outperformance and regression results). This is a normative/concluding statement rather than a directly measured outcome; the summary does not quantify decision-making improvements or measure investor behavior.
AI improved the informational efficiency of ESG assessment by capturing more accurate, forward-looking sustainability risks and opportunities.
Interpretation based on the study's empirical portfolio and regression results (better returns, risk metrics, and stronger associations). The claim is inferential; the summary does not report a direct, separate test of 'informational efficiency' or measures of forecast accuracy.
The study contributes to the theoretical advancement of smart supply chain ecosystem frameworks and provides practical insights for organizations seeking sustainable competitive advantage.
Author-stated contribution based on the study's empirical findings and interpretation; this is a scholarly contribution claim rather than a directly measured empirical outcome.
Ecosystem-level integration, governance mechanisms, and workforce readiness are important for maximizing AI-driven transformation in supply chains.
Findings and practical recommendations drawn from the quantitative study and its interpretation; basis appears to be observed associations in the survey data plus authors' discussion—specific empirical tests for governance/workforce readiness effects are not described in the provided text.
The study's implications include policy recommendations to foster responsible AI adoption and data utilization to mitigate economic risks.
Authors extend findings to policy recommendations in the discussion/conclusion of the paper (no specific policy proposals or evaluative evidence provided in the summary).
The research produced a practical framework to guide businesses in effectively leveraging AI and Big Data to navigate market volatility.
The paper's culmination is described as a practical framework derived from its mixed-methods findings (the summary does not provide the framework's components or empirical validation).
The research provides a replicable framework for identifying structural vulnerabilities and designing position-based interventions in construction supply chains.
Authors claim a replicable network-theoretic framework combining interview-based network construction, thematic coding, and centrality analysis to identify vulnerabilities and inform interventions; actual external replication not demonstrated in the paper (per abstract).
Cultural, structural, and decision-making elements co-evolve through recursive feedback loops in human–AI collaboration, advancing process-theoretical understandings of such collaboration.
Analytic interpretation of interview data indicating recursive feedback between cultural norms, structures, and decision routines in AI-integrated startups; presented as an advance to process theory (qualitative evidence; no quantitative test reported).
The study introduces 'hybrid decision architectures' as a dual-level construct that explains how AI triggers systematic organizational change in startups.
Conceptual/theoretical contribution based on synthesis of qualitative interview findings and process-theoretical reasoning (theoretical claim supported by interview data; empirical generalizability not established in excerpt).
The study provides actionable insights for managers and policymakers in resource-limited economies regarding factors that influence whether AI adoption translates into performance gains.
Implication derived from empirical results (n=280, PLS-SEM) showing positive main effects of AI adoption and significant moderating roles for financial and technical strengths.
Firms compensate for institutional weaknesses through adaptive and informal mechanisms, allowing AI adoption to yield performance gains despite weak institutions.
Interpretive inference drawn from the non-significant institutional moderation effect in the PLS-SEM and theoretical reasoning (Resource-Based View, Contingency Theory, Institutional Theory); not directly measured as a distinct empirical construct in the reported analysis.
Adopting a DARE-inspired approach is not merely a policy option but a societal imperative for aligning technological advancement with the public good.
Normative conclusion asserted in abstract; no empirical validation or stakeholder analysis described in the abstract.
The Philippines has a narrow but real window of opportunity to steer AI adoption toward inclusive upgrading rather than disruptive adjustment.
Synthesis of observed cautious adoption patterns, occupational exposure/complementarity results, and scenario timelines (2025–2035) presented in the paper.
AI would have operated as a cognitive and organizational stabilizer in past industrial contexts, reducing inefficiencies and reinforcing the firm's capacity to adapt, coordinate, and perform.
Interpretation of overall simulation results showing reductions in inefficiencies and improvements across multiple performance measures in the counterfactual AI-HRM scenarios.
AI could optimize coordination between human and technological resources, improving operational coordination.
Model includes workforce allocation and coordination-related variables and uses regression-based simulations to project coordination improvements under AI-driven HR processes.
AI could reduce information asymmetries in performance evaluation.
The paper posits mechanisms and encodes performance-evaluation indicators in the counterfactual model; simulations indicate reduced evaluation-related asymmetries under AI-HRM. (Evidence is model-based; direct empirical measurement of information asymmetry reduction not detailed.)
AI could enhance precision in staffing decisions and improve skill–task matching.
Model specification includes staffing and workforce-allocation variables; simulations portray improved staffing precision and skill–task alignment when HR processes are AI-supported. (This is primarily inferred from modeled mechanisms rather than direct experimental manipulation.)
Because social protection intrinsically aims to increase equity, there may be an implicit mandate to prioritize women and girls.
Normative/argumentative claim in the introduction linking the equity aims of social protection to a policy implication; no empirical method or data cited in the excerpt.
The paper concludes there is a need for inclusive, transparent, and ethically grounded AI governance capable of balancing innovation, accountability, and human security.
Normative recommendation emerging from the paper's analysis and review of governance paradigms and multilateral initiatives; not empirically tested within the study.
The study contributes to research emphasizing the importance of prompt design in AI governance, multi-agent coordination, and autonomous system reliability.
Stated contribution based on the experimental results and discussion sections; framed as adding to existing literature rather than a discrete empirical finding. (Contribution scope and bibliometric support not provided in the excerpt.)