Evidence (5539 claims)
Claims by category:
- Adoption: 5539
- Productivity: 4793
- Governance: 4333
- Human-AI Collaboration: 3326
- Labor Markets: 2657
- Innovation: 2510
- Org Design: 2469
- Skills & Training: 2017
- Inequality: 1378
Evidence Matrix
Claim counts by outcome category and direction of finding.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 402 | 112 | 67 | 480 | 1076 |
| Governance & Regulation | 402 | 192 | 122 | 62 | 790 |
| Research Productivity | 249 | 98 | 34 | 311 | 697 |
| Organizational Efficiency | 395 | 95 | 70 | 40 | 603 |
| Technology Adoption Rate | 321 | 126 | 73 | 39 | 564 |
| Firm Productivity | 306 | 39 | 70 | 12 | 432 |
| Output Quality | 256 | 66 | 25 | 28 | 375 |
| AI Safety & Ethics | 116 | 177 | 44 | 24 | 363 |
| Market Structure | 107 | 128 | 85 | 14 | 339 |
| Decision Quality | 177 | 76 | 38 | 20 | 315 |
| Fiscal & Macroeconomic | 89 | 58 | 33 | 22 | 209 |
| Employment Level | 77 | 34 | 80 | 9 | 202 |
| Skill Acquisition | 92 | 33 | 40 | 9 | 174 |
| Innovation Output | 120 | 12 | 23 | 12 | 168 |
| Firm Revenue | 98 | 34 | 22 | — | 154 |
| Consumer Welfare | 73 | 31 | 37 | 7 | 148 |
| Task Allocation | 84 | 16 | 33 | 7 | 140 |
| Inequality Measures | 25 | 77 | 32 | 5 | 139 |
| Regulatory Compliance | 54 | 63 | 13 | 3 | 133 |
| Error Rate | 44 | 51 | 6 | — | 101 |
| Task Completion Time | 88 | 5 | 4 | 3 | 100 |
| Training Effectiveness | 58 | 12 | 12 | 16 | 99 |
| Worker Satisfaction | 47 | 32 | 11 | 7 | 97 |
| Wages & Compensation | 53 | 15 | 20 | 5 | 93 |
| Team Performance | 47 | 12 | 15 | 7 | 82 |
| Automation Exposure | 24 | 22 | 9 | 6 | 62 |
| Job Displacement | 6 | 38 | 13 | — | 57 |
| Hiring & Recruitment | 41 | 4 | 6 | 3 | 54 |
| Developer Productivity | 34 | 4 | 3 | 1 | 42 |
| Social Protection | 22 | 10 | 6 | 2 | 40 |
| Creative Output | 16 | 7 | 5 | 1 | 29 |
| Labor Share of Income | 12 | 5 | 9 | — | 26 |
| Skill Obsolescence | 3 | 20 | 2 | — | 25 |
| Worker Turnover | 10 | 12 | — | 3 | 25 |
Adoption
The paper introduces symbolic operators—Chronons, Hexachronons, Metachronos—as theoretical units intended to bridge first-person phenomenology of temporal experience with third‑person neurotechnology descriptions.
Theoretical proposal and definitional introduction within the paper (conceptual development); no experimental validation or sample (N/A).
XChronos is a philosophical-epistemological framework arguing that transhumanism must place subjective temporality (lived time, presence, attention, meaning) at the center of design and evaluation.
Conceptual/philosophical analysis and literature synthesis presented in the paper; no empirical sample or dataset (N/A).
A Random Survival Forest built on curated cancer‑death‑related genes (CDRG‑RSF) achieved the best long‑term prognostic performance among 14 tested ML algorithms for pancreatic cancer, with 3‑ and 5‑year AUCs > 0.7.
Comparison of 14 ML survival algorithms on curated prognostic genes; Random Survival Forest (CDRG‑RSF) reported superior 3‑ and 5‑year AUCs exceeding 0.7 (exact sample sizes/cohort details not provided in summary).
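To make the modeling setup concrete, here is a minimal sketch of fitting a random survival forest and scoring 3- and 5-year time-dependent AUC with scikit-survival; the bundled WHAS500 dataset stands in for the paper's gene panel and cohort, which are not available here.

```python
# Sketch of a random survival forest with time-dependent AUC, in the
# spirit of the CDRG-RSF comparison above (illustrative data, not the
# paper's cohort; scikit-survival assumed).
import numpy as np
from sklearn.model_selection import train_test_split
from sksurv.datasets import load_whas500
from sksurv.ensemble import RandomSurvivalForest
from sksurv.metrics import cumulative_dynamic_auc

X, y = load_whas500()          # stand-in for a curated gene-expression cohort
X = X.select_dtypes("number")  # keep numeric covariates only

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

rsf = RandomSurvivalForest(n_estimators=200, min_samples_leaf=15, random_state=0)
rsf.fit(X_tr, y_tr)

# Time-dependent AUC at ~3 and ~5 years (WHAS500 follow-up is in days).
times = np.array([3 * 365.25, 5 * 365.25])
auc, mean_auc = cumulative_dynamic_auc(y_tr, y_te, rsf.predict(X_te), times)
print(dict(zip(times.round(), auc)))
```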
Experimental knockdown of PSME3 reduced proliferation and invasion and increased apoptosis in LUAD cells, implicating the PI3K/AKT/Bcl‑2 pathway as a mediator.
Functional assays (gene knockdown experiments) reported in the PIGRS study showing decreased proliferation/invasion and increased apoptosis after PSME3 knockdown, with pathway analyses implicating PI3K/AKT/Bcl‑2.
Deep neural networks (DNNs) better captured cross‑study differential expression (DEA) signals when predicting miRNA from mRNA than sparse linear models (LASSO); for HIV the cross‑study log2 fold‑change (log2FC) correlation was approximately R ≈ 0.59 for the DNN approach.
Analysis of seven paired viral-infection datasets (including WNV and HIV) comparing DNNs with LASSO for mRNA→miRNA prediction; the DNN approach achieved a cross-study log2FC correlation of R ≈ 0.59 for HIV. Methods included recovery of differential-expression signals across studies.
An AI‑powered pipeline (EPheClass) produced a parsimonious saliva microbiome classifier for periodontal disease with AUC = 0.973 using 13 features.
EPheClass pipeline using ensemble ML (kNN, RF, SVM, XGBoost, MLP), centred log‑ratio (CLR) transform and Recursive Feature Elimination (RFE); reported performance AUC = 0.973 for periodontal disease model with 13 features (sample size not specified in summary).
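A minimal sketch of the CLR-plus-RFE stage of such a pipeline, assuming scikit-learn and synthetic count data; the 13-feature target mirrors the reported panel size, everything else is illustrative.

```python
# Sketch of the centred log-ratio transform + recursive feature elimination
# stage of a pipeline like EPheClass (hypothetical data).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE

rng = np.random.default_rng(0)
counts = rng.integers(1, 1000, size=(120, 200)).astype(float)  # samples x taxa
labels = rng.integers(0, 2, size=120)                          # disease status

# CLR transform: log counts minus the per-sample geometric mean
# (counts must be strictly positive, hence pseudocounts in practice).
log_counts = np.log(counts)
clr = log_counts - log_counts.mean(axis=1, keepdims=True)

# Recursive feature elimination down to a 13-feature panel.
selector = RFE(RandomForestClassifier(random_state=0), n_features_to_select=13)
selector.fit(clr, labels)
print("selected taxa indices:", np.flatnonzero(selector.support_))
```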
The paper provides approximation guarantees that justify scalable allocation rules and heuristics, i.e., provable performance bounds relative to the optimal solution.
Analytical approximation-theory results in the paper showing performance ratios/guarantees for specific heuristics relative to the optimal solution.
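For concreteness, such a guarantee for a maximization problem conventionally takes the form below (notation ours, not the paper's):

```latex
% An \alpha-approximation guarantee: the heuristic's value is within a
% provable factor \alpha of the optimum on every instance.
\text{ALG}(I) \;\ge\; \alpha \cdot \text{OPT}(I)
\quad \text{for all instances } I, \quad \alpha \in (0, 1].
```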
About 78% of the included studies document productivity increases related to digital transformation initiatives.
Quantitative summary across the 145 included studies indicating the proportion reporting productivity gains (~78%).
A systematic review of 145 empirical studies (published 2020–2025) finds a consistent positive association between digital transformation and work productivity.
Systematic review following PRISMA 2020 of 145 included empirical studies identified and screened from searches (see Methods); inclusion period 2020–2025; productivity outcomes extracted from each study.
The paper identifies gaps and recommends that economists conduct randomized evaluations and quasi-experimental studies to estimate causal effects of interventions (hands-on labs, instructor training, compute subsidies) on competencies and earnings.
Policy and research agenda section of the paper arguing for randomized/quasi-experimental methods; no such causal interventions were implemented in this study.
The authors conducted a cross-sectional online survey of more than 600 higher-education students and educators across multiple world regions.
Cross-sectional online survey; sample size reported as >600 participants; recruitment targeted a mix of disciplines and institution types; survey mapped to UNESCO 2024 AI competency frameworks.
Core supply‑chain management challenges targeted by simulation are production layout, product strategy, and managing volume and variety.
Survey and critique of simulation applications presented in the paper; conceptual taxonomy of application areas.
The paper proposes a 'manufacturing operation tree'—an organizationally structured framework—to guide development of more realistic, validated, and industry‑relevant simulation models.
Conceptual/modeling output in the paper (diagram and explanation of the manufacturing operation tree); theoretical development rather than empirical testing.
Standardizing datasets, benchmarks, and evaluation protocols (including real-time metrics and resource/latency measurements) is necessary to improve comparability and deployment relevance.
Surveyed inconsistencies and methodological shortcomings motivate the recommendation for standardization; many papers call for better benchmarks.
Hybrid architectures combining rule-based filters with ML classifiers and ensembles are used to improve detection performance and reduce false positives.
Comparative analysis and examples from the literature where multi-stage or hybrid pipelines are proposed and evaluated.
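A minimal sketch of such a two-stage hybrid, assuming scikit-learn; the rule, features, and data are hypothetical stand-ins for the signature-style filters described in the literature.

```python
# Sketch of a two-stage hybrid detector: a cheap rule-based filter screens
# obvious cases, and an ML classifier handles the remainder.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def rule_filter(x: np.ndarray) -> bool:
    """Hand-written rule: flag if a known-bad indicator feature fires."""
    return x[0] > 0.9  # e.g., a signature-match score (hypothetical)

class HybridDetector:
    def __init__(self):
        self.clf = GradientBoostingClassifier(random_state=0)

    def fit(self, X, y):
        self.clf.fit(X, y)
        return self

    def predict(self, X):
        # Rules first (high precision), then the learned model.
        return np.array([
            1 if rule_filter(x) else int(self.clf.predict(x[None])[0])
            for x in X
        ])

rng = np.random.default_rng(0)
X, y = rng.random((200, 5)), rng.integers(0, 2, 200)
print(HybridDetector().fit(X, y).predict(X[:10]))
```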
Econometric and causal-inference tools (difference-in-differences, instrumental variables, randomized encouragement designs) are needed to estimate long-term effects of personalized robot interventions.
Recommended methodological agenda for AI economists in the paper; no applied causal studies presented.
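As an illustration of the first of these tools, a minimal difference-in-differences sketch with statsmodels on synthetic two-period data (the built-in effect of 2.0 exists only to make the recovered coefficient interpretable):

```python
# Sketch of a two-period difference-in-differences estimate; the
# coefficient on treated:post is the DiD estimate of the effect.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),
    "post": rng.integers(0, 2, n),
})
df["y"] = (1.0 + 0.5 * df.treated + 0.3 * df.post
           + 2.0 * df.treated * df.post + rng.normal(0, 1, n))

fit = smf.ols("y ~ treated * post", data=df).fit(cov_type="HC1")
print(fit.params["treated:post"], fit.bse["treated:post"])
```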
Research and deployment will require new datasets: longitudinal multimodal interaction logs, user preference surveys, simulated user populations, and ethically annotated datasets for fairness and safety evaluation.
Data & Methods recommendations based on identified empirical needs; no dataset release or analysis in this paper.
Measuring welfare impact of personalized robots requires going beyond engagement to include non-market outcomes such as well-being, autonomy, and mental health.
Methodological recommendation in the implications and evaluation sections; no empirical measures provided.
A/B testing and longitudinal field studies are necessary for real-world validation of robot personalization, and metrics should include welfare-oriented outcomes (well-being, trust) in addition to engagement.
Recommended evaluation strategy drawing from HRI and RS experimental standards; no field trials reported in this work.
Prior to live trials, offline RS evaluation metrics (precision/recall, NDCG), counterfactual/off-policy estimators, and simulated users should be used to validate personalization policies.
Methodological recommendation based on RS evaluation practices; no empirical comparison with live trials in robots presented.
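A minimal sketch of one such offline metric, NDCG@k, computed from graded relevance judgments (toy values, not from any study):

```python
# Sketch of NDCG@k, a standard offline ranking metric for RS evaluation.
import numpy as np

def dcg_at_k(relevances: np.ndarray, k: int) -> float:
    rel = relevances[:k]
    return float(np.sum(rel / np.log2(np.arange(2, rel.size + 2))))

def ndcg_at_k(relevances: np.ndarray, k: int) -> float:
    ideal = dcg_at_k(np.sort(relevances)[::-1], k)
    return dcg_at_k(relevances, k) / ideal if ideal > 0 else 0.0

# Graded relevance of items, in the order the policy ranked them.
ranked_relevance = np.array([3, 2, 0, 1, 0])
print(ndcg_at_k(ranked_relevance, k=5))  # ~0.99 for this toy ranking
```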
Contextual bandits and counterfactual/off-policy learning can enable safe exploration and off-policy evaluation when adapting robot interactions from logged data.
Methodological synthesis referencing contextual bandit and counterfactual learning techniques from RS and causal inference; no robotic implementation experiments reported.
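A minimal sketch of the basic counterfactual estimator involved, inverse propensity scoring (IPS), on simulated logged data; a deployed system would substitute its actual logged propensities:

```python
# Sketch of an IPS off-policy value estimate from logged interactions.
import numpy as np

rng = np.random.default_rng(0)
n_actions, n_logs = 3, 10_000

# Logged data from a uniform-random behavior policy.
propensity = 1.0 / n_actions
actions = rng.integers(0, n_actions, n_logs)
rewards = rng.binomial(1, 0.2 + 0.2 * actions)  # action 2 is best

# Target policy to evaluate: always choose action 2.
target_probs = np.where(actions == 2, 1.0, 0.0)

# IPS: reweight logged rewards by the target/behavior probability ratio.
ips_value = np.mean(target_probs / propensity * rewards)
print(ips_value)  # should approach the true value of action 2 (~0.6)
```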
Sequence-aware recommenders (RNNs, Transformers, Markov/session-based models) are suitable for modeling session dynamics and short-term preference shifts in robot interactions.
Survey of sequence/temporal RS models and their typical use cases; conceptual recommendation only.
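A minimal sketch of one such model, a GRU-based session recommender that predicts the next item from an event sequence (PyTorch assumed; sizes are illustrative):

```python
# Sketch of a session-based next-item recommender: GRU over item embeddings.
import torch
import torch.nn as nn

class SessionGRU(nn.Module):
    def __init__(self, n_items: int, dim: int = 64):
        super().__init__()
        self.embed = nn.Embedding(n_items, dim)
        self.gru = nn.GRU(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, n_items)  # scores over the catalog

    def forward(self, item_ids: torch.Tensor) -> torch.Tensor:
        # item_ids: (batch, session_len) -> next-item logits (batch, n_items)
        h, _ = self.gru(self.embed(item_ids))
        return self.out(h[:, -1])  # predict from the last session state

model = SessionGRU(n_items=1000)
logits = model(torch.randint(0, 1000, (8, 12)))  # 8 sessions of 12 events
print(logits.shape)  # torch.Size([8, 1000])
```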
RS tooling covers long-term user profiles, short-term/session signals, context-awareness, multi-objective ranking, and evaluation methods suited for personalization at scale.
Review of recommender-systems methods and tooling in the literature; conceptual synthesis without empirical new data.
Recommender systems are specialized in representing, predicting, and ranking user preferences across time and contexts (e.g., collaborative filtering, content-based models, sequential/session models).
Established RS literature surveyed and cited as the basis for the claim; conceptual argument, no new experiments.
Perceived customer value is the core determinant of value-based pricing (VBP) decisions in digital marketing.
Systematic Literature Review (SLR) of 30 scholarly articles (Scopus, 2020–2025) coded into thematic categories; multiple included studies emphasize perceived value as central to pricing decisions.
Digital trade development raises city-level house prices in China in a robust, linear manner.
City-level panel regressions using a constructed digital trade index (entropy-TOPSIS aggregation of multiple indicators). Authors report tests for nonlinearity (none found) and multiple robustness checks. Sample: Chinese cities (years and exact sample size not specified in the summary).
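A minimal sketch of entropy-TOPSIS aggregation of the kind used to build the index, on random data with all indicators treated as benefit criteria (the paper's indicator set is not reproduced here):

```python
# Sketch of entropy-TOPSIS: entropy weights per indicator, then closeness
# to the ideal alternative as the composite index.
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((30, 6))  # 30 cities x 6 indicators

# Entropy weights: indicators with more dispersion get more weight.
P = X / X.sum(axis=0)
entropy = -(P * np.log(P + 1e-12)).sum(axis=0) / np.log(len(X))
weights = (1 - entropy) / (1 - entropy).sum()

# TOPSIS: distance to the ideal (best) vs anti-ideal (worst) alternative.
V = weights * X / np.linalg.norm(X, axis=0)
d_best = np.linalg.norm(V - V.max(axis=0), axis=1)
d_worst = np.linalg.norm(V - V.min(axis=0), axis=1)
index = d_worst / (d_best + d_worst)  # composite score in [0, 1]
print(index[:5])
```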
Breakthroughs in structure prediction arise from end‑to‑end deep models that combine evolutionary information (MSAs, coevolutionary signals), geometric constraints and equivariant architectures, and large‑scale pretraining on sequence databases.
Paper describes methodological components: end‑to‑end architectures using MSAs, SE(3)/E(3)-equivariant layers, transformer‑based pretraining on UniRef/UniProt/metagenomic catalogs; no quantitative ablation studies are provided in the text.
Algeria’s national approach centers on capacity building and technological independence as central security priorities in its AI strategy.
Analysis of Algeria’s national AI and security documents and related policy texts cited in the comparative case review.
The EU has developed a detailed, rights‑protective regulatory framework that includes procedural safeguards and explicit risk prohibitions for AI.
Qualitative document analysis of EU regulatory acts and strategies (e.g., bloc‑level AI regulatory proposals and legal texts) and comparative literature review.
Practical takeaway: economists should treat consent design as a lever that changes data availability, incorporate consent frictions into demand- and production-side models, and collaborate with HCI and legal scholars to design experiments capturing behavioral and welfare effects.
Recommendation from the workshop summary intended for economists; based on interdisciplinary discussions and agendas rather than tested interventions.
The workshop produced interdisciplinary outputs including personas, prototypes, and a research agenda to better align user capabilities and values with data-driven AI systems.
Documented workshop activities (Futures Design Toolkit, co-design, position papers) and stated expected deliverables in the workshop summary; these are reported outputs rather than evaluated outcomes.
Creators explicitly name advertising, direct sales, affiliate marketing, and revenue-sharing models as common monetization channels for GenAI-enabled content.
Explicit references to these monetization channels appeared repeatedly across the 377 videos and were extracted during thematic coding.
Practical measurement guidance: researchers and practitioners should use repeated sampling (high-frequency and multi-day), compute bootstrap confidence intervals for citation shares and prevalence, run rank-stability analyses, and determine required sample sizes empirically via pilot studies.
Methodological recommendations grounded in the paper's empirical findings (non-determinism, heavy tails, wide bootstrap CIs) and demonstrated use of repeated sampling and bootstrap/resampling techniques in the study.
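A minimal sketch of the bootstrap step of this guidance, computing a percentile confidence interval for a citation share from simulated repeated samples:

```python
# Sketch of a percentile-bootstrap CI for a citation share; synthetic
# 0/1 draws stand in for repeated samples of model outputs.
import numpy as np

rng = np.random.default_rng(0)
# 1 = the citation of interest appeared in a sampled response, 0 = it did not.
samples = rng.binomial(1, 0.12, size=500)

boot_shares = np.array([
    rng.choice(samples, size=samples.size, replace=True).mean()
    for _ in range(10_000)
])
lo, hi = np.percentile(boot_shares, [2.5, 97.5])
print(f"share = {samples.mean():.3f}, 95% CI = [{lo:.3f}, {hi:.3f}]")
```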
XAI analyses (e.g., SHAP / feature importance) indicate that forecasted features are among the top contributors to model predictions.
Feature attribution experiments described in the paper using SHAP or similar methods showing high importance scores for TSFM-generated forecasted features in the downstream regression.
The forecasted features produced by a frozen TSFM drive most of the predictive gains.
Ablation studies reported in the paper that remove forecasted features and measure performance degradation, plus XAI analyses (feature importance / SHAP) showing forecasted features rank highly.
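A minimal sketch of both checks, an ablation dropping the forecast columns and a global SHAP ranking, assuming the shap package and an illustrative column split (the paper's features and models are not reproduced here):

```python
# Sketch of an ablation + SHAP check: drop the TSFM-forecasted columns,
# compare held-out R^2, and rank features by mean |SHAP|.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))
y = 2.0 * X[:, 0] + X[:, 1] + rng.normal(0, 0.5, 500)  # forecast cols drive y
forecast_cols = [0, 1]  # columns produced by the frozen TSFM (hypothetical)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

full = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
ablate = GradientBoostingRegressor(random_state=0).fit(
    np.delete(X_tr, forecast_cols, axis=1), y_tr)
print("R2 full:", full.score(X_te, y_te),
      "R2 w/o forecasts:", ablate.score(np.delete(X_te, forecast_cols, 1), y_te))

# Global SHAP importances: forecasted columns should rank at the top.
shap_values = shap.TreeExplainer(full).shap_values(X_te)
print(np.abs(shap_values).mean(axis=0))
```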
A hybrid architecture in which cross-domain integrators encapsulate complex subgraphs into well-structured “resource slices” reduces price volatility by approximately 70–75% without losing throughput.
Ablation experiments comparing baseline decentralised market vs hybrid integrator architecture across simulation configurations (subset of the 1,620 runs, multiple random seeds per configuration). The paper reports ~70–75% reduction in measured price volatility metrics for hybrid vs non-hybrid cases while throughput remained statistically indistinguishable.
Agents detected up to 65% of vulnerabilities in some experimental settings.
Reported detection rate maxima from the study's experiments on certain model/scaffold/task combinations.
The authors constructed a contamination-free dataset of 22 real-world smart-contract security incidents that postdate every evaluated model's release.
Curation procedure described in the methods: 22 incidents selected to occur after all model release dates to prevent leakage.
This study expanded the evaluation matrix to 26 agent configurations spanning four model families and three scaffolding approaches.
Methods reported in this study specifying 26 agent configurations, four model families, and three scaffolds.
EVMbench (OpenAI, Paradigm, OtterSec) reported agents detecting up to 45.6% of vulnerabilities and achieving exploitation on 72.2% of a curated subset.
Reported metrics from the original EVMbench paper/benchmark (as summarized in this study).
The penalized framework induces centroid estimation and dataset-specific shrinkage whose strength is controlled by a penalty parameter, enabling tunable information sharing.
Method formulation in the paper: penalized likelihood with KL term; derivation showing centroid estimated from pooled datasets and penalty parameter governing shrinkage magnitude; discussion of tuning.
The KL-penalized estimators achieve provably lower mean squared error (MSE) than dataset-specific maximum likelihood estimators.
Non-asymptotic and/or asymptotic analyses provided in the paper that compare MSE of KL-penalized estimators to MLEs (mathematical proofs/sketches in theoretical section).
The KL-based shrinkage estimators adapt to the true degree of shared information across datasets (i.e., they automatically perform partial pooling when appropriate).
Theoretical characterization of the estimator's dependence on the penalty strength and centroid, plus simulation studies varying degree/structure of heterogeneity to show adaptive behavior.
A KL-divergence penalty that shrinks dataset-specific distributions toward a learned centroid yields simple closed-form estimators for linear models.
Methodological development in the paper: formulation of a penalized likelihood/objective using KL divergence; algebraic derivations producing closed-form solutions for the centroid and shrunken dataset estimates (closed forms presented in the paper).
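For intuition, one plausible instantiation in the simplest Gaussian-mean case with equal known variances is shown below (our notation; the paper's closed forms for general linear models may differ):

```latex
% Dataset k has n_k observations with sample mean \bar{x}_k. With a KL
% penalty of strength \lambda toward the centroid \hat{\mu}, the penalized
% objective yields, jointly:
\hat{\theta}_k = \frac{n_k \bar{x}_k + \lambda \hat{\mu}}{n_k + \lambda},
\qquad
\hat{\mu} = \frac{1}{K} \sum_{k=1}^{K} \hat{\theta}_k .
% \lambda \to 0 recovers the per-dataset MLEs; \lambda \to \infty gives
% full pooling, matching the tunable information sharing described above.
```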
The learned adaptive policy outperformed a fixed-wrench baseline by an average of 10.9% across five material setups.
Empirical evaluation: comparison between learned adaptive policy and a fixed-wrench policy on five different material setups; the paper reports an average improvement of ~10.9% (the exact performance metric formulation and per-setup statistics are not provided in the summary).
Integrating AI (notably ML and NLP) meaningfully automates routine software engineering tasks across requirements management, code generation, testing, and maintenance.
Systematic literature review of prior AI-for-SE work combined with an empirical survey of software engineering professionals reporting usage and examples of tool-supported automation; sample size for the survey not specified in the summary.
A speculative WikiRAT instantiation on Wikipedia illustrates RATs' design and potential uses.
The paper presents WikiRAT as a speculative prototype/illustration; no large-scale deployment or user study of WikiRAT is reported.
RATs record sequences of interaction: traversal (what is read and in what order), association (links and connections the reader forms), and reflection (annotations, notes, time spent), producing inspectable, shareable trajectories.
Design specification within the paper and description of data types RATs would collect (ordered page/navigation logs, hyperlinks followed, time-on-page, annotations, saved excerpts, tags, notes). This is a definitional claim about the proposed system rather than empirical measurement.
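Since the claim is definitional, the following is only an illustrative schema for the three interaction streams, with field names ours rather than the paper's:

```python
# Illustrative schema for the traversal / association / reflection streams
# a RAT would record; the Trajectory is the inspectable, shareable unit.
from dataclasses import dataclass, field

@dataclass
class PageVisit:            # traversal: what is read, in what order
    url: str
    order: int
    seconds_on_page: float

@dataclass
class Association:          # association: links/connections the reader forms
    from_url: str
    to_url: str
    via_link: bool          # followed hyperlink vs. reader-made connection

@dataclass
class Reflection:           # reflection: annotations, notes, excerpts, tags
    url: str
    note: str
    tags: list[str] = field(default_factory=list)

@dataclass
class Trajectory:
    reader_id: str
    visits: list[PageVisit] = field(default_factory=list)
    associations: list[Association] = field(default_factory=list)
    reflections: list[Reflection] = field(default_factory=list)
```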
An autoencoder-based ODE emulator that maps parameter values to latent trajectories can flexibly generate different solution paths conditioned on parameters.
Architecture and experiments: authors present a novel encoder/decoder ODE emulator that learns latent representation of trajectories and maps parameter vectors to latent trajectories; empirical examples provided (details not in summary).
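A minimal sketch of the idea, assuming PyTorch: an autoencoder compresses solution paths into latent codes and a small network maps parameters to those codes, so new parameter values can generate trajectories.

```python
# Sketch of a parameter-to-latent-trajectory ODE emulator; sizes are
# illustrative and training losses are only indicated, not tuned.
import torch
import torch.nn as nn

T, LATENT, N_PARAMS = 100, 8, 3  # time steps, latent dim, ODE parameters

encoder = nn.Sequential(nn.Linear(T, 64), nn.ReLU(), nn.Linear(64, LATENT))
decoder = nn.Sequential(nn.Linear(LATENT, 64), nn.ReLU(), nn.Linear(64, T))
param_map = nn.Sequential(nn.Linear(N_PARAMS, 64), nn.ReLU(),
                          nn.Linear(64, LATENT))

def emulate(theta: torch.Tensor) -> torch.Tensor:
    """Map ODE parameters straight to a decoded solution trajectory."""
    return decoder(param_map(theta))

# Training would combine a reconstruction loss on solver-generated paths
# with a loss tying param_map(theta) to the encoder's latent code.
paths = torch.randn(32, T)          # stand-in for numerical ODE solutions
thetas = torch.randn(32, N_PARAMS)  # corresponding parameter draws
loss = (nn.functional.mse_loss(decoder(encoder(paths)), paths)
        + nn.functional.mse_loss(param_map(thetas), encoder(paths)))
print(loss.item(), emulate(thetas[:1]).shape)
```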
A quantile emulator trained conditional on MCMC parameter draws can produce conditional quantile predictions without training a Bayesian neural network.
Method and empirical demonstration: paper describes and implements a quantile emulator (network trained to predict conditional quantiles across parameter draws).
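A minimal sketch of the pinball (quantile) loss such an emulator would minimize, assuming PyTorch; levels, shapes, and data are illustrative:

```python
# Sketch of the pinball loss: a deterministic network trained on
# (parameter draw, output) pairs predicts conditional quantiles,
# avoiding a Bayesian neural network.
import torch

def pinball_loss(pred: torch.Tensor, target: torch.Tensor, q: float) -> torch.Tensor:
    """Asymmetric loss minimized by the q-th conditional quantile."""
    err = target - pred
    return torch.mean(torch.maximum(q * err, (q - 1) * err))

# One output head per quantile level.
quantiles = [0.05, 0.5, 0.95]
preds = torch.randn(64, len(quantiles))   # network outputs per draw
target = torch.randn(64, 1)               # simulator/ODE output per draw
loss = sum(pinball_loss(preds[:, i:i + 1], target, q)
           for i, q in enumerate(quantiles))
print(loss.item())
```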