Evidence (4333 claims)

Claim counts by topic:

| Topic | Claims |
|---|---|
| Adoption | 5539 |
| Productivity | 4793 |
| Governance | 4333 |
| Human-AI Collaboration | 3326 |
| Labor Markets | 2657 |
| Innovation | 2510 |
| Org Design | 2469 |
| Skills & Training | 2017 |
| Inequality | 1378 |
Evidence Matrix
Claim counts by outcome category and direction of finding.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 402 | 112 | 67 | 480 | 1076 |
| Governance & Regulation | 402 | 192 | 122 | 62 | 790 |
| Research Productivity | 249 | 98 | 34 | 311 | 697 |
| Organizational Efficiency | 395 | 95 | 70 | 40 | 603 |
| Technology Adoption Rate | 321 | 126 | 73 | 39 | 564 |
| Firm Productivity | 306 | 39 | 70 | 12 | 432 |
| Output Quality | 256 | 66 | 25 | 28 | 375 |
| AI Safety & Ethics | 116 | 177 | 44 | 24 | 363 |
| Market Structure | 107 | 128 | 85 | 14 | 339 |
| Decision Quality | 177 | 76 | 38 | 20 | 315 |
| Fiscal & Macroeconomic | 89 | 58 | 33 | 22 | 209 |
| Employment Level | 77 | 34 | 80 | 9 | 202 |
| Skill Acquisition | 92 | 33 | 40 | 9 | 174 |
| Innovation Output | 120 | 12 | 23 | 12 | 168 |
| Firm Revenue | 98 | 34 | 22 | — | 154 |
| Consumer Welfare | 73 | 31 | 37 | 7 | 148 |
| Task Allocation | 84 | 16 | 33 | 7 | 140 |
| Inequality Measures | 25 | 77 | 32 | 5 | 139 |
| Regulatory Compliance | 54 | 63 | 13 | 3 | 133 |
| Error Rate | 44 | 51 | 6 | — | 101 |
| Task Completion Time | 88 | 5 | 4 | 3 | 100 |
| Training Effectiveness | 58 | 12 | 12 | 16 | 99 |
| Worker Satisfaction | 47 | 32 | 11 | 7 | 97 |
| Wages & Compensation | 53 | 15 | 20 | 5 | 93 |
| Team Performance | 47 | 12 | 15 | 7 | 82 |
| Automation Exposure | 24 | 22 | 9 | 6 | 62 |
| Job Displacement | 6 | 38 | 13 | — | 57 |
| Hiring & Recruitment | 41 | 4 | 6 | 3 | 54 |
| Developer Productivity | 34 | 4 | 3 | 1 | 42 |
| Social Protection | 22 | 10 | 6 | 2 | 40 |
| Creative Output | 16 | 7 | 5 | 1 | 29 |
| Labor Share of Income | 12 | 5 | 9 | — | 26 |
| Skill Obsolescence | 3 | 20 | 2 | — | 25 |
| Worker Turnover | 10 | 12 | — | 3 | 25 |
Governance claims (filter applied)
The framework situates itself at the intersection of neurophenomenology, computational phenomenology, brain–computer interfaces, and human–AI teaming research.
Cross-disciplinary literature synthesis and conceptual mapping in the paper; descriptive claim with no empirical sampling (N/A).
The paper introduces symbolic operators—Chronons, Hexachronons, Metachronos—as theoretical units intended to bridge first-person phenomenology of temporal experience with third‑person neurotechnology descriptions.
Theoretical proposal and definitional introduction within the paper (conceptual development); no experimental validation or sample (N/A).
XChronos is a philosophical-epistemological framework arguing that transhumanism must place subjective temporality (lived time, presence, attention, meaning) at the center of design and evaluation.
Conceptual/philosophical analysis and literature synthesis presented in the paper; no empirical sample or dataset (N/A).
A Random Survival Forest built on curated cancer‑death‑related genes (CDRG‑RSF) achieved the best long‑term prognostic performance among 14 tested ML algorithms for pancreatic cancer, with 3‑ and 5‑year AUCs > 0.7.
Comparison of 14 ML survival algorithms on curated prognostic genes; Random Survival Forest (CDRG‑RSF) reported superior 3‑ and 5‑year AUCs exceeding 0.7 (exact sample sizes/cohort details not provided in summary).
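A minimal sketch of this kind of analysis, assuming the scikit-survival API and synthetic data (the cohort, gene list, and hyperparameters are not from the paper): fit a random survival forest on a gene-expression matrix and score time-dependent AUC at the 3- and 5-year horizons.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sksurv.ensemble import RandomSurvivalForest
from sksurv.metrics import cumulative_dynamic_auc
from sksurv.util import Surv

rng = np.random.default_rng(0)
n_samples, n_genes = 300, 60
X = rng.normal(size=(n_samples, n_genes))            # expression of curated genes (toy data)
time = rng.exponential(scale=4.0, size=n_samples)    # survival time in years
event = rng.random(n_samples) < 0.7                  # True = death observed
y = Surv.from_arrays(event=event, time=time)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
rsf = RandomSurvivalForest(n_estimators=300, min_samples_leaf=10, random_state=0)
rsf.fit(X_tr, y_tr)

# Time-dependent AUC at the 3- and 5-year horizons the claim refers to.
risk = rsf.predict(X_te)                              # higher score = higher predicted risk
auc, mean_auc = cumulative_dynamic_auc(y_tr, y_te, risk, times=np.array([3.0, 5.0]))
print(dict(zip(["3y AUC", "5y AUC"], np.round(auc, 3))))
```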
Experimental knockdown of PSME3 reduced proliferation and invasion and increased apoptosis in LUAD cells, implicating the PI3K/AKT/Bcl‑2 pathway as a mediator.
Functional assays (gene knockdown experiments) reported in the PIGRS study showing decreased proliferation/invasion and increased apoptosis after PSME3 knockdown, with pathway analyses implicating PI3K/AKT/Bcl‑2.
Deep neural networks (DNNs) captured cross-study differential expression analysis (DEA) signals better than sparse linear models (LASSO) when predicting miRNA from mRNA; for HIV, the cross-study log2 fold-change (log2FC) correlation was R ≈ 0.59 for the DNN approach.
Analysis on seven paired viral infection datasets (including WNV and HIV); compared DNNs vs. LASSO for mRNA→miRNA prediction; reported cross‑study log2FC correlation R ≈ 0.59 for HIV for the DNNs. Methods included differential expression signal recovery across studies.
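An illustrative sketch of the comparison, not the study's pipeline: predict miRNA expression from mRNA with a sparse linear model versus a small neural network, then correlate each model's implied log2 fold-changes with the observed ones. Data here are synthetic and evaluated in-sample; in the paper, models are trained on one study and the correlation is computed on a different study.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.neural_network import MLPRegressor
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n, n_mrna, n_mirna = 120, 300, 40
group = rng.integers(0, 2, size=n)                        # 0 = control, 1 = infected
X = rng.normal(size=(n, n_mrna)) + 0.5 * group[:, None]   # log2 mRNA expression
W = rng.normal(scale=0.2, size=(n_mrna, n_mirna))
Y = X @ W + 0.5 * rng.normal(size=(n, n_mirna))            # log2 miRNA expression

def log2fc(mat, grp):
    # per-miRNA log2 fold-change on log2-scale data: mean(infected) - mean(control)
    return mat[grp == 1].mean(axis=0) - mat[grp == 0].mean(axis=0)

obs = log2fc(Y, group)
for name, model in [("lasso", Lasso(alpha=0.1)),
                    ("dnn", MLPRegressor(hidden_layer_sizes=(64, 32),
                                         max_iter=1000, random_state=0))]:
    pred = log2fc(model.fit(X, Y).predict(X), group)
    print(name, "log2FC correlation R =", round(float(pearsonr(obs, pred)[0]), 2))
```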
An AI‑powered pipeline (EPheClass) produced a parsimonious saliva microbiome classifier for periodontal disease with AUC = 0.973 using 13 features.
EPheClass pipeline using ensemble ML (kNN, RF, SVM, XGBoost, MLP), centred log‑ratio (CLR) transform and Recursive Feature Elimination (RFE); reported performance AUC = 0.973 for periodontal disease model with 13 features (sample size not specified in summary).
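A minimal sketch of two of the named steps (CLR transform and RFE down to 13 features), using synthetic counts and a single random-forest base learner rather than the full EPheClass ensemble; thresholds and sizes are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(2)
n, n_taxa = 150, 200
counts = rng.poisson(20, size=(n, n_taxa)) + 1            # pseudo-count avoids log(0)
y = rng.integers(0, 2, size=n)                            # disease label (toy)

# Centred log-ratio: log abundance minus the per-sample mean log abundance.
log_counts = np.log(counts)
clr = log_counts - log_counts.mean(axis=1, keepdims=True)

X_tr, X_te, y_tr, y_te = train_test_split(clr, y, test_size=0.3, random_state=0)
selector = RFE(RandomForestClassifier(n_estimators=300, random_state=0),
               n_features_to_select=13, step=0.1).fit(X_tr, y_tr)
clf = RandomForestClassifier(n_estimators=300, random_state=0)
clf.fit(selector.transform(X_tr), y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(selector.transform(X_te))[:, 1])
print("held-out AUC:", round(auc, 3))
```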
Recommendation: Treat synthetic participants as heuristic tools (supplemental roles) rather than replacements; use hybrid designs, validate against held-out human samples, pre-register synthetic-data usage, and adopt transparency and reproducibility practices (document prompts, model versions, seeds, fine-tuning).
Authors' recommendations drawn from the systematic review of 182 studies and the identified failure modes and risks.
Approximation guarantees are provided that justify scalable allocation rules and heuristics (i.e., provable performance bounds relative to the optimal solution).
Analytical approximation-theory results in the paper showing performance ratios/guarantees for specific heuristics relative to the optimal solution.
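For readers unfamiliar with the terminology, an α-approximation guarantee of the kind referenced states, in generic form (not the paper's specific bounds), that the heuristic's objective is within a provable factor of the optimum:

$$\mathrm{cost}(\mathrm{ALG}) \;\le\; \alpha \cdot \mathrm{cost}(\mathrm{OPT}) \quad \text{(minimization)}, \qquad \mathrm{value}(\mathrm{ALG}) \;\ge\; \tfrac{1}{\alpha}\,\mathrm{value}(\mathrm{OPT}) \quad \text{(maximization)}, \qquad \alpha \ge 1.$$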
The paper identifies gaps and recommends that economists conduct randomized evaluations and quasi-experimental studies to estimate causal effects of interventions (hands-on labs, instructor training, compute subsidies) on competencies and earnings.
Policy and research agenda section of the paper arguing for randomized/quasi-experimental methods; no such causal interventions were implemented in this study.
The study conducted a cross-sectional online survey of more than 600 higher-education students and educators from multiple world regions.
Cross-sectional online survey; sample size reported as >600 participants; recruitment targeted a mix of disciplines and institution types; survey mapped to UNESCO 2024 AI competency frameworks.
Standardizing datasets, benchmarks, and evaluation protocols (including real-time metrics and resource/latency measurements) is necessary to improve comparability and deployment relevance.
Surveyed inconsistencies and methodological shortcomings motivate the recommendation for standardization; many papers call for better benchmarks.
Hybrid architectures combining rule-based filters with ML classifiers and ensembles are used to improve detection performance and reduce false positives.
Comparative analysis and examples from the literature where multi-stage or hybrid pipelines are proposed and evaluated.
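A schematic two-stage hybrid detector in the spirit of these pipelines (the rule, features, and classifier here are hypothetical and not taken from any single surveyed paper): a cheap rule stage decides clear-cut cases, and an ML classifier scores everything else.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def rule_stage(features) -> bool:
    # Hypothetical hard rule: flag immediately if the first feature exceeds a threshold.
    return features[0] > 3.0

def hybrid_predict(features, model, threshold: float = 0.8) -> bool:
    """Flag the item if either the rule stage or the ML stage fires."""
    if rule_stage(features):
        return True
    return model.predict_proba([features])[0, 1] >= threshold

# Toy training data for the ML stage.
rng = np.random.default_rng(3)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
model = GradientBoostingClassifier().fit(X, y)
print(hybrid_predict(X[0], model))
```

The staging lets high-precision rules dispose of obvious cases cheaply while the learned stage handles the ambiguous remainder, which is the mechanism the surveyed papers credit for improved detection with fewer false positives.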
Econometric and causal-inference tools (difference-in-differences, instrumental variables, randomized encouragement designs) are needed to estimate long-term effects of personalized robot interventions.
Recommended methodological agenda for AI economists in the paper; no applied causal studies presented.
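For reference, the basic two-period, two-group difference-in-differences contrast underlying this recommendation (standard textbook form, not an analysis performed in the paper) is:

$$\hat{\tau}_{\mathrm{DiD}} = \big(\bar{Y}^{\mathrm{post}}_{\mathrm{treated}} - \bar{Y}^{\mathrm{pre}}_{\mathrm{treated}}\big) - \big(\bar{Y}^{\mathrm{post}}_{\mathrm{control}} - \bar{Y}^{\mathrm{pre}}_{\mathrm{control}}\big),$$

which identifies the treatment effect only under the parallel-trends assumption that the two groups would have evolved similarly absent treatment.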
Research and deployment will require new datasets: longitudinal multimodal interaction logs, user preference surveys, simulated user populations, and ethically annotated datasets for fairness and safety evaluation.
Data & Methods recommendations based on identified empirical needs; no dataset release or analysis in this paper.
Measuring welfare impact of personalized robots requires going beyond engagement to include non-market outcomes such as well-being, autonomy, and mental health.
Methodological recommendation in the implications and evaluation sections; no empirical measures provided.
A/B testing and longitudinal field studies are necessary for real-world validation of robot personalization, and metrics should include welfare-oriented outcomes (well-being, trust) in addition to engagement.
Recommended evaluation strategy drawing from HRI and RS experimental standards; no field trials reported in this work.
Prior to live trials, offline RS evaluation metrics (precision/recall, NDCG), counterfactual/off-policy estimators, and simulated users should be used to validate personalization policies.
Methodological recommendation based on RS evaluation practices; no empirical comparison with live trials in robots presented.
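As one concrete offline metric named above, NDCG@k can be computed directly from graded relevance judgments; a minimal sketch using the standard definition (not tied to any particular RS toolkit) is below.

```python
import numpy as np

def dcg(scores):
    scores = np.asarray(scores, dtype=float)
    return float((scores / np.log2(np.arange(2, scores.size + 2))).sum())

def ndcg_at_k(relevances, k):
    """relevances: graded relevance of items in the order the system ranked them."""
    rel = np.asarray(relevances, dtype=float)
    idcg = dcg(np.sort(rel)[::-1][:k])          # gain of the ideal (perfectly sorted) ranking
    return dcg(rel[:k]) / idcg if idcg > 0 else 0.0

print(ndcg_at_k([3, 2, 0, 1], k=4))             # toy 4-item ranked list with graded relevance
```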
Contextual bandits and counterfactual/off-policy learning can enable safe exploration and off-policy evaluation when adapting robot interactions from logged data.
Methodological synthesis referencing contextual bandit and counterfactual learning techniques from RS and causal inference; no robotic implementation experiments reported.
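A minimal sketch of the counterfactual estimator this claim points to, the standard inverse-propensity-scoring (IPS) off-policy value estimate; the log layout and clipping constant are illustrative assumptions, not details from the paper.

```python
import numpy as np

def ips_value(rewards, logged_propensities, target_propensities, clip=10.0):
    """Estimate the target policy's expected reward from logs collected by another policy."""
    w = np.minimum(np.asarray(target_propensities) / np.asarray(logged_propensities), clip)
    return float(np.mean(w * np.asarray(rewards)))

# Toy log: observed reward for the logged action, plus the probability each policy
# assigns to that same action in that context.
rewards = [1, 0, 1, 1, 0]
logged_p = [0.5, 0.2, 0.4, 0.5, 0.3]
target_p = [0.7, 0.1, 0.6, 0.7, 0.2]
print(ips_value(rewards, logged_p, target_p))
```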
Sequence-aware recommenders (RNNs, Transformers, Markov/session-based models) are suitable for modeling session dynamics and short-term preference shifts in robot interactions.
Survey of sequence/temporal RS models and their typical use cases; conceptual recommendation only.
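The simplest member of this family is a first-order Markov (session-based) next-item model; the toy sketch below counts item-to-item transitions in logged sessions and recommends the most frequent successor. It is illustrative only: the RNN/Transformer recommenders surveyed learn far richer sequence representations.

```python
from collections import Counter, defaultdict

sessions = [["news", "weather", "music"], ["music", "news", "weather"], ["weather", "music"]]
transitions = defaultdict(Counter)
for s in sessions:
    for prev_item, next_item in zip(s, s[1:]):
        transitions[prev_item][next_item] += 1      # count observed item-to-item transitions

def recommend_next(item):
    counts = transitions[item]
    return counts.most_common(1)[0][0] if counts else None

print(recommend_next("weather"))   # most common item to follow "weather" in the logs
```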
RS tooling covers long-term user profiles, short-term/session signals, context-awareness, multi-objective ranking, and evaluation methods suited for personalization at scale.
Review of recommender-systems methods and tooling in the literature; conceptual synthesis without empirical new data.
Recommender systems are specialized in representing, predicting, and ranking user preferences across time and contexts (e.g., collaborative filtering, content-based models, sequential/session models).
Established RS literature surveyed and cited as the basis for the claim; conceptual argument, no new experiments.
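As a bare-bones illustration of the collaborative-filtering idea the claim names, a truncated SVD of a user-item matrix yields a low-rank reconstruction whose entries act as predicted affinities; treating zeros as unobserved is a simplification here, and the data are toy.

```python
import numpy as np

R = np.array([[5, 4, 0, 1],
              [4, 5, 1, 0],
              [1, 0, 5, 4],
              [0, 1, 4, 5]], dtype=float)   # rows: users, cols: items, 0 = unobserved

U, s, Vt = np.linalg.svd(R, full_matrices=False)
k = 2                                        # number of latent dimensions
R_hat = U[:, :k] @ np.diag(s[:k]) @ Vt[:k]   # low-rank reconstruction = predicted affinities
print(np.round(R_hat, 2))
```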
Perceived customer value is the core determinant of value-based pricing (VBP) decisions in digital marketing.
Systematic Literature Review (SLR) of 30 scholarly articles (Scopus, 2020–2025) coded into thematic categories; multiple included studies emphasize perceived value as central to pricing decisions.
Breakthroughs in structure prediction arise from end‑to‑end deep models that combine evolutionary information (MSAs, coevolutionary signals), geometric constraints and equivariant architectures, and large‑scale pretraining on sequence databases.
Paper describes methodological components: end‑to‑end architectures using MSAs, SE(3)/E(3)-equivariant layers, transformer‑based pretraining on UniRef/UniProt/metagenomic catalogs; no quantitative ablation studies are provided in the text.
Canada emphasizes teacher-led assessment, cautious regulation, and a focus on equity and professional development in responding to AI-related assessment issues.
Country case study based on Canadian policy documents and secondary sources highlighting teacher-led approaches and regulatory caution; illustrative description.
Algeria’s national approach centers on capacity building and technological independence as central security priorities in its AI strategy.
Analysis of Algeria’s national AI and security documents and related policy texts cited in the comparative case review.
The EU has developed a detailed, rights‑protective regulatory framework that includes procedural safeguards and explicit risk prohibitions for AI.
Qualitative document analysis of EU regulatory acts and strategies (e.g., bloc‑level AI regulatory proposals and legal texts) and comparative literature review.
Practical takeaway: economists should treat consent design as a lever that changes data availability and incorporate consent frictions into demand and production-side models; they should collaborate with HCI and legal scholars to design experiments capturing behavioral and welfare effects.
Recommendation from the workshop summary intended for economists; based on interdisciplinary discussions and agendas rather than tested interventions.
The workshop produced interdisciplinary outputs including personas, prototypes, and a research agenda to better align user capabilities and values with data-driven AI systems.
Documented workshop activities (Futures Design Toolkit, co-design, position papers) and stated expected deliverables in the workshop summary; these are reported outputs rather than evaluated outcomes.
Creators explicitly name advertising, direct sales, affiliate marketing, and revenue-sharing models as common monetization channels for GenAI-enabled content.
Explicit references to these monetization channels appeared repeatedly across the 377 videos and were extracted during thematic coding.
Practical measurement guidance: researchers and practitioners should use repeated sampling (high-frequency and multi-day), compute bootstrap confidence intervals for citation shares and prevalence, run rank-stability analyses, and determine required sample size empirically via pilots.
Methodological recommendations grounded in the paper's empirical findings (non-determinism, heavy tails, wide bootstrap CIs) and demonstrated use of repeated sampling and bootstrap/resampling techniques in the study.
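A generic sketch of the recommended measurement recipe (repeated sampling plus a percentile bootstrap for a citation share); the query loop is replaced by simulated 0/1 outcomes, and the sample sizes are placeholders rather than values from the paper.

```python
import numpy as np

rng = np.random.default_rng(7)
# Each repeated run records whether a given source was cited (1) or not (0).
runs = rng.binomial(1, 0.18, size=400)          # stand-in for 400 repeated queries

boot_shares = [rng.choice(runs, size=runs.size, replace=True).mean() for _ in range(5000)]
low, high = np.percentile(boot_shares, [2.5, 97.5])
print(f"citation share = {runs.mean():.3f}, 95% bootstrap CI = ({low:.3f}, {high:.3f})")
```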
The THETA project provides an interactive, reproducible analysis platform and open-source code (https://github.com/CodeSoul-co/THETA).
Explicit statement and URL in paper; code and platform availability claimed for reproducibility and interactive use.
THETA wraps modeling in an AI Scientist Agent framework (Data Steward, Modeling Analyst, Domain Expert) that simulates grounded-theory judgment and iterative refinement.
Detailed description of a three-role agent workflow in the methods section: Data Steward (ingestion/preprocessing), Modeling Analyst (modeling/hyperparameter tuning), Domain Expert (qualitative assessment/constant comparison).
THETA uses hybrid textual embeddings that combine pretrained foundation-model semantic structure with DAFT adaptations to better capture latent, domain-relevant meanings.
Method description of 'textual hybrid embeddings' combining base foundation encoders and DAFT-tuned parameters; asserted benefit for capturing latent domain meanings (no quantitative ablation reported in summary).
THETA adapts foundation embedding models to domain language using parameter-efficient LoRA fine-tuning (Domain-Adaptive Fine-Tuning, DAFT), avoiding full model retraining.
Method description: LoRA applied to foundation embedding models as the DAFT procedure; claim of parameter-efficient fine-tuning rather than end-to-end retraining (no compute benchmarks in summary).
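A minimal sketch of parameter-efficient LoRA adaptation of a transformer encoder, in the spirit of the DAFT step described above; it assumes the Hugging Face `transformers` and `peft` libraries, and the model name, target modules, and ranks are assumptions rather than values reported for THETA.

```python
from transformers import AutoModel
from peft import LoraConfig, get_peft_model

base = AutoModel.from_pretrained("sentence-transformers/all-MiniLM-L6-v2")
config = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                    target_modules=["query", "value"])    # attention projections only
model = get_peft_model(base, config)
model.print_trainable_parameters()   # only the low-rank adapters train; the base stays frozen
```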
Over 56% of comments were classified as formulaic, implying patterned, low-information responses dominate agent interaction.
Lexical-structural analysis and pattern detection (embedding/lexical measures) applied to ~2.8M comments; classification operationalized as 'formulaic comments' based on repetitive lexical/structural features, yielding >56% of comments labeled formulaic.
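One plausible operationalization of "formulaic" (the paper's exact criteria are not given in the summary) is to normalize comment text and flag comments whose normalized form recurs above a frequency threshold; the threshold and normalization below are illustrative assumptions.

```python
import re
from collections import Counter

def normalize(text: str) -> str:
    # lowercase, strip punctuation, collapse whitespace
    return re.sub(r"\s+", " ", re.sub(r"[^a-z0-9\s]", "", text.lower())).strip()

comments = ["Great post!", "great post", "Interesting take on memory.", "Great post!!!"]
counts = Counter(normalize(c) for c in comments)
formulaic = [c for c in comments if counts[normalize(c)] >= 3]   # hypothetical cutoff
print(formulaic, f"{len(formulaic) / len(comments):.0%} formulaic")
```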
Topics about AI identity, consciousness, and memory comprised 9.7% of topical niches but attracted 20.1% of posting volume, indicating disproportionate attention to introspection.
Topic modeling that identified topical niches and tagged self-referential themes (AI identity, consciousness, memory); comparison of share of topical niches (9.7%) versus share of posting volume (20.1%) in the 23-day Moltbook dataset (47,241 agents; 361,605 posts).
Moltbook activity over 23 days included 47,241 unique agents, 361,605 posts, and ~2.8 million comments.
Full dataset of Moltbook activity collected over a 23-day period; counts of unique agent IDs, posts, and comments as reported in the paper.
Practitioners adopt methodological adaptations — including adaptive/longitudinal designs, versioning/documentation, stratification/moderation analyses, robustness checks, mixed methods, deployment-stage monitoring, and pre-analysis plans — to mitigate validity threats.
Reported mitigation strategies aggregated from the 16 semi-structured interviews and described in the paper's 'Practitioner solutions' section.
A hybrid architecture where cross-domain integrators encapsulate complex subgraphs into well-structured “resource slices” reduces price volatility by approximately 70–75% without losing throughput.
Ablation experiments comparing baseline decentralised market vs hybrid integrator architecture across simulation configurations (subset of the 1,620 runs, multiple random seeds per configuration). The paper reports ~70–75% reduction in measured price volatility metrics for hybrid vs non-hybrid cases while throughput remained statistically indistinguishable.
A speculative WikiRAT instantiation on Wikipedia illustrates RATs' design and potential uses.
The paper presents WikiRAT as a speculative prototype/illustration; no large-scale deployment or user study of WikiRAT is reported.
RATs record sequences of interaction: traversal (what is read and in what order), association (links and connections the reader forms), and reflection (annotations, notes, time spent), producing inspectable, shareable trajectories.
Design specification within the paper and description of data types RATs would collect (ordered page/navigation logs, hyperlinks followed, time-on-page, annotations, saved excerpts, tags, notes). This is a definitional claim about the proposed system rather than empirical measurement.
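A hypothetical sketch of the trajectory record a RAT could emit, mirroring the three interaction layers named above (traversal, association, reflection); the field names are illustrative, not a specification from the paper.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TraversalEvent:
    page: str
    entered_at: float          # unix timestamp
    dwell_seconds: float

@dataclass
class Association:
    source_page: str
    target_page: str
    via: str                   # e.g. "hyperlink", "search", "reader-created link"

@dataclass
class Reflection:
    page: str
    note: str
    tags: List[str] = field(default_factory=list)

@dataclass
class ReadingTrajectory:
    reader_id: str
    traversal: List[TraversalEvent] = field(default_factory=list)
    associations: List[Association] = field(default_factory=list)
    reflections: List[Reflection] = field(default_factory=list)
```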
A strictly non-reciprocal interaction bias (directional/asymmetric effects between competitors) is necessary to suppress local fluctuations and produce a robust absorbing (permanent monopoly) state.
Theoretical analysis of absorbing states and stability conditions in the model, with supporting numerical simulations comparing symmetric versus non-reciprocal interaction rules (simulation counts unspecified). Results are internal to the model framework.
Early advantage in discovering resources (transient superiority) is governed by extreme-value statistics of first-passage times: rare, fast discoveries determine which population gets early footholds.
Analytic derivation applying extreme-value theory to first-passage times in the paper's stochastic, spatially-structured population model; supported by numerical simulations of stochastic realizations (simulation details unspecified). This is a theoretical/computational result (no empirical data).
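Stated in generic notation rather than the paper's model-specific form, and assuming independent searchers, the standard extreme-value identity behind this claim is that for $N$ searchers with first-passage-time CDF $F(t)$ the earliest discovery time satisfies

$$P\big(T_{\min} > t\big) = \big[1 - F(t)\big]^{N},$$

so for large $N$ the race is decided by the short-time tail of $F$, i.e., by rare, unusually fast first passages.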
Weighted-FSD provides a tunable knob to encode risk aversion/preferences by selecting quantile-weighting functions.
Theoretical correspondence between quantile weights and risk measures (SRMs) described in the paper; conceptual demonstration that different weightings produce different risk profiles.
Introducing quantile-weighted FSD (weighted-FSD) provably controls broad classes of Spectral Risk Measures (SRMs): improving weighted-FSD implies guaranteed improvements in the associated SRM.
Formal theoretical result/proof presented in the paper linking weighted quantile dominance to monotonic improvement in corresponding SRMs.
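In generic notation (not the paper's exact statement), a spectral risk measure applies a nonnegative, nondecreasing, normalized weight $\phi$ to the cost quantile function:

$$\rho_{\phi}(C) \;=\; \int_{0}^{1} \phi(u)\, F_{C}^{-1}(u)\, du, \qquad \phi \ge 0,\ \ \phi \text{ nondecreasing},\ \ \int_{0}^{1}\phi(u)\,du = 1.$$

If the learned policy's cost quantiles satisfy $F_{\pi}^{-1}(u) \le F_{\mathrm{ref}}^{-1}(u)$ wherever $\phi(u) > 0$ (a weighted-FSD condition), integrating the pointwise inequality against $\phi$ gives $\rho_{\phi}(C_{\pi}) \le \rho_{\phi}(C_{\mathrm{ref}})$.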
RAD operationalizes FSD by comparing the learned policy’s empirical rollout cost distribution to a reference policy’s distribution using Optimal Transport (OT) with entropic regularization and Sinkhorn iterations.
Methodological description in the paper: entropically regularized OT objective and Sinkhorn iterations used to compare empirical distributions and produce a differentiable loss.
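A pure-NumPy sketch of the general technique named here, entropically regularized OT via Sinkhorn iterations between two empirical cost samples; RAD's actual differentiable loss, weighting, and hyperparameters are not reproduced, and the values below are placeholders.

```python
import numpy as np

def sinkhorn_cost(x, y, eps=0.1, n_iter=200):
    """Entropic OT cost between empirical samples x (policy rollouts) and y (reference)."""
    a = np.full(len(x), 1.0 / len(x))            # uniform weights on policy samples
    b = np.full(len(y), 1.0 / len(y))            # uniform weights on reference samples
    C = (x[:, None] - y[None, :]) ** 2           # pairwise squared-distance cost matrix
    K = np.exp(-C / eps)                         # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iter):                      # alternating Sinkhorn scaling updates
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]              # entropically regularized transport plan
    return float((P * C).sum())

policy_costs = np.random.default_rng(0).normal(1.0, 0.3, size=64)
reference_costs = np.random.default_rng(1).normal(1.2, 0.3, size=64)
print(sinkhorn_cost(policy_costs, reference_costs))
```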
First-Order Stochastic Dominance (FSD) constraints compare whole cost distributions and directly constrain tails, offering stronger guarantees against high-cost (unsafe) outcomes than expected-cost constraints.
Theoretical property of FSD described in the paper; formal argument that FSD constrains the full distribution (CDF) rather than only its mean.
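In generic notation, first-order dominance of the learned policy's cost distribution over the reference means

$$F_{C_{\pi}}(t) \;\ge\; F_{C_{\mathrm{ref}}}(t)\ \ \forall t \quad\Longleftrightarrow\quad P\big(C_{\pi} > t\big) \;\le\; P\big(C_{\mathrm{ref}} > t\big)\ \ \forall t,$$

so the constraint bounds the exceedance probability at every cost level, including the high-cost tail, rather than only the mean.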
Explanations must be tailored to stakeholders (clinicians, regulators, customers) and integrated into decision processes to be useful (human-centered design principle).
Thematic coding of design and HCI literature within the review; draws on empirical studies and design guidance recommending stakeholder-specific explanation formats and integration into decision workflows.
The forecasting model was deployed with a human-in-the-loop mechanism that triggers on critical forecast deviations.
Description in the paper documenting integration of human-in-the-loop rules for critical deviations during pilot deployment (single-case deployment evidence).
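A schematic of the kind of rule described, routing a forecast to a human reviewer when it deviates critically from a reference signal; the threshold, reference, and routing labels are placeholders, not the pilot's actual configuration.

```python
def review_if_critical(forecast: float, reference: float, rel_threshold: float = 0.25) -> str:
    """Return the routing decision for one forecast."""
    deviation = abs(forecast - reference) / max(abs(reference), 1e-9)
    return "escalate_to_human" if deviation > rel_threshold else "auto_accept"

print(review_if_critical(forecast=130.0, reference=100.0))   # 30% deviation -> escalate
```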