The Commonplace

Evidence (5539 claims)

Adoption: 5539 claims
Productivity: 4793 claims
Governance: 4333 claims
Human-AI Collaboration: 3326 claims
Labor Markets: 2657 claims
Innovation: 2510 claims
Org Design: 2469 claims
Skills & Training: 2017 claims
Inequality: 1378 claims

Evidence Matrix

Claim counts by outcome category and direction of finding.

Outcome Positive Negative Mixed Null Total
Other 402 112 67 480 1076
Governance & Regulation 402 192 122 62 790
Research Productivity 249 98 34 311 697
Organizational Efficiency 395 95 70 40 603
Technology Adoption Rate 321 126 73 39 564
Firm Productivity 306 39 70 12 432
Output Quality 256 66 25 28 375
AI Safety & Ethics 116 177 44 24 363
Market Structure 107 128 85 14 339
Decision Quality 177 76 38 20 315
Fiscal & Macroeconomic 89 58 33 22 209
Employment Level 77 34 80 9 202
Skill Acquisition 92 33 40 9 174
Innovation Output 120 12 23 12 168
Firm Revenue 98 34 22 0 154
Consumer Welfare 73 31 37 7 148
Task Allocation 84 16 33 7 140
Inequality Measures 25 77 32 5 139
Regulatory Compliance 54 63 13 3 133
Error Rate 44 51 6 0 101
Task Completion Time 88 5 4 3 100
Training Effectiveness 58 12 12 16 99
Worker Satisfaction 47 32 11 7 97
Wages & Compensation 53 15 20 5 93
Team Performance 47 12 15 7 82
Automation Exposure 24 22 9 6 62
Job Displacement 6 38 13 0 57
Hiring & Recruitment 41 4 6 3 54
Developer Productivity 34 4 3 1 42
Social Protection 22 10 6 2 40
Creative Output 16 7 5 1 29
Labor Share of Income 12 5 9 0 26
Skill Obsolescence 3 20 2 0 25
Worker Turnover 10 12 3 0 25
Active filter: Adoption
The paper introduces symbolic operators—Chronons, Hexachronons, Metachronos—as theoretical units intended to bridge first-person phenomenology of temporal experience with third‑person neurotechnology descriptions.
Theoretical proposal and definitional introduction within the paper (conceptual development); no experimental validation or sample (N/A).
Confidence: high · Direction: positive · Paper: XChronos and Conscious Transhumanism: A Philosophical Framew... · Outcome: existence and conceptual definition of symbolic operators linking phenomenology ...
XChronos is a philosophical-epistemological framework arguing that transhumanism must place subjective temporality (lived time, presence, attention, meaning) at the center of design and evaluation.
Conceptual/philosophical analysis and literature synthesis presented in the paper; no empirical sample or dataset (N/A).
Confidence: high · Direction: positive · Paper: XChronos and Conscious Transhumanism: A Philosophical Framew... · Outcome: degree to which subjective temporality is treated as a central evaluative/design...
A Random Survival Forest built on curated cancer‑death‑related genes (CDRG‑RSF) achieved the best long‑term prognostic performance among 14 tested ML algorithms for pancreatic cancer, with 3‑ and 5‑year AUCs > 0.7.
Comparison of 14 ML survival algorithms on curated prognostic genes; Random Survival Forest (CDRG‑RSF) reported superior 3‑ and 5‑year AUCs exceeding 0.7 (exact sample sizes/cohort details not provided in summary).
Confidence: high · Direction: positive · Paper: Editorial: Integrating machine learning and AI in biological... · Outcome: 3‑ and 5‑year survival AUC (prognostic accuracy)
Experimental knockdown of PSME3 reduced proliferation and invasion and increased apoptosis in LUAD cells, implicating the PI3K/AKT/Bcl‑2 pathway as a mediator.
Functional assays (gene knockdown experiments) reported in the PIGRS study showing decreased proliferation/invasion and increased apoptosis after PSME3 knockdown, with pathway analyses implicating PI3K/AKT/Bcl‑2.
Confidence: high · Direction: positive · Paper: Editorial: Integrating machine learning and AI in biological... · Outcome: Cell proliferation, invasion, apoptosis; downstream pathway activity (PI3K/AKT/B...
Deep neural networks (DNNs) better captured cross‑study differential expression (DEA) signals when predicting miRNA from mRNA than sparse linear models (LASSO); for HIV the cross‑study log2 fold‑change (log2FC) correlation was approximately R ≈ 0.59 for the DNN approach.
Analysis on seven paired viral infection datasets (including WNV and HIV); compared DNNs vs. LASSO for mRNA→miRNA prediction; reported cross‑study log2FC correlation R ≈ 0.59 for HIV for the DNNs. Methods included differential expression signal recovery across studies.
Confidence: high · Direction: positive · Paper: Editorial: Integrating machine learning and AI in biological... · Outcome: Cross‑study correlation of predicted vs observed log2FC (DEA signal recovery)
An AI‑powered pipeline (EPheClass) produced a parsimonious saliva microbiome classifier for periodontal disease with AUC = 0.973 using 13 features.
EPheClass pipeline using ensemble ML (kNN, RF, SVM, XGBoost, MLP), centred log‑ratio (CLR) transform and Recursive Feature Elimination (RFE); reported performance AUC = 0.973 for periodontal disease model with 13 features (sample size not specified in summary).
Confidence: high · Direction: positive · Paper: Editorial: Integrating machine learning and AI in biological... · Outcome: Classification AUC for periodontal disease (saliva)
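The centred log-ratio (CLR) transform referenced in this claim's methods is simple to state; the sketch below is a generic illustration (not the EPheClass code), using invented relative-abundance values.

```python
import math

def clr(composition):
    """Centred log-ratio transform of a strictly positive composition.

    Each component is replaced by its log minus the mean log (the log of
    the geometric mean), mapping compositional data onto an unconstrained
    space suitable for standard ML classifiers.
    """
    logs = [math.log(x) for x in composition]
    mean_log = sum(logs) / len(logs)
    return [value - mean_log for value in logs]

# Relative abundances of four taxa in one saliva sample (toy numbers).
sample = [0.5, 0.25, 0.15, 0.10]
transformed = clr(sample)
```

A defining property of CLR values is that they sum to zero within each sample, which is why downstream feature selection (such as RFE) operates on relative, not absolute, signals.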
Approximation guarantees are provided that justify scalable allocation rules and heuristics (i.e., provable performance bounds versus optimal).
Analytical approximation-theory results in the paper showing performance ratios/guarantees for specific heuristics relative to the optimal solution.
Confidence: high · Direction: positive · Paper: Evaluating Synthetic Cyber Deception Strategies Under Uncert... · Outcome: approximation ratio / performance gap between heuristic allocation and optimal d...
About 78% of the included studies document productivity increases related to digital transformation initiatives.
Quantitative summary across the 145 included studies indicating the proportion reporting productivity gains (~78%).
Confidence: high · Direction: positive · Paper: Digital transformation and its relationship with work produc... · Outcome: productivity gains (as reported by each study: individual, team, or firm-level p...
A systematic review of 145 empirical studies (published 2020–2025) finds a consistent positive association between digital transformation and work productivity.
Systematic review following PRISMA 2020 of 145 included empirical studies identified and screened from searches (see Methods); inclusion period 2020–2025; productivity outcomes extracted from each study.
Confidence: high · Direction: positive · Paper: Digital transformation and its relationship with work produc... · Outcome: work productivity (individual and organizational productivity indicators)
The paper identifies gaps and recommends that economists conduct randomized evaluations and quasi-experimental studies to estimate causal effects of interventions (hands-on labs, instructor training, compute subsidies) on competencies and earnings.
Policy and research agenda section of the paper arguing for randomized/quasi-experimental methods; no such causal interventions were implemented in this study.
Confidence: high · Direction: positive · Paper: Exploring Student and Educator Challenges in AI Competency D... · Outcome: suggested future measurement targets: causal effects of specific interventions o...
The study conducted a cross-sectional online survey of more than 600 higher-education students and educators from multiple world regions.
Cross-sectional online survey; sample size reported as >600 participants; recruitment targeted a mix of disciplines and institution types; survey mapped to UNESCO 2024 AI competency frameworks.
Confidence: high · Direction: positive · Paper: Exploring Student and Educator Challenges in AI Competency D... · Outcome: sample size and participant composition (number of respondents; roles: students ...
Core supply‑chain management challenges targeted by simulation are production layout, product strategy, and managing volume and variety.
Survey and critique of simulation applications presented in the paper; conceptual taxonomy of application areas.
Confidence: high · Direction: positive · Paper: A Review of Manufacturing Operations Research Integration in... · Outcome: effectiveness of simulation in addressing production layout, product strategy, a...
The paper proposes a 'manufacturing operation tree'—an organizationally structured framework—to guide development of more realistic, validated, and industry‑relevant simulation models.
Conceptual/modeling output in the paper (diagram and explanation of the manufacturing operation tree); theoretical development rather than empirical testing.
Confidence: high · Direction: positive · Paper: A Review of Manufacturing Operations Research Integration in... · Outcome: guidance for simulation model design, potential for improved model realism and v...
Standardizing datasets, benchmarks, and evaluation protocols (including real-time metrics and resource/latency measurements) is necessary to improve comparability and deployment relevance.
Surveyed inconsistencies and methodological shortcomings motivate the recommendation for standardization; many papers call for better benchmarks.
Confidence: high · Direction: positive · Paper: International Journal on Cybernetics & Informatics · Outcome: comparability of evaluations and measurement of deployment-relevant metrics
Hybrid architectures combining rule-based filters with ML classifiers and ensembles are used to improve detection performance and reduce false positives.
Comparative analysis and examples from the literature where multi-stage or hybrid pipelines are proposed and evaluated.
Confidence: high · Direction: positive · Paper: International Journal on Cybernetics & Informatics · Outcome: false positive rate / overall detection performance
Econometric and causal-inference tools (difference-in-differences, instrumental variables, randomized encouragement designs) are needed to estimate long-term effects of personalized robot interventions.
Recommended methodological agenda for AI economists in the paper; no applied causal studies presented.
Confidence: high · Direction: positive · Paper: Reimagining Social Robots as Recommender Systems: Foundation... · Outcome: causal estimates of long-term intervention effects (treatment effect sizes, iden...
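Of the causal-inference tools listed in this claim, difference-in-differences is the easiest to illustrate. The canonical 2x2 estimator is sketched below with made-up group means; the parallel-trends assumption is what licenses the subtraction.

```python
def diff_in_diff(treat_pre, treat_post, control_pre, control_post):
    """Canonical 2x2 difference-in-differences estimate.

    Subtracting the control group's change removes common time trends,
    leaving the treatment effect under the parallel-trends assumption.
    """
    return (treat_post - treat_pre) - (control_post - control_pre)

# Toy means: both groups drift up by 1.0; the treated group gains 0.5 extra.
effect = diff_in_diff(treat_pre=2.0, treat_post=3.5,
                      control_pre=1.0, control_post=2.0)
# effect == 0.5
```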
Research and deployment will require new datasets: longitudinal multimodal interaction logs, user preference surveys, simulated user populations, and ethically annotated datasets for fairness and safety evaluation.
Data & Methods recommendations based on identified empirical needs; no dataset release or analysis in this paper.
Confidence: high · Direction: positive · Paper: Reimagining Social Robots as Recommender Systems: Foundation... · Outcome: availability and quality of recommended datasets (longitudinality, multimodality...
Measuring welfare impact of personalized robots requires going beyond engagement to include non-market outcomes such as well-being, autonomy, and mental health.
Methodological recommendation in the implications and evaluation sections; no empirical measures provided.
Confidence: high · Direction: positive · Paper: Reimagining Social Robots as Recommender Systems: Foundation... · Outcome: welfare metrics (well-being scores, autonomy measures, mental health assessments...
A/B testing and longitudinal field studies are necessary for real-world validation of robot personalization, and metrics should include welfare-oriented outcomes (well-being, trust) in addition to engagement.
Recommended evaluation strategy drawing from HRI and RS experimental standards; no field trials reported in this work.
Confidence: high · Direction: positive · Paper: Reimagining Social Robots as Recommender Systems: Foundation... · Outcome: welfare metrics (well-being, trust), engagement metrics, long-term behavioral ch...
Prior to live trials, offline RS evaluation metrics (precision/recall, NDCG), counterfactual/off-policy estimators, and simulated users should be used to validate personalization policies.
Methodological recommendation based on RS evaluation practices; no empirical comparison with live trials in robots presented.
Confidence: high · Direction: positive · Paper: Reimagining Social Robots as Recommender Systems: Foundation... · Outcome: reliability of offline evaluation (correlation with online performance), risk re...
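The offline metrics this claim names (precision, NDCG) can be sketched directly; the toy ranking and graded relevance values below are illustrative only, not from any cited evaluation.

```python
import math

def precision_at_k(recommended, relevant, k):
    """Fraction of the top-k recommendations that are in the relevant set."""
    hits = sum(1 for item in recommended[:k] if item in relevant)
    return hits / k

def ndcg_at_k(recommended, relevance, k):
    """Normalised discounted cumulative gain over the top-k slots.

    `relevance` maps item -> graded relevance; gains are discounted by
    log2(rank + 2) and normalised by the ideal (best possible) ordering.
    """
    dcg = sum(relevance.get(item, 0) / math.log2(rank + 2)
              for rank, item in enumerate(recommended[:k]))
    ideal = sorted(relevance.values(), reverse=True)[:k]
    idcg = sum(gain / math.log2(rank + 2) for rank, gain in enumerate(ideal))
    return dcg / idcg if idcg > 0 else 0.0

ranked = ["a", "b", "c", "d"]          # hypothetical system output
rel = {"a": 3, "c": 1}                 # hypothetical graded relevance
p = precision_at_k(ranked, set(rel), k=2)   # only "a" is relevant in top-2
n = ndcg_at_k(ranked, rel, k=4)
```

Placing the relevant items in the ideal order ("a" then "c") would drive NDCG to exactly 1.0, which is the usual sanity check for an implementation.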
Contextual bandits and counterfactual/off-policy learning can enable safe exploration and off-policy evaluation when adapting robot interactions from logged data.
Methodological synthesis referencing contextual bandit and counterfactual learning techniques from RS and causal inference; no robotic implementation experiments reported.
Confidence: high · Direction: positive · Paper: Reimagining Social Robots as Recommender Systems: Foundation... · Outcome: safe exploration trade-offs (regret), off-policy evaluation accuracy (e.g., IPS/...
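The off-policy evaluation idea here rests on inverse propensity scoring (IPS). A minimal sketch on a toy logged dataset follows; the context/action names are invented for illustration.

```python
def ips_value(logged, target_policy):
    """Inverse-propensity-scoring estimate of a target policy's value.

    `logged` holds (context, action, reward, propensity) tuples from the
    behaviour policy; `target_policy(context)` returns the action the new
    policy would take. Rewards for matching actions are reweighted by
    1/propensity to correct for the logging distribution.
    """
    total = 0.0
    for context, action, reward, propensity in logged:
        if target_policy(context) == action:
            total += reward / propensity
    return total / len(logged)

# Toy log: the behaviour policy chose each action with probability 0.5.
log = [
    ("morning", "greet", 1.0, 0.5),
    ("morning", "quiz",  0.0, 0.5),
    ("evening", "greet", 0.0, 0.5),
    ("evening", "story", 1.0, 0.5),
]
policy = {"morning": "greet", "evening": "story"}
value = ips_value(log, lambda ctx: policy[ctx])
```

IPS is unbiased but high-variance when propensities are small, which is why the literature pairs it with doubly robust or clipped variants for safe exploration.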
Sequence-aware recommenders (RNNs, Transformers, Markov/session-based models) are suitable for modeling session dynamics and short-term preference shifts in robot interactions.
Survey of sequence/temporal RS models and their typical use cases; conceptual recommendation only.
Confidence: high · Direction: positive · Paper: Reimagining Social Robots as Recommender Systems: Foundation... · Outcome: session-level prediction accuracy, short-term preference prediction performance
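Of the sequence-aware models this claim names, a first-order Markov (session-based) recommender is the simplest; the sketch below uses invented interaction sessions rather than any dataset from the paper.

```python
from collections import Counter, defaultdict

def fit_transitions(sessions):
    """Count item-to-item transitions across logged sessions."""
    counts = defaultdict(Counter)
    for session in sessions:
        for current, nxt in zip(session, session[1:]):
            counts[current][nxt] += 1
    return counts

def predict_next(counts, current_item):
    """Most frequent successor of `current_item`, or None if unseen."""
    followers = counts.get(current_item)
    if not followers:
        return None
    return followers.most_common(1)[0][0]

# Hypothetical robot-interaction sessions (ordered activity choices).
sessions = [
    ["hello", "joke", "music"],
    ["hello", "joke", "news"],
    ["hello", "music"],
]
model = fit_transitions(sessions)
nxt = predict_next(model, "hello")   # "joke" follows "hello" twice, "music" once
```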
RS tooling covers long-term user profiles, short-term/session signals, context-awareness, multi-objective ranking, and evaluation methods suited for personalization at scale.
Review of recommender-systems methods and tooling in the literature; conceptual synthesis without empirical new data.
Confidence: high · Direction: positive · Paper: Reimagining Social Robots as Recommender Systems: Foundation... · Outcome: capability to model multi-timescale preferences and to perform scalable personal...
Recommender systems are specialized in representing, predicting, and ranking user preferences across time and contexts (e.g., collaborative filtering, content-based models, sequential/session models).
Established RS literature surveyed and cited as the basis for the claim; conceptual argument, no new experiments.
Confidence: high · Direction: positive · Paper: Reimagining Social Robots as Recommender Systems: Foundation... · Outcome: preference prediction/ranking accuracy across temporal and contextual settings
Perceived customer value is the core determinant of value-based pricing (VBP) decisions in digital marketing.
Systematic Literature Review (SLR) of 30 scholarly articles (Scopus, 2020–2025) coded into thematic categories; multiple included studies emphasize perceived value as central to pricing decisions.
Confidence: high · Direction: positive · Paper: Pricing Strategy in Digital Marketing: A Systematic Review o... · Outcome: Pricing decisions / price levels (determination by perceived customer value)
Digital trade development raises city-level house prices in China in a robust, linear manner.
City-level panel regressions using a constructed digital trade index (entropy-TOPSIS aggregation of multiple indicators). Authors report tests for nonlinearity (none found) and multiple robustness checks. Sample: Chinese cities (years and exact sample size not specified in the summary).
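The entropy-weighting step of the entropy-TOPSIS aggregation mentioned in these methods can be sketched as follows; the indicator matrix is invented, and the final TOPSIS ranking step is omitted.

```python
import math

def entropy_weights(matrix):
    """Entropy weights for an indicator matrix (rows = cities, cols = indicators).

    Indicators that vary more across cities have lower entropy and thus
    receive higher weight when the indicators are aggregated into a
    composite index.
    """
    n = len(matrix)
    weights = []
    for j in range(len(matrix[0])):
        col = [row[j] for row in matrix]
        total = sum(col)
        probs = [x / total for x in col]
        entropy = -sum(p * math.log(p) for p in probs if p > 0) / math.log(n)
        weights.append(1 - entropy)
    s = sum(weights)
    return [w / s for w in weights]

# Three cities, two indicators: the second varies far more across cities,
# so it should dominate the composite digital trade index.
data = [[10, 1], [10, 5], [10, 30]]
w = entropy_weights(data)
```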
Breakthroughs in structure prediction arise from end‑to‑end deep models that combine evolutionary information (MSAs, coevolutionary signals), geometric constraints and equivariant architectures, and large‑scale pretraining on sequence databases.
Paper describes methodological components: end‑to‑end architectures using MSAs, SE(3)/E(3)-equivariant layers, transformer‑based pretraining on UniRef/UniProt/metagenomic catalogs; no quantitative ablation studies are provided in the text.
Confidence: high · Direction: positive · Paper: Protein structure prediction powered by artificial intellige... · Outcome: improvement in predictive performance attributable to combined modeling componen...
Algeria’s national AI strategy makes capacity building and technological independence its central security priorities.
Analysis of Algeria’s national AI and security documents and related policy texts cited in the comparative case review.
Confidence: high · Direction: positive · Paper: Regulating AI in National Security: A Comparative S... · Outcome: policy emphasis on domestic capacity building and technological independence
The EU has developed a detailed, rights‑protective regulatory framework that includes procedural safeguards and explicit risk prohibitions for AI.
Qualitative document analysis of EU regulatory acts and strategies (e.g., bloc‑level AI regulatory proposals and legal texts) and comparative literature review.
Confidence: high · Direction: positive · Paper: Regulating AI in National Security: A Comparative S... · Outcome: regulatory comprehensiveness and degree of legal rights protection in AI governa...
Practical takeaway: economists should treat consent design as a lever that changes data availability and incorporate consent frictions into demand and production-side models; they should collaborate with HCI and legal scholars to design experiments capturing behavioral and welfare effects.
Recommendation from the workshop summary intended for economists; based on interdisciplinary discussions and agendas rather than tested interventions.
Confidence: high · Direction: positive · Paper: Moving Beyond Clicks: Rethinking Consent and User Control in... · Outcome: integration of consent design into economic models and interdisciplinary collabo...
The workshop produced interdisciplinary outputs including personas, prototypes, and a research agenda to better align user capabilities and values with data-driven AI systems.
Documented workshop activities (Futures Design Toolkit, co-design, position papers) and stated expected deliverables in the workshop summary; these are reported outputs rather than evaluated outcomes.
Confidence: high · Direction: positive · Paper: Moving Beyond Clicks: Rethinking Consent and User Control in... · Outcome: deliverables produced (personas, prototypes, research agenda)
Creators explicitly name advertising, direct sales, affiliate marketing, and revenue-sharing models as common monetization channels for GenAI-enabled content.
Explicit references to these monetization channels appeared repeatedly across the 377 videos and were extracted during thematic coding.
Confidence: high · Direction: positive · Paper: Monetizing Generative AI: YouTubers' Collective Knowledge on... · Outcome: types of monetization channels mentioned in videos
Practical measurement guidance: researchers and practitioners should use repeated sampling (high-frequency and multi-day), compute bootstrap confidence intervals for citation shares and prevalence, run rank-stability analyses, and determine required sample size empirically via pilots.
Methodological recommendations grounded in the paper's empirical findings (non-determinism, heavy tails, wide bootstrap CIs) and demonstrated use of repeated sampling and bootstrap/resampling techniques in the study.
Confidence: high · Direction: positive · Paper: Quantifying Uncertainty in AI Visibility: A Statistical Fram... · Outcome: robustness and reliability of visibility metrics (as improved by recommended mea...
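The bootstrap procedure recommended here can be sketched as a percentile interval over resampled means; the simulated "cited" indicators below are illustrative stand-ins, not the paper's data.

```python
import random

def bootstrap_ci(samples, stat, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for a statistic.

    Resamples the data with replacement, recomputes the statistic on each
    resample, and returns the (alpha/2, 1 - alpha/2) percentiles.
    """
    rng = random.Random(seed)
    n = len(samples)
    stats = sorted(stat([rng.choice(samples) for _ in range(n)])
                   for _ in range(n_boot))
    lo = stats[int((alpha / 2) * n_boot)]
    hi = stats[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Binary "cited in AI answer" indicators across 200 repeated queries (toy).
rng = random.Random(1)
draws = [1 if rng.random() < 0.3 else 0 for _ in range(200)]
share = sum(draws) / len(draws)
low, high = bootstrap_ci(draws, lambda xs: sum(xs) / len(xs))
```

Reporting `(low, high)` alongside the point estimate `share` is exactly the kind of uncertainty statement the claim argues visibility metrics need.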
XAI analyses (e.g., SHAP / feature importance) indicate that forecasted features are among the top contributors to model predictions.
Feature attribution experiments described in the paper using SHAP or similar methods showing high importance scores for TSFM-generated forecasted features in the downstream regression.
Confidence: high · Direction: positive · Paper: Regression Models Meet Foundation Models: A Hybrid-AI Approa... · Outcome: Feature attribution / importance ranking
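Permutation importance is a model-agnostic stand-in for the SHAP-style attribution this claim describes: shuffle one feature at a time and measure how much the error grows. The toy model below is invented for illustration.

```python
import random

def permutation_importance(predict, X, y, col, n_repeats=20, seed=0):
    """Increase in mean squared error after shuffling one feature column.

    A large increase means predictions depend on that column; ranking
    columns by this value approximates an attribution ranking.
    """
    rng = random.Random(seed)
    def mse(rows):
        return sum((predict(r) - t) ** 2 for r, t in zip(rows, y)) / len(y)
    base = mse(X)
    increases = []
    for _ in range(n_repeats):
        shuffled = [row[:] for row in X]
        perm = [row[col] for row in shuffled]
        rng.shuffle(perm)
        for row, v in zip(shuffled, perm):
            row[col] = v
        increases.append(mse(shuffled) - base)
    return sum(increases) / n_repeats

# Toy model: predictions use feature 0 heavily and ignore feature 1.
rng = random.Random(42)
X = [[rng.random(), rng.random()] for _ in range(100)]
y = [3.0 * a for a, b in X]
model = lambda row: 3.0 * row[0]
imp_used = permutation_importance(model, X, y, col=0)
imp_ignored = permutation_importance(model, X, y, col=1)
```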
The forecasted features produced by a frozen TSFM drive most of the predictive gains.
Ablation studies reported in the paper that remove forecasted features and measure performance degradation, plus XAI analyses (feature importance / SHAP) showing forecasted features rank highly.
Confidence: high · Direction: positive · Paper: Regression Models Meet Foundation Models: A Hybrid-AI Approa... · Outcome: Attributable change in MAE when forecasted features are included vs. removed; fe...
A hybrid architecture where cross-domain integrators encapsulate complex subgraphs into well-structured “resource slices” reduces price volatility (approximately 70–75%) without losing throughput.
Ablation experiments comparing baseline decentralised market vs hybrid integrator architecture across simulation configurations (subset of the 1,620 runs, multiple random seeds per configuration). The paper reports ~70–75% reduction in measured price volatility metrics for hybrid vs non-hybrid cases while throughput remained statistically indistinguishable.
Confidence: high · Direction: positive · Paper: Real-Time AI Service Economy: A Framework for Agentic Comput... · Outcome: percentage reduction in price volatility (~70–75%); system throughput (value/thr...
Agents detected up to 65% of vulnerabilities in some experimental settings.
Reported detection rate maxima from the study's experiments on certain model/scaffold/task combinations.
Confidence: high · Direction: positive · Paper: Re-Evaluating EVMBench: Are AI Agents Ready for Smart Contra... · Outcome: vulnerability_detection_rate (peak_value_reported = ~65%)
The authors constructed a contamination-free dataset of 22 real-world smart-contract security incidents that postdate every evaluated model's release.
Curation procedure described in the methods: 22 incidents selected to occur after all model release dates to prevent leakage.
Confidence: high · Direction: positive · Paper: Re-Evaluating EVMBench: Are AI Agents Ready for Smart Contra... · Outcome: contamination_free_dataset_size (22 incidents)
This study expanded the evaluation matrix to 26 agent configurations spanning four model families and three scaffolding approaches.
Methods reported in this study specifying 26 agent configurations, four model families, and three scaffolds.
Confidence: high · Direction: positive · Paper: Re-Evaluating EVMBench: Are AI Agents Ready for Smart Contra... · Outcome: evaluation_matrix_size (agent_configurations; model_families; scaffolds)
EVMbench (OpenAI, Paradigm, OtterSec) reported agents detecting up to 45.6% of vulnerabilities and achieving exploitation on 72.2% of a curated subset.
Reported metrics from the original EVMbench paper/benchmark (as summarized in this study).
Confidence: high · Direction: positive · Paper: Re-Evaluating EVMBench: Are AI Agents Ready for Smart Contra... · Outcome: vulnerability_detection_rate; exploitation_success_rate (on curated subset)
The penalized framework induces centroid estimation and dataset-specific shrinkage whose strength is controlled by a penalty parameter, enabling tunable information sharing.
Method formulation in the paper: penalized likelihood with KL term; derivation showing centroid estimated from pooled datasets and penalty parameter governing shrinkage magnitude; discussion of tuning.
Confidence: high · Direction: positive · Paper: Redefining shared information: a heterogeneity-adaptive fram... · Outcome: centroid estimate and degree of shrinkage (dependence on penalty parameter)
The KL-penalized estimators achieve provably lower mean squared error (MSE) than dataset-specific maximum likelihood estimators.
Non-asymptotic and/or asymptotic analyses provided in the paper that compare MSE of KL-penalized estimators to MLEs (mathematical proofs/sketches in theoretical section).
Confidence: high · Direction: positive · Paper: Redefining shared information: a heterogeneity-adaptive fram... · Outcome: mean squared error of parameter estimates (MSE)
The KL-based shrinkage estimators adapt to the true degree of shared information across datasets (i.e., they automatically perform partial pooling when appropriate).
Theoretical characterization of the estimator's dependence on the penalty strength and centroid, plus simulation studies varying degree/structure of heterogeneity to show adaptive behavior.
Confidence: high · Direction: positive · Paper: Redefining shared information: a heterogeneity-adaptive fram... · Outcome: amount of shrinkage / effective pooling as a function of heterogeneity (adaptive...
A KL-divergence penalty that shrinks dataset-specific distributions toward a learned centroid yields simple closed-form estimators for linear models.
Methodological development in the paper: formulation of a penalized likelihood/objective using KL divergence; algebraic derivations producing closed-form solutions for the centroid and shrunken dataset estimates (closed forms presented in the paper).
Confidence: high · Direction: positive · Paper: Redefining shared information: a heterogeneity-adaptive fram... · Outcome: analytic form of the estimator (existence of closed-form solutions for centroid ...
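For Gaussian means, a KL penalty toward a centroid reduces to a squared-distance penalty, which yields a convex-combination closed form. The sketch below is a generic illustration of that shrinkage pattern, using a size-weighted pooled mean as the centroid; it is not necessarily the paper's exact estimator.

```python
def shrinkage_estimates(dataset_means, dataset_sizes, lam):
    """Shrink per-dataset mean estimates toward a pooled centroid.

    `lam` controls how strongly each dataset is pulled toward the
    centroid: lam = 0 recovers the per-dataset MLEs, and lam -> infinity
    gives complete pooling. Each shrunken estimate is a convex
    combination of the dataset mean and the centroid.
    """
    total = sum(n * m for n, m in zip(dataset_sizes, dataset_means))
    centroid = total / sum(dataset_sizes)
    shrunk = [
        (n * m + lam * centroid) / (n + lam)
        for n, m in zip(dataset_sizes, dataset_means)
    ]
    return centroid, shrunk

# Three heterogeneous datasets (toy means and sample sizes).
means = [1.0, 2.0, 6.0]
sizes = [50, 50, 100]
centroid, shrunk = shrinkage_estimates(means, sizes, lam=25.0)
```

Each shrunken estimate lies strictly between its dataset's MLE and the centroid, which is the "tunable partial pooling" behavior described in the claims above.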
The learned adaptive policy outperformed a fixed-wrench baseline by an average of 10.9% across five material setups.
Empirical evaluation: comparison between learned adaptive policy and a fixed-wrench policy on five different material setups; the paper reports an average improvement of ~10.9% (the exact performance metric formulation and per-setup statistics are not provided in the summary).
Confidence: high · Direction: positive · Paper: Learning Adaptive Force Control for Contact-Rich Sample Scra... · Outcome: aggregate task performance (reported as average percent improvement over baselin...
Integrating AI (notably ML and NLP) meaningfully automates routine software engineering tasks across requirements management, code generation, testing, and maintenance.
Systematic literature review of prior AI-for-SE work combined with an empirical survey of software engineering professionals reporting usage and examples of tool-supported automation; sample size for the survey not specified in the summary.
Confidence: high · Direction: positive · Paper: Artificial Intelligence as a Catalyst for Innovation in Soft... · Outcome: degree of task automation (e.g., frequency or share of routine tasks automated)
A speculative WikiRAT instantiation on Wikipedia illustrates RATs' design and potential uses.
The paper presents WikiRAT as a speculative prototype/illustration; no large-scale deployment or user study of WikiRAT is reported.
Confidence: high · Direction: positive · Paper: Chasing RATs: Tracing Reading for and as Creative Activity · Outcome: existence of a prototype illustration (WikiRAT)
RATs record sequences of interaction: traversal (what is read and in what order), association (links and connections the reader forms), and reflection (annotations, notes, time spent), producing inspectable, shareable trajectories.
Design specification within the paper and description of data types RATs would collect (ordered page/navigation logs, hyperlinks followed, time-on-page, annotations, saved excerpts, tags, notes). This is a definitional claim about the proposed system rather than empirical measurement.
Confidence: high · Direction: positive · Paper: Chasing RATs: Tracing Reading for and as Creative Activity · Outcome: captured interaction traces (traversal, association, reflection) as data
An autoencoder-based ODE emulator that maps parameter values to latent trajectories can flexibly generate different solution paths conditioned on parameters.
Architecture and experiments: authors present a novel encoder/decoder ODE emulator that learns latent representation of trajectories and maps parameter vectors to latent trajectories; empirical examples provided (details not in summary).
Confidence: high · Direction: positive · Paper: MCMC Informed Neural Emulators for Uncertainty Quantificatio... · Outcome: ability to reconstruct/generate ODE solution trajectories conditioned on paramet...
A quantile emulator trained conditional on MCMC parameter draws can produce conditional quantile predictions without training a Bayesian neural network.
Method and empirical demonstration: paper describes and implements a quantile emulator (network trained to predict conditional quantiles across parameter draws).
Confidence: high · Direction: positive · Paper: MCMC Informed Neural Emulators for Uncertainty Quantificatio... · Outcome: accuracy of predicted conditional quantiles
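Quantile predictors of this kind are trained with the pinball (quantile) loss, whose minimiser over a sample is the empirical quantile. A grid-search sketch (not the paper's network) illustrates that property on toy data.

```python
def pinball_loss(q, tau, ys):
    """Average pinball (quantile) loss of a constant prediction q at level tau."""
    return sum(tau * (y - q) if y >= q else (1 - tau) * (q - y)
               for y in ys) / len(ys)

# Grid-search the constant that minimises the loss; it lands at the
# empirical tau-quantile of the sample (here, the values 1..100).
ys = list(range(1, 101))
candidates = [x / 2 for x in range(0, 201)]   # 0.0, 0.5, ..., 100.0
best_q90 = min(candidates, key=lambda q: pinball_loss(q, 0.9, ys))
best_q50 = min(candidates, key=lambda q: pinball_loss(q, 0.5, ys))
```

A network trained to minimise this loss conditional on MCMC parameter draws therefore learns conditional quantiles directly, with no Bayesian neural network required.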