The Commonplace

Evidence (4793 claims)

Adoption: 5539 claims
Productivity: 4793 claims
Governance: 4333 claims
Human-AI Collaboration: 3326 claims
Labor Markets: 2657 claims
Innovation: 2510 claims
Org Design: 2469 claims
Skills & Training: 2017 claims
Inequality: 1378 claims

Evidence Matrix

Claim counts by outcome category and direction of finding.

| Outcome | Positive | Negative | Mixed | Null | Total |
| Other | 402 | 112 | 67 | 480 | 1076 |
| Governance & Regulation | 402 | 192 | 122 | 62 | 790 |
| Research Productivity | 249 | 98 | 34 | 311 | 697 |
| Organizational Efficiency | 395 | 95 | 70 | 40 | 603 |
| Technology Adoption Rate | 321 | 126 | 73 | 39 | 564 |
| Firm Productivity | 306 | 39 | 70 | 12 | 432 |
| Output Quality | 256 | 66 | 25 | 28 | 375 |
| AI Safety & Ethics | 116 | 177 | 44 | 24 | 363 |
| Market Structure | 107 | 128 | 85 | 14 | 339 |
| Decision Quality | 177 | 76 | 38 | 20 | 315 |
| Fiscal & Macroeconomic | 89 | 58 | 33 | 22 | 209 |
| Employment Level | 77 | 34 | 80 | 9 | 202 |
| Skill Acquisition | 92 | 33 | 40 | 9 | 174 |
| Innovation Output | 120 | 12 | 23 | 12 | 168 |
| Firm Revenue | 98 | 34 | 22 | * | 154 |
| Consumer Welfare | 73 | 31 | 37 | 7 | 148 |
| Task Allocation | 84 | 16 | 33 | 7 | 140 |
| Inequality Measures | 25 | 77 | 32 | 5 | 139 |
| Regulatory Compliance | 54 | 63 | 13 | 3 | 133 |
| Error Rate | 44 | 51 | 6 | * | 101 |
| Task Completion Time | 88 | 5 | 4 | 3 | 100 |
| Training Effectiveness | 58 | 12 | 12 | 16 | 99 |
| Worker Satisfaction | 47 | 32 | 11 | 7 | 97 |
| Wages & Compensation | 53 | 15 | 20 | 5 | 93 |
| Team Performance | 47 | 12 | 15 | 7 | 82 |
| Automation Exposure | 24 | 22 | 9 | 6 | 62 |
| Job Displacement | 6 | 38 | 13 | * | 57 |
| Hiring & Recruitment | 41 | 4 | 6 | 3 | 54 |
| Developer Productivity | 34 | 4 | 3 | 1 | 42 |
| Social Protection | 22 | 10 | 6 | 2 | 40 |
| Creative Output | 16 | 7 | 5 | 1 | 29 |
| Labor Share of Income | 12 | 5 | 9 | * | 26 |
| Skill Obsolescence | 3 | 20 | 2 | * | 25 |
| Worker Turnover | 10 | 12 | 3 | * | 25 |

* These rows are missing one direction cell in the source. The remaining counts sum to the row total, implying the missing value is 0, but its column cannot be recovered; the counts shown are in source order. Note also that several totals exceed the sum of the four direction columns (e.g. Other: 402 + 112 + 67 + 480 = 1061 vs. a total of 1076), apparently because some claims carry a direction outside these four categories.
Active filter: Productivity
Creators explicitly name advertising, direct sales, affiliate marketing, and revenue-sharing models as common monetization channels for GenAI-enabled content.
Explicit references to these monetization channels appeared repeatedly across the 377 videos and were extracted during thematic coding.
high positive Monetizing Generative AI: YouTubers' Collective Knowledge on... types of monetization channels mentioned in videos
XAI analyses (e.g., SHAP / feature importance) indicate that forecasted features are among the top contributors to model predictions.
Feature attribution experiments described in the paper using SHAP or similar methods showing high importance scores for TSFM-generated forecasted features in the downstream regression.
high positive Regression Models Meet Foundation Models: A Hybrid-AI Approa... Feature attribution / importance ranking
The forecasted features produced by a frozen TSFM drive most of the predictive gains.
Ablation studies reported in the paper that remove forecasted features and measure performance degradation, plus XAI analyses (feature importance / SHAP) showing forecasted features rank highly.
high positive Regression Models Meet Foundation Models: A Hybrid-AI Approa... Attributable change in MAE when forecasted features are included vs. removed; fe...
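Feature-attribution claims like the two above can be illustrated with a model-agnostic permutation-importance check, a simpler relative of SHAP: shuffle one feature and measure how much the model's score drops. The `model.score` interface and the use of score drops as importances are assumptions for this sketch, not the paper's actual pipeline:

```python
import numpy as np

def permutation_importance(model, X, y, n_repeats=10, seed=0):
    """Mean drop in model score when each feature column is shuffled.

    A larger drop means the model relies more on that feature. `model`
    only needs a `score(X, y)` method (higher = better).
    """
    rng = np.random.default_rng(seed)
    base = model.score(X, y)
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # break the link between feature j and y
            drops.append(base - model.score(Xp, y))
        importances[j] = np.mean(drops)
    return importances
```

An ablation (as in the second claim) is the stronger cousin of this check: the feature is removed and the model retrained, rather than merely shuffled at evaluation time.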
Practitioners adopt methodological adaptations — including adaptive/longitudinal designs, versioning/documentation, stratification/moderation analyses, robustness checks, mixed methods, deployment-stage monitoring, and pre-analysis plans — to mitigate validity threats.
Reported mitigation strategies aggregated from the 16 semi-structured interviews and described in the paper's 'Practitioner solutions' section.
high positive RCTs & Human Uplift Studies: Methodological Challenges and P... use and types of methodological adaptations employed by practitioners
A hybrid architecture where cross-domain integrators encapsulate complex subgraphs into well-structured “resource slices” reduces price volatility (approximately 70–75%) without losing throughput.
Ablation experiments comparing baseline decentralised market vs hybrid integrator architecture across simulation configurations (subset of the 1,620 runs, multiple random seeds per configuration). The paper reports ~70–75% reduction in measured price volatility metrics for hybrid vs non-hybrid cases while throughput remained statistically indistinguishable.
high positive Real-Time AI Service Economy: A Framework for Agentic Comput... percentage reduction in price volatility (~70–75%); system throughput (value/thr...
Agents detected up to 65% of vulnerabilities in some experimental settings.
Reported detection rate maxima from the study's experiments on certain model/scaffold/task combinations.
high positive Re-Evaluating EVMBench: Are AI Agents Ready for Smart Contra... vulnerability_detection_rate (peak_value_reported = ~65%)
The authors constructed a contamination-free dataset of 22 real-world smart-contract security incidents that postdate every evaluated model's release.
Curation procedure described in the methods: 22 incidents selected to occur after all model release dates to prevent leakage.
high positive Re-Evaluating EVMBench: Are AI Agents Ready for Smart Contra... contamination_free_dataset_size (22 incidents)
This study expanded the evaluation matrix to 26 agent configurations spanning four model families and three scaffolding approaches.
Methods reported in this study specifying 26 agent configurations, four model families, and three scaffolds.
high positive Re-Evaluating EVMBench: Are AI Agents Ready for Smart Contra... evaluation_matrix_size (agent_configurations; model_families; scaffolds)
EVMbench (OpenAI, Paradigm, OtterSec) reported agents detecting up to 45.6% of vulnerabilities and achieving exploitation on 72.2% of a curated subset.
Reported metrics from the original EVMbench paper/benchmark (as summarized in this study).
high positive Re-Evaluating EVMBench: Are AI Agents Ready for Smart Contra... vulnerability_detection_rate; exploitation_success_rate (on curated subset)
Under NFD, agents are initialized with minimal scaffolding and grown through structured conversational interaction with domain practitioners, with the Knowledge Crystallization Cycle consolidating tacit dialogue into structured, reusable knowledge assets.
Architectural specification and operational formalism in the paper; supported by a detailed case study (iterative co-development with financial analysts, logged interaction transcripts and produced artifacts). Sample size for the case study is not specified.
high positive Nurture-First Agent Development: Building Domain-Expert AI A... amount and structure of crystallized knowledge/assets produced from interactions
Label changes across rounds concentrate on statements judged as ambiguous; statement ambiguity drives most label changes.
Participants provided labeling rationale and self-reported uncertainty for each of the 30 statements per round; analyses showed higher change rates for statements with higher self-reported uncertainty/ambiguous wording.
high positive Exploring Indicators of Developers' Sentiment Perceptions in... frequency of label changes per statement and its association with self-reported ...
The penalized framework induces centroid estimation and dataset-specific shrinkage whose strength is controlled by a penalty parameter, enabling tunable information sharing.
Method formulation in the paper: penalized likelihood with KL term; derivation showing centroid estimated from pooled datasets and penalty parameter governing shrinkage magnitude; discussion of tuning.
high positive Redefining shared information: a heterogeneity-adaptive fram... centroid estimate and degree of shrinkage (dependence on penalty parameter)
The KL-penalized estimators achieve provably lower mean squared error (MSE) than dataset-specific maximum likelihood estimators.
Non-asymptotic and/or asymptotic analyses provided in the paper that compare MSE of KL-penalized estimators to MLEs (mathematical proofs/sketches in theoretical section).
high positive Redefining shared information: a heterogeneity-adaptive fram... mean squared error of parameter estimates (MSE)
The KL-based shrinkage estimators adapt to the true degree of shared information across datasets (i.e., they automatically perform partial pooling when appropriate).
Theoretical characterization of the estimator's dependence on the penalty strength and centroid, plus simulation studies varying degree/structure of heterogeneity to show adaptive behavior.
high positive Redefining shared information: a heterogeneity-adaptive fram... amount of shrinkage / effective pooling as a function of heterogeneity (adaptive...
A KL-divergence penalty that shrinks dataset-specific distributions toward a learned centroid yields simple closed-form estimators for linear models.
Methodological development in the paper: formulation of a penalized likelihood/objective using KL divergence; algebraic derivations producing closed-form solutions for the centroid and shrunken dataset estimates (closed forms presented in the paper).
high positive Redefining shared information: a heterogeneity-adaptive fram... analytic form of the estimator (existence of closed-form solutions for centroid ...
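The closed-form flavor of this estimator family can be sketched for Gaussian linear models, where a quadratic shrinkage penalty toward a shared centroid yields ridge-like solutions. This is an illustration of the idea only: the squared-distance penalty, the alternating update, and the unweighted-mean centroid are assumptions of the sketch, not the paper's exact KL-based estimator:

```python
import numpy as np

def kl_shrinkage_linear(datasets, lam, n_iter=20):
    """Alternate two closed-form updates: per-dataset ridge-style shrinkage
    toward a shared centroid, then recompute the centroid as the mean of
    the shrunken estimates.

    datasets: list of (X_k, y_k) pairs.
    lam: shrinkage strength (lam=0 recovers per-dataset least squares;
         large lam pools all datasets toward the centroid).
    """
    p = datasets[0][0].shape[1]
    theta_bar = np.zeros(p)
    thetas = [np.zeros(p) for _ in datasets]
    for _ in range(n_iter):
        # argmin ||y_k - X_k t||^2 + lam ||t - theta_bar||^2 in closed form
        thetas = [
            np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y + lam * theta_bar)
            for X, y in datasets
        ]
        theta_bar = np.mean(thetas, axis=0)  # centroid of shrunken estimates
    return thetas, theta_bar
```

The tunable-pooling claim above corresponds to `lam`: as it grows, dataset-specific estimates contract toward the centroid, interpolating between no pooling and full pooling.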
The learned adaptive policy outperformed a fixed-wrench baseline by an average of 10.9% across five material setups.
Empirical evaluation: comparison between learned adaptive policy and a fixed-wrench policy on five different material setups; the paper reports an average improvement of ~10.9% (the exact performance metric formulation and per-setup statistics are not provided in the summary).
high positive Learning Adaptive Force Control for Contact-Rich Sample Scra... aggregate task performance (reported as average percent improvement over baselin...
Integrating AI (notably ML and NLP) meaningfully automates routine software engineering tasks across requirements management, code generation, testing, and maintenance.
Systematic literature review of prior AI-for-SE work combined with an empirical survey of software engineering professionals reporting usage and examples of tool-supported automation; sample size for the survey not specified in the summary.
high positive Artificial Intelligence as a Catalyst for Innovation in Soft... degree of task automation (e.g., frequency or share of routine tasks automated)
A speculative WikiRAT instantiation on Wikipedia illustrates RATs' design and potential uses.
The paper presents WikiRAT as a speculative prototype/illustration; no large-scale deployment or user study of WikiRAT is reported.
high positive Chasing RATs: Tracing Reading for and as Creative Activity existence of a prototype illustration (WikiRAT)
RATs record sequences of interaction: traversal (what is read and in what order), association (links and connections the reader forms), and reflection (annotations, notes, time spent), producing inspectable, shareable trajectories.
Design specification within the paper and description of data types RATs would collect (ordered page/navigation logs, hyperlinks followed, time-on-page, annotations, saved excerpts, tags, notes). This is a definitional claim about the proposed system rather than empirical measurement.
high positive Chasing RATs: Tracing Reading for and as Creative Activity captured interaction traces (traversal, association, reflection) as data
An autoencoder-based ODE emulator that maps parameter values to latent trajectories can flexibly generate different solution paths conditioned on parameters.
Architecture and experiments: authors present a novel encoder/decoder ODE emulator that learns latent representation of trajectories and maps parameter vectors to latent trajectories; empirical examples provided (details not in summary).
high positive MCMC Informed Neural Emulators for Uncertainty Quantificatio... ability to reconstruct/generate ODE solution trajectories conditioned on paramet...
A quantile emulator trained conditional on MCMC parameter draws can produce conditional quantile predictions without training a Bayesian neural network.
Method and empirical demonstration: paper describes and implements a quantile emulator (network trained to predict conditional quantiles across parameter draws).
high positive MCMC Informed Neural Emulators for Uncertainty Quantificatio... accuracy of predicted conditional quantiles
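The quantile-emulator idea rests on the pinball (quantile) loss: a deterministic predictor trained on it converges to the conditional q-th quantile, no Bayesian layers required. A minimal sketch with a linear predictor and subgradient descent (the paper trains a neural network across MCMC parameter draws; the fitting loop and data here are illustrative):

```python
import numpy as np

def pinball_loss(y_true, y_pred, q):
    """Quantile (pinball) loss: minimized when y_pred is the q-th quantile."""
    e = y_true - y_pred
    return float(np.mean(np.maximum(q * e, (q - 1) * e)))

def fit_linear_quantile(X, y, q, lr=0.05, n_steps=2000):
    """Subgradient descent on the pinball loss for a linear predictor w.x + b."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(n_steps):
        e = y - (X @ w + b)
        g = np.where(e > 0, -q, 1 - q)  # d(loss)/d(prediction) per sample
        w -= lr * (X.T @ g) / len(y)
        b -= lr * g.mean()
    return w, b
```

Training one such network per quantile level (or conditioning on q) yields the conditional quantile bands used for uncertainty quantification.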
The method is architecture-agnostic: uncertainty handling via parameter samples allows use of any deterministic neural-network architecture (e.g., quantile regressors, autoencoders) without specialized Bayesian layers.
Conceptual argument and demonstrations: authors implement a quantile emulator and an autoencoder-based ODE emulator as examples, showing the same uncertainty treatment applies to different network types.
high positive MCMC Informed Neural Emulators for Uncertainty Quantificatio... applicability across network architectures (demonstrated via example implementat...
By sampling training parameter vectors from a calibrated posterior (via MCMC), the surrogate avoids training on unphysical or implausible parameter configurations.
Design choice described in methods: MCMC sampling is used to draw parameter samples from the model-parameter distribution/posterior, thereby focusing training data on plausible regions; no experiments provided here quantify frequency of unphysical samples under alternative schemes.
high positive MCMC Informed Neural Emulators for Uncertainty Quantificatio... proportion of training samples that fall in implausible/unphysical parameter reg...
The surrogate loop (build/update GP → select acquisition target → inner optimization → propose evaluation → evaluate with true model → update surrogate) can be parameterized so that inner objective and acquisition encode whether one seeks minima, saddles, or double-ended transitions.
Detailed methodological description in the paper of the six-step loop and how inner objectives/acquisition are changed to represent different search tasks; supported by example implementations in code.
high positive Bayesian Optimization with Gaussian Processes to Accelerate ... flexibility of the surrogate loop to represent multiple search objectives (quali...
The accompanying Rust code implements the same six-step surrogate loop across all applications, demonstrating practical reproducibility of the framework.
Authors state that pedagogical Rust code is provided showing the exact same loop running all applications; code repository accompanies the paper.
high positive Bayesian Optimization with Gaussian Processes to Accelerate ... availability and content of provided implementation (existence of code that runs...
An adaptive trust radius constrains surrogate-guided steps to regions where the surrogate is reliable (trust-region control).
Methodological description of adaptive trust-radius control in the surrogate loop; used in experiments demonstrating improved reliability of steps proposed by the surrogate.
high positive Bayesian Optimization with Gaussian Processes to Accelerate ... step sizes accepted by surrogate-guided proposals and resulting reliability (ste...
Acquisition criteria (active learning) drive which points are evaluated next; different acquisition functions implement the different search tasks (minimization, single-point saddles, double-ended searches).
Method section describing task-specific acquisition functions and their role in selecting evaluation points; implemented in the Rust code and used in experiments reported in the paper.
high positive Bayesian Optimization with Gaussian Processes to Accelerate ... selection of next-evaluation points and resulting search efficiency (algorithmic...
A unified Bayesian optimization framework—implemented as a six-step surrogate loop—handles minimization, single-point saddle searches, and double-ended saddle searches by changing only the inner optimization target and acquisition criterion.
Methodological description in the paper: presentation of a six-step surrogate loop (build/update GP → select acquisition target → inner optimization on surrogate → propose evaluation points → evaluate with true model → update surrogate) parameterized so inner objective and acquisition encode different tasks; accompanied by pedagogical Rust code implementing the same loop for all tasks.
high positive Bayesian Optimization with Gaussian Processes to Accelerate ... ability to run minimization and saddle-search algorithms within a single surroga...
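The six-step surrogate loop can be sketched for its simplest instantiation (minimization) with a from-scratch GP and a lower-confidence-bound acquisition. The RBF kernel, LCB acquisition, and grid-based inner optimization are illustrative choices for this sketch, not the paper's Rust implementation (which swaps the inner objective and acquisition to cover saddle searches as well):

```python
import numpy as np

def rbf(a, b, ls=0.3):
    """Squared-exponential kernel between two 1-D point sets."""
    return np.exp(-0.5 * ((a[:, None] - b[None, :]) / ls) ** 2)

def gp_posterior(X, y, Xs, noise=1e-5):
    """GP posterior mean and std at test points Xs, given data (X, y)."""
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    mu = Ks.T @ np.linalg.solve(K, y)
    v = np.linalg.solve(K, Ks)
    var = np.clip(1.0 - np.sum(Ks * v, axis=0), 0.0, None)
    return mu, np.sqrt(var)

def bo_minimize(f, lo, hi, n_init=4, n_iter=15, kappa=2.0, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, n_init)            # initial design
    y = np.array([f(x) for x in X])
    grid = np.linspace(lo, hi, 400)
    for _ in range(n_iter):
        mu, sd = gp_posterior(X, y, grid)      # 1. build/update GP
        acq = mu - kappa * sd                  # 2. acquisition (LCB)
        x_next = grid[np.argmin(acq)]          # 3-4. inner opt; propose point
        X = np.append(X, x_next)
        y = np.append(y, f(x_next))            # 5. evaluate true model
                                               # 6. loop refits the surrogate
    return X[np.argmin(y)], y.min()
```

In the framework's terms, a saddle search would replace the inner `argmin` of the acquisition with a saddle-finding objective on the surrogate, leaving the rest of the loop unchanged.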
PRF design decomposes into two independent dimensions: feedback source (where feedback text comes from) and feedback model (how that feedback is used to refine the query).
Paper's conceptual framing and controlled experiments that isolate and vary these two factors independently.
high positive A Systematic Study of Pseudo-Relevance Feedback with LLMs PRF design components (feedback source vs. feedback model)
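The two-dimensional decomposition can be made concrete as two pluggable functions: a feedback *source* that produces feedback material, and a feedback *model* that folds it into the query. The Rocchio-style interpolation and toy vector representations below are illustrative assumptions, not the systems studied in the paper:

```python
import numpy as np

# Dimension 1: feedback source -- where the feedback comes from.
def source_top_docs(query_vec, doc_vecs, k=3):
    """Classic PRF source: vectors of the top-k retrieved documents."""
    scores = doc_vecs @ query_vec
    return doc_vecs[np.argsort(scores)[::-1][:k]]

# Dimension 2: feedback model -- how feedback refines the query.
def model_rocchio(query_vec, feedback_vecs, alpha=0.7):
    """Rocchio-style interpolation between query and mean feedback vector."""
    return alpha * query_vec + (1 - alpha) * feedback_vecs.mean(axis=0)

def prf(query_vec, doc_vecs, source=source_top_docs, model=model_rocchio):
    """Any (source, model) pair composes into a distinct PRF variant."""
    return model(query_vec, source(query_vec, doc_vecs))
```

Under this framing, an LLM can slot into either dimension independently: as a source (generating feedback text in place of retrieved documents) or as a model (rewriting the query given the feedback), which is what makes the two factors separately testable.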
The paper proposes specific operational and market recommendations: firms should invest in middleware and co-design partnerships; policymakers should fund shared QCSC infrastructure and workforce programs; researchers should prioritize interoperable middleware, scheduling models, and economic experiments on access-pricing.
Explicit recommendations section synthesizing prior architectural and economic analysis; prescriptive assertions based on conceptual arguments rather than experimental validation.
high positive Reference Architecture of a Quantum-Centric Supercomputer adoption of recommended investments/policies and their effect on access, standar...
Middleware standardization and interoperable APIs reduce switching costs and foster competition; lack of standards risks vendor lock-in and higher long-run costs.
Economic and systems-design argument drawing on well-understood effects of standardization in software ecosystems; no empirical QCSC-standardization case studies provided.
high positive Reference Architecture of a Quantum-Centric Supercomputer switching costs, level of competition, interoperability across QCSC offerings
QCSC reference architecture elements — e.g., QPU integration patterns, low-latency interconnects, orchestration and scheduling middleware, unified programming environments, data staging strategies — are required components to address current friction.
System decomposition and interface requirements derived from use-case analysis; proposed architecture components listed and motivated; no experimental validation.
high positive Reference Architecture of a Quantum-Centric Supercomputer presence/absence of specific architecture components and their theorized effect ...
The GNN provides greater stability (robustness over time and across conditions) than the MLP, with marked gains at low elevation angles where propagation is most variable.
Evaluation metrics in the experiments included stability/robustness over time and across elevation-angle conditions; reported performance shows larger relative gains for the GNN at low elevation angles.
high positive Federated Learning-driven Beam Management in LEO 6G Non-Terr... stability/robustness of beam predictions across time and elevation angles (espec...
A Graph Neural Network (GNN) model significantly outperforms a Multi-Layer Perceptron (MLP) baseline in beam prediction accuracy.
Supervised comparison reported in the paper between an MLP baseline and a GNN on realistic channel and beamforming data, evaluated with beam prediction accuracy metrics.
Policy recommendations include subsidizing complementary investments (data governance, training) rather than technology-only incentives; encouraging standards and interoperability; and funding evaluation studies to measure distributional effects and long-run productivity impacts.
Authors' policy section proposing these interventions based on case findings and broader policy implications.
high positive Optimizing integrated supply planning in logistics: Bridging... adoption of ISP, reduction in switching costs, quality of evaluation evidence, d...
The authors propose a conceptual optimisation framework emphasizing three pillars: digital integration (tech stack & data), collaboration (processes & governance), and continuous improvement (metrics, feedback loops).
Paper presents a conceptual framework derived from cross-case findings; theoretical/conceptual contribution rather than empirical estimation.
high positive Optimizing integrated supply planning in logistics: Bridging... framework components (no direct empirical outcome; intended to improve ISP imple...
The forecasting model was deployed with a human-in-the-loop mechanism that triggers on critical forecast deviations.
Pilot description in the paper documenting integration of human-in-the-loop rules for critical forecast deviations during deployment (single-case deployment evidence).
high positive ALGORITHM FOR IMPLEMENTING AI IN THE MANAGEMENT LOOP OF SMES... presence and functioning of human-in-the-loop triggers for forecast deviations
The framework explicitly targets SME-specific risks (data scarcity, limited skills/budgets, and change resistance) and proposes mitigations such as staged pilots, human-in-the-loop designs, and clear governance.
Design rationale and operational recommendations within the paper addressing SME constraints (conceptual; no large-N testing).
high positive ALGORITHM FOR IMPLEMENTING AI IN THE MANAGEMENT LOOP OF SMES... presence of SME-specific mitigation measures in the framework (staged pilots, H-...
An MLOps layer is included to provide continuous integration/deployment, monitoring, retraining, and governance for sustainable model maintenance.
Framework/component specification in the paper describing an MLOps layer and its responsibilities (conceptual design).
high positive ALGORITHM FOR IMPLEMENTING AI IN THE MANAGEMENT LOOP OF SMES... presence of MLOps capabilities (CI/CD, monitoring, retraining, governance) in th...
The approach operationalizes AI adoption into seven sequential stages, each with specified deliverables, assigned roles, and gate/exit criteria.
Framework description in the paper enumerating seven sequential stages and documenting deliverables, role allocation, and gate criteria (conceptual / design artifact).
high positive ALGORITHM FOR IMPLEMENTING AI IN THE MANAGEMENT LOOP OF SMES... number and specification of stages (operationalization of adoption process)
The paper proposes a practice-oriented, end-to-end algorithm for integrating AI into SME managerial decision loops grounded in CRISP-DM and extended with AI Canvas, an organizational digital-readiness assessment, and an MLOps layer.
Conceptual/framework development presented in the paper; synthesis of CRISP-DM, AI Canvas, a digital-readiness assessment, and an MLOps layer (no empirical sample required).
high positive ALGORITHM FOR IMPLEMENTING AI IN THE MANAGEMENT LOOP OF SMES... existence and content of the proposed AI adoption algorithm/framework (design el...
Standards and governance frameworks (for model auditability, security, and alignment) will become economic infrastructure influencing adoption costs and market trust.
Conceptual argument linking governance to adoption and trust, drawing on normative risk analysis; no empirical governance impact studies included.
high positive How AI Will Transform the Daily Life of a Techie within 5 Ye... existence and adoption of standards/governance frameworks and their effect on AI...
Increasing AI autonomy magnifies ethical, safety, and value-alignment concerns; robust human oversight and institutional governance are required.
Normative and risk analysis based on projected increases in system autonomy and illustrative failure modes; no formal safety audits included.
high positive How AI Will Transform the Daily Life of a Techie within 5 Ye... need/extent of human oversight and governance mechanisms (existence and strength...
Models and systems must include robust governance: transparency, explainability, provenance logging, versioning, and compliance checks to maintain trust and satisfy auditors/regulators.
Normative claim supported by recommended governance and evaluation practices described in the paper; no regulatory testing or audit case studies reported.
high positive Next-Generation Financial Analytics Frameworks for AI-Enable... governance/compliance indicators (e.g., presence of explainability reports, audi...
Cloud and distributed compute (data lakes, distributed training, streaming pipelines) provide the scalability needed to handle growing data and model complexity in financial analytics.
Technical claim supported by proposed infrastructure components in the paper; no benchmarking or capacity measurements provided.
high positive Next-Generation Financial Analytics Frameworks for AI-Enable... scalability measures (e.g., throughput, latency under load, time to train models...
Such frameworks—designed to be modular, scalable, and interoperable—enable pluggable AI modules (scenario analysis, cash-flow forecasting, dynamic pricing) and easier integration with ERP/BI systems.
Architectural claim supported by system design principles listed in the paper (modular model repositories, model-serving layers, feature stores, API integration); presented as design best-practices rather than empirical validation.
high positive Next-Generation Financial Analytics Frameworks for AI-Enable... system integration metrics (e.g., number of pluggable modules, integration time,...
A systematic RM process—risk identification → analysis/assessment → evaluation/response → control implementation → monitoring and reporting—is a core component of effective practice.
Convergence of process descriptions across ISO 31000, COSO ERM, and multiple reviewed publications identified via thematic analysis.
high positive The Role of Risk Management as an Organizational Management ... completeness/consistency of RM processes
Integration of risk management with strategy-setting and operational processes is essential to realize RM benefits.
Thematic findings from the literature review and recommendations in established frameworks (ISO 31000, COSO ERM); synthesized across peer-reviewed and practitioner literature.
high positive The Role of Risk Management as an Organizational Management ... alignment of RM with strategy and operations; realized RM benefits
An embedded risk culture and clear accountability across the organization are necessary enablers for effective risk management.
Repeatedly reported across reviewed literature and standards (e.g., ISO/COSO) in the thematic synthesis; supported by multiple secondary sources in the ten-year scope.
high positive The Role of Risk Management as an Organizational Management ... degree of RM cultural embedding; accountability; RM effectiveness
Leadership and governance commitment (board and senior management buy-in) is a core component required for effective risk management implementation.
Consistent identification of leadership/governance as an enabling factor across multiple peer-reviewed articles, books, and risk frameworks synthesized in the review; thematic analysis of literature over the last ten years.
high positive The Role of Risk Management as an Organizational Management ... effectiveness of risk management implementation / successful RM adoption