The Commonplace

Evidence (7448 claims)

Adoption: 5267 claims
Productivity: 4560 claims
Governance: 4137 claims
Human-AI Collaboration: 3103 claims
Labor Markets: 2506 claims
Innovation: 2354 claims
Org Design: 2340 claims
Skills & Training: 1945 claims
Inequality: 1322 claims

Evidence Matrix

Claim counts by outcome category and direction of finding.

Outcome Positive Negative Mixed Null Total
Other 378 106 59 455 1007
Governance & Regulation 379 176 116 58 739
Research Productivity 240 96 34 294 668
Organizational Efficiency 370 82 63 35 553
Technology Adoption Rate 296 118 66 29 513
Firm Productivity 277 34 68 10 394
AI Safety & Ethics 117 177 44 24 364
Output Quality 244 61 23 26 354
Market Structure 107 123 85 14 334
Decision Quality 168 74 37 19 301
Fiscal & Macroeconomic 75 52 32 21 187
Employment Level 70 32 74 8 186
Skill Acquisition 89 32 39 9 169
Firm Revenue 96 34 22 152
Innovation Output 106 12 21 11 151
Consumer Welfare 70 30 37 7 144
Regulatory Compliance 52 61 13 3 129
Inequality Measures 24 68 31 4 127
Task Allocation 75 11 29 6 121
Training Effectiveness 55 12 12 16 96
Error Rate 42 48 6 96
Worker Satisfaction 45 32 11 6 94
Task Completion Time 78 5 4 2 89
Wages & Compensation 46 13 19 5 83
Team Performance 44 9 15 7 76
Hiring & Recruitment 39 4 6 3 52
Automation Exposure 18 17 9 5 50
Job Displacement 5 31 12 48
Social Protection 21 10 6 2 39
Developer Productivity 29 3 3 1 36
Worker Turnover 10 12 3 25
Skill Obsolescence 3 19 2 24
Creative Output 15 5 3 1 24
Labor Share of Income 10 4 9 23
The review grouped training regimes across the systems as supervised fine-tuning, verifier-in-the-loop reinforcement learning (RL), diffusion/graph generation, and agentic optimization.
Surveyed systems' training descriptions were classified into these training-regime categories during the review's analytical synthesis.
high null result Generative AI for Quantum Circuits and Quantum Code: A Techn... training regimes present among reviewed systems
The review organized artifacts along artifact-type axes: Qiskit code, OpenQASM programs, and circuit graphs.
Analytical organization described in the methods: artifact-type axis enumerated as Qiskit, OpenQASM, and circuit graphs across the surveyed systems.
high null result Generative AI for Quantum Circuits and Quantum Code: A Techn... artifact types covered in the field synthesis
"Quantum code" in this review is defined as program artifacts (Qiskit code, OpenQASM); quantum error-correcting code (QEC) generation was excluded.
Inclusion/exclusion criteria specified in the review explicitly limited scope to program artifacts such as Qiskit and OpenQASM and excluded QEC-focused works.
high null result Generative AI for Quantum Circuits and Quantum Code: A Techn... scope definition (inclusion/exclusion of QEC)
A structured scoping review (Hugging Face, arXiv, provenance tracing; Jan–Feb 2026) identified 13 generative systems and 5 supporting datasets relevant to quantum circuit / quantum code generation.
Structured search of Hugging Face model/dataset listings, arXiv literature, and provenance tracing conducted between January and February 2026; results yielded 13 systems and 5 datasets (sample counts reported in the review).
high null result Generative AI for Quantum Circuits and Quantum Code: A Techn... number of generative systems and datasets identified (13 systems, 5 datasets)
The reinforcement learning objective optimizes a combined utility that trades off task success and resource costs; the reward penalizes delays and failures.
Learning method section describes training the high-level orchestrator with an RL reward that penalizes delays (latency/resource consumption) and failures, and that algorithmic/hyperparameter details are provided.
high null result When Should a Robot Think? Resource-Aware Reasoning via Rein... training objective: combined utility of task success and resource cost
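The reward structure described above can be sketched as a simple per-step function. This is a minimal illustration, not the paper's actual formulation: the linear penalty form and the weights `w_time` and `w_fail` are assumptions.

```python
def step_reward(task_success: bool, delay_s: float, failed: bool,
                w_time: float = 0.1, w_fail: float = 1.0) -> float:
    """Illustrative resource-aware reward: task success earns +1,
    while inference delay and failures are penalized. The weights
    w_time and w_fail are placeholders, not values from the paper."""
    reward = 1.0 if task_success else 0.0
    reward -= w_time * delay_s          # penalize latency / resource use
    reward -= w_fail * float(failed)    # penalize outright failures
    return reward
```

An orchestrator trained against a reward of this shape learns to skip or shorten LLM calls whenever the expected gain in task success does not cover the latency cost.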
The experiments use empirical LLM latency profiles measured from ALFRED tasks to model realistic inference delays in simulation.
Environment/evaluation description states use of an embodied task suite based on ALFRED and empirical latency profiles to model realistic LLM inference delays.
high null result When Should a Robot Think? Resource-Aware Reasoning via Rein... latency modeling (empirical latency profiles)
Baselines for comparison include fixed reasoning strategies (always reason, never reason), heuristic triggers for invoking LLMs, and ablations of RARRL components.
Paper lists these baselines explicitly in the Baselines and comparisons section and reports experiments comparing RARRL to them.
high null result When Should a Robot Think? Resource-Aware Reasoning via Rein... baseline policy types used for comparison
The high-level orchestration policy uses observations that include current sensory observation, execution history, and remaining resources (e.g., remaining time or compute budget).
Key Points and Methods specify the observation space used by the orchestrator, listing sensory inputs, execution history, and resource remaining as inputs.
high null result When Should a Robot Think? Resource-Aware Reasoning via Rein... policy input features (sensory observation, execution history, remaining resourc...
RARRL trains only a high-level orchestration policy via reinforcement learning and does not retrain the existing low-level control/policy modules end-to-end.
Methods/Model architecture describe a hierarchical approach where low-level controllers are existing modules and are not retrained; RL is applied to the high-level orchestrator.
high null result When Should a Robot Think? Resource-Aware Reasoning via Rein... level of learning: high-level orchestration policy trained vs. low-level control...
RARRL (Resource-Aware Reasoning via Reinforcement Learning) is a hierarchical orchestration framework that learns a high-level policy to decide when an embodied agent should invoke LLM-based reasoning, which reasoning role to use, and how much compute budget to allocate.
Paper describes a hierarchical design with a learned high-level RL orchestrator that issues discrete decisions about reasoning invocation, reasoning role/mode, and compute budget allocation; architecture and decision space specified in Methods.
high null result When Should a Robot Think? Resource-Aware Reasoning via Rein... decision variables: whether to call an LLM, reasoning role/mode selected, comput...
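The orchestrator's inputs and decision variables listed above can be made concrete with a minimal interface sketch. The field names and role labels (PLANNER, REFLECTOR) are hypothetical, chosen only to mirror the described observation and action spaces.

```python
from dataclasses import dataclass
from enum import Enum

class ReasoningRole(Enum):
    NONE = 0       # do not invoke the LLM this step
    PLANNER = 1    # hypothetical role label
    REFLECTOR = 2  # hypothetical role label

@dataclass
class OrchestratorObs:
    sensory: list         # current sensory observation
    history: list         # execution-history summary
    budget_left: float    # remaining time/compute budget

@dataclass
class OrchestratorAction:
    role: ReasoningRole   # which reasoning role to use (NONE = skip the LLM)
    token_budget: int     # compute budget allocated to this reasoning call
```

Keeping the action space this small and discrete is what allows standard RL to train the high-level policy while the low-level controllers stay frozen.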
BenchPreS defines two complementary metrics—Misapplication Rate (MR) and Appropriate Application Rate (AAR)—to quantify over‑application and correct personalization, respectively.
Methodological contribution described in the paper: explicit definitions of MR as fraction of inappropriate applications and AAR as fraction of appropriate applications, used to score model behavior.
high null result BenchPreS: A Benchmark for Context-Aware Personalized Prefer... Definition and use of MR and AAR metrics
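One plausible reading of the MR/AAR definitions above can be sketched as follows; the exact denominators used in BenchPreS may differ, so treat this as an assumption-laden illustration rather than the benchmark's scoring code.

```python
def mr_aar(decisions):
    """decisions: iterable of (applied, appropriate) boolean pairs,
    one per test case, where `applied` marks whether the model
    personalized its output and `appropriate` whether the context
    called for personalization.

    MR  = share of inappropriate-context cases where the model
          nevertheless applied personalization (over-application).
    AAR = share of appropriate-context cases where the model
          correctly applied personalization.
    """
    inapp = [applied for applied, ok in decisions if not ok]
    app = [applied for applied, ok in decisions if ok]
    mr = sum(inapp) / len(inapp) if inapp else 0.0
    aar = sum(app) / len(app) if app else 0.0
    return mr, aar
```

Under this reading, an ideal model scores MR = 0 and AAR = 1: it never personalizes when the context forbids it and always does when the context warrants it.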
Pilot randomized or quasi-experimental implementations of reduced workweeks (across firms, industries, or regions) are needed to measure effects on employment, productivity, wages, and consumption.
Research-design recommendation motivated by lack of contemporary causal evidence; not an empirical finding but a stated priority for rigorous testing.
high null result A Shorter Workweek as a Policy Response to AI-Driven Labor D... measured causal effects of reduced workweeks on employment, productivity, wages,...
There is limited direct causal identification separating technology-driven layoffs from incentive-driven layoffs in current firm-level data, creating a need for new firm-panel datasets linking AI adoption, executive pay/ownership, layoff decisions, and local demand outcomes.
Stated limitation of the paper and research-priority recommendation; assessment based on literature gaps noted in the synthesis rather than empirical gap quantification.
high null result A Shorter Workweek as a Policy Response to AI-Driven Labor D... availability/coverage of firm-level panel data capable of separating AI effects ...
Observed layoffs should be treated in empirical research as outcomes of firm governance and incentive structures; econometric studies estimating displacement from AI must control for managerial incentives and financial pressures.
Methodological recommendation based on the conceptual argument and literature linking governance/incentives to firm behavior; no new empirical demonstration provided.
high null result A Shorter Workweek as a Policy Response to AI-Driven Labor D... bias in estimated causal effect of AI on layoffs when not controlling for manage...
Research priorities include empirical testing and simulation of ISB-based control systems, cost–benefit analysis of proactive versus reactive AI governance, and distributional impact assessments.
Explicit research agenda proposed by the author (conceptual recommendation), not empirical results.
high null result DIGITAL TRANSFORMATION OF THE RUSSIAN FEDERATION’S SOCIOECON... n/a (research agenda recommendation rather than an empirical outcome)
Key empirical metrics introduced and used are: AI adoption rate (sector-level intensity), skill shift index, hybrid job share, and employment levels/net changes by sector.
Methods description listing the constructed metrics used in the simulated dataset and subsequent analyses (definitions and calculation procedures provided in the paper).
high null result AI-Driven Transformation of Labor Markets: Skill Shifts, Hyb... Defined metrics (AI adoption rate, Skill shift index, Hybrid job share, Employme...
The study's main limitations include reliance on a simulated dataset rather than exhaustive administrative microdata, literature limited to selected publishers/years, and correlational (not causal) identification of some effects.
Authors' explicitly stated limitations in the paper's methods and discussion sections describing data choices (simulated dataset, selected publishers 2020–2024) and the observational/correlational nature of several analyses.
high null result AI-Driven Transformation of Labor Markets: Skill Shifts, Hyb... Study validity/generalizability limitations
Further research is needed—randomized controlled trials, long-term impact measurement (earnings, employment stability, skill accumulation), distributional analysis, and model audits for bias.
Authors' stated research agenda and recommendations; not an empirical finding but a methodological recommendation following the pilot.
high null result AI-Driven Skill Mapping and Gig Economy Matching Algorithm f... long-term earnings, employment stability, skill accumulation, distributional out...
The authors explicitly note limitations: the study focuses on prediction (not causation), results are sensitive to data quality, workforce records may contain biases, and practical constraints like privacy and deployment complexity limit direct operational adoption.
Limitations section described by the authors listing prediction-versus-causation distinction, sensitivity to data quality, potential biases, privacy concerns, and deployment complexity.
high null result Adoption of AI-Based HR Analytics and Its Impact on Firm Pro... Scope and limitations of study conclusions (qualitative)
The study used a reproducible modeling pipeline (data cleaning, feature engineering, model training and tuning, systematic evaluation) applied to several freely available workforce datasets to enable replication.
Methods section describes a reproducible workflow including preprocessing steps, engineered features, hyperparameter tuning for each model class, cross-validation, and use of publicly available datasets.
high null result Adoption of AI-Based HR Analytics and Its Impact on Firm Pro... Reproducibility of predictive modeling workflow (procedural, not an empirical pe...
This work is conceptual/theoretical and reports no original empirical dataset; it explicitly calls for mixed-methods empirical validation (case studies, field experiments, longitudinal studies), measurement development, and multi-level data collection.
Explicit methodological statement in the paper describing its nature as a theoretical synthesis and listing empirical needs; no empirical sample provided.
high null result Revolutionizing Human Resource Development: A Theoretical Fr... presence/absence of original empirical data in the paper (none)
Four autonomous agents were benchmarked on the same fresh CTF challenge set alongside human teams.
Benchmarking experiment described in the study: four autonomous AI agents evaluated on the identical fresh challenge set used in the live onsite CTF.
high null result Understanding Human-AI Collaboration in Cybersecurity Compet... agent performance metrics on the fresh CTF challenge set (success rates, traject...
Data and methods: the study used an online experiment with 861 online-retail employees performing short-duration, virtual, task-focused collaborations; analyses focused on direct effects, moderation (emotion and partner type), mediation (service empathy), and moderated-mediation.
Methods description in the paper specifying design, sample size (n = 861), task context (temporary virtual teamwork), and analytic approach (hypothesis tests including moderation and mediation analyses).
high null result Adoption of AI partners in temporary tasks: exploring the ef... NA (methodological claim about study design and analyses)
Teamwork partner type (human vs AI) has no direct, significant effect on collaboration proficiency for temporary virtual tasks.
Online experiment with employees in the online-retail industry (n = 861). Hypothesis testing showed no significant main effect of partner type on the outcome variable 'collaboration proficiency' in the reported analyses.
high null result Adoption of AI partners in temporary tasks: exploring the ef... collaboration proficiency
Empirical strategy: the main identification strategy uses panel regressions with quadratic AI specification and interaction terms, controlling for firm covariates, employing fixed effects and robustness checks (alternative measures, sub-samples).
Methods section description: panel regressions including AI and AI^2, interactions for moderators, controls, fixed effects, and robustness analyses reported in the paper.
high null result Attention to Whom? AI Adoption and Corporate Social Responsi... N/A (methodological claim)
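The quadratic panel specification described above corresponds, under the usual two-way fixed-effects conventions, to an equation of the following form; the variable names (CSR as outcome, M as moderator) are assumptions for illustration, not the paper's notation:

```latex
% Illustrative two-way fixed-effects specification with quadratic AI term;
% variable names assumed, not taken from the paper.
CSR_{it} = \beta_1 AI_{it} + \beta_2 AI_{it}^2
         + \beta_3 \left( AI_{it} \times M_{it} \right)
         + \gamma' X_{it} + \alpha_i + \lambda_t + \varepsilon_{it}
```

Here \(M_{it}\) is a moderator, \(X_{it}\) a vector of firm controls, and \(\alpha_i\), \(\lambda_t\) firm and year fixed effects; the quadratic term \(\beta_2\) is what allows the estimated AI effect to be non-monotone.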
Data/sample claim: the empirical analysis uses a panel of 2,575 Chinese listed firms observed from 2013 to 2023.
Paper-stated sample description (panel dataset covering 2013–2023, N = 2,575 firms).
high null result Attention to Whom? AI Adoption and Corporate Social Responsi... N/A (sample description)
The paper recommends an empirical research agenda including field experiments comparing teams with and without AI mediation, structural models of labor supply and wages under reduced language frictions, microdata analysis of adopters, and measurement studies for coordination costs and mediated-action reliability.
Explicit recommendations and research agenda stated in the paper; this is a descriptive claim about the paper's content rather than an empirical finding.
high null result AI as a universal collaboration layer: Eliminating language ... existence of the recommended research agenda items in the paper
The paper's primary approach is conceptual/theoretical development and agenda-setting; it does not report large-scale empirical or experimental data.
Explicit methods statement in the paper: synthesis, illustrative examples, framework development; absence of reported empirical sample or experiments.
high null result AI as a universal collaboration layer: Eliminating language ... presence/absence of empirical/experimental data in the paper
The study's empirical base consists of 40 semi-structured interviews with cross-industry project practitioners in the UK, analyzed using thematic qualitative methods.
Stated data and methods in the paper: sample size (40), interview method, cross-industry sampling, and thematic analysis.
high null result AI in project teams: how trust calibration reconfigures team... study sample and methodology (empirical basis)
Limitation: Implementation heterogeneity — the costs and feasibility of the recommended HR changes vary by context and may affect generalisability.
Explicit limitation acknowledged in the paper; drawn from theoretical reasoning about contextual heterogeneity and practitioner variability.
high null result Symbiarchic leadership: leading integrated human and AI cybe... implementation costs; feasibility; effect on generalisability
Limitation: The framework is conceptual and requires empirical validation across sectors, firm sizes and AI‑intensity levels.
Explicit limitation acknowledged by the authors; based on the paper's method (theoretical synthesis, no original data).
high null result Symbiarchic leadership: leading integrated human and AI cybe... generalizability and empirical validity across contexts
The paper generates empirically testable propositions (e.g., how leader practices affect AI adoption speed, task reallocation, productivity, error rates, employee well‑being and turnover) and suggests natural‑experiment settings for evaluation.
Stated methodological output of the conceptual synthesis; the paper lists candidate empirical tests and research opportunities but contains no original empirical tests.
high null result Symbiarchic leadership: leading integrated human and AI cybe... AI adoption speed; task reallocation; productivity; error rates; employee well‑b...
Typical methods used are deep learning for property prediction and representation learning, protein-structure modelling tools, generative models for de novo design, NLP for knowledge extraction, and ADME/Tox in silico models integrated with traditional computational chemistry.
Methodological survey in the paper listing these approaches and examples of their application.
high null result Has AI Reshaped Drug Discovery, or Is There Still a Long Way... methods deployed in AI-driven drug discovery workflows
Commonly used data types in AI-driven drug discovery include biochemical/binding assay data, protein structural data, HTS results, ADME/Tox and PK datasets, omics/phenotypic readouts, and scientific literature/patents.
Cataloguing of data sources used across studies and company pipelines described in the paper.
high null result Has AI Reshaped Drug Discovery, or Is There Still a Long Way... types of datasets employed in model training and discovery workflows
AI became widely adopted in pharmaceutical discovery during the 2010s, driven by greater compute, larger datasets, and advances in deep learning.
Historical overview and trend analysis in the paper referencing increased compute availability, growth in public and proprietary datasets, and the rise of deep-learning publications and tools over the 2010s.
high null result Has AI Reshaped Drug Discovery, or Is There Still a Long Way... timeline and adoption rate of AI methods in pharmaceutical discovery
The available evidence consists mainly of promising empirical studies and case studies, but there are few long-run, generalized ROI or productivity estimates; results are heterogeneous across therapeutic areas.
Self-described limitation of the narrative review: heterogeneity of study designs and outcomes precluded pooled quantitative estimates and long-run ROI assessment.
high null result From Algorithm to Medicine: AI in the Discovery and Developm... evidence quality (availability of long-run ROI/productivity estimates) and heter...
AI applications span the full drug development pipeline, including target discovery, in silico screening and de novo design, preclinical safety models, clinical trial design and patient selection/monitoring, and post-marketing surveillance.
Comprehensive literature synthesis across preclinical, clinical, and post-marketing sources in the narrative review summarizing documented uses across these stages.
high null result From Algorithm to Medicine: AI in the Discovery and Developm... coverage of pipeline stages by AI applications (scope)
Current evidence is illustrative rather than systematic; there is a lack of long-run, quantitative measures of AI’s effect on late-stage clinical outcomes in the literature reviewed.
Explicit methodological statement in the paper: study is an expert/opinion synthesis and narrative review with no new causal econometric estimates or primary experimental data.
high null result Learning from the successes and failures of early artificial... existence/availability of long-run quantitative measures linking AI adoption to ...
Suggested metrics for researchers and investors to monitor include R&D cycle time, cost per IND/NDA, proportion of projects using AI, success rates at development stages, market concentration measures, and investment flows into AI-enabled biotech vs incumbents.
Recommendations made in the Implications section as metrics to watch; no empirical tracking or baseline measures provided.
high null result AI as the Catalyst for a New Paradigm in Biomedical Research recommended monitoring metrics for AI impact in pharma/biotech
Limitations of the analysis include limited empirical validation of archetypes or impacts and potential selection bias toward prominent firms and technologies.
Explicit limitations stated in the Data & Methods section of the paper.
high null result AI as the Catalyst for a New Paradigm in Biomedical Research generalizability and representativeness of the paper's claims
The paper is an editorial/conceptual synthesis rather than a primary empirical study: it uses qualitative analysis and illustrative examples, and reports no new quantitative estimates.
Explicit statement in the Data & Methods section of the paper describing document type, approach, evidence base, and limitations.
high null result AI as the Catalyst for a New Paradigm in Biomedical Research empirical evidence provision (absence of new quantitative data)
Ethical oversight and governance (addressing bias, consent, downstream risks) are critical constraints that must be addressed for AI to generate sustained benefits.
Normative synthesis referencing common ethical concerns; no empirical evaluation of oversight mechanisms in the paper.
high null result AI as the Catalyst for a New Paradigm in Biomedical Research ethical acceptability and downstream risk mitigation
Transparency and auditability for model behavior, provenance, and decisions are essential for trustworthy deployment and regulatory acceptance.
Policy and governance synthesis drawing on regulatory dynamics; no empirical study of regulatory outcomes included.
high null result AI as the Catalyst for a New Paradigm in Biomedical Research trustworthiness/regulatory acceptability of models
Rigorous model validation and reproducibility across datasets and settings are necessary constraints for successful AI deployment.
Normative claim in the editorial based on reproducibility concerns in ML and biomedical research; no reported validation trials within the paper.
high null result AI as the Catalyst for a New Paradigm in Biomedical Research reliability and generalizability of AI models across settings
The paper is primarily discursive and invitational: it opens a dialogue and proposes a research agenda rather than providing definitive empirical answers.
Stated methodological stance and limits: conceptual/philosophical analysis, interdisciplinary literature synthesis, qualitative/illustrative examples, and explicit note of no systematic empirical evaluation.
high null result At the table with Wittgenstein: How language shapes taste an... presence/absence of new empirical datasets or systematic experimental validation...
Operators and regulators should prioritize independent model audits, disclosure of data use, fairness/error rates, and field experiments to quantify causal impacts and heterogeneous effects.
Policy recommendations and research priorities summarized in the review based on identified methodological and governance gaps.
high null result Deep technologies and safer gambling: A systematic review. policy/research actions recommended (qualitative)
Research gaps include the need for robust causal evaluations (RCTs, field experiments), standardized metrics, transparency/interpretability, fairness analysis, and cross‑jurisdictional studies.
Review's recommendations and identified gaps, noting scarcity of RCTs/longitudinal work and calls for standardized outcomes and fairness checks.
high null result Deep technologies and safer gambling: A systematic review. presence of causal evaluations, standardized metrics, transparency and fairness ...
Heterogeneous study designs, outcomes, and measures across the literature hinder quantitative meta‑analysis and synthesis of effectiveness.
Review states heterogeneity of designs and outcome measures as a limitation preventing meta‑analysis.
high null result Deep technologies and safer gambling: A systematic review. heterogeneity of study designs and outcome measures (qualitative / count of disp...
Typical data used in studies are platform behavioural logs (bets, stakes, timestamps, session durations), account metadata, and in some cases limited self‑report measures.
Review summary of data sources across included studies listing platform logs and metadata as primary inputs to algorithms.
high null result Deep technologies and safer gambling: A systematic review. data types employed in models (behavioral log variables, account metadata, self‑...
Evaluation approaches in the reviewed literature varied widely, with many studies using retrospective accuracy metrics (AUC, precision/recall) rather than causal impact measures on harm reduction.
Methods synthesis in review: prevalence of supervised/unsupervised ML with retrospective performance reporting; few RCTs or field experiments reported.
high null result Deep technologies and safer gambling: A systematic review. type of evaluation used (retrospective predictive metrics vs causal designs)