The Commonplace

Evidence (2320 claims)

Claims by topic:
Adoption: 5227 claims
Productivity: 4503 claims
Governance: 4100 claims
Human-AI Collaboration: 3062 claims
Labor Markets: 2480 claims
Innovation: 2320 claims
Org Design: 2305 claims
Skills & Training: 1920 claims
Inequality: 1311 claims

Evidence Matrix

Claim counts by outcome category and direction of finding.

Outcome                     Positive  Negative  Mixed  Null  Total
Other                            373       105     59   439    984
Governance & Regulation          366       172    115    55    718
Research Productivity            237        95     34   294    664
Organizational Efficiency        364        82     62    34    545
Technology Adoption Rate         293       118     66    30    511
Firm Productivity                274        33     68    10    390
AI Safety & Ethics               117       178     44    24    365
Output Quality                   231        61     23    25    340
Market Structure                 107       123     85    14    334
Decision Quality                 158        68     33    17    279
Fiscal & Macroeconomic            75        52     32    21    187
Employment Level                  70        32     74     8    186
Skill Acquisition                 88        31     38     9    166
Firm Revenue                      96        34     22           152
Innovation Output                105        12     21    11    150
Consumer Welfare                  68        29     35     7    139
Regulatory Compliance             52        61     13     3    129
Inequality Measures               24        68     31     4    127
Task Allocation                   71        10     29     6    116
Worker Satisfaction               46        38     12     9    105
Error Rate                        42        47      6            95
Training Effectiveness            55        12     11    16     94
Task Completion Time              76         5      4     2     87
Wages & Compensation              46        13     19     5     83
Team Performance                  44         9     15     7     76
Hiring & Recruitment              39         4      6     3     52
Automation Exposure               18        16      9     5     48
Job Displacement                   5        29     12            46
Social Protection                 19         8      6     1     34
Developer Productivity            27         2      3     1     33
Worker Turnover                   10        12      3            25
Creative Output                   15         5      3     1     24
Skill Obsolescence                 3        18      2            23
Labor Share of Income              8         4      9            21
Active filter: Innovation
Claim: The set of loss functions for which classical evaluation is possible includes expectation-based losses, kernel/MMD-like objectives, and other standard generative-model criteria, giving a broad loss-function scope.
Evidence: Theoretical coverage and examples in the paper enumerate loss families (expectations, MMD, certain divergences) and show how the classical-approximation results apply to each; the claim is supported by derivations and examples in the text.
Confidence: high | Direction: positive | Paper: Universality of Classically Trainable, Quantum-Deployed Boso... | Outcome: scope of loss functions for which classical evaluation/approximation is feasible

Claim: A wide class of loss functions (including expectation-based losses and kernel/MMD-style objectives) and their gradients can be evaluated or efficiently approximated on a classical computer for BSBMs, using recent classical-approximation results for expectation values in linear optics.
Evidence: Theoretical argument leveraging recent classical-approximation results for expectation values in linear optics; it covers expectation-based losses and kernel/MMD-like divergences and provides constructions and complexity statements showing efficient classical evaluation or approximation of these losses and, in many cases, their gradients. The claim rests on proofs and derivations rather than empirical data.
Confidence: high | Direction: positive | Paper: Universality of Classically Trainable, Quantum-Deployed Boso... | Outcome: classical computability/approximation of loss values and gradients (time/complex...
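To make concrete why kernel/MMD objectives fall under the expectation-based scope these two entries describe, here is a minimal sketch (not the paper's construction) of a squared-MMD loss computed purely from samples: every term is a plain expectation value of a kernel, which is exactly the kind of quantity the cited classical-approximation results cover. The Gaussian kernel, function names, and sample sources are illustrative assumptions.

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    # k(a, b) = exp(-||a - b||^2 / (2 sigma^2)), computed pairwise
    diff = x[:, None, :] - y[None, :, :]
    return np.exp(-np.sum(diff ** 2, axis=-1) / (2 * sigma ** 2))

def mmd2(model_samples, data_samples, sigma=1.0):
    """Squared maximum mean discrepancy between two sample sets.

    MMD^2 = E[k(x, x')] - 2 E[k(x, y)] + E[k(y, y')]: three plain
    expectation values, so any scheme that can classically approximate
    expectations (as claimed for BSBMs) can approximate this loss.
    """
    kxx = gaussian_kernel(model_samples, model_samples, sigma)
    kyy = gaussian_kernel(data_samples, data_samples, sigma)
    kxy = gaussian_kernel(model_samples, data_samples, sigma)
    return kxx.mean() - 2.0 * kxy.mean() + kyy.mean()
```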
Claim: The paper proposes specific operational and market recommendations: firms should invest in middleware and co-design partnerships; policymakers should fund shared QCSC infrastructure and workforce programs; and researchers should prioritize interoperable middleware, scheduling models, and economic experiments on access pricing.
Evidence: Explicit recommendations section synthesizing the prior architectural and economic analysis; prescriptive assertions based on conceptual arguments rather than experimental validation.
Confidence: high | Direction: positive | Paper: Reference Architecture of a Quantum-Centric Supercomputer | Outcome: adoption of recommended investments/policies and their effect on access, standar...

Claim: Middleware standardization and interoperable APIs reduce switching costs and foster competition; a lack of standards risks vendor lock-in and higher long-run costs.
Evidence: Economic and systems-design argument drawing on well-understood effects of standardization in software ecosystems; no empirical QCSC-standardization case studies are provided.
Confidence: high | Direction: positive | Paper: Reference Architecture of a Quantum-Centric Supercomputer | Outcome: switching costs, level of competition, interoperability across QCSC offerings

Claim: QCSC reference-architecture elements, e.g. QPU integration patterns, low-latency interconnects, orchestration and scheduling middleware, unified programming environments, and data-staging strategies, are required components to address current friction.
Evidence: System decomposition and interface requirements derived from use-case analysis; the proposed architecture components are listed and motivated, with no experimental validation.
Confidence: high | Direction: positive | Paper: Reference Architecture of a Quantum-Centric Supercomputer | Outcome: presence/absence of specific architecture components and their theorized effect ...
Claim: Models and systems must include robust governance: transparency, explainability, provenance logging, versioning, and compliance checks to maintain trust and satisfy auditors and regulators.
Evidence: Normative claim supported by the governance and evaluation practices recommended in the paper; no regulatory testing or audit case studies are reported.
Confidence: high | Direction: positive | Paper: Next-Generation Financial Analytics Frameworks for AI-Enable... | Outcome: governance/compliance indicators (e.g., presence of explainability reports, audi...

Claim: Cloud and distributed compute (data lakes, distributed training, streaming pipelines) provide the scalability needed to handle growing data volumes and model complexity in financial analytics.
Evidence: Technical claim supported by the infrastructure components proposed in the paper; no benchmarking or capacity measurements are provided.
Confidence: high | Direction: positive | Paper: Next-Generation Financial Analytics Frameworks for AI-Enable... | Outcome: scalability measures (e.g., throughput, latency under load, time to train models...

Claim: Such frameworks, designed to be modular, scalable, and interoperable, enable pluggable AI modules (scenario analysis, cash-flow forecasting, dynamic pricing) and easier integration with ERP/BI systems; a sketch of the pluggable-module idea follows below.
Evidence: Architectural claim supported by the system-design principles listed in the paper (modular model repositories, model-serving layers, feature stores, API integration); presented as design best practices rather than empirical validation.
Confidence: high | Direction: positive | Paper: Next-Generation Financial Analytics Frameworks for AI-Enable... | Outcome: system integration metrics (e.g., number of pluggable modules, integration time,...
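As an illustration of the "pluggable module" design principle in the last entry, a minimal sketch of a module contract and registry; every name here is hypothetical, not taken from the paper.

```python
from typing import Protocol
import pandas as pd

class AnalyticsModule(Protocol):
    """Contract a pluggable analytics module might satisfy."""
    name: str
    def fit(self, data: pd.DataFrame) -> None: ...
    def predict(self, data: pd.DataFrame) -> pd.Series: ...

REGISTRY: dict[str, AnalyticsModule] = {}

def register(module: AnalyticsModule) -> None:
    # Scenario-analysis, cash-flow-forecasting, or dynamic-pricing
    # modules plug in here without changes to the serving layer;
    # ERP/BI systems would call them through one stable API.
    REGISTRY[module.name] = module
```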
Claim: Overinvestment increases inequality (greater tail concentration of income).
Evidence: Model computations showing that exponential returns amplify income at the top; comparative statics indicate that inequality measures rise with greater investment/technology under the lognormal wage assumption.
Confidence: high | Direction: positive | Paper: Janus-Faced Technological Progress and the Arms Race in the ... | Outcome: income inequality (tail concentration measures/Gini-like outcomes)

Claim: Overinvestment increases measured GDP (output).
Evidence: Comparative statics in the theoretical model linking higher private investment/technology adoption to higher aggregate output; the model shows a positive effect on measured GDP despite possible welfare losses.

Claim: Exponential returns to skill and technology create strong private incentives for agents to escalate skill (education) investment toward the high tail of the distribution, producing an educational arms race.
Evidence: Equilibrium analysis and comparative statics in the theoretical model showing that marginal returns to additional investment increase toward the distribution tail, producing higher optimal private investment at the top relative to the social optimum.
Confidence: high | Direction: positive | Paper: Janus-Faced Technological Progress and the Arms Race in the ... | Outcome: individual education/skill investment level

Claim: When wages follow a lognormal distribution, technological progress makes wages increase exponentially in both skill and technology.
Evidence: Analytical derivation in the paper's economic model, which assumes a lognormal wage distribution and specifies wages as an exponential function of skill and a technology parameter; the result follows from the model's algebra, with no empirical data.
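A minimal formalization consistent with the last claim; the symbols below are assumed for illustration and are not the paper's notation.

```latex
% Wages as an exponential function of skill s and technology t:
\[
  w(s, t) = w_0 \, e^{\alpha s + \beta t}, \qquad \alpha, \beta > 0,
\]
% so ln w is linear in (s, t), consistent with lognormally distributed
% wages when s is normally distributed, and a unit increase in either
% s or t multiplies wages by a constant factor.
```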
Claim: Concrete legislative recommendations include amendments to the EU AI Act, the Consumer Rights Directive, and the Digital Services Act to operationalize model-level transparency and user choice rights.
Evidence: Policy design: candidate amendments tailored to existing EU instruments are drafted and presented in the paper.
Confidence: high | Direction: positive | Paper: The Global Landscape of Environmental AI Regulation: From th... | Outcome: proposed textual amendments to specified EU legislative instruments (existence o...
Claim: All data are openly available at https://www.antscan.info.
Evidence: Explicit statement of the public repository/portal, with the URL provided in the paper.
Confidence: high | Direction: positive | Paper: High-throughput phenomics of global ant biodiversity | Outcome: data accessibility (public availability and repository URL)

Claim: The dataset includes metadata such as taxonomic labels, collection/locality data, and links to genome projects where available.
Evidence: The paper states that the dataset contents include whole-body volumes/meshes and associated metadata (taxonomic labels, locality, genome links).
Confidence: high | Direction: positive | Paper: High-throughput phenomics of global ant biodiversity | Outcome: presence and type of metadata fields associated with scans

Claim: The scanning pipeline was optimized and standardized to enable digitizing hundreds to thousands of specimens.
Evidence: The authors describe an optimized, standardized pipeline and cite the achieved output (2,193 scans) as a demonstration.
Confidence: high | Direction: positive | Paper: High-throughput phenomics of global ant biodiversity | Outcome: pipeline throughput/scale (hundreds–thousands of specimens)

Claim: The project demonstrated a high-throughput application of synchrotron X-ray microtomography for whole-organism digitization at scale.
Evidence: The combination of method (synchrotron microCT), a standardized pipeline, and the production of 2,193 scans is presented as evidence of high-throughput capability.
Confidence: high | Direction: positive | Paper: High-throughput phenomics of global ant biodiversity | Outcome: throughput of whole-organism digitization (number of scans produced using the pi...

Claim: The imaging modality used is synchrotron X-ray microtomography (high-resolution 3D imaging).
Evidence: The methods section details the use of synchrotron X-ray microtomography for whole-body imaging.
Confidence: high | Direction: positive | Paper: High-throughput phenomics of global ant biodiversity | Outcome: imaging modality applied

Claim: Scans were acquired with standardized parameters to facilitate automated, replicable analysis and benchmarking.
Evidence: The paper describes a standardized acquisition protocol and pipeline (synchrotron X-ray microtomography) and notes standardized parameters and a standardized metadata format.
Confidence: high | Direction: positive | Paper: High-throughput phenomics of global ant biodiversity | Outcome: use of standardized scanning parameters and metadata format

Claim: The dataset covers a taxonomic breadth of 212 genera and 792 species.
Evidence: Reported counts of taxa included in the dataset, as stated in the paper.
Confidence: high | Direction: positive | Paper: High-throughput phenomics of global ant biodiversity | Outcome: taxonomic coverage (genera and species counts)

Claim: The Antscan project produced 2,193 whole-body 3D ant datasets (scans).
Evidence: Reported dataset size in the paper: 2,193 whole-body 3D volumes/meshes produced via the described scanning pipeline.
Confidence: high | Direction: positive | Paper: High-throughput phenomics of global ant biodiversity | Outcome: number of whole-body 3D ant scans (2,193)
Claim: The United States manages the openness–security trade-off through decentralized, rights-based coordination emphasizing procedural transparency and public accountability.
Evidence: Qualitative content analysis of national-level policy texts: 18 U.S. policy documents coded across the same four analytical dimensions.
Confidence: high | Direction: positive | Paper: Balancing openness and security in scientific data governanc... | Outcome: governance logic / institutional coordination type (decentralized, rights-based)
Claim: If companies are treated as recipients, they would be required to comply with nondiscrimination obligations (e.g., Title VI, Title IX, Section 504) in education contexts and may be subject to enforcement actions, corrective requirements, and private suits where applicable.
Evidence: Interpretation of recipient obligations under existing civil-rights statutes and enforcement mechanisms; doctrinal analysis and illustrative case law.
Confidence: high | Direction: positive | Paper: Civil Rights and the EdTech Revolution | Outcome: scope of compliance and enforcement obligations imposed on vendors
Claim: Systems biology, constraint-based metabolic modeling (e.g., FBA), kinetic modeling, and hybrid models are effective tools for predicting fluxes and identifying metabolic bottlenecks.
Evidence: Discussion and aggregation of modeling studies using COBRA/OptFlux frameworks, FBA simulations, and kinetic/dynamic modeling applied to engineered strains to predict flux changes and suggest genetic interventions; validated in multiple reported DBTL cycles.
Confidence: high | Direction: positive | Paper: Harnessing Microbial Factories: Biotechnology at the Edge of... | Outcome: accuracy/usefulness of flux predictions and identification of bottlenecks leadin...
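To ground the FBA reference in the entry above, a minimal sketch of flux balance analysis as a linear program: maximize one target flux subject to steady state (S v = 0) and capacity bounds. The toy stoichiometry is made up for illustration, not a model from the paper.

```python
import numpy as np
from scipy.optimize import linprog

def fba(S, lb, ub, objective_index):
    """Toy flux balance analysis: maximize one flux subject to
    steady state (S @ v = 0) and capacity bounds lb <= v <= ub."""
    c = np.zeros(S.shape[1])
    c[objective_index] = -1.0           # linprog minimizes, so negate
    res = linprog(c, A_eq=S, b_eq=np.zeros(S.shape[0]),
                  bounds=list(zip(lb, ub)), method="highs")
    return res.x, -res.fun

# Toy network: uptake -> A, A -> B, B -> biomass (the objective flux).
S = np.array([[1, -1,  0],              # metabolite A balance
              [0,  1, -1]])             # metabolite B balance
fluxes, growth = fba(S, lb=[0, 0, 0], ub=[10, 10, 10], objective_index=2)
```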
Claim: Engineered microorganisms are maturing into modular, programmable "microbial factories" capable of producing complex chemicals, specialty compounds, and next-generation biofuels.
Evidence: Synthesis of multiple experimental case studies reported in the literature (bench- and pilot-scale fermentations) demonstrating microbial production of natural products, specialty chemicals, and biofuel molecules using engineered strains and heterologous pathways; methods include pathway assembly, enzyme engineering, and fermentation optimization.
Confidence: high | Direction: positive | Paper: Harnessing Microbial Factories: Biotechnology at the Edge of... | Outcome: demonstrated ability to produce target complex molecules (presence/identity of p...
Claim: Cluster-level interpretation can be performed via LLM-based semantic decoding to generate concise, human-readable labels and descriptions for discovered themes.
Evidence: Implemented pipeline step: an LLM decodes cluster content and produces labels/descriptions, as reported in the experimental workflow on ICML and ACL abstracts.
Confidence: high | Direction: positive | Paper: Soft-Prompted Semantic Normalization for Unsupervised Analys... | Outcome: quality of cluster labels / human-readability of cluster descriptions

Claim: Normalized representations can be embedded into a continuous vector space and then clustered with density-based methods to identify latent themes without pre-specifying the number of topics.
Evidence: Methodological pipeline: an embedding model is applied to the normalized representations, followed by density-based clustering (density-based methods do not require a pre-specified cluster count); demonstrated in experiments on ICML and ACL 2025 abstracts.
Confidence: high | Direction: positive | Paper: Soft-Prompted Semantic Normalization for Unsupervised Analys... | Outcome: latent theme detection (cluster discovery) without predefining cluster count
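A minimal sketch of the embed-then-cluster step just described, with stand-ins for the paper's components: TF-IDF replaces the neural embedding model and DBSCAN replaces whichever density-based clusterer the pipeline actually uses. Neither stand-in needs the number of themes in advance.

```python
from sklearn.cluster import DBSCAN
from sklearn.feature_extraction.text import TfidfVectorizer

def discover_themes(normalized_texts, eps=0.4, min_samples=5):
    """Embed normalized representations, then cluster by density.

    Returns one integer label per text; -1 marks noise, and the number
    of distinct non-negative labels (the themes) is discovered rather
    than pre-specified. Each theme's texts can then be handed to an
    LLM for the labeling step described in the previous entry.
    """
    embeddings = TfidfVectorizer().fit_transform(normalized_texts)
    return DBSCAN(eps=eps, min_samples=min_samples,
                  metric="cosine").fit_predict(embeddings)
```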
Claim: A research agenda prioritizing empirical evaluation, model transparency, and rigorous impact assessment is required to translate conceptual promise into measurable public value.
Evidence: Explicit recommendation in the blurb identifying research priorities; a proposed course of action rather than an empirical claim.
Confidence: high | Direction: positive | Paper: Governing The Future | Outcome: existence and uptake of empirical evaluations, transparency practices, and rigor...

Claim: Illustrative vignettes show AI in action: logistics optimization for trade, AI models for national fiscal decision-making, and algorithmic job acceleration for individual labor-market navigation.
Evidence: Reference to specific case vignettes contained in the book; these are illustrative scenarios rather than empirical case studies with measured outcomes.
Confidence: high | Direction: positive | Paper: Governing The Future | Outcome: demonstrated feasibility of AI applications in logistics, fiscal decision-making...

Claim: Ten defining policy questions structure the book's approach, turning abstract AI capabilities into operational policy choices.
Evidence: Descriptive claim about the book's organization; verifiable by inspecting the table of contents (no external empirical data).
Confidence: high | Direction: positive | Paper: Governing The Future | Outcome: existence and use of ten policy questions as an organizing framework
Claim: The compendium issues specific policy-design recommendations for economic policymakers: deploy proportional compliance obligations and regulatory sandboxes, subsidize or certify third-party auditors, monitor credit availability and pricing post-implementation, and coordinate cross-border standards.
Evidence: Explicit policy recommendations listed in the "Policy design recommendations" subsection, derived from the paper's interdisciplinary analysis.
Confidence: high | Direction: positive | Paper: Diego Saucedo Portillo Sauceport Research | Outcome: adoption of recommended policy tools (proportional obligations, sandboxes, audit...

Claim: The protocol has been prepared and indexed in 15 strategic languages to facilitate international diffusion and comparative uptake.
Evidence: Stated multilingual/global indexing claim in the compendium (15 languages).
Confidence: high | Direction: positive | Paper: Diego Saucedo Portillo Sauceport Research | Outcome: number of languages in which the protocol is indexed (15)

Claim: The paper implements a "White Box" regulatory protocol for AI in Mexico's financial sector, requiring algorithmic transparency, auditability, explainability, and non-discrimination standards for credit/FinTech algorithms.
Evidence: Output of the technical protocol described in the compendium, developed from a forensic audit of source materials and a legal-methodological synthesis (doctrinal/comparative analysis).
Confidence: high | Direction: positive | Paper: Diego Saucedo Portillo Sauceport Research | Outcome: presence and breadth of mandated transparency/auditability/explainability/non-di...

Claim: The compendium proposes recognizing "Digital Sovereignty" as a new fundamental human right protecting individuals' autonomy, data sovereignty, due process, and non-discrimination in algorithmic financial decision-making.
Evidence: Normative definitional claim in the protocol, grounded in the author's doctrinal and comparative legal analysis across 12 years (2014–2026).
Confidence: high | Direction: positive | Paper: Diego Saucedo Portillo Sauceport Research | Outcome: legal recognition/status of a new fundamental right ("Digital Sovereignty") and ...

Claim: The recommended policy approach is to run pilots that empirically measure trade-offs, combine obligations with capacity building (technical assistance, shared datasets, sandboxes), harmonize with international frameworks, and use staged implementation with cost-benefit analyses.
Evidence: Policy recommendations derived from the compendium's interdisciplinary synthesis and economic/policy analysis (prescriptive, not empirically validated within the paper).
Confidence: high | Direction: positive | Paper: Diego Saucedo Portillo Sauceport Research | Outcome: existence and outcomes of pilot studies, capacity building programs, harmonizati...

Claim: Policy operationalization should include algorithmic impact assessments, audit logs, disclosure regimes for regulators and the judiciary, redress/grievance mechanisms, and governance principles (open, transparent, accountable).
Evidence: Prescriptive policy instruments and standards proposed in the compendium, based on the forensic audit and normative design work; a descriptive claim about the protocol's recommended instruments.
Confidence: high | Direction: positive | Paper: Diego Saucedo Portillo Sauceport Research | Outcome: presence/adoption of specified regulatory instruments (impact assessments, audit...
Claim: Combining secure aggregation and differential privacy can materially reduce centralized custody risks.
Evidence: Conceptual systems design and analytical discussion combining cryptographic and statistical privacy mechanisms; the threat model argues the joint effect reduces reconstruction risk and limits leakage. No field measurements of residual risk are provided.
Confidence: high | Direction: positive | Paper: Privacy-Aware AI Advertising Systems: A Federated Learning F... | Outcome: reduction in centralized custody risk and information leakage metrics

Claim: Secure aggregation protocols (cryptographic aggregation, MPC) can prevent reconstruction of individual updates and thus materially reduce the risk of exposing raw behavioral logs to centralized custodians.
Evidence: Systems design and threat modeling mapping secure-aggregation techniques to privacy-risk reduction, with references to standard cryptographic protocols. Empirical support is limited to the conceptual mapping and prototype/simulation; no deployment measurements.
Confidence: high | Direction: positive | Paper: Privacy-Aware AI Advertising Systems: A Federated Learning F... | Outcome: risk of reconstruction/inference of individual data from transmitted updates

Claim: Model training can occur locally on devices, publishers, and advertiser endpoints such that only model updates (not raw behavior logs) are shared and aggregated to produce cross-platform personalization.
Evidence: Architectural description and conceptual design of a federated advertising paradigm (multi-layer architecture), with prototype/simulation examples illustrating update-only aggregation. No real-world deployment data.
Confidence: high | Direction: positive | Paper: Privacy-Aware AI Advertising Systems: A Federated Learning F... | Outcome: data custody locus (raw data retained locally vs. centralized), feasibility of c...
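A toy sketch of the mechanism these three entries describe, combining per-client differential-privacy noise with pairwise-mask secure aggregation. The masking scheme is a stand-in for a real MPC protocol, and all parameters are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_update(update, clip=1.0, sigma=0.5):
    """Clip a local model update and add Gaussian noise (DP step)."""
    scale = min(1.0, clip / max(np.linalg.norm(update), 1e-12))
    return update * scale + rng.normal(0.0, sigma * clip, update.shape)

def secure_sum(updates):
    """Pairwise-masking toy of secure aggregation: each pair of clients
    shares a random mask that one adds and the other subtracts, so the
    masks cancel in the sum and the server never sees any individual
    update in the clear."""
    masked = [u.astype(float).copy() for u in updates]
    for i in range(len(updates)):
        for j in range(i + 1, len(updates)):
            mask = rng.normal(size=updates[0].shape)
            masked[i] += mask
            masked[j] -= mask
    return sum(masked)

# Clients train locally; only noisy, masked updates leave each device.
client_updates = [rng.normal(size=4) for _ in range(3)]
global_step = secure_sum([dp_update(u) for u in client_updates]) / 3
```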
Claim: The positive effect of digital rural development on AGTFP is robust to alternative variable constructions, sample adjustments, and endogeneity treatments (e.g., instrumental-variable and other methods).
Evidence: Robustness exercises reported in the paper: re-specification of the digitalization measure, alternative sample specifications, and instrumental-variable and other methods to address endogeneity.

Claim: Digital rural development in China significantly increases agricultural green total factor productivity (AGTFP).
Evidence: Fixed-effects panel regression using provincial panel data for 30 Chinese provinces over 2012–2022 (≈330 province-year observations), with reported significance and robustness checks (alternative measures, sample adjustments, and endogeneity tests).
Confidence: high | Direction: positive | Paper: Digital rural development and agricultural green total facto... | Outcome: Agricultural green total factor productivity (AGTFP)
Claim: VIS produces interrelated metrics that explicitly include indirect labor embodied throughout the supply chain, rather than only direct labor employed in a reported sector.
Evidence: Computation of vertically integrated sector vectors from input–output matrices and allocation of upstream labor inputs to final-sector output; construction of VIS-based labor input metrics is reported.
Confidence: high | Direction: positive | Paper: Measuring labor productivity dynamics in U.S. industrial and... | Outcome: VIS labor input metrics (direct + indirect labor embodied per final-sector outpu...

Claim: Adapting Pasinetti's vertically integrated sectors framework enables production of time-series productivity measures at the subsystem level.
Evidence: The methodological adaptation is described and applied to annual data (2014–2023) to produce VIS-based productivity time series for subsystems (e.g., the electric generation subsystem).
Confidence: high | Direction: positive | Paper: Measuring labor productivity dynamics in U.S. industrial and... | Outcome: time-series labor productivity metrics at the subsystem (VIS) level

Claim: The VIS approach captures both direct and indirect (upstream) labor effects by attributing upstream labor requirements to final-sector outputs using Leontief-type inverses / vertically integrated sector vectors.
Evidence: The methodology constructs annual input–output matrices (BEA + IMPLAN mapping) and computes Leontief-type inverses and vertically integrated sector vectors to allocate direct and indirect requirements; upstream labor is attributed to final output using BLS employment and hours data.
Confidence: high | Direction: positive | Paper: Measuring labor productivity dynamics in U.S. industrial and... | Outcome: attribution of upstream (indirect) labor embodied per unit of final-sector outpu...
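The attribution step in these three entries reduces to a standard Leontief computation. A minimal numpy sketch with a made-up two-sector example; the paper's actual matrices come from BEA/IMPLAN tables and BLS labor data.

```python
import numpy as np

def vis_labor_coefficients(A, direct_labor):
    """Direct-plus-indirect labor embodied per unit of final output.

    A is the technical-coefficients matrix from an input-output table;
    direct_labor gives hours (or jobs) per unit of gross output. The
    Leontief inverse (I - A)^-1 maps final demand to the gross outputs
    it requires across the supply chain, so l^T (I - A)^-1 attributes
    upstream labor to each final sector (the VIS labor vector).
    """
    n = A.shape[0]
    return direct_labor @ np.linalg.inv(np.eye(n) - A)

# Made-up example: sector 1 buys heavily from sector 0.
A = np.array([[0.1, 0.4],
              [0.2, 0.1]])
l = np.array([2.0, 1.0])             # direct labor per unit of output
v = vis_labor_coefficients(A, l)     # each v[j] exceeds l[j]
```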
Claim: This abstraction (logical compute) helps explain both why scaling laws travel so well across settings and why they give rise to a persistent efficiency game in hardware, algorithms, and systems.
Evidence: Explanatory argument in the paper connecting the abstraction to empirical observations of cross-setting regularity and continued efficiency-focused innovation (no numerical evidence in the excerpt).
Confidence: medium | Direction: mixed | Paper: The Unreasonable Effectiveness of Scaling Laws in AI | Outcome: extent of efficiency-driven innovation and cross-setting generality of scaling l...
Claim: The paper's results suggest that arbitrage can be a powerful force in AI model markets, with implications for model development, distillation, and deployment.
Evidence: Synthesis/conclusion based on the paper's empirical findings (case study, robustness experiments, distillation analysis) and economic interpretation.
Confidence: medium | Direction: mixed | Paper: Computational Arbitrage in AI Model Markets | Outcome: overall economic influence of arbitrage on model development, distillation, and ...
Claim: The paper provides supporting empirical evidence spanning frontier-laboratory dynamics, post-training alignment evolution, and the rise of sovereign AI as a geopolitical selection pressure.
Evidence: Empirical/observational sections that the authors state cover those three areas (specific datasets, experiments, or case studies are referenced in the text but not quantified in the abstract).
Confidence: medium | Direction: mixed | Paper: Punctuated Equilibria in Artificial Intelligence: The Instit... | Outcome: empirical patterns consistent with the institutional fitness and punctuated-equi...
Claim: There are architectural tensions between actor-critic frameworks and value-based methods in DRL for finance, and state-space representation and reward-function engineering are important to performance in complex financial environments.
Evidence: Analytical comparison and emphasis in the paper; the excerpt includes no quantitative comparisons, ablation studies, or dataset descriptions to substantiate which architectures perform better under which conditions.
Confidence: medium | Direction: mixed | Paper: Deep Reinforcement Learning for Dynamic Portfolio Optimizati... | Outcome: algorithmic performance differences as a function of DRL architecture, state rep...
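To illustrate what "reward-function engineering" can mean in this setting, one hypothetical reward shaping for a portfolio DRL agent; the terms and coefficients are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def portfolio_reward(weights, prev_weights, asset_returns,
                     turnover_cost=0.001, risk_penalty=0.1):
    """One illustrative reward for a portfolio-optimization agent:
    log portfolio growth, minus a transaction-cost term for turnover
    and a crude variance proxy as a risk penalty. Design choices like
    these (and the matching state representation) are what the paper
    flags as decisive in complex financial environments."""
    period_return = float(weights @ asset_returns)
    turnover = float(np.abs(weights - prev_weights).sum())
    return (np.log1p(period_return)
            - turnover_cost * turnover
            - risk_penalty * period_return ** 2)
```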
Claim: The paper provides an extensive system-level investigation into the deployment of DRL architectures for dynamic portfolio optimization.
Evidence: Stated scope of the paper (system-level investigation); details about methods, datasets, experimental design, and sample sizes are not given in the provided text.
Confidence: medium | Direction: mixed | Paper: Deep Reinforcement Learning for Dynamic Portfolio Optimizati... | Outcome: operational and performance characteristics of DRL deployments for dynamic portf...

Claim: An extended evaluation over 2024–2025 reveals market-regime dependency: the learned policy performs well in volatile conditions but shows reduced alpha in trending bull markets.
Evidence: Out-of-sample robustness claim covering an extended period (calendar 2024 through 2025). The excerpt states qualitative regime-dependent performance but provides no quantitative splits, volatility/trend definitions, sample sizes, or per-regime performance metrics.
Confidence: medium | Direction: mixed | Paper: Can Blindfolded LLMs Still Trade? An Anonymization-First Fra... | Outcome: strategy alpha/performance (e.g., returns or Sharpe) conditional on market regim...