The Commonplace

Evidence (3492 claims)

Adoption (7395 claims)
Productivity (6507 claims)
Governance (5877 claims)
Human-AI Collaboration (5157 claims)
Innovation (3492 claims)
Org Design (3470 claims)
Labor Markets (3224 claims)
Skills & Training (2608 claims)
Inequality (1835 claims)

Evidence Matrix

Claim counts by outcome category and direction of finding.

Outcome Positive Negative Mixed Null Total
Other 609 159 77 736 1615
Governance & Regulation 664 329 160 99 1273
Organizational Efficiency 624 143 105 70 949
Technology Adoption Rate 502 176 98 78 861
Research Productivity 348 109 48 322 836
Output Quality 391 120 44 40 595
Firm Productivity 385 46 85 17 539
Decision Quality 275 143 62 34 521
AI Safety & Ethics 183 241 59 30 517
Market Structure 152 154 109 20 440
Task Allocation 158 50 56 26 295
Innovation Output 178 23 38 17 257
Skill Acquisition 137 52 50 13 252
Fiscal & Macroeconomic 120 64 38 23 252
Employment Level 93 46 96 12 249
Firm Revenue 130 43 26 3 202
Consumer Welfare 99 51 40 11 201
Inequality Measures 36 105 40 6 187
Task Completion Time 134 18 6 5 163
Worker Satisfaction 79 54 16 11 160
Error Rate 64 78 8 1 151
Regulatory Compliance 69 64 14 3 150
Training Effectiveness 81 15 13 18 129
Wages & Compensation 70 25 22 6 123
Team Performance 74 16 21 9 121
Automation Exposure 41 48 19 9 120
Job Displacement 11 71 16 1 99
Developer Productivity 71 14 9 3 98
Hiring & Recruitment 49 7 8 3 67
Social Protection 26 14 8 2 50
Creative Output 26 14 6 2 49
Skill Obsolescence 5 37 5 1 48
Labor Share of Income 12 13 12 37
Worker Turnover 11 12 3 26
Industry 1 1
Filter: Innovation
A branched neural architecture with collapsing (aggregation) layers that reduce a dataset to permutation-invariant summaries can produce parameter estimates that are exact in finite samples, i.e., that reproduce reference estimator outputs at finite sample sizes.
Empirical & theoretical motivation: architecture includes collapsing/aggregation layers to implement permutation-invariance and summary reduction; simulation experiments reportedly show the network reproduces reference estimator outputs at finite sample sizes (finite-sample matching). The exact experimental settings (sample sizes, number of replications) are not specified in the summary; evidence comes from simulated benchmarks and comparisons to reference estimators.
medium positive ForwardFlow: Simulation only statistical inference using dee... match to reference estimator outputs at finite sample sizes (exact equality or n...
A single “summary network” trained in a simulation-only framework can solve the inverse problem of parameter estimation for parametric models by mapping simulated datasets to parameters (minimizing MSE).
Empirical: network trained on simulated datasets (each dataset simulated conditional on a known parameter) with a mean-squared-error (MSE) loss between predicted and true parameter; evaluated on synthetic parametric benchmark problems and a genetic-data example. Specific sample sizes and number of simulations are not stated in the provided summary; evidence is based on the reported simulation experiments and benchmark comparisons.
medium positive ForwardFlow: Simulation only statistical inference using dee... parameter estimation accuracy (MSE between predicted parameter and true paramete...
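The pipeline the two ForwardFlow entries above describe can be sketched in a few lines. Everything concrete here is an assumption for illustration (the summary gives no model, sample sizes, or network details): a Gaussian location model stands in for the parametric simulator, and a least-squares fit on permutation-invariant summaries stands in for the summary network trained with an MSE loss.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(theta, n=50):
    """One dataset of n i.i.d. draws conditional on the parameter theta."""
    return rng.normal(theta, 1.0, size=n)

# Training pairs: (simulated dataset, generating parameter).
thetas = rng.uniform(-3, 3, size=2000)
datasets = [simulate(t) for t in thetas]
# Collapsing/aggregation step: reduce each dataset to permutation-invariant summaries.
S = np.array([[x.mean(), x.var()] for x in datasets])

# MSE-minimizing linear map from summaries to parameters (closed form).
X = np.column_stack([S, np.ones(len(thetas))])
w, *_ = np.linalg.lstsq(X, thetas, rcond=None)

# Compare to the reference estimator (the sample mean, the MLE for this model).
test = [simulate(t) for t in rng.uniform(-3, 3, size=200)]
preds = np.array([[x.mean(), x.var(), 1.0] for x in test]) @ w
mle = np.array([x.mean() for x in test])
mse_vs_mle = np.mean((preds - mle) ** 2)
print(mse_vs_mle)  # small: the learned map closely tracks the reference estimator
```

The aggregation step (here a hand-picked mean and variance) is what makes the mapping permutation-invariant; in the paper this reduction is learned by the collapsing layers.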
Fewer expensive evaluations translate directly to lower compute hours and therefore lower cloud/on-premise costs for computational materials or chemistry R&D.
Implication discussed in the paper's implications section: economic argument linking reduced expensive evaluations to lower compute cost; not an experimental result but an economic extrapolation based on the reported reduction in evaluations.
medium positive Bayesian Optimization with Gaussian Processes to Accelerate ... compute hours / monetary cost per scientific result
Correct application of the described elements (GP with derivatives, inverse-distance kernels, active acquisition, OT sampling, MAP regularization, trust-region control, RFF scaling) reduces the number of expensive underlying-theory (energy/force) evaluations by roughly an order of magnitude while preserving underlying-theory accuracy.
Empirical claim reported in the paper: benchmarks and experiments on representative potential energy surface problems (specific datasets and numerical results are said to be presented in the paper and accompanying code); summary states an approximately one order-of-magnitude reduction in expensive evaluations with preserved accuracy.
medium positive Bayesian Optimization with Gaussian Processes to Accelerate ... number of expensive energy/force evaluations required to reach a given accuracy ...
Random Fourier features are used to decouple hyperparameter training from prediction, yielding favorable computational scaling for high-dimensional systems.
Paper describes use of random Fourier features to approximate kernels so hyperparameter fitting can be done largely independently of prediction-time complexity; complexity/scaling claims supported by methodological argument and empirical timings in the paper/code.
medium positive Bayesian Optimization with Gaussian Processes to Accelerate ... computational scaling (training vs prediction time) in higher-dimensional config...
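The random-Fourier-feature idea in the entry above can be sketched minimally (the paper's kernels, dimensions, and feature counts are not in the summary, so these are placeholder values): once the D random features are drawn, fitting reduces to linear algebra in a D-dimensional feature space, largely decoupled from prediction-time kernel evaluation.

```python
import numpy as np

rng = np.random.default_rng(1)
d, D = 5, 2000            # input dimension, number of random features
l = 1.0                   # RBF lengthscale

# Spectral sampling for the RBF kernel k(x, y) = exp(-||x - y||^2 / (2 l^2)).
W = rng.normal(0.0, 1.0 / l, size=(D, d))
b = rng.uniform(0.0, 2 * np.pi, size=D)

def phi(X):
    """Finite-dimensional feature map with phi(x) @ phi(y) ~= k(x, y)."""
    return np.sqrt(2.0 / D) * np.cos(X @ W.T + b)

X = rng.normal(size=(10, d))
K_exact = np.exp(-((X[:, None, :] - X[None, :, :]) ** 2).sum(-1) / (2 * l**2))
K_rff = phi(X) @ phi(X).T
err = np.abs(K_exact - K_rff).max()
print(err)  # approximation error shrinks as O(1 / sqrt(D))
```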
MAP regularization via a variance barrier plus oscillation detection prevents surrogate-induced pathologies and non-convergent search behavior.
Paper describes MAP priors (variance barrier) and oscillation-detection diagnostics as regularization and robustness measures; authors report these measures prevent instabilities in surrogate-driven searches in their experiments.
medium positive Bayesian Optimization with Gaussian Processes to Accelerate ... incidence of surrogate-induced instabilities or non-convergence in optimization ...
Using Optimal Transport (Earth Mover’s Distance) for farthest-point sampling diversifies the training points in configuration space.
Paper introduces EMD-based farthest-point sampling as an extension and reports its use in experiments; implementation described in methods and code.
medium positive Bayesian Optimization with Gaussian Processes to Accelerate ... diversity of training points sampled in configuration space (sampling distributi...
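Greedy farthest-point sampling is generic in the distance it uses, so it can be sketched with a placeholder metric; Euclidean distance below stands in for the paper's Earth Mover's Distance, which would require an optimal-transport solver.

```python
import numpy as np

def farthest_point_sample(points, k, dist):
    """Greedy farthest-point sampling: repeatedly add the point whose minimum
    distance to the already-selected set is largest."""
    selected = [0]                                    # arbitrary seed point
    min_d = dist(points, points[[0]]).ravel()
    for _ in range(k - 1):
        nxt = int(np.argmax(min_d))
        selected.append(nxt)
        min_d = np.minimum(min_d, dist(points, points[[nxt]]).ravel())
    return selected

def euclidean(A, B):
    # Stand-in metric; the paper's extension uses Earth Mover's Distance
    # between atomic configurations instead.
    return np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)

rng = np.random.default_rng(2)
pts = rng.normal(size=(200, 3))
idx = farthest_point_sample(pts, 10, euclidean)
print(sorted(idx))  # 10 distinct, well-spread indices
```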
Inverse-distance kernels better capture atomic interactions in configuration space than generic kernels for these surrogate models.
Paper argues and uses inverse-distance kernel design to reflect physical interatomic distance dependence; benchmark comparisons reported in the paper (details in main text and codebase).
medium positive Bayesian Optimization with Gaussian Processes to Accelerate ... surrogate quality / predictive accuracy on atomic configurations (kernel perform...
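One plausible reading of an inverse-distance kernel, sketched under assumptions (the paper's exact kernel form and lengthscale are not given in the summary): an RBF kernel on the vector of inverse pairwise atomic distances, which grows where atoms are close, echoing interaction strength, and is invariant to rigid translations of the configuration.

```python
import numpy as np

def inv_dist(R):
    """Vector of inverse pairwise distances for an (N, 3) configuration."""
    i, j = np.triu_indices(len(R), k=1)
    return 1.0 / np.linalg.norm(R[i] - R[j], axis=1)

def k_invdist(Ra, Rb, l=0.5):
    """RBF kernel on inverse-distance descriptors rather than raw coordinates."""
    da, db = inv_dist(Ra), inv_dist(Rb)
    return np.exp(-np.sum((da - db) ** 2) / (2 * l**2))

R1 = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
R2 = R1 + 0.1    # rigid translation: descriptor, hence kernel, is unchanged
print(k_invdist(R1, R1), k_invdist(R1, R2))
```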
Gaussian process (GP) surrogates that incorporate derivative observations (e.g., forces) improve the fidelity of the surrogate model and provide better local estimates of gradients and Hessians.
Paper describes GP regression with value and derivative observations used to constrain the surrogate; experiments/benchmarks reported in the paper and code demonstrate use of derivative observations in surrogate training (exact datasets and sample sizes referenced in paper/code).
medium positive Bayesian Optimization with Gaussian Processes to Accelerate ... surrogate fidelity as assessed by local gradient/Hessian accuracy and downstream...
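A 1-D sketch of GP regression with derivative observations (the toy function, lengthscale, and training points are assumptions; the paper works on atomic configurations with forces as the derivatives): the joint covariance over values and derivatives follows by differentiating the kernel once per derivative argument.

```python
import numpy as np

l = 1.0
def k(a, b):    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * l**2))
def k_fd(a, b): return (a[:, None] - b[None, :]) / l**2 * k(a, b)    # cov(f(a), f'(b))
def k_dd(a, b):
    d = a[:, None] - b[None, :]
    return (1 / l**2 - d**2 / l**4) * k(a, b)                        # cov(f'(a), f'(b))

# Toy 1-D "surface": values (energies) and derivatives (forces) of sin at 3 points.
X = np.array([-2.0, 0.0, 2.0])
y = np.concatenate([np.sin(X), np.cos(X)])

# Joint covariance over [f(X); f'(X)]; small jitter for numerical stability.
K = np.block([[k(X, X), k_fd(X, X)], [k_fd(X, X).T, k_dd(X, X)]])
alpha = np.linalg.solve(K + 1e-10 * np.eye(len(K)), y)

Xs = np.linspace(-2.0, 2.0, 9)
mean_fd = np.hstack([k(Xs, X), k_fd(Xs, X)]) @ alpha                  # with derivatives
mean_f = k(Xs, X) @ np.linalg.solve(k(X, X) + 1e-10 * np.eye(3), np.sin(X))

err_fd = np.abs(mean_fd - np.sin(Xs)).max()
err_f = np.abs(mean_f - np.sin(Xs)).max()
print(err_fd, err_f)  # derivative observations tighten the fit between samples
```

The same block structure extends to gradients and Hessians in higher dimensions, which is what makes the surrogate useful for local curvature estimates.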
Practical methods exist for efficient classical estimation of gradients for the covered loss classes: using the classical-approximation machinery to compute analytic gradients or unbiased estimators, finite-difference approaches, and surrogate methods; the paper discusses sample-complexity and noise considerations.
Methodological discussion in the paper outlining specific gradient estimation approaches compatible with the classical-approximation results, together with complexity/sample-complexity remarks. This is a methods/algorithmic claim supported by analysis rather than empirical benchmarks.
medium positive Universality of Classically Trainable, Quantum-Deployed Boso... efficiency/sample-complexity of gradient estimation procedures
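Of the gradient-estimation routes listed above, the finite-difference approach is the easiest to make concrete; the loss below is a stand-in, not one of the paper's loss classes.

```python
import numpy as np

def fd_grad(f, theta, eps=1e-5):
    """Central finite-difference gradient estimate: 2 * len(theta) loss
    evaluations, O(eps^2) bias for smooth f."""
    g = np.zeros_like(theta)
    for i in range(len(theta)):
        e = np.zeros_like(theta)
        e[i] = eps
        g[i] = (f(theta + e) - f(theta - e)) / (2 * eps)
    return g

loss = lambda t: float(np.sum(t ** 2))   # stand-in for a classically evaluable loss
theta = np.array([1.0, -2.0, 0.5])
g = fd_grad(loss, theta)
print(g)  # ~ [2, -4, 1], matching the analytic gradient 2 * theta
```

When the loss is itself a noisy estimate, eps trades bias against noise amplification, which is the sample-complexity consideration the entry mentions.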
The paper constructs a single-hyperparameter family of BSBMs that monotonically interpolates from weak expressive power up to full universality, enabling a controlled trade-off between simplicity and expressivity.
Explicit one-parameter family construction and monotonicity argument/proof in the paper showing that increasing the hyperparameter increases expressivity and approaches universality. This is a theoretical construction rather than empirical measurement.
medium positive Universality of Classically Trainable, Quantum-Deployed Boso... expressive power (as a monotone function of a single hyperparameter)
Classical hardness of exact or approximate sampling from the expanded (ancilla + postprocessing) BSBM family is preserved by relating these models to known hard linear-optical sampling tasks.
Complexity-theoretic reductions and arguments in the paper connecting the expanded BSBM constructions to established hard sampling problems in linear optics (e.g., boson sampling variants). The claim is supported by theoretical reductions rather than empirical hardness measurements.
medium positive Universality of Classically Trainable, Quantum-Deployed Boso... classical hardness of sampling (exact/approximate) from the expanded BSBM family
Universality (and therefore potential sampling hardness) can be recovered by expanding the model: adding ancillary modes and applying a constant-function postprocessing generalization restores universality while retaining efficient classical trainability.
Construction and theoretical argument in the paper: introduces ancilla modes and a constant-function postprocessing generalization (analogous to IQP-QCBM techniques), shows how these modifications increase representational power to universality, and demonstrates that the same classical-approximation machinery still allows efficient evaluation/approximation of training losses. The argument includes constructive proofs and reductions.
medium positive Universality of Classically Trainable, Quantum-Deployed Boso... generative universality and classical trainability after model expansion
Training can be done classically even when sampling from the trained BSBM is believed to be classically hard (the 'train classically, deploy quantumly' paradigm applies to BSBMs).
Argument combining two parts in the paper: (1) classical-evaluation results for losses/gradients (see above) and (2) separate hardness-of-sampling arguments showing sampling remains classically hard after training. This is a theoretical claim based on the constructions and reductions presented in the paper.
medium positive Universality of Classically Trainable, Quantum-Deployed Boso... feasibility of classical training vs. classical hardness of sampling at deployme...
Demand will grow for hybrid specialists (quantum algorithm engineers, HPC systems integrators, middleware developers) and for domain scientists fluent in hybrid workflows, shifting skill premiums toward interdisciplinary expertise.
Labor-market inference from technology adoption and the skills required by proposed QCSC systems; qualitative only, no labor-market survey data provided.
medium positive Reference Architecture of a Quantum-Centric Supercomputer demand for specific skills, wage premiums for interdisciplinary expertise
Public investment and shared facilities can mitigate entry barriers and diffuse benefits to smaller firms and research groups.
Policy analysis and precedent from shared scientific infrastructure models; no case-study data specific to QCSC presented.
medium positive Reference Architecture of a Quantum-Centric Supercomputer access to QCSC resources by small firms/research groups, reduction in entry barr...
Tightly integrating QPUs, GPUs, and CPUs across hardware, middleware, and application layers (QCSC vision) will enable high-throughput, low-latency hybrid workflows.
Architectural design reasoning and analogies to heterogeneous co-design in classical HPC; no empirical throughput/latency measurements provided.
medium positive Reference Architecture of a Quantum-Centric Supercomputer throughput and end-to-end latency of hybrid quantum-classical workflows
A phased roadmap (offload engines → middleware-coupled heterogeneous systems → fully co-designed heterogeneous systems) and a reference architecture can remove current friction (manual orchestration, scheduling, data transfer) and materially accelerate algorithmic discovery and applied quantum utility.
Roadmap and reference architecture proposed from system decomposition and use-case requirements analysis; argument based on observed friction points from literature and early hybrid deployments; no empirical validation provided.
medium positive Reference Architecture of a Quantum-Centric Supercomputer reduction in manual orchestration, scheduling overhead, data-movement latency; i...
Quantum-Centric Supercomputing (QCSC) — integrated systems co-designing QPUs with classical HPC components and middleware — is necessary to scale hybrid quantum-classical algorithms for chemistry, materials, and other applied research.
Conceptual systems-architecture analysis and synthesis of recent quantum-simulation demonstrations and hybrid algorithms; use-case-driven analysis for chemistry and materials; no new empirical performance benchmarks presented.
medium positive Reference Architecture of a Quantum-Centric Supercomputer scalability and practicability of hybrid quantum-classical algorithm execution (...
DPS compares favorably to standard rollout-based prompt-selection baselines across the reported metrics (rollouts required, training speed, final accuracy).
Empirical comparisons against baseline methods reported in the experiments; specific numeric comparisons and statistical details are not present in the provided summary.
medium positive Dynamics-Predictive Sampling for Active RL Finetuning of Lar... relative performance vs baseline on number of rollouts, training speed, and fina...
DPS creates a predictive prior that identifies informative prompts without performing exhaustive rollouts over large candidate batches.
Methodological mechanism plus empirical claim that selection operates via predictive prior and reduces candidate rollouts; supported by experiments vs rollout-filtering baselines.
medium positive Dynamics-Predictive Sampling for Active RL Finetuning of Lar... informativeness of selected prompts (as implied by downstream learning gains and...
The DPS inference procedure requires only historical rollout reward signals and therefore adds only a small amount of extra compute compared to the rollouts it avoids.
Practical considerations described in the paper: inference uses past rollout rewards; authors state the extra compute is small relative to avoided rollouts. (No quantified compute-cost ratio in the summary.)
medium positive Dynamics-Predictive Sampling for Active RL Finetuning of Lar... additional inference compute relative to avoided rollout compute
DPS improves final reasoning performance (final task accuracy) across evaluated domains: mathematical reasoning, planning, and visual-geometry tasks.
Empirical results reported across those benchmark domains showing improved downstream reasoning accuracy relative to baselines. (Summary does not include exact effect sizes or sample counts.)
medium positive Dynamics-Predictive Sampling for Active RL Finetuning of Lar... final reasoning accuracy on benchmarks (mathematics, planning, visual-geometry)
DPS speeds up RL finetuning in terms of required rollout budgets and wall-clock rollout compute.
Reported empirical findings: faster convergence of RL finetuning measured by rollout budgets and wall-clock compute on evaluated tasks. (Exact runtime metrics and sample sizes not provided in the summary.)
medium positive Dynamics-Predictive Sampling for Active RL Finetuning of Lar... training speed (rollout budget to convergence; wall-clock rollout compute)
Compared to standard online prompt-selection methods that rely on large candidate-batch rollouts for filtering, DPS substantially reduces the number of redundant (uninformative) rollouts.
Empirical comparisons against rollout-based filtering baselines across benchmark tasks (mathematics, planning, visual-geometry). Specific numeric savings not provided in the summary.
medium positive Dynamics-Predictive Sampling for Active RL Finetuning of Lar... number of rollouts (redundant rollouts avoided)
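The summary does not specify DPS's predictive model, so the sketch below substitutes a deliberately simple proxy: score each prompt by the Bernoulli variance of its historical rollout rewards and select a batch without any fresh rollouts, mirroring the selection-from-history pattern the DPS entries above describe. All names and settings here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical history: past rollout rewards per prompt (1 = solved, 0 = failed).
history = {p: rng.binomial(1, rng.uniform(), size=8) for p in range(100)}

def informativeness(rewards):
    """Proxy prior from historical rewards alone: Bernoulli variance p(1-p).
    Prompts the model always solves or always fails carry little training
    signal; mid-difficulty prompts score highest."""
    p = rewards.mean()
    return p * (1 - p)

# Rank all candidates from history and take a batch; no candidate-batch rollouts.
scores = {p: informativeness(r) for p, r in history.items()}
batch = sorted(scores, key=scores.get, reverse=True)[:16]
print(len(batch), min(scores[p] for p in batch))
```

Scoring costs a few arithmetic operations per prompt, which is why the extra inference compute stays small relative to the rollouts it avoids.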
Firms will reallocate investment toward cloud infrastructure, data engineering, model ops, and financial data integration, favoring vendors providing interoperable, audit-friendly solutions.
Predictive claim about investment incentives based on the paper's architectural and governance analysis; no spending data or vendor market-share evidence presented.
medium positive Next-Generation Financial Analytics Frameworks for AI-Enable... IT/technology spend composition (e.g., percent of budget on cloud/data engineeri...
Next-generation financial analytics frameworks embed AI (ML, NLP, anomaly detection) into core financial systems to shift enterprises from retrospective reporting to predictive, prescriptive, and real-time decision-making.
This is the paper's central conceptual claim supported by a descriptive synthesis of AI techniques and system architecture; no empirical sample, controlled experiments, or deployment case data are presented—recommendations are justified by logical argument and examples of techniques.
medium positive Next-Generation Financial Analytics Frameworks for AI-Enable... degree of shift from retrospective reporting to predictive/prescriptive/real-tim...
Manufacturing and services are likelier than extractive industries to generate broader employment and skill spillovers.
Sectoral comparisons from empirical literature synthesized in the review indicating stronger local linkages and skill spillovers in manufacturing and many services; evidence heterogeneous across countries and subsectors.
medium positive Foreign Direct Investment, Labor Markets, and Income Distrib... employment breadth, skill spillovers, local supplier development
FDI can raise productivity and foster skills through technology transfer, improved management practices, and competition.
Cross-study empirical results and theoretical mechanisms summarized in the review (firm-level productivity studies and spillover literature); underlying studies vary in scope and identification.
medium positive Foreign Direct Investment, Labor Markets, and Income Distrib... firm productivity, worker skills, wages
FDI can generate jobs via firm entry and expansion.
Synthesis of micro- and firm-level empirical studies reported in the review indicating job creation associated with foreign-owned firm entry and expansion; evidence heterogeneous by sector and country (sample sizes and methods vary by underlying studies).
medium positive Foreign Direct Investment, Labor Markets, and Income Distrib... employment (jobs created at firm and sector levels)
The paper makes testable empirical predictions: sectors with exponential returns to skill/AI should exhibit larger increases in inequality and private investment intensity, and firm-level investments should cluster at borrowing limits.
Derived empirical implications from the theoretical model; the paper suggests strategies for empirical testing (fit wage distributions, measure tail returns, use firm-level credit/investment data, exploit technology shocks) but reports no empirical tests.
medium positive Janus-Faced Technological Progress and the Arms Race in the ... sectoral inequality changes, private investment intensity, distribution of firm-...
Borrowing constraints matter: they can be the binding limit on investment when private incentives push to extreme (corner) investment levels.
Model includes borrowing constraints; equilibrium characterization demonstrates cases where the borrowing constraint binds and determines the chosen investment level (credit-limited corner solutions).
medium positive Janus-Faced Technological Progress and the Arms Race in the ... incidence/bindingness of borrowing constraints on investment
In the firm interpretation, firms race to deploy more capable AI/chatbots and frequently choose corner investment solutions constrained only by borrowing limits.
Model variant mapping individual skill investment to firm R&D/AI-capital choice; equilibrium solutions computed in the model show optimal firm investment often hits upper bounds set by borrowing constraints.
medium positive Janus-Faced Technological Progress and the Arms Race in the ... firm-level AI/R&D investment (incidence of corner/binding investment choices)
Policy design should be adaptive and sector-sensitive, balancing innovation with safeguards while targeting skills, infrastructure, and inclusive finance to maximize social returns from SME AI adoption.
Policy recommendations derived from the literature review and identified cross-cutting barriers/enablers; these are prescriptive rather than empirically validated within the review.
medium positive Artificial Intelligence Adoption for Sustainable Development... effectiveness of policy interventions; inclusive AI adoption metrics
Innovative financing (blended finance, pay-per-use, outcome-linked financing) is critical to overcome upfront cost barriers and enable scalable, risk-sharing investments in AI for SMEs.
Policy reports and selective case studies in the review demonstrating these instruments can facilitate uptake; systematic evidence on scalability and impact remains limited.
medium positive Artificial Intelligence Adoption for Sustainable Development... uptake of innovative financing instruments; AI investment levels by SMEs
Developing pragmatic, locally appropriate data governance arrangements (standards, privacy safeguards, data trusts) is necessary to build trust and enable SME participation in data-driven markets.
Policy literature and governance proposals reviewed; examples of data-governance models (e.g., data trusts, federated learning) discussed, but empirical evaluations in LMIC SME contexts are scarce.
medium positive Artificial Intelligence Adoption for Sustainable Development... trust in data sharing; interoperability; SME engagement in data ecosystems
Implementing scalable financing and procurement models (pay-as-you-go, leasing, blended finance) can overcome upfront cost barriers for SMEs adopting AI.
Policy and finance reports and a small number of case examples cited in the review showing such instruments enabling technology uptake; systematic evidence on effect sizes is limited.
medium positive Artificial Intelligence Adoption for Sustainable Development... use of alternative financing models; reduction in financing barriers; AI adoptio...
Strengthening ecosystem linkages among academia, tech providers, financiers, and regulators enhances the prospects for inclusive, scalable AI adoption by SMEs.
Case studies and ecosystem analyses in the reviewed literature that document positive roles for partnerships and coordinated support; evidence is descriptive and context-dependent.
medium positive Artificial Intelligence Adoption for Sustainable Development... ecosystem connectivity; number of collaborative projects; SME AI uptake
Incremental investment in human capital and development of dynamic capabilities (learning, adaptation) increases SMEs’ absorptive capacity and the likelihood of successful AI adoption.
Theoretical grounding in RBV and DC literature combined with illustrative case evidence from the review showing firms with stronger learning capabilities tend to adopt and benefit more from technology.
medium positive Artificial Intelligence Adoption for Sustainable Development... absorptive capacity metrics; successful AI adoption; firm performance post-adopt...
A phased adoption approach (assess needs → pilot low-risk use cases → scale modularly) is recommended to reduce risk and improve outcomes for SME AI projects.
Synthesis of best-practice guidance and pragmatic recommendations from case studies and policy literature; not empirically validated as a universal causal strategy in LMIC SMEs within the review.
medium positive Artificial Intelligence Adoption for Sustainable Development... success rate of AI pilots; scalability of deployments; mitigation of adoption ri...
External market pressures and customer demand often drive AI adoption decisions in SMEs.
Surveys and market analyses from the literature indicating demand-side pressures as adoption triggers; evidence mainly observational.
medium positive Artificial Intelligence Adoption for Sustainable Development... reported adoption triggers; AI adoption frequency linked to customer/market sign...
Access to finance, including scalable and blended financing models, is a key enabler for SME AI adoption.
Policy reports, case studies and financial analyses discussed in the review that identify financing availability and instrument design as central constraints/enablers; evidence is descriptive and context-dependent.
medium positive Artificial Intelligence Adoption for Sustainable Development... availability of tailored financing; uptake of AI investments by SMEs
Local innovation ecosystems (universities, incubators, private-sector partnerships) support SME uptake of AI.
Case studies and ecosystem analyses in the reviewed literature documenting successful university–industry linkages and incubator support facilitating technology transfer and skills development.
medium positive Artificial Intelligence Adoption for Sustainable Development... formation of partnerships; technology transfer occurrences; AI adoption among SM...
Supportive government policy and adaptive regulation are important enablers of AI adoption among SMEs.
Synthesis of policy reports and governance literature included in the review identifying regulatory clarity and supportive policy as common enabling factors.
medium positive Artificial Intelligence Adoption for Sustainable Development... AI adoption rate; regulatory environment quality
AI can improve market access for SMEs (e.g., via digital platforms and AI-enabled credit scoring) and enable potential value-chain upgrading.
Policy analyses and case-study evidence showing digital platforms and algorithmic credit assessment opening opportunities for SMEs; examples referenced from Botswana and similar LMIC contexts.
medium positive Artificial Intelligence Adoption for Sustainable Development... market access indicators (platform participation, sales channels); access to fin...
AI adoption supports new product/service innovation and faster time-to-market for SMEs.
Qualitative case studies and practitioner reports cited in the review showing instances of AI assisting R&D, prototyping, and launch processes; limited systematic quantitative measurement across sectors.
medium positive Artificial Intelligence Adoption for Sustainable Development... number of new products/services; time-to-market (development cycle duration)
AI-enabled customer segmentation and personalization can increase sales and customer retention for SMEs.
Empirical examples and case studies from the literature and policy reports documenting improved targeting and retention in firms that adopted AI tools; evidence is largely observational and context-specific.
medium positive Artificial Intelligence Adoption for Sustainable Development... sales revenue; customer retention rates; conversion metrics
AI can generate productivity gains for SMEs through automation and process optimization.
Multiple case studies and firm-level surveys reported in the literature showing examples of automation-related efficiency improvements; no large-scale randomized or causal studies cited that uniformly quantify effect sizes across LMIC SMEs.
medium positive Artificial Intelligence Adoption for Sustainable Development... productivity (e.g., output per worker, process cycle times, operational efficien...
Environmental-performance labeling and user opt-outs could create demand for 'eco-optimized' models and influence competition among providers.
Market analysis in implications section (theoretical consumer preference/differentiation effects).
medium positive The Global Landscape of Environmental AI Regulation: From th... market demand for eco-optimized models (consumer uptake, market share shifts)
Mandatory inference benchmarks and public reporting would create market and regulatory incentives to optimize models for energy efficiency (e.g., compression, routing, edge inference).
Policy implications / market design analysis describing likely provider responses to benchmarking and public reporting.
medium positive The Global Landscape of Environmental AI Regulation: From th... adoption of energy-efficiency techniques (rate of model compression, routing, ed...