Evidence (7448 claims)

Claims by topic (a single claim can carry multiple topic tags, so topic counts overlap and sum to more than the total):

| Topic | Claims |
|---|---|
| Adoption | 5267 |
| Productivity | 4560 |
| Governance | 4137 |
| Human-AI Collaboration | 3103 |
| Labor Markets | 2506 |
| Innovation | 2354 |
| Org Design | 2340 |
| Skills & Training | 1945 |
| Inequality | 1322 |
Evidence Matrix
Claim counts by outcome category and direction of finding. Because some claims fall outside the four direction codes shown, a row's total can exceed the sum of its Positive/Negative/Mixed/Null columns.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 378 | 106 | 59 | 455 | 1007 |
| Governance & Regulation | 379 | 176 | 116 | 58 | 739 |
| Research Productivity | 240 | 96 | 34 | 294 | 668 |
| Organizational Efficiency | 370 | 82 | 63 | 35 | 553 |
| Technology Adoption Rate | 296 | 118 | 66 | 29 | 513 |
| Firm Productivity | 277 | 34 | 68 | 10 | 394 |
| AI Safety & Ethics | 117 | 177 | 44 | 24 | 364 |
| Output Quality | 244 | 61 | 23 | 26 | 354 |
| Market Structure | 107 | 123 | 85 | 14 | 334 |
| Decision Quality | 168 | 74 | 37 | 19 | 301 |
| Fiscal & Macroeconomic | 75 | 52 | 32 | 21 | 187 |
| Employment Level | 70 | 32 | 74 | 8 | 186 |
| Skill Acquisition | 89 | 32 | 39 | 9 | 169 |
| Firm Revenue | 96 | 34 | 22 | — | 152 |
| Innovation Output | 106 | 12 | 21 | 11 | 151 |
| Consumer Welfare | 70 | 30 | 37 | 7 | 144 |
| Regulatory Compliance | 52 | 61 | 13 | 3 | 129 |
| Inequality Measures | 24 | 68 | 31 | 4 | 127 |
| Task Allocation | 75 | 11 | 29 | 6 | 121 |
| Training Effectiveness | 55 | 12 | 12 | 16 | 96 |
| Error Rate | 42 | 48 | 6 | — | 96 |
| Worker Satisfaction | 45 | 32 | 11 | 6 | 94 |
| Task Completion Time | 78 | 5 | 4 | 2 | 89 |
| Wages & Compensation | 46 | 13 | 19 | 5 | 83 |
| Team Performance | 44 | 9 | 15 | 7 | 76 |
| Hiring & Recruitment | 39 | 4 | 6 | 3 | 52 |
| Automation Exposure | 18 | 17 | 9 | 5 | 50 |
| Job Displacement | 5 | 31 | 12 | — | 48 |
| Social Protection | 21 | 10 | 6 | 2 | 39 |
| Developer Productivity | 29 | 3 | 3 | 1 | 36 |
| Worker Turnover | 10 | 12 | — | 3 | 25 |
| Skill Obsolescence | 3 | 19 | 2 | — | 24 |
| Creative Output | 15 | 5 | 3 | 1 | 24 |
| Labor Share of Income | 10 | 4 | 9 | — | 23 |
**Claim:** RATs make the preparatory interpretive work that feeds downstream outputs readable and potentially quantifiable, with implications for labor accounting and human-capital valuation.
**Evidence:** Theoretical economic and policy discussion in the paper; no empirical measurement or case studies quantify how much preparatory work is captured or its economic value.

**Claim:** RATs can enable collective sensemaking via shared trails and networked associations among readers.
**Evidence:** Conceptual argument and suggested network-analysis methods, illustrated with the speculative WikiRAT use case; no group-level empirical studies are reported.

**Claim:** RATs can support richer reader models (personalization and modeling of interpretive behavior) through sequence analysis, embedding/clustering of trajectories, and other analytic techniques.
**Evidence:** Proposed analytical methods (sequence analysis, embedding/clustering, network analysis) are listed in the paper; no implementation results or quantitative evaluations are provided.

**Claim:** RATs enable reflective practice by helping readers see and revise their own processes.
**Evidence:** Proposed affordance based on the inspectable nature of RATs and the WikiRAT illustration; suggested as a potential use case rather than empirically demonstrated.

**Claim:** RATs treat reading as a dual kind of creation: (a) creative input work that shapes future artifacts, and (b) a form of creation whose traces are valuable artifacts in themselves.
**Evidence:** Theoretical proposal and design rationale, illustrated via a speculative prototype (WikiRAT); no empirical validation provided.

**Claim:** Reading Activity Traces (RATs) reconceptualize reading, including navigation, interpretation, and curation across interconnected sources, as creative labor.
**Evidence:** Conceptual argument supported by theoretical framing and literature review rather than empirical data; no sample size or deployment reported.
**Claim:** The method lowers the technical barrier to adopting surrogates in economics by removing the dependence on specialized Bayesian neural-network techniques while preserving rigorous uncertainty quantification.
**Evidence:** Argument in the Implications section: decoupling uncertainty quantification from network architecture allows deterministic NNs with MCMC-sampled parameter inputs; no user study or adoption metrics provided.

**Claim:** The theoretical diagnostic linking distribution mismatch to performance loss gives practitioners a practical tool to detect when a surrogate trained on one parameter distribution will underperform after recalibration or policy changes.
**Evidence:** Paper-provided theoretical result and suggested diagnostic use; empirical validation of the diagnostic is implied but not detailed in the summary.

**Claim:** The approach dramatically reduces computation (training and/or evaluation wall-clock time) compared with approaches that sample network weights (Bayesian NNs) or exhaustively explore parameter grids.
**Evidence:** The paper's computational evaluation includes empirical examples demonstrating substantial reductions in wall-clock training/evaluation time relative to weight-sampling or exhaustive-grid baselines (exact datasets, runtimes, and sample sizes are not detailed in the summary).

**Claim:** Training a deterministic neural surrogate conditioned on MCMC-drawn parameter samples reproduces the original (forward) model's uncertainty quantification while avoiding embedding parametric uncertainty inside the network weights.
**Evidence:** Methodological description: the surrogate is a deterministic NN whose inputs include parameter vectors drawn by MCMC from the model-parameter posterior; uncertainty is recovered by repeatedly evaluating the trained surrogate on those draws. Empirical examples (details not provided here) show reproduction of model uncertainty.
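To make the mechanism concrete, here is a minimal sketch, assuming a toy forward model and stand-in posterior draws (neither is from the paper): the deterministic network takes the parameter vector as an ordinary input, and uncertainty is recovered by re-evaluating the trained surrogate across the MCMC draws.

```python
# Minimal sketch: deterministic NN surrogate with MCMC-sampled parameter inputs.
# The forward model and "posterior draws" below are toy stand-ins (assumptions).
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def forward_model(x, theta):
    """Toy 'expensive' structural model: output depends on state x and parameters theta."""
    return theta[..., 0] * np.sin(x) + theta[..., 1] * x**2

# 1. Parameter draws standing in for an MCMC sample from the posterior.
theta_draws = rng.normal([1.0, 0.5], [0.1, 0.05], size=(500, 2))

# 2. Training set: the surrogate sees (x, theta) jointly, so parametric
#    uncertainty lives in the *inputs*, not in the network weights.
x_train = rng.uniform(-2, 2, size=(5000, 1))
theta_train = theta_draws[rng.integers(0, len(theta_draws), size=5000)]
X = np.hstack([x_train, theta_train])
y = forward_model(x_train[:, 0], theta_train)

surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000).fit(X, y)

# 3. Uncertainty quantification: evaluate the *deterministic* surrogate once
#    per posterior draw and summarize the induced output distribution.
x_test = np.full((len(theta_draws), 1), 1.5)
preds = surrogate.predict(np.hstack([x_test, theta_draws]))
print(f"posterior predictive at x=1.5: mean={preds.mean():.3f}, sd={preds.std():.3f}")
```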
**Claim:** The proposed pipeline (CFD → CFM → CFR) forms a closed loop that can assess and improve color fidelity in text-to-image (T2I) systems.
**Evidence:** The paper describes the end-to-end workflow: CFD provides training/validation labels for CFM; CFM produces scores and attention maps for evaluation and localization; CFR consumes CFM attention during generation to refine images. The repository contains code implementing the pipeline.

**Claim:** Color Fidelity Refinement (CFR) is a training-free, inference-time procedure that uses CFM attention maps to adaptively modulate spatial-temporal guidance scales during generation, improving the color authenticity of realistic-style T2I outputs without retraining the base model.
**Evidence:** Method description in the paper: CFR uses CFM's learned attention to identify low-fidelity regions and adapts guidance strength across space and denoising steps (spatial-temporal guidance). The authors evaluate CFR on existing T2I models and report improved perceived color authenticity; no retraining of base T2I models is required (implementation and code available in the repository).
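The general pattern, in a schematic sketch: an attention map rescales classifier-free guidance per region and per denoising step. The `denoise` and `cfm_attention` stand-ins, the taper schedule, and the update step below are illustrative assumptions, not the paper's CFR implementation.

```python
# Schematic sketch: attention-modulated classifier-free guidance (CFG).
# `denoise`, `cfm_attention`, and the schedule are placeholder assumptions.
import torch

def denoise(latents, t, cond):
    """Stand-in for a diffusion model's noise prediction."""
    return torch.randn_like(latents)

def cfm_attention(latents):
    """Stand-in for the CFM attention map: high where color fidelity is low."""
    return torch.sigmoid(torch.randn(latents.shape[0], 1, *latents.shape[2:]))

latents = torch.randn(1, 4, 64, 64)
base_scale, extra_scale = 7.5, 3.0
num_steps = 50

for i, t in enumerate(torch.linspace(1.0, 0.0, num_steps)):
    eps_uncond = denoise(latents, t, cond=None)
    eps_cond = denoise(latents, t, cond="prompt")

    # Spatial modulation: boost guidance only in low-fidelity regions;
    # temporal modulation: taper the boost over the denoising trajectory.
    attn = cfm_attention(latents)                    # (B, 1, H, W) in [0, 1]
    taper = 1.0 - i / num_steps
    scale = base_scale + extra_scale * attn * taper  # broadcast over channels

    eps = eps_uncond + scale * (eps_cond - eps_uncond)
    latents = latents - eps / num_steps              # toy update step
```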
**Claim:** CFM aligns better with objective color-realism judgments than existing preference-trained metrics and human ratings that favor vividness.
**Evidence:** Empirical comparisons reported in the paper: CFM scoring shows improved alignment with CFD-based color-realism labels and with evaluation criteria that prioritize photographic fidelity, outperforming preference-trained metrics and the vividness bias in human ratings (the paper reports both qualitative and quantitative gains; specific numbers and test-set sizes are in the paper/repo).

**Claim:** The Color Fidelity Metric (CFM) is a multimodal-encoder-based metric trained on CFD to predict human-consistent judgments of color fidelity and to produce spatial attention maps that localize color-fidelity errors.
**Evidence:** Model architecture and training procedure described: a multimodal encoder trained on CFD's ordered realism labels outputs scalar fidelity scores and spatial attention maps indicating where color-fidelity issues occur (training/validation procedures are included; exact dataset splits are in the paper/repo).
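Training a scalar scorer from ordered labels is commonly done with a pairwise ranking loss; here is a minimal sketch under that assumption. The tiny encoder and margin value are placeholders, and the attention-map head described above is omitted.

```python
# Minimal sketch: learning a scalar fidelity score from *ordered* labels with
# a pairwise margin ranking loss. The encoder is a placeholder assumption.
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128),
                        nn.ReLU(), nn.Linear(128, 1))
opt = torch.optim.Adam(encoder.parameters(), lr=1e-4)

# A batch of image pairs where img_hi is labeled more color-realistic than img_lo.
img_hi, img_lo = torch.randn(8, 3, 32, 32), torch.randn(8, 3, 32, 32)

score_hi = encoder(img_hi).squeeze(-1)
score_lo = encoder(img_lo).squeeze(-1)
# Push score_hi above score_lo by at least the margin.
loss = nn.functional.margin_ranking_loss(
    score_hi, score_lo, target=torch.ones(8), margin=0.5)
opt.zero_grad()
loss.backward()
opt.step()
```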
**Claim:** Varying sample size, injecting contaminated data, and including algorithm-reconstruction tasks during training let networks automatically inherit the corresponding properties (e.g., multi-n behavior, robustness, algorithmic outputs).
**Evidence:** Empirical: the training regimes described include varying dataset size n, contaminated simulations, and algorithm-reconstruction tasks; experiments reportedly show that networks trained with these variations exhibit the corresponding behaviors at test time. Specific details (ranges of n, contamination levels) are not included in the summary.

**Claim:** Collapsing (aggregation) layers mimic reduction to sufficient statistics and enforce the desired structure for set-valued (permutation-invariant) inputs.
**Evidence:** Theoretical/design claim supported by the architectural description and motivation: collapsing layers aggregate across observations to produce summaries, enforcing permutation invariance; supported indirectly by empirical success in simulations. This is primarily an architectural/representational argument rather than a purely empirical result.
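The standard way to realize such collapsing layers is the DeepSets pattern: a per-observation encoder, a symmetric pooling step, and a head on the pooled summary. The sketch below shows the generic construction (not the paper's exact branched architecture); the final assertion checks permutation invariance.

```python
# Minimal sketch of a "collapsing" (aggregation) layer in the DeepSets style:
# a per-observation encoder followed by symmetric pooling, so the learned
# summary is invariant to the ordering of the n observations.
import torch
import torch.nn as nn

class SummaryNetwork(nn.Module):
    def __init__(self, obs_dim=1, summary_dim=16, param_dim=2):
        super().__init__()
        self.phi = nn.Sequential(nn.Linear(obs_dim, 32), nn.ReLU(),
                                 nn.Linear(32, summary_dim))
        self.rho = nn.Sequential(nn.Linear(summary_dim, 32), nn.ReLU(),
                                 nn.Linear(32, param_dim))

    def forward(self, x):                 # x: (batch, n, obs_dim)
        h = self.phi(x)                   # encode each observation
        s = h.mean(dim=1)                 # collapse: permutation-invariant pooling
        return self.rho(s)                # map summary to parameter estimate

net = SummaryNetwork()
x = torch.randn(4, 100, 1)               # 4 datasets of n=100 observations
assert torch.allclose(net(x), net(x[:, torch.randperm(100)]), atol=1e-5)
```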
**Claim:** The network can learn to approximate the outputs of iterative estimation algorithms (demonstrated by learning an EM algorithm for a genetic-data estimation task).
**Evidence:** Empirical: in a genetic-data example, the network was trained (including an algorithm-reconstruction task) to approximate EM-algorithm outputs; the evaluation shows a qualitative/quantitative match, e.g., via the MSE between network and EM outputs.

**Claim:** Training the network with contaminated simulations yields estimators that are robust to contaminated observations at test time.
**Evidence:** Empirical: experiments injected contaminated data into the training simulations; test-time robustness under contamination improved relative to networks not trained on contamination (metrics such as MSE under contamination). Specific contamination rates and sample sizes are not provided in the summary.

**Claim:** A branched neural architecture with collapsing (aggregation) layers that reduce a dataset to permutation-invariant summaries can produce parameter estimates that are exactly finite-sample (i.e., reproduce estimator outputs at finite sample sizes).
**Evidence:** Empirical and theoretical motivation: the architecture uses collapsing/aggregation layers to implement permutation invariance and summary reduction; simulation experiments reportedly show the network reproduces reference-estimator outputs at finite sample sizes. Exact settings (sample sizes, number of replications) are not specified in the summary; evidence comes from simulated benchmarks and comparisons to reference estimators.

**Claim:** A single "summary network" trained in a simulation-only framework can solve the inverse problem of parameter estimation for parametric models by mapping simulated datasets to parameters (minimizing MSE).
**Evidence:** Empirical: the network is trained on simulated datasets (each simulated conditional on a known parameter) with a mean-squared-error loss between predicted and true parameters, and is evaluated on synthetic parametric benchmarks and a genetic-data example. Specific sample sizes and simulation counts are not stated in the provided summary.
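A minimal sketch of the simulation-only training loop, reusing the `SummaryNetwork` from the sketch above; the Gaussian simulator, the prior, and the 5% contamination step are illustrative assumptions (the contamination flag mirrors the robustness-training claim earlier in this cluster).

```python
# Minimal sketch of simulation-only training: draw parameters from a prior,
# simulate datasets conditional on them, and regress datasets onto parameters
# with an MSE loss. Simulator and contamination scheme are assumptions.
import torch

def simulate(theta, n=100, contaminate=False):
    """Simulate n draws from N(mu, sigma) per parameter row theta = (mu, log_sigma)."""
    mu, log_sigma = theta[:, :1], theta[:, 1:]
    x = mu.unsqueeze(1) + log_sigma.exp().unsqueeze(1) * torch.randn(len(theta), n, 1)
    if contaminate:  # inject gross outliers so the estimator learns robustness
        mask = torch.rand(len(theta), n, 1) < 0.05
        x = torch.where(mask, x + 10.0, x)
    return x

net = SummaryNetwork(obs_dim=1, param_dim=2)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(1000):
    theta = torch.stack([torch.randn(64), 0.5 * torch.randn(64)], dim=1)  # prior draws
    x = simulate(theta, contaminate=True)
    loss = torch.mean((net(x) - theta) ** 2)   # MSE between estimate and truth
    opt.zero_grad()
    loss.backward()
    opt.step()
```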
**Claim:** Fewer expensive evaluations translate directly into lower compute hours and therefore lower cloud/on-premise costs for computational materials or chemistry R&D.
**Evidence:** Implication discussed in the paper's implications section: an economic argument linking reduced expensive evaluations to lower compute cost; not an experimental result but an extrapolation from the reported reduction in evaluations.

**Claim:** Correct application of the described elements (GP with derivatives, inverse-distance kernels, active acquisition, OT sampling, MAP regularization, trust-region control, RFF scaling) reduces the number of expensive underlying-theory (energy/force) evaluations by roughly an order of magnitude while preserving underlying-theory accuracy.
**Evidence:** Empirical claim reported in the paper: benchmarks on representative potential-energy-surface problems (specific datasets and numerical results are presented in the paper and accompanying code); the summary states an approximately order-of-magnitude reduction in expensive evaluations with preserved accuracy.

**Claim:** Random Fourier features are used to decouple hyperparameter training from prediction, yielding favorable computational scaling for high-dimensional systems.
**Evidence:** The paper uses random Fourier features to approximate kernels so that hyperparameter fitting can be done largely independently of prediction-time complexity; the scaling claims are supported by methodological argument and empirical timings in the paper/code.
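For reference, a minimal sketch of the random-Fourier-feature approximation itself (the standard Rahimi-Recht construction, not the paper's specific pipeline): an RBF kernel matrix is approximated by inner products of explicit random features, so downstream fitting can work on features whose cost scales with the feature count rather than with pairwise kernel evaluations.

```python
# Minimal sketch of random Fourier features: the RBF kernel
# k(x, x') = exp(-||x - x'||^2 / (2 l^2)) is approximated by z(x) . z(x').
import numpy as np

rng = np.random.default_rng(0)
d, D, lengthscale = 10, 512, 1.5           # input dim, feature count, RBF lengthscale

W = rng.normal(0.0, 1.0 / lengthscale, size=(d, D))
b = rng.uniform(0.0, 2 * np.pi, size=D)

def rff(X):
    """Map X (n, d) to features Z (n, D) with Z @ Z.T ~= RBF kernel matrix."""
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)

X = rng.normal(size=(200, d))
K_exact = np.exp(-0.5 * np.sum((X[:, None] - X[None]) ** 2, axis=-1) / lengthscale**2)
K_approx = rff(X) @ rff(X).T
print("max abs error:", np.abs(K_exact - K_approx).max())
```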
**Claim:** MAP regularization via a variance barrier plus oscillation detection prevents surrogate-induced pathologies and non-convergent search behavior.
**Evidence:** The paper describes MAP priors (a variance barrier) and oscillation-detection diagnostics as regularization and robustness measures; the authors report that these measures prevent instabilities in surrogate-driven searches in their experiments.

**Claim:** Using optimal transport (Earth Mover's Distance) for farthest-point sampling diversifies the training points in configuration space.
**Evidence:** The paper introduces EMD-based farthest-point sampling as an extension and reports its use in experiments; the implementation is described in the methods and code.
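A minimal sketch of the idea, using 1-D interatomic-distance distributions as a stand-in descriptor and SciPy's 1-D Wasserstein distance (the paper's exact descriptor and EMD formulation may differ):

```python
# Minimal sketch: farthest-point sampling under an Earth Mover's Distance
# between interatomic-distance distributions (a stand-in descriptor).
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)
configs = [rng.normal(size=(8, 3)) for _ in range(50)]   # toy atomic configurations
dists = [pdist(c) for c in configs]                      # pairwise atomic distances

def emd(i, j):
    return wasserstein_distance(dists[i], dists[j])

def farthest_point_sample(n_pick):
    chosen = [0]
    min_d = np.array([emd(0, j) for j in range(len(configs))])
    while len(chosen) < n_pick:
        nxt = int(np.argmax(min_d))              # farthest from the chosen set
        chosen.append(nxt)
        min_d = np.minimum(min_d, [emd(nxt, j) for j in range(len(configs))])
    return chosen

print(farthest_point_sample(5))
```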
**Claim:** Inverse-distance kernels capture atomic interactions in configuration space better than generic kernels for these surrogate models.
**Evidence:** The paper argues for and uses an inverse-distance kernel design to reflect the physical dependence on interatomic distances; benchmark comparisons are reported in the main text and codebase.
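A minimal sketch of the kernel-design intuition, with an illustrative feature map (sorted inverse pairwise distances) feeding a squared-exponential kernel; the paper's actual kernel form may differ:

```python
# Minimal sketch: a kernel on inverse interatomic distances. Mapping Cartesian
# coordinates to 1/r features encodes the intuition that energies vary fastest
# at short range. Feature map and kernel form here are illustrative assumptions.
import numpy as np
from scipy.spatial.distance import pdist

def inv_dist_features(coords):
    """coords: (n_atoms, 3) -> sorted inverse pairwise distances (comparable across configs)."""
    return np.sort(1.0 / pdist(coords))

def kernel(c1, c2, lengthscale=0.5):
    diff = inv_dist_features(c1) - inv_dist_features(c2)
    return np.exp(-0.5 * np.dot(diff, diff) / lengthscale**2)
```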
**Claim:** Gaussian process (GP) surrogates that incorporate derivative observations (e.g., forces) improve surrogate fidelity and provide better local estimates of gradients and Hessians.
**Evidence:** The paper describes GP regression with value and derivative observations used to constrain the surrogate; experiments/benchmarks in the paper and code demonstrate derivative observations in surrogate training (exact datasets and sample sizes referenced in the paper/code).
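A minimal sketch of the generic construction in one dimension: the joint covariance over function values and derivatives is built from the RBF kernel and its derivatives, and both observation types condition the posterior. The training points and sine test function are illustrative, not from the paper.

```python
# Minimal sketch: 1-D GP regression with derivative observations under an RBF
# kernel k(a, b) = exp(-(a - b)^2 / (2 l^2)).
import numpy as np

l, noise = 1.0, 1e-8
def k(a, b):    return np.exp(-0.5 * (a - b) ** 2 / l**2)
def k_fd(a, b): return k(a, b) * (a - b) / l**2                      # cov(f(a), f'(b))
def k_dd(a, b): return k(a, b) * (1.0 / l**2 - (a - b) ** 2 / l**4)  # cov(f'(a), f'(b))

X = np.array([-1.0, 0.3, 1.2])           # training inputs
y = np.sin(X)                            # observed values
g = np.cos(X)                            # observed derivatives ("forces")

A, B = np.meshgrid(X, X, indexing="ij")
K = np.block([[k(A, B),    k_fd(A, B)],
              [-k_fd(A, B), k_dd(A, B)]]) + noise * np.eye(2 * len(X))
alpha = np.linalg.solve(K, np.concatenate([y, g]))

x_star = np.linspace(-1.5, 1.5, 5)
A_s, B_s = np.meshgrid(x_star, X, indexing="ij")
K_star = np.hstack([k(A_s, B_s), k_fd(A_s, B_s)])
mean = K_star @ alpha
print("max posterior-mean error:", np.max(np.abs(mean - np.sin(x_star))))
```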
**Claim:** Practical routes exist for efficient classical estimation of gradients for the covered loss classes: analytic gradients or unbiased estimators via the classical-approximation machinery, finite-difference approaches, and surrogate methods; the paper also discusses sample-complexity and noise considerations.
**Evidence:** Methodological discussion outlining gradient-estimation approaches compatible with the classical-approximation results, together with complexity/sample-complexity remarks. This is a methods/algorithmic claim supported by analysis rather than empirical benchmarks.
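As one concrete instance, a minimal sketch of a central finite-difference gradient estimator for a loss that can only be evaluated, with repeat-averaging to damp evaluation noise (the repeat count and step size are illustrative):

```python
# Minimal sketch: central finite-difference gradient estimation for a noisy,
# evaluation-only loss; extra evaluations trade off against estimator variance.
import numpy as np

def fd_gradient(loss, theta, eps=1e-3, n_repeats=8):
    grad = np.zeros_like(theta)
    for i in range(len(theta)):
        e = np.zeros_like(theta)
        e[i] = eps
        plus  = np.mean([loss(theta + e) for _ in range(n_repeats)])
        minus = np.mean([loss(theta - e) for _ in range(n_repeats)])
        grad[i] = (plus - minus) / (2 * eps)
    return grad

noisy_quadratic = lambda th: np.sum(th**2) + 0.01 * np.random.default_rng().normal()
print(fd_gradient(noisy_quadratic, np.array([1.0, -2.0])))  # approx. [2, -4]
```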
**Claim:** The paper constructs a single-hyperparameter family of BSBMs that monotonically interpolates from weak expressive power up to full universality, enabling a controlled trade-off between simplicity and expressivity.
**Evidence:** An explicit one-parameter family construction with a monotonicity argument/proof showing that increasing the hyperparameter increases expressivity and approaches universality; a theoretical construction rather than an empirical measurement.

**Claim:** Classical hardness of exact or approximate sampling from the expanded (ancilla + postprocessing) BSBM family is preserved by relating these models to known hard linear-optical sampling tasks.
**Evidence:** Complexity-theoretic reductions connecting the expanded BSBM constructions to established hard sampling problems in linear optics (e.g., boson-sampling variants); supported by theoretical reductions rather than empirical hardness measurements.

**Claim:** Universality (and therefore potential sampling hardness) can be recovered by expanding the model: adding ancillary modes and applying a constant-function postprocessing generalization restores universality while retaining efficient classical trainability.
**Evidence:** Construction and theoretical argument: the paper introduces ancilla modes and a constant-function postprocessing generalization (analogous to IQP-QCBM techniques), shows how these modifications raise representational power to universality, and demonstrates that the same classical-approximation machinery still allows efficient evaluation/approximation of training losses. The argument includes constructive proofs and reductions.

**Claim:** Training can be done classically even when sampling from the trained BSBM is believed to be classically hard (the "train classically, deploy quantumly" paradigm applies to BSBMs).
**Evidence:** An argument combining two parts: (1) classical-evaluation results for losses/gradients (above) and (2) separate hardness-of-sampling arguments showing that sampling remains classically hard after training. A theoretical claim based on the constructions and reductions presented in the paper.
**Claim:** Greater ROI may come from investing in better feedback models (how feedback is used) than from solely collecting richer feedback sources.
**Evidence:** Empirical finding that the choice of feedback model often produced larger retrieval-quality improvements than changing the feedback source across the evaluated tasks and methods.

**Claim:** The results clarify which elements of the PRF design space to prioritize in practice: in many low-resource settings, feedback-model improvements matter more than source collection.
**Evidence:** Comparative gains in controlled experiments showing larger effect sizes from varying the feedback model than from varying the source, combined with cost analyses.

**Claim:** Across 13 low-resource BEIR tasks and five LLM PRF methods, the choice of feedback model (how feedback is applied) critically affects retrieval effectiveness.
**Evidence:** Empirical results reported over 13 BEIR tasks using five LLM-based PRF methods, with systematic variation of the feedback model.

**Claim:** Purely LLM-generated feedback yields the best overall cost-effectiveness (performance per unit of LLM invocation cost) for low-resource retrieval tasks.
**Evidence:** A cost-effectiveness analysis across the 13 BEIR tasks and five PRF methods that weighed LLM invocation cost against retrieval gains.

**Claim:** Feedback-model choice can have a larger impact on retrieval quality than feedback source.
**Evidence:** Controlled experiments comparing five LLM-based PRF methods across 13 low-resource BEIR tasks, measuring retrieval effectiveness with standard BEIR metrics.
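To make the source-versus-model distinction concrete, here is a minimal sketch using Rocchio-style TF-IDF updates as a stand-in for the paper's LLM-based PRF methods; the corpus, weights, and `llm_expansion` hook are illustrative assumptions.

```python
# Minimal sketch separating the two PRF design axes: the feedback *source*
# (which texts count as pseudo-relevant) and the feedback *model* (how those
# texts modify the query). Rocchio over TF-IDF stands in for LLM-based PRF.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["solar panel efficiency", "wind turbine maintenance",
        "solar cell degradation", "grid storage batteries"]
vec = TfidfVectorizer().fit(docs)
D = vec.transform(docs).toarray()

def feedback_source(query, k=2, llm_expansion=None):
    """Source axis: top-k retrieved docs, or an LLM-generated pseudo-document."""
    if llm_expansion is not None:
        return vec.transform([llm_expansion]).toarray()
    q = vec.transform([query]).toarray()
    scores = (D @ q.T).ravel()
    return D[np.argsort(-scores)[:k]]

def feedback_model(q_vec, fb_vecs, alpha=1.0, beta=0.75):
    """Model axis: Rocchio update q' = alpha * q + beta * mean(feedback)."""
    return alpha * q_vec + beta * fb_vecs.mean(axis=0, keepdims=True)

q = vec.transform(["solar power"]).toarray()
fb = feedback_source("solar power", k=2)      # vary the source...
q_new = feedback_model(q, fb, beta=0.75)      # ...or vary the model
```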
**Claim:** Demand will grow for hybrid specialists (quantum algorithm engineers, HPC systems integrators, middleware developers) and for domain scientists fluent in hybrid workflows, shifting skill premiums toward interdisciplinary expertise.
**Evidence:** A labor-market inference from technology adoption and the skills required by the proposed QCSC systems; qualitative only, with no labor-market survey data provided.

**Claim:** Public investment and shared facilities can mitigate entry barriers and diffuse benefits to smaller firms and research groups.
**Evidence:** Policy analysis and precedent from shared scientific-infrastructure models; no QCSC-specific case-study data presented.

**Claim:** Tightly integrating QPUs, GPUs, and CPUs across hardware, middleware, and application layers (the QCSC vision) will enable high-throughput, low-latency hybrid workflows.
**Evidence:** Architectural design reasoning and analogies to heterogeneous co-design in classical HPC; no empirical throughput/latency measurements provided.

**Claim:** A phased roadmap (offload engines → middleware-coupled heterogeneous systems → fully co-designed heterogeneous systems) and a reference architecture can remove current friction (manual orchestration, scheduling, data transfer) and materially accelerate algorithmic discovery and applied quantum utility.
**Evidence:** Roadmap and reference architecture proposed from system decomposition and use-case requirements analysis; the argument rests on friction points observed in the literature and early hybrid deployments, with no empirical validation.

**Claim:** Quantum-centric supercomputing (QCSC), i.e., integrated systems co-designing QPUs with classical HPC components and middleware, is necessary to scale hybrid quantum-classical algorithms for chemistry, materials, and other applied research.
**Evidence:** Conceptual systems-architecture analysis synthesizing recent quantum-simulation demonstrations and hybrid algorithms, driven by chemistry and materials use cases; no new empirical performance benchmarks presented.
**Claim:** Adopting GNN-based, FL-coordinated beam management can provide competitive differentiation by offering more reliable NTN services in challenging geometries (e.g., low elevation, edge coverage).
**Evidence:** An implication synthesized from experimental results showing improved GNN performance at low elevation angles, plus the paper's economic discussion; no market-adoption or field-deployment evidence provided.

**Claim:** FL via HAPS reduces data-centralization costs (bandwidth and storage) and improves privacy relative to sending raw channel data to a central server.
**Evidence:** Implication of the FL design used: federated aggregation reduces the need to backhaul raw channel samples; the paper lists bandwidth/storage and privacy advantages as economic/operational implications (no quantified cost measurements provided).

**Claim:** The GNN solution is lightweight enough for practical on-board or edge deployment in NTN contexts.
**Evidence:** The paper asserts the GNN is compact and suitable for on-board or HAPS/edge deployment under constrained compute and link budgets (no exact parameter counts provided in the summary).

**Claim:** Federated learning across LEO orbital planes, coordinated via HAPS, enables efficient distributed beam selection for non-terrestrial networks (NTNs).
**Evidence:** Experimental design in the paper: a federated-learning setup with orbital-plane clients and HAPS nodes acting as aggregation/coordination points, evaluated on beam-prediction tasks using realistic channel/beamforming datasets and distributed training (no central pooling of raw samples).
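A minimal sketch of the coordination pattern, assuming plain federated averaging with orbital planes as clients and a HAPS node as aggregator; the toy MLP stands in for the paper's beam-selection GNN.

```python
# Minimal sketch: federated averaging (FedAvg) with orbital planes as clients.
# The model, client data, and weighting are illustrative assumptions.
import torch
import torch.nn as nn

def make_model():
    return nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 8))  # 8 beams

def local_update(model, data, labels, epochs=1):
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    for _ in range(epochs):
        loss = nn.functional.cross_entropy(model(data), labels)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model.state_dict(), len(data)

global_model = make_model()
planes = [(torch.randn(100, 16), torch.randint(0, 8, (100,))) for _ in range(4)]

for rnd in range(5):
    updates = []
    for data, labels in planes:                    # each orbital plane trains locally
        local = make_model()
        local.load_state_dict(global_model.state_dict())
        updates.append(local_update(local, data, labels))
    total = sum(n for _, n in updates)
    avg = {key: sum(sd[key] * (n / total) for sd, n in updates)  # HAPS aggregates weights,
           for key in updates[0][0]}                             # never raw channel samples
    global_model.load_state_dict(avg)
```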
**Claim:** DPS compares favorably to standard rollout-based prompt-selection baselines across the reported metrics (rollouts required, training speed, final accuracy).
**Evidence:** Empirical comparisons against baseline methods in the experiments; specific numeric comparisons and statistical details are not present in the provided summary.

**Claim:** DPS builds a predictive prior that identifies informative prompts without performing exhaustive rollouts over large candidate batches.
**Evidence:** A methodological mechanism plus an empirical claim that selection operates via a predictive prior and reduces candidate rollouts; supported by experiments against rollout-filtering baselines.

**Claim:** The DPS inference procedure requires only historical rollout reward signals and therefore adds little extra compute relative to the rollouts it avoids.
**Evidence:** Practical considerations described in the paper: inference uses past rollout rewards, and the authors state the extra compute is small compared with the avoided rollouts (no quantified compute-cost ratio in the summary).

**Claim:** DPS improves final reasoning performance (task accuracy) across the evaluated domains: mathematical reasoning, planning, and visual-geometry tasks.
**Evidence:** Empirical results across those benchmark domains showing improved downstream reasoning accuracy relative to baselines (the summary does not include exact effect sizes or sample counts).
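Since the summary specifies only that inference uses historical rollout rewards, the sketch below is a heavily hedged illustration of the general pattern rather than the DPS algorithm: a Beta posterior per prompt summarizes past binary rewards, and prompts whose predicted success rate is nearest 0.5 (where rollouts carry the most training signal) are selected without any new rollouts.

```python
# Heavily hedged sketch: a cheap predictive prior over prompts from historical
# rollout rewards. The Beta-posterior model and the "closest to 50% success"
# informativeness score are illustrative assumptions, not the paper's DPS.
import numpy as np

rng = np.random.default_rng(0)
n_prompts = 200
history = {i: [] for i in range(n_prompts)}   # past binary rollout rewards per prompt

# Pretend we already logged a few rollouts per prompt from earlier training.
true_p = rng.uniform(0, 1, n_prompts)
for i in range(n_prompts):
    history[i] = list(rng.random(5) < true_p[i])

def informativeness(rewards, a0=1.0, b0=1.0):
    """Posterior-mean success rate under a Beta(a0, b0) prior; prompts whose
    predicted success is nearest 0.5 carry the most training signal."""
    a = a0 + sum(rewards)
    b = b0 + len(rewards) - sum(rewards)
    return -abs(a / (a + b) - 0.5)

scores = np.array([informativeness(history[i]) for i in range(n_prompts)])
batch = np.argsort(-scores)[:16]              # select without any new rollouts
print("selected prompts:", batch)
```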