Evidence (5539 claims)

Claim counts by topic filter:

- Adoption: 5539 claims
- Productivity: 4793 claims
- Governance: 4333 claims
- Human-AI Collaboration: 3326 claims
- Labor Markets: 2657 claims
- Innovation: 2510 claims
- Org Design: 2469 claims
- Skills & Training: 2017 claims
- Inequality: 1378 claims
Evidence Matrix
Claim counts by outcome category and direction of finding.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 402 | 112 | 67 | 480 | 1076 |
| Governance & Regulation | 402 | 192 | 122 | 62 | 790 |
| Research Productivity | 249 | 98 | 34 | 311 | 697 |
| Organizational Efficiency | 395 | 95 | 70 | 40 | 603 |
| Technology Adoption Rate | 321 | 126 | 73 | 39 | 564 |
| Firm Productivity | 306 | 39 | 70 | 12 | 432 |
| Output Quality | 256 | 66 | 25 | 28 | 375 |
| AI Safety & Ethics | 116 | 177 | 44 | 24 | 363 |
| Market Structure | 107 | 128 | 85 | 14 | 339 |
| Decision Quality | 177 | 76 | 38 | 20 | 315 |
| Fiscal & Macroeconomic | 89 | 58 | 33 | 22 | 209 |
| Employment Level | 77 | 34 | 80 | 9 | 202 |
| Skill Acquisition | 92 | 33 | 40 | 9 | 174 |
| Innovation Output | 120 | 12 | 23 | 12 | 168 |
| Firm Revenue | 98 | 34 | 22 | — | 154 |
| Consumer Welfare | 73 | 31 | 37 | 7 | 148 |
| Task Allocation | 84 | 16 | 33 | 7 | 140 |
| Inequality Measures | 25 | 77 | 32 | 5 | 139 |
| Regulatory Compliance | 54 | 63 | 13 | 3 | 133 |
| Error Rate | 44 | 51 | 6 | — | 101 |
| Task Completion Time | 88 | 5 | 4 | 3 | 100 |
| Training Effectiveness | 58 | 12 | 12 | 16 | 99 |
| Worker Satisfaction | 47 | 32 | 11 | 7 | 97 |
| Wages & Compensation | 53 | 15 | 20 | 5 | 93 |
| Team Performance | 47 | 12 | 15 | 7 | 82 |
| Automation Exposure | 24 | 22 | 9 | 6 | 62 |
| Job Displacement | 6 | 38 | 13 | — | 57 |
| Hiring & Recruitment | 41 | 4 | 6 | 3 | 54 |
| Developer Productivity | 34 | 4 | 3 | 1 | 42 |
| Social Protection | 22 | 10 | 6 | 2 | 40 |
| Creative Output | 16 | 7 | 5 | 1 | 29 |
| Labor Share of Income | 12 | 5 | 9 | — | 26 |
| Skill Obsolescence | 3 | 20 | 2 | — | 25 |
| Worker Turnover | 10 | 12 | — | 3 | 25 |
Adoption
Ablation analyses show that each BATQuant component (block-wise transforms, orthogonality relaxation, GPK decomposition, block-wise clipping) contributes to robustness and efficiency.
The paper reports ablation studies that isolate each component and measure its individual impact on performance and overhead (exact effect sizes and per-component numbers are not given in the summary).
Block-wise learnable clipping suppresses residual outliers locally and contributes to robustness under aggressive MXFP4 quantization.
Method description and ablation experiments in the paper showing incremental improvement when adding block-wise learnable clipping layers versus not using them; improvements measured on benchmark metrics post-quantization.
Global and Private Kronecker (GPK) decomposition compresses transform parameters, keeping storage and runtime overhead low compared to dense per-block transforms.
Algorithmic contribution described in the paper with reported comparisons (storage/runtime overhead) versus dense per-block transform parameterizations; supported by experimental/implementation measurements (specific memory/runtime numbers not provided in the summary).
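The storage argument behind the GPK decomposition can be illustrated with a toy parameter count: a shared global Kronecker factor plus a small private factor per block replaces dense per-block transform matrices. All shapes below are assumptions chosen for illustration, not the paper's parameterization.

```python
import numpy as np

# A dense per-block transform for B blocks of size d stores B * d * d
# parameters. A Kronecker parameterization shares one global factor
# G (g x g) and gives each block a small private factor P_b (p x p),
# with d = g * p, so each block's transform is kron(G, P_b).
B, g, p = 128, 8, 4          # assumed sizes, illustrative only
d = g * p                    # effective block-transform dimension (32)

dense_params = B * d * d                 # 131072
kron_params = g * g + B * p * p          # 64 + 2048 = 2112

# Reconstructing one block's transform from its two small factors:
rng = np.random.default_rng(0)
G = rng.standard_normal((g, g))
P_b = rng.standard_normal((p, p))
T_b = np.kron(G, P_b)                    # full (d, d) transform for block b

print(dense_params, kron_params)         # 131072 vs 2112
```

Under these assumed shapes the factored form stores roughly 60× fewer transform parameters, which is the kind of storage/runtime saving the claim attributes to GPK.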
Relaxing orthogonality constraints on transforms (i.e., using non-strictly-orthogonal transforms) improves distribution shaping and better fits activations to the limited MXFP quantization range.
Design rationale and ablation studies reported in the paper showing that removing strict orthogonality yields better quantization fit and improved task metrics versus enforced orthogonal transforms.
Aligning transforms to MXFP block granularity using block-wise affine transformations prevents cross-block outlier propagation and avoids the severe collapse seen with rotation-based integer quantization techniques.
Methodological design plus ablation/empirical results in the paper showing improved activation statistics and preserved model accuracy when using block-wise affine transforms aligned to MXFP blocks versus global rotations.
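The block-wise shaping described in these claims, a per-block affine transform followed by local clipping before coarse quantization, can be sketched as follows. The block size, clip threshold, and 4-bit-style grid are illustrative stand-ins, not BATQuant's actual design.

```python
import numpy as np

def blockwise_transform_and_clip(x, block=32, clip=3.0):
    """Sketch: per-block affine shaping + local clipping before low-bit
    quantization. Block size, clip value, and the symmetric [-7, 7] grid
    are assumptions standing in for an MXFP4-style format."""
    x = x.reshape(-1, block)
    # Per-block affine transform (independent scale/shift, not orthogonal),
    # so an outlier in one block cannot distort the scaling of another.
    mu = x.mean(axis=1, keepdims=True)
    sigma = x.std(axis=1, keepdims=True) + 1e-8
    z = (x - mu) / sigma
    # Block-wise clipping suppresses residual outliers locally.
    z = np.clip(z, -clip, clip)
    # Quantize each block to a coarse symmetric grid.
    scale = np.abs(z).max(axis=1, keepdims=True) / 7 + 1e-12
    q = np.round(z / scale).astype(np.int8)   # codes in [-7, 7]
    return q, scale, mu, sigma

x = np.random.default_rng(1).standard_normal(256)
x[5] = 40.0                       # inject an outlier into one block
q, scale, mu, sigma = blockwise_transform_and_clip(x)
assert q.min() >= -7 and q.max() <= 7   # outlier stays confined to its block
```

Because the transform and clip are computed per block, the injected outlier only inflates the scale of its own block; the other blocks keep full use of the quantization range, which is the cross-block-propagation point made above.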
Standardized runtime governance frameworks could lower per-deployment compliance engineering costs and increase diffusion of agentic systems.
Theoretical argument that standardization reduces transaction/engineering costs; suggested market dynamics; no empirical implementation evidence.
A market will develop for third-party governance tools, auditors, and insurers providing policy evaluators, risk calibration, and certification services.
Economic argument and analogy to existing markets (governance-as-a-service, insurance); no empirical evidence presented.
Benchmarking time-sensitivity (via V-DyKnow) can inform procurement decisions: buyers should assess models on their ability to handle temporally sensitive information, not just static benchmarks.
Paper's recommendations and implications section arguing for procurement practices informed by V-DyKnow evaluations.
The authors provide an operational inventory and conversation-analysis tool (the 28-code instrument) that can be reused for monitoring and mitigation by researchers, firms, and regulators.
Paper includes the codebook and describes its application as a re-usable monitoring/analysis instrument; proposed adoption discussed in implications.
This is the first empirical, message-level study of verified chatbot-related psychological-harm cases (as opposed to speculative discussion).
Authors' positioning in paper; claim of novelty based on review of prior literature and their message-level, verified-case approach.
Empirical evaluation shows the new quasi-Newton and trust-region methods outperform baseline sequential methods and prior parallel Newton variants on speed, memory use, stability, and convergence across the tested tasks.
Reported experiments comparing the proposed algorithms to sequential baselines and prior parallel Newton approaches on representative tasks (RNNs, MCMC); qualitative summary claims faster runtimes, lower memory, and improved stability.
Trust-region methods provide stability and improved convergence reliability across tested tasks.
Empirical comparisons and algorithmic analysis showing trust-region-enabled schemes had fewer divergences and more reliable convergence than prior parallel Newton variants in the evaluated workloads.
Quasi-Newton methods deliver faster runtimes and lower memory use in experiments on RNN inference/training and MCMC chains.
Empirical experiments comparing quasi-Newton implementations to full Newton and sequential baselines on representative tasks (explicit tasks listed: RNN inference/training and MCMC chains); reported qualitative outcomes indicate speed and memory advantages.
Trust-region variants substantially improve stability and robustness, addressing divergence issues of earlier parallel Newton implementations.
Presentation of trust-region schemes adapting step sizes within the parallel Newton framework; theoretical motivation and empirical results showing reduced divergence/failure rates compared to prior parallel Newton variants.
Quasi-Newton variants are more computationally efficient and memory friendly than full Newton.
Complexity and memory analyses in the thesis plus empirical comparisons on representative tasks (RNNs, MCMC) showing lower runtime and memory usage for quasi-Newton implementations versus full Newton.
A Parallel Newton framework, implemented with a parallel associative scan, provides a natural way to parallelize computations across sequence length.
Algorithmic design combining Newton updates with a parallel associative-scan reduction; implementation details and experiments demonstrating the mechanics of the parallel scan across time steps.
Parallel Newton methods can reliably and efficiently parallelize sequential dynamical systems (e.g., RNNs, MCMC) across sequence length when reframed as nonlinear equation solves.
Thesis presents a reformulation of sequence computation as a global nonlinear system, develops parallel Newton-style algorithms, and reports empirical experiments on representative tasks (RNN inference/training and MCMC chains) comparing runtime and convergence against sequential baselines and prior parallel Newton variants.
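The reformulation behind these claims can be sketched on a toy recursion: treat x_t = f(x_{t-1}) as the stacked nonlinear system x_t - f(x_{t-1}) = 0 and apply Newton jointly over all time steps. This sketch uses a dense linear solve on a scalar toy system; the thesis's parallel associative scan and its quasi-Newton/trust-region variants are not reproduced here.

```python
import numpy as np

def f(x):
    # Toy scalar dynamics standing in for an RNN step or MCMC kernel.
    return np.tanh(0.9 * x) + 0.1

def solve_sequence_newton(x0, T, iters=None):
    """Reframe the recursion x_t = f(x_{t-1}) as the global nonlinear
    system r_t = x_t - f(x_{t-1}) = 0 and run Newton over all t jointly.
    Dense toy version only; the thesis parallelizes the linear solve
    with an associative scan."""
    iters = T if iters is None else iters
    x = np.zeros(T)                    # joint iterate over all time steps
    for _ in range(iters):
        prev = np.concatenate(([x0], x[:-1]))
        r = x - f(prev)                # residual of the stacked system
        # Bidiagonal Jacobian: dr_t/dx_t = 1, dr_t/dx_{t-1} = -f'(x_{t-1})
        fp = 0.9 * (1 - np.tanh(0.9 * prev) ** 2)
        J = np.eye(T)
        J[np.arange(1, T), np.arange(T - 1)] = -fp[1:]
        x = x - np.linalg.solve(J, r)
    return x

# The joint Newton solution reproduces the sequential rollout.
x0, T = 0.0, 50
x_par = solve_sequence_newton(x0, T)
x_seq, x = [], x0
for _ in range(T):
    x = f(x)
    x_seq.append(x)
assert np.allclose(x_par, np.array(x_seq), atol=1e-8)
```

The point of the reframing is that each Newton iteration touches all time steps at once, so the per-iteration work can be parallelized across sequence length instead of executed step by step.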
Adopting this approach shifts required skills and organizational roles away from lengthy parametric modeling toward data engineering, controller integration, and monitoring.
Authors' discussion of practical/organizational implications (qualitative); argument based on removal of model-building step and increased emphasis on data infrastructure and online operations.
DeePC outperforms baseline controllers (e.g., fixed-time and standard adaptive schemes) in the simulated experiments.
Comparative simulation experiments reported in the paper where DeePC-controlled signals achieve superior system-level metrics relative to baseline controllers.
The method was validated on a very large, high-fidelity microscopic closed-loop simulator of Zürich; the paper reports this as the largest such closed-loop urban-traffic simulation in the literature.
Authors' description of the experimental environment: city-scale microscopic simulator of Zürich with controller in the loop; explicit statement in the paper claiming it is the largest closed-loop urban-traffic simulation reported in the literature.
Regularization and the use of measured Hankel/data matrices make the method more robust to measurement noise and limited data.
Method description includes regularization terms in the DeePC optimization and use of Hankel matrices built from measured trajectories; simulation experiments show continued performance under noisy / limited-data conditions.
DeePC handles sparse or limited traffic measurements better than many machine-learning methods.
Claims in the paper supported by experiments and methodological notes: use of Hankel structures and regularization in DeePC to operate with limited/sparse sensing; comparative statements versus generic ML methods (qualitative and simulation evidence).
The DeePC-based approach avoids the expensive, time-consuming model-building step required by model-based control methods.
Methodological argument and demonstration that controller uses historical input–output trajectories directly rather than requiring separate parametric model identification; supported by simulation implementation that bypasses model identification.
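The Hankel-matrix construction that lets DeePC skip parametric identification can be sketched in a few lines. The toy AR(1) signal, window sizes, and ridge regularizer below are illustrative assumptions, not the paper's traffic setup or its exact DeePC formulation.

```python
import numpy as np

def hankel(w, L):
    """Depth-L Hankel matrix of a measured trajectory w: columns are
    overlapping length-L windows. No parametric model is ever fit."""
    T = len(w)
    return np.stack([w[i:i + L] for i in range(T - L + 1)], axis=1)

# Toy measured trajectory from an *unknown* system (illustrative only).
rng = np.random.default_rng(2)
y = np.zeros(200)
for t in range(1, 200):
    y[t] = 0.8 * y[t - 1] + 0.01 * rng.standard_normal()

Tp, Tf = 4, 6                      # past window / prediction horizon
H = hankel(y, Tp + Tf)
Hp, Hf = H[:Tp], H[Tp:]            # split rows into past / future parts

# Regularized data-driven prediction (a ridge term stands in for DeePC's
# regularizer): find g with Hp @ g ~= y_past, then predict Hf @ g.
y_past = y[-Tp:]
lam = 1e-3
g = np.linalg.solve(Hp.T @ Hp + lam * np.eye(Hp.shape[1]), Hp.T @ y_past)
y_pred = Hf @ g                    # data-driven forecast, no model identified
```

The regularization term is what the robustness claim above refers to: it keeps the combination weights g small, so measurement noise in the recorded columns is not amplified when data are limited.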
Modular strategy/execution architectures (like ESE) can materially improve the stability and efficiency of LLM-driven operational decision systems, increasing their attractiveness for deployment in retail, logistics, and supply-chain contexts.
Empirical improvements observed with ESE on RetailBench relative to monolithic baselines, coupled with analysis of deployment considerations and domain relevance discussed in the paper.
ESE improves operational stability and efficiency relative to baselines that do not separate strategy from execution.
Empirical comparisons reported in the experiments: eight contemporary LLMs evaluated on multiple RetailBench environments, with ESE compared against monolithic LLM agents and other baselines using metrics of operational stability (e.g., variance or frequency of catastrophic failures) and efficiency (e.g., cost/profit/fulfillment).
ESE enables interpretable and adaptive strategy updates intended to counteract error accumulation and environmental drift.
Design features of the strategy module (slower updates, interpretable strategy representation) and qualitative analysis in the paper linking these features to reduced error accumulation and strategy drift in experiments.
The model provides multi-mode reasoning: non-reasoning, Italian/English reasoning, and a 'turbo-reasoning' concise bullet-point mode intended for real‑time use cases.
Model functionality described by authors: the paper documents multiple operating modes including a concise 'turbo' mode for low-latency outputs. The summary lists these modes but does not provide quantitative latency/quality tradeoff metrics.
EngGPT2 uses far less training data (and, by implication, training compute) than some large models—reported as about 1/10–1/6 of the data used by larger dense models (e.g., vs. Qwen3 or Llama3).
Comparison of reported token counts: EngGPT2 at ~2.5T tokens vs. stated baselines (Qwen3 36T, Llama3 15T); authors assert training-data reduction in the 1/10–1/6 range. The paper reports token counts but does not provide matched compute/FLOP or training-time comparisons.
On benchmarks (MMLU-Pro, GSM8K, IFEval, HumanEval) EngGPT2 matches or is comparable to dense models in the 8B–16B parameter range.
Evaluation reported on the named benchmarks; the paper states comparable benchmark performance to dense 8B–16B models. The summary does not include exact scores, standard deviations, prompt engineering details, dataset overlap checks, or sample sizes per benchmark.
Model-merging and targeted continual pre-training were used to amplify limited compute and improve performance without full from-scratch pre-training.
Paper describes using model-merging and targeted continual pre-training to leverage existing strong weights and inject language/domain data efficiently.
Prioritizing data quality over raw scale (curated 120B tokens instead of maximizing token counts) produced better Arabic and cross-lingual performance for the resource budget used.
Paper emphasizes a 'data quality over brute-force scale' strategy and reports benchmark improvements from the curated corpus and targeted training; the causal link is asserted via these results.
Those benchmark gains were achieved with roughly one-eighth of the pre-training tokens used for Fanar 1.0 (about 8× fewer).
Paper states the approach used approximately 1/8th the pre-training tokens of Fanar 1.0 while improving benchmarks; exact token counts for Fanar 1.0 not provided in the summary.
Fanar-27B reports benchmark gains relative to Fanar 1.0: Arabic knowledge +9.1 points, language ability +7.3 points, dialect handling +3.5 points, and English capability +7.6 points.
Paper reports these specific numeric benchmark improvements across Arabic knowledge, general language ability, dialects, and English capability; evaluation suite names, sample sizes, and statistical details are not specified in the summary.
Using entailment-based verifiers can reduce inference compute by more than two orders of magnitude, lowering the marginal compute cost per query compared with LLM-based scorers.
Measured FLOP comparisons between lightweight entailment models and LLM-based scoring in the paper, with reported >100× FLOP reduction.
Lightweight entailment-based verifiers match or exceed LLM-based confidence scorers for scoring atomic claims while consuming >100× fewer FLOPs.
Empirical comparisons in the paper between entailment (NLI) models and LLM-based scoring approaches across the evaluated datasets, with measured FLOPs showing more than two orders of magnitude lower compute for the entailment models alongside equal-or-better scoring performance.
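The claimed >100× gap is easy to sanity-check with the common forward-pass estimate of about 2 FLOPs per parameter per token. The model sizes and token counts below are illustrative assumptions, not the paper's measured values.

```python
# Back-of-envelope check of the >100x claim using the ~2 * params * tokens
# forward-pass FLOP approximation. All sizes are illustrative assumptions.
def forward_flops(params, tokens):
    return 2 * params * tokens

nli_verifier = forward_flops(params=350e6, tokens=256)   # small NLI encoder
llm_scorer = forward_flops(params=70e9, tokens=512)      # LLM-based judge

ratio = llm_scorer / nli_verifier
print(f"LLM scorer / NLI verifier FLOPs: {ratio:.0f}x")  # 400x here
```

Even granting the LLM judge no extra prompt overhead, a ~350M-parameter verifier against a ~70B-parameter scorer lands comfortably past two orders of magnitude, consistent with the reported >100× figure.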
Pretraining corpora must be broadened across temporal scales and domains (including high-frequency domains) to improve TSFM generalization.
Recommendation follows from observed poor transfer and fine-tuning results; paper argues for inclusion of high-frequency, domain-diverse data in pretraining. This is prescriptive and driven by the benchmarking observations rather than an experiment demonstrating improved outcomes after broadened pretraining.
FederatedFactory recovers centralized-model performance without pooling raw data or relying on a central dataset, thereby weakening dependence on foundation-model vendors and their pretrained priors.
Empirical claims that federated results match centralized upper bounds on tested datasets and methodological statement that no external pretrained priors are required; the economic interpretation is drawn from these empirical and methodological properties.
FederatedFactory enables exact modular unlearning: deterministic deletion of a client's generative module exactly removes that client's contribution to synthesized datasets.
Design claim in the paper: generative modules are modular assets, and deleting a module deterministically prevents its use when synthesizing the balanced dataset; paper asserts exact modular unlearning and reports it as a property of the method. (No formal auditing metrics or proofs provided in the summary.)
Downstream discriminative models trained on the synthesized, balanced datasets avoid conflicting optimization trajectories that cause collapse in standard federated learning under mutually exclusive labels.
Methodological reasoning (balanced synthesized training data removes label heterogeneity across clients) plus empirical demonstrations where standard FL collapses under mutual exclusivity (e.g., CIFAR baseline) and FederatedFactory recovers performance.
Across diverse medical imagery benchmarks (including MedMNIST and ISIC2019), FederatedFactory matches centralized upper-bound performance.
Empirical comparisons reported in the paper: FederatedFactory results are compared against a centralized upper bound on the same datasets and reported to be matched. (Details of which datasets and exact numeric comparisons beyond ISIC2019 are not enumerated in the summary.)
FederatedFactory restores ISIC2019 performance to AUROC = 90.57% under the tested regime.
Empirical experiment reported on ISIC2019 (dermatology images); paper reports AUROC value of 90.57% for FederatedFactory. (Exact train/test splits and client partitioning not specified in the summary.)
FederatedFactory operates without relying on external pretrained foundation models (zero-dependency).
Paper explicitly states the framework does not depend on pretrained foundation models; experiments are reported without using external pretraining (datasets: MedMNIST suite, ISIC2019, CIFAR-10).
By synthesizing class-balanced datasets locally from exchanged generative modules, FederatedFactory eliminates gradient conflict among clients' discriminative updates.
Mechanistic argument in the paper (training discriminative models on locally synthesized, balanced data avoids heterogeneity-induced conflicting gradients) supported by empirical recovery of performance in experiments where baselines collapse under label heterogeneity.
FederatedFactory reframes federated learning by exchanging generative modules (priors) instead of exchanging discriminative model weights.
Methodological description in the paper: design of FederatedFactory where each client trains/contributes generative modules (class-specific priors) and shares those modules rather than classifier weights. Evidence is the described protocol and experiments that implement that protocol on the reported datasets.
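The exchange-priors-not-weights protocol described in these claims can be sketched with stand-in "generative modules": here per-class Gaussian statistics replace the paper's actual generators, and all sizes and class assignments are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# Each client holds data for mutually exclusive labels and contributes a
# class-conditional "generative module", here just per-class Gaussian
# statistics standing in for the paper's generators.
def fit_module(X):
    return X.mean(axis=0), X.std(axis=0) + 1e-6

client_modules = {
    # client A only ever sees class 0, client B only class 1
    "A": {0: fit_module(rng.normal(0.0, 1.0, size=(500, 8)))},
    "B": {1: fit_module(rng.normal(3.0, 1.0, size=(500, 8)))},
}

def synthesize_balanced(modules, n_per_class):
    """Pool exchanged modules and sample a class-balanced dataset locally.
    Deleting a client's entry from `modules` removes its contribution
    exactly, which is the modular-unlearning property claimed above."""
    X, y = [], []
    for client in modules.values():
        for label, (mu, sd) in client.items():
            X.append(rng.normal(mu, sd, size=(n_per_class, len(mu))))
            y.append(np.full(n_per_class, label))
    return np.concatenate(X), np.concatenate(y)

X, y = synthesize_balanced(client_modules, n_per_class=100)
# Every client can now train its discriminative model on balanced labels,
# sidestepping the conflicting gradients of mutually exclusive local data.
assert (y == 0).sum() == (y == 1).sum()
```

Only the small module parameters cross the network; raw data never leaves a client, and dropping `client_modules["A"]` before synthesis deterministically removes client A's influence on every downstream dataset.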
Practical recommendation: buyers and evaluators should demand contamination audits (triangulating lexical, paraphrase, and behavioral probes) and report both raw and contamination-adjusted scores, especially for high-stakes use.
Policy/recommendation section in paper motivated by experimental findings; recommended procedures follow the paper's triage methods (Experiments 1–3) applied to evaluations.
Triangulation across methods reduces false positives and false negatives inherent to any single contamination-detection approach.
Methodological claim supported by design: use of lexical matching, paraphrase diagnostics, and behavioral probes to complement one another and offset single-method blind spots (as reported in robustness section).
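The complementary-probes logic can be sketched as a simple vote over boolean detector outputs; the 2-of-3 threshold and probe names are illustrative choices, not the paper's exact procedure.

```python
def triangulate(lexical_hit, paraphrase_hit, behavioral_hit, min_votes=2):
    """Flag an item as contaminated only when at least `min_votes` of the
    three probes agree, trading each method's blind spots against the
    others'. The 2-of-3 threshold is an assumed choice, not the paper's."""
    return (lexical_hit + paraphrase_hit + behavioral_hit) >= min_votes

# A verbatim match alone (a possible false positive from common boilerplate)
# is not enough; agreement across independent probes is.
assert triangulate(True, False, False) is False
assert triangulate(True, True, False) is True
```

Raising `min_votes` trades false positives for false negatives, which is why the triangulated flag is less biased than any single probe used on its own.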
Estimated performance uplift from identified contamination ranges from +0.030 to +0.054 absolute accuracy points by category.
Experiment 1 translated contamination prevalence into estimated accuracy gains by simulating model behavior on known-exposed items (method described in paper; category-level simulations yield +0.030 to +0.054 point uplifts).
There is an economic case for funding access to quantum hardware, standardized benchmarking infrastructure, and shared datasets to reduce deployment uncertainty and enable credible claims of usefulness.
Policy and R&D recommendation inferred from the review's finding of heterogeneous benchmarking and missing hardware tests; argued as a mitigation to the identified deployment gap.
Most of the surveyed systems address semantic correctness (Layer 2) to some degree.
The review's application of Layer 2 found that a majority of the 13 systems include semantic-level evaluations (e.g., unitary equivalence tests, functional tests, simulator-based correctness checks), though the depth varied.
Across extensive simulations with realistic latency modeling, RARRL consistently yields higher task success, lower execution latency, and better robustness under varied resource budgets and task complexities.
Paper summarizes results from extensive experiments (including ablations and comparisons to baselines) claiming consistent improvements across varied budgets and task complexities; metrics reported include task success rate, execution latency, and robustness.