The Commonplace

Evidence (5539 claims)

Adoption: 5539 claims
Productivity: 4793 claims
Governance: 4333 claims
Human-AI Collaboration: 3326 claims
Labor Markets: 2657 claims
Innovation: 2510 claims
Org Design: 2469 claims
Skills & Training: 2017 claims
Inequality: 1378 claims

Evidence Matrix

Claim counts by outcome category and direction of finding.

Outcome Positive Negative Mixed Null Total
Other 402 112 67 480 1076
Governance & Regulation 402 192 122 62 790
Research Productivity 249 98 34 311 697
Organizational Efficiency 395 95 70 40 603
Technology Adoption Rate 321 126 73 39 564
Firm Productivity 306 39 70 12 432
Output Quality 256 66 25 28 375
AI Safety & Ethics 116 177 44 24 363
Market Structure 107 128 85 14 339
Decision Quality 177 76 38 20 315
Fiscal & Macroeconomic 89 58 33 22 209
Employment Level 77 34 80 9 202
Skill Acquisition 92 33 40 9 174
Innovation Output 120 12 23 12 168
Firm Revenue 98 34 22 154
Consumer Welfare 73 31 37 7 148
Task Allocation 84 16 33 7 140
Inequality Measures 25 77 32 5 139
Regulatory Compliance 54 63 13 3 133
Error Rate 44 51 6 101
Task Completion Time 88 5 4 3 100
Training Effectiveness 58 12 12 16 99
Worker Satisfaction 47 32 11 7 97
Wages & Compensation 53 15 20 5 93
Team Performance 47 12 15 7 82
Automation Exposure 24 22 9 6 62
Job Displacement 6 38 13 57
Hiring & Recruitment 41 4 6 3 54
Developer Productivity 34 4 3 1 42
Social Protection 22 10 6 2 40
Creative Output 16 7 5 1 29
Labor Share of Income 12 5 9 26
Skill Obsolescence 3 20 2 25
Worker Turnover 10 12 3 25
Active topic filter: Adoption
Ablation analyses show that each BATQuant component (block-wise transforms, orthogonality relaxation, GPK decomposition, block-wise clipping) contributes to robustness and efficiency.
Reported ablation studies isolating components and measuring their individual impact on performance and overhead in the paper's experiments (exact effect sizes and per-component numbers not given in the summary).
medium positive BATQuant: Outlier-resilient MXFP4 Quantization via Learnable... Task performance (accuracy/quality) and efficiency metrics (storage/runtime) wit...
Block-wise learnable clipping suppresses residual outliers locally and contributes to robustness under aggressive MXFP4 quantization.
Method description and ablation experiments in the paper showing incremental improvement when adding block-wise learnable clipping layers versus not using them; improvements measured on benchmark metrics post-quantization.
medium positive BATQuant: Outlier-resilient MXFP4 Quantization via Learnable... Residual outlier statistics and downstream task performance after applying learn...
Global and Private Kronecker (GPK) decomposition compresses transform parameters, keeping storage and runtime overhead low compared to dense per-block transforms.
Algorithmic contribution described in the paper with reported comparisons (storage/runtime overhead) versus dense per-block transform parameterizations; supported by experimental/implementation measurements (specific memory/runtime numbers not provided in the summary).
medium positive BATQuant: Outlier-resilient MXFP4 Quantization via Learnable... Storage footprint and runtime overhead of transform parameterization (memory and...
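The summary gives no exact GPK parameterization, but the storage argument can be sketched with a toy Kronecker factorization (all sizes hypothetical): a global p×p factor shared across blocks plus a small private q×q factor per block replaces a dense d×d transform per block.

```python
import numpy as np

# Hypothetical sizes: 32-wide MXFP blocks, 1024 blocks in a layer.
d, p, q, n_blocks = 32, 8, 4, 1024

# Dense baseline: one d x d transform per block.
dense_params = n_blocks * d * d

# GPK-style sharing (sketch): one global p x p factor for all blocks
# plus a private q x q factor per block; kron(A, B_i) rebuilds each
# block's full d x d transform on the fly.
gpk_params = p * p + n_blocks * q * q

rng = np.random.default_rng(0)
A = rng.standard_normal((p, p))          # global factor (shared)
B = rng.standard_normal((q, q))          # one block's private factor
T = np.kron(A, B)                        # full transform for that block
assert T.shape == (d, d)

print(dense_params, gpk_params)          # 1048576 16448 (~64x fewer)
```

The qualitative point survives any choice of p and q with p·q = d: parameter count drops from n·d² to p² + n·q², which is the low storage/runtime overhead the claim describes.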
Relaxing orthogonality constraints on transforms (i.e., using non-strictly-orthogonal transforms) improves distribution shaping and better fits activations to the limited MXFP quantization range.
Design rationale and ablation studies reported in the paper showing that removing strict orthogonality yields better quantization fit and improved task metrics versus enforced orthogonal transforms.
medium positive BATQuant: Outlier-resilient MXFP4 Quantization via Learnable... Quantization fit (activation distribution shape) and resulting task accuracy/qua...
Aligning transforms to MXFP block granularity using block-wise affine transformations prevents cross-block outlier propagation and avoids the severe collapse seen with rotation-based integer quantization techniques.
Methodological design plus ablation/empirical results in the paper showing improved activation statistics and preserved model accuracy when using block-wise affine transforms aligned to MXFP blocks versus global rotations.
medium positive BATQuant: Outlier-resilient MXFP4 Quantization via Learnable... Activation distribution (outlier propagation) and downstream task performance / ...
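The real MXFP4 format uses block-shared exponents; as a simplified stand-in, the toy quantizer below shows why per-block affine shaping plus clipping keeps a single outlier from inflating the quantization step of every other element, which is the failure mode the claim attributes to global/rotation-based schemes. All constants are illustrative.

```python
import numpy as np

def fake_quant(x, levels=8):
    """Toy symmetric quantizer (simplified stand-in for MXFP4):
    one shared step size per call, set by the largest magnitude."""
    s = np.max(np.abs(x)) / levels
    return np.round(x / s) * s

def blockwise_affine_quant(x, block=32, clip=3.0):
    """Per-block affine shaping + clipping (sketch). Each block is
    standardized, clipped to suppress residual outliers locally,
    quantized, then mapped back. Because shaping is per block, an
    outlier cannot inflate the step size of neighboring blocks."""
    out = np.empty_like(x)
    for i in range(0, len(x), block):
        b = x[i:i + block]
        mu, sd = b.mean(), b.std() + 1e-8
        z = np.clip((b - mu) / sd, -clip, clip)
        out[i:i + block] = fake_quant(z) * sd + mu
    return out

rng = np.random.default_rng(1)
x = rng.standard_normal(256)
x[7] = 50.0                               # one outlier in block 0
# Error on elements far from the outlier (blocks 2 and up):
err_block = np.abs(blockwise_affine_quant(x) - x)[64:].mean()
err_global = np.abs(fake_quant(x) - x)[64:].mean()
print(err_block < err_global)             # True: damage stays local
```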
Standardized runtime governance frameworks could lower per-deployment compliance engineering costs and increase diffusion of agentic systems.
Theoretical argument that standardization reduces transaction/engineering costs; suggested market dynamics; no empirical implementation evidence.
medium positive Runtime Governance for AI Agents: Policies on Paths per-deployment compliance cost and diffusion rate (adoption)
A market will develop for third-party governance tools, auditors, and insurers providing policy evaluators, risk calibration, and certification services.
Economic argument and analogy to existing markets (governance-as-a-service, insurance); no empirical evidence presented.
medium positive Runtime Governance for AI Agents: Policies on Paths emergence of third-party governance services (market development; presence/size ...
Benchmarking time-sensitivity (via V-DyKnow) can inform procurement decisions: buyers should assess models on their ability to handle temporally sensitive information, not just static benchmarks.
Paper's recommendations and implications section arguing for procurement practices informed by V-DyKnow evaluations.
medium positive V-DyKnow: A Dynamic Benchmark for Time-Sensitive Knowledge i... usefulness of benchmark for procurement decision criteria (qualitative)
The authors provide an operational inventory and conversation-analysis tool (the 28-code instrument) that can be reused for monitoring and mitigation by researchers, firms, and regulators.
Paper includes the codebook and describes its application as a re-usable monitoring/analysis instrument; proposed adoption discussed in implications.
medium positive Characterizing Delusional Spirals through Human-LLM Chat Log... availability and intended reusability of the 28-code inventory and analysis meth...
This is the first empirical, message-level study of verified chatbot-related psychological-harm cases (as opposed to speculative discussion).
Authors' positioning in paper; claim of novelty based on review of prior literature and their message-level, verified-case approach.
medium positive Characterizing Delusional Spirals through Human-LLM Chat Log... novelty / contribution described (message-level empirical analysis of verified h...
Empirical evaluation shows the new quasi-Newton and trust-region methods outperform baseline sequential methods and prior parallel Newton variants in a combination of speed, memory, stability, and convergence on the tested tasks.
Reported experiments comparing the proposed algorithms to sequential baselines and prior parallel Newton approaches on representative tasks (RNNs, MCMC); qualitative summary claims faster runtimes, lower memory, and improved stability.
medium positive Unifying Optimization and Dynamics to Parallelize Sequential... multi-metric performance: runtime, memory, stability, convergence on benchmark t...
Trust-region methods provide stability and improved convergence reliability across tested tasks.
Empirical comparisons and algorithmic analysis showing trust-region-enabled schemes had fewer divergences and more reliable convergence than prior parallel Newton variants in the evaluated workloads.
medium positive Unifying Optimization and Dynamics to Parallelize Sequential... stability (failure/divergence frequency) and convergence reliability in experime...
Quasi-Newton methods deliver faster runtimes and lower memory use in experiments on RNN inference/training and MCMC chains.
Empirical experiments comparing quasi-Newton implementations to full Newton and sequential baselines on representative tasks (explicit tasks listed: RNN inference/training and MCMC chains); reported qualitative outcomes indicate speed and memory advantages.
medium positive Unifying Optimization and Dynamics to Parallelize Sequential... wall-clock runtime and peak memory usage in experimental tasks
Trust-region variants substantially improve stability and robustness, addressing divergence issues of earlier parallel Newton implementations.
Presentation of trust-region schemes adapting step sizes within the parallel Newton framework; theoretical motivation and empirical results showing reduced divergence/failure rates compared to prior parallel Newton variants.
medium positive Unifying Optimization and Dynamics to Parallelize Sequential... stability metrics (divergence/failure rate), convergence reliability
Quasi-Newton variants are more computationally efficient and memory friendly than full Newton.
Complexity and memory analyses in the thesis plus empirical comparisons on representative tasks (RNNs, MCMC) showing lower runtime and memory usage for quasi-Newton implementations versus full Newton.
medium positive Unifying Optimization and Dynamics to Parallelize Sequential... wall-clock runtime and memory consumption
A Parallel Newton framework, implemented with a parallel associative scan, provides a natural way to parallelize computations across sequence length.
Algorithmic design combining Newton updates with a parallel associative-scan reduction; implementation details and experiments demonstrating the mechanics of the parallel scan across time steps.
medium positive Unifying Optimization and Dynamics to Parallelize Sequential... ability to perform Newton-style updates in parallel across time (scalability / r...
Parallel Newton methods can reliably and efficiently parallelize sequential dynamical systems (e.g., RNNs, MCMC) across sequence length when reframed as nonlinear equation solves.
Thesis presents a reformulation of sequence computation as a global nonlinear system, develops parallel Newton-style algorithms, and reports empirical experiments on representative tasks (RNN inference/training and MCMC chains) comparing runtime and convergence against sequential baselines and prior parallel Newton variants.
medium positive Unifying Optimization and Dynamics to Parallelize Sequential... parallelization speedup / runtime and convergence behavior across sequence lengt...
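The thesis's algorithms are not reproduced in the summary; the sketch below shows only the core mechanic the claims above share: linearizing the recurrence x_{t+1} = f(x_t) around the current trajectory yields an affine recurrence whose per-step maps compose associatively, so each Newton sweep can run as a (parallelizable) prefix scan. The toy system f(x) = tanh(0.9x) is hypothetical, and the scan is executed serially here.

```python
from math import tanh

A = 0.9  # coefficient of the toy recurrence x_{t+1} = tanh(A * x_t)

def combine(f, g):
    """Compose affine maps (a, b): x -> a*x + b, applying f first.
    Composition is associative, which is what licenses a parallel
    prefix scan over time steps."""
    a1, b1 = f
    a2, b2 = g
    return (a2 * a1, a2 * b1 + b2)

def newton_sweep(x):
    """One Newton sweep: linearize f at the current trajectory, then
    solve the resulting affine recurrence by scanning the maps."""
    maps = []
    for t in range(len(x) - 1):
        J = A * (1.0 - tanh(A * x[t]) ** 2)          # f'(x_t)
        maps.append((J, tanh(A * x[t]) - J * x[t]))  # x -> J*x + c
    new, acc = [x[0]], (1.0, 0.0)
    for m in maps:                       # serial stand-in for a
        acc = combine(acc, m)            # parallel prefix scan
        new.append(acc[0] * x[0] + acc[1])
    return new

# Solve the whole trajectory from a zero initial guess.
x = [0.5] + [0.0] * 15
for _ in range(20):
    x = newton_sweep(x)

ref = [0.5]                              # plain sequential rollout
for _ in range(15):
    ref.append(tanh(A * ref[-1]))
print(max(abs(u - v) for u, v in zip(x, ref)))   # ~0: same trajectory
```

Each sweep costs O(T) work but, because `combine` is associative, parallelizes to O(log T) depth across the sequence, which is the source of the claimed speedups over strictly sequential evaluation.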
Adopting this approach shifts required skills and organizational roles away from lengthy parametric modeling toward data engineering, controller integration, and monitoring.
Authors' discussion of practical/organizational implications (qualitative); argument based on removal of model-building step and increased emphasis on data infrastructure and online operations.
medium positive Data-driven generalized perimeter control: Zürich case study changes in required skills/organizational roles (qualitative workforce compositi...
DeePC outperforms baseline controllers (e.g., fixed-time and standard adaptive schemes) in the simulated experiments.
Comparative simulation experiments reported in the paper where DeePC-controlled signals achieve superior system-level metrics relative to baseline controllers.
medium positive Data-driven generalized perimeter control: Zürich case study system-level outcomes (total travel time, CO2 emissions) compared across control...
The method was validated on a very large, high-fidelity microscopic closed-loop simulator of Zürich; the paper reports this as the largest such closed-loop urban-traffic simulation in the literature.
Authors' description of the experimental environment: city-scale microscopic simulator of Zürich with controller in the loop; explicit statement in the paper claiming it is the largest closed-loop urban-traffic simulation reported in the literature.
medium positive Data-driven generalized perimeter control: Zürich case study scale of validation (city-scale microscopic closed-loop simulation)
Regularization and the use of measured Hankel/data matrices make the method more robust to measurement noise and limited data.
Method description includes regularization terms in the DeePC optimization and use of Hankel matrices built from measured trajectories; simulation experiments show continued performance under noisy / limited-data conditions.
medium positive Data-driven generalized perimeter control: Zürich case study robustness to measurement noise and limited data (performance degradation metric...
DeePC handles sparse or limited traffic measurements better than many machine-learning methods.
Claims in the paper supported by experiments and methodological notes: use of Hankel structures and regularization in DeePC to operate with limited/sparse sensing; comparative statements versus generic ML methods (qualitative and simulation evidence).
medium positive Data-driven generalized perimeter control: Zürich case study controller performance (e.g., travel time, emissions) under sparse sensing / lim...
The DeePC-based approach avoids the expensive, time-consuming model-building step required by model-based control methods.
Methodological argument and demonstration that controller uses historical input–output trajectories directly rather than requiring separate parametric model identification; supported by simulation implementation that bypasses model identification.
medium positive Data-driven generalized perimeter control: Zürich case study need for explicit parametric model identification (development time/effort proxy...
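The paper's precise DeePC formulation is not in the summary, but its central data object, a Hankel matrix built directly from measured trajectories, can be sketched as follows (signal and sizes are hypothetical):

```python
import numpy as np

def hankel(w, L):
    """Depth-L Hankel matrix of a measured trajectory w: column j is
    the window w[j], ..., w[j+L-1]. DeePC constrains planned behavior
    to the column span of such matrices, so no parametric traffic
    model is ever identified."""
    return np.column_stack([w[j:j + L] for j in range(len(w) - L + 1)])

# Hypothetical recorded signal (e.g., a perimeter metering rate);
# values are illustrative only.
rng = np.random.default_rng(0)
u = rng.uniform(0.0, 1.0, size=40)
H = hankel(u, L=8)
print(H.shape)                     # (8, 33): L rows, T - L + 1 columns

# Conceptually, a regularized DeePC step then solves
#   min_g ||H_f @ g - target||^2 + lam * ||g||^2
# where g mixes recorded windows; the lam term is the regularization
# credited above with robustness to noise and limited data.
```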
Modular strategy/execution architectures (like ESE) can materially improve the stability and efficiency of LLM-driven operational decision systems, increasing their attractiveness for deployment in retail, logistics, and supply-chain contexts.
Empirical improvements observed with ESE on RetailBench relative to monolithic baselines, coupled with analysis of deployment considerations and domain relevance discussed in the paper.
medium positive RetailBench: Evaluating Long-Horizon Autonomous Decision-Mak... operational stability and efficiency improvements as proxies for deployment attr...
ESE improves operational stability and efficiency relative to baselines that do not separate strategy from execution.
Empirical comparisons reported in the experiments: eight contemporary LLMs evaluated on multiple RetailBench environments, with ESE compared against monolithic LLM agents and other baselines using metrics of operational stability (e.g., variance or frequency of catastrophic failures) and efficiency (e.g., cost/profit/fulfillment).
medium positive RetailBench: Evaluating Long-Horizon Autonomous Decision-Mak... operational stability (variance/frequency of catastrophic failures) and efficien...
ESE enables interpretable and adaptive strategy updates intended to counteract error accumulation and environmental drift.
Design features of the strategy module (slower updates, interpretable strategy representation) and qualitative analysis in the paper linking these features to reduced error accumulation and strategy drift in experiments.
medium positive RetailBench: Evaluating Long-Horizon Autonomous Decision-Mak... interpretability of strategy updates and reduction in error accumulation/strateg...
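The summary does not specify ESE's interfaces; the toy class below (all names and rules hypothetical) only illustrates the architectural split being claimed: a fast execution path that acts under a fixed, human-readable strategy, and a slow path that revises and logs that strategy periodically from aggregated evidence.

```python
class ESEStyleAgent:
    """Toy strategy/execution split (names and thresholds invented).

    The strategy is a small, human-readable record that changes only
    on a slow cadence; the executor acts under it every step. This
    separation is what the claim credits with limiting per-step error
    accumulation and strategy drift."""

    def __init__(self, update_every=10):
        self.update_every = update_every
        self.strategy = {"reorder_point": 20, "order_qty": 40}
        self.log = []                  # interpretable update trail

    def execute(self, inventory):
        # Fast path: deterministic action under the current strategy.
        if inventory < self.strategy["reorder_point"]:
            return self.strategy["order_qty"]
        return 0

    def maybe_update_strategy(self, step, recent_stockouts):
        # Slow path: revise the strategy only periodically, from
        # aggregated evidence, and log the change for auditability.
        if step % self.update_every == 0 and recent_stockouts > 2:
            self.strategy["reorder_point"] += 5
            self.log.append((step, dict(self.strategy)))

agent = ESEStyleAgent()
print(agent.execute(10), agent.execute(30))   # 40 0
agent.maybe_update_strategy(step=10, recent_stockouts=4)
print(agent.strategy["reorder_point"])        # 25
```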
The model provides multi-mode reasoning: non-reasoning, Italian/English reasoning, and a 'turbo-reasoning' concise bullet-point mode intended for real-time use cases.
Model functionality described by authors: the paper documents multiple operating modes including a concise 'turbo' mode for low-latency outputs. The summary lists these modes but does not provide quantitative latency/quality tradeoff metrics.
medium positive EngGPT2: Sovereign, Efficient and Open Intelligence existence of distinct inference modes and their intended behavioral differences ...
EngGPT2 uses far less training data (and, by implication, training compute) than some large models, reported as about 1/10–1/6 of the data used by larger dense models (e.g., vs. Qwen3 or Llama3).
Comparison of reported token counts: EngGPT2 at ~2.5T tokens vs. stated baselines (Qwen3 36T, Llama3 15T); authors assert training-data reduction in the 1/10–1/6 range. The paper reports token counts but does not provide matched compute/FLOP or training-time comparisons.
medium positive EngGPT2: Sovereign, Efficient and Open Intelligence relative training-data volume (tokens) compared to named baseline models
On benchmarks (MMLU-Pro, GSM8K, IFEval, HumanEval) EngGPT2 matches or is comparable to dense models in the 8B–16B parameter range.
Evaluation reported on the named benchmarks; the paper states comparable benchmark performance to dense 8B–16B models. The summary does not include exact scores, standard deviations, prompt engineering details, dataset overlap checks, or sample sizes per benchmark.
medium positive EngGPT2: Sovereign, Efficient and Open Intelligence benchmark performance metrics (accuracy/score) on MMLU-Pro, GSM8K, IFEval, Human...
Model-merging and targeted continual pre-training were used to amplify limited compute and improve performance without full from-scratch pre-training.
Paper describes using model-merging and targeted continual pre-training to leverage existing strong weights and inject language/domain data efficiently.
medium positive Fanar 2.0: Arabic Generative AI Stack performance improvement attributable to model-merging/continual pre-training met...
Prioritizing data quality over raw scale (curated 120B tokens instead of maximizing token counts) produced better Arabic and cross-lingual performance for the resource budget used.
Paper emphasizes a 'data quality over brute-force scale' strategy and reports benchmark improvements from the curated corpus and targeted training; the causal link is asserted via these results.
medium positive Fanar 2.0: Arabic Generative AI Stack model performance relative to data curation strategy
Those benchmark gains were achieved using roughly 1/8th the pre-training tokens of Fanar 1.0 (i.e., about 8× fewer pre-training tokens).
Paper states the approach used approximately 1/8th the pre-training tokens of Fanar 1.0 while improving benchmarks; exact token counts for Fanar 1.0 not provided in the summary.
medium positive Fanar 2.0: Arabic Generative AI Stack relative pre-training token count (Fanar 2.0 vs Fanar 1.0)
Fanar-27B reports benchmark gains relative to Fanar 1.0: Arabic knowledge +9.1 points, language ability +7.3 points, dialect handling +3.5 points, and English capability +7.6 points.
Paper reports these specific numeric benchmark improvements across Arabic knowledge, general language ability, dialects, and English capability; evaluation suite names, sample sizes, and statistical details are not specified in the summary.
medium positive Fanar 2.0: Arabic Generative AI Stack benchmark scores (Arabic knowledge, language ability, dialect handling, English ...
Using entailment-based verifiers can reduce inference compute by over two orders of magnitude, lowering the marginal cost per query compared to LLM-based scorers.
Measured FLOP comparisons between lightweight entailment models and LLM-based scoring in the paper, with reported >100× FLOP reduction.
medium positive Is Conformal Factuality for RAG-based LLMs Robust? Novel Met... compute cost (FLOPs) per verification/query
Lightweight entailment-based verifiers match or exceed LLM-based confidence scorers for scoring atomic claims while consuming >100× fewer FLOPs.
Empirical comparisons in the paper between entailment (NLI) models and LLM-based scoring approaches across the evaluated datasets, with measured FLOPs showing more than two orders of magnitude lower compute for the entailment models alongside equal-or-better scoring performance.
medium positive Is Conformal Factuality for RAG-based LLMs Robust? Novel Met... claim-scoring accuracy/performance and compute cost (FLOPs)
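The >100× figure can be sanity-checked with back-of-envelope FLOP accounting, using the common ~2 FLOPs-per-parameter-per-token estimate for a forward pass; the model sizes below are hypothetical stand-ins, not the paper's.

```python
def forward_flops(params, tokens):
    """Common rough estimate: ~2 FLOPs per parameter per token."""
    return 2.0 * params * tokens

# Hypothetical stand-ins: a ~400M-parameter NLI verifier vs. a 70B
# LLM judge, each scoring a ~300-token claim + evidence input.
nli = forward_flops(params=400e6, tokens=300)
llm = forward_flops(params=70e9, tokens=300)
print(llm / nli)   # 175.0 -- consistent in spirit with ">100x fewer FLOPs"
```

Since the token count cancels, the ratio is just the parameter ratio: any verifier below ~1/100th the judge's size clears the two-orders-of-magnitude bar.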
Pretraining corpora must be broadened across temporal scales and domains (including high-frequency domains) to improve TSFM generalization.
Recommendation follows from observed poor transfer and fine-tuning results; paper argues for inclusion of high-frequency, domain-diverse data in pretraining. This is prescriptive and driven by the benchmarking observations rather than an experiment demonstrating improved outcomes after broadened pretraining.
medium positive Bridging the High-Frequency Data Gap: A Millisecond-Resoluti... expected improvement in model generalization (forecasting performance) if pretra...
FederatedFactory recovers centralized-model performance without pooling raw data or relying on a central dataset, thereby weakening dependence on foundation-model vendors and their pretrained priors.
Empirical claims that federated results match centralized upper bounds on tested datasets and methodological statement that no external pretrained priors are required; the economic interpretation is drawn from these empirical and methodological properties.
medium positive FederatedFactory: Generative One-Shot Learning for Extremely... performance gap vs. centralized model; dependence on external pretrained priors
FederatedFactory enables exact modular unlearning: deterministic deletion of a client's generative module exactly removes that client's contribution to synthesized datasets.
Design claim in the paper: generative modules are modular assets, and deleting a module deterministically prevents its use when synthesizing the balanced dataset; paper asserts exact modular unlearning and reports it as a property of the method. (No formal auditing metrics or proofs provided in the summary.)
medium positive FederatedFactory: Generative One-Shot Learning for Extremely... unlearning correctness (module-level removal effect on synthesized dataset compo...
Downstream discriminative models trained on the synthesized, balanced datasets avoid conflicting optimization trajectories that cause collapse in standard federated learning under mutually exclusive labels.
Methodological reasoning (balanced synthesized training data removes label heterogeneity across clients) plus empirical demonstrations where standard FL collapses under mutual exclusivity (e.g., CIFAR baseline) and FederatedFactory recovers performance.
medium positive FederatedFactory: Generative One-Shot Learning for Extremely... optimization stability / avoidance of collapsed training (measured indirectly vi...
Across diverse medical imagery benchmarks (including MedMNIST and ISIC2019), FederatedFactory matches centralized upper-bound performance.
Empirical comparisons reported in the paper: FederatedFactory results are compared against a centralized upper bound on the same datasets and reported to be matched. (Details of which datasets and exact numeric comparisons beyond ISIC2019 are not enumerated in the summary.)
medium positive FederatedFactory: Generative One-Shot Learning for Extremely... classification performance vs. centralized upper bound (accuracy/AUROC)
FederatedFactory restores ISIC2019 performance to AUROC = 90.57% under the tested regime.
Empirical experiment reported on ISIC2019 (dermatology images); paper reports AUROC value of 90.57% for FederatedFactory. (Exact train/test splits and client partitioning not specified in the summary.)
FederatedFactory operates without relying on external pretrained foundation models (zero-dependency).
Paper explicitly states the framework does not depend on pretrained foundation models; experiments are reported without using external pretraining (datasets: MedMNIST suite, ISIC2019, CIFAR-10).
medium positive FederatedFactory: Generative One-Shot Learning for Extremely... dependency on pretrained models (binary: uses / does not use)
By synthesizing class-balanced datasets locally from exchanged generative modules, FederatedFactory eliminates gradient conflict among clients' discriminative updates.
Mechanistic argument in the paper (training discriminative models on locally synthesized, balanced data avoids heterogeneity-induced conflicting gradients) supported by empirical recovery of performance in experiments where baselines collapse under label heterogeneity.
medium positive FederatedFactory: Generative One-Shot Learning for Extremely... reduction/elimination of gradient conflict (inferred via improved downstream per...
FederatedFactory reframes federated learning by exchanging generative modules (priors) instead of exchanging discriminative model weights.
Methodological description in the paper: design of FederatedFactory where each client trains/contributes generative modules (class-specific priors) and shares those modules rather than classifier weights. Evidence is the described protocol and experiments that implement that protocol on the reported datasets.
medium positive FederatedFactory: Generative One-Shot Learning for Extremely... unit of federation / protocol (generative modules vs. discriminative weights)
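A toy sketch of the protocol shape (1-D Gaussian density estimates stand in for the paper's generative modules; labels and numbers are invented): clients share class-conditional modules, every client synthesizes a class-balanced dataset locally, and deleting a module deterministically removes that client's contribution, the modular-unlearning property claimed above.

```python
import random

random.seed(0)

def fit_module(samples):
    """Toy 'generative module': a 1-D Gaussian fitted to one client's
    data for one class (a stand-in for the paper's generators)."""
    m = sum(samples) / len(samples)
    v = sum((s - m) ** 2 for s in samples) / len(samples)
    return {"mean": m, "std": v ** 0.5}

def synthesize(modules, n_per_class):
    """Run locally by every client: draw a class-balanced synthetic
    dataset from the exchanged modules; raw data is never pooled."""
    data = []
    for label, mod in sorted(modules.items()):
        data += [(random.gauss(mod["mean"], mod["std"]), label)
                 for _ in range(n_per_class)]
    return data

# Mutually exclusive labels: each client holds only one class.
modules = {
    "benign": fit_module([random.gauss(0, 1) for _ in range(200)]),
    "malignant": fit_module([random.gauss(3, 1) for _ in range(200)]),
}
balanced = synthesize(modules, n_per_class=100)
print(len(balanced))                      # 200, class-balanced by design

# Modular unlearning (sketch): deleting a client's module exactly and
# deterministically removes its contribution from any later synthesis.
del modules["malignant"]
print(any(lbl == "malignant" for _, lbl in synthesize(modules, 100)))  # False
```

Because downstream classifiers train on the balanced synthetic set rather than on heterogeneous client gradients, the label-exclusivity conflict that collapses standard FL never arises, which is the mechanism the claims above describe.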
Practical recommendation: buyers and evaluators should demand contamination audits (triangulating lexical, paraphrase, and behavioral probes) and report both raw and contamination-adjusted scores, especially for high-stakes use.
Policy/recommendation section in paper motivated by experimental findings; recommended procedures follow the paper's triage methods (Experiments 1–3) applied to evaluations.
medium positive Are Large Language Models Truly Smarter Than Humans? improvement in evaluation reliability when contamination audits and adjusted rep...
Triangulation across methods reduces false positives and false negatives inherent to any single contamination-detection approach.
Methodological claim supported by design: use of lexical matching, paraphrase diagnostics, and behavioral probes to complement one another and offset single-method blind spots (as reported in robustness section).
medium positive Are Large Language Models Truly Smarter Than Humans? expected reduction in detection error (false positives/negatives) via multi-meth...
Estimated performance uplift from identified contamination ranges from +0.030 to +0.054 absolute accuracy points by category.
Experiment 1 translated contamination prevalence into estimated accuracy gains by simulating model behavior on known-exposed items (method described in paper; category-level simulations yield +0.030 to +0.054 point uplifts).
medium positive Are Large Language Models Truly Smarter Than Humans? estimated accuracy uplift (absolute accuracy points) attributable to contaminati...
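One standard way to report a contamination-adjusted score alongside the raw one is to back out accuracy on unexposed items from a mixture identity; this is an illustration of the reporting practice recommended above, not the paper's Experiment 1 procedure, and all numbers are invented.

```python
def contamination_adjusted(raw_acc, prevalence, exposed_acc):
    """Back out accuracy on unexposed items from the mixture identity
    raw = p * exposed + (1 - p) * clean, solved for clean."""
    return (raw_acc - prevalence * exposed_acc) / (1.0 - prevalence)

# Invented example: 82% raw accuracy, 10% of items contaminated,
# 97% accuracy on the contaminated (known-exposed) items.
raw, p, exposed = 0.82, 0.10, 0.97
clean = contamination_adjusted(raw, p, exposed)
uplift = raw - clean
print(round(clean, 4), round(uplift, 4))   # 0.8033 0.0167
```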
There is an economic case for funding access to quantum hardware, standardized benchmarking infrastructure, and shared datasets to reduce deployment uncertainty and enable credible claims of usefulness.
Policy and R&D recommendation inferred from the review's finding of heterogeneous benchmarking and missing hardware tests; argued as a mitigation to the identified deployment gap.
medium positive Generative AI for Quantum Circuits and Quantum Code: A Techn... recommendation for funding/hardware access and standardized benchmarking
Most of the surveyed systems address semantic correctness (Layer 2) to some degree.
The review's application of Layer 2 found that a majority of the 13 systems include semantic-level evaluations (e.g., unitary equivalence tests, functional tests, simulator-based correctness checks), though the depth varied.
medium positive Generative AI for Quantum Circuits and Quantum Code: A Techn... presence and extent of semantic-correctness evaluation
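Unitary equivalence testing, one of the Layer-2 checks named above, can be sketched in a few lines: two circuits are semantically equivalent when their unitaries agree up to a global phase (the example circuits and tolerance are chosen for illustration).

```python
import numpy as np

def equivalent_up_to_phase(U, V, tol=1e-9):
    """Layer-2-style semantic check: circuits are equivalent when
    their unitaries satisfy U = e^{i*phi} * V for some global phase."""
    idx = np.unravel_index(np.argmax(np.abs(V)), V.shape)
    phase = U[idx] / V[idx]          # fix phi from one nonzero entry
    return bool(np.allclose(U, phase * V, atol=tol))

H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])

# Known single-qubit identity: applying H then Z equals X then H.
print(equivalent_up_to_phase(Z @ H, H @ X))   # True
print(equivalent_up_to_phase(H, X))           # False
```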
Across extensive simulations with realistic latency modeling, RARRL consistently yields higher task success, lower execution latency, and better robustness under varied resource budgets and task complexities.
Paper summarizes results from extensive experiments (including ablations and comparisons to baselines) claiming consistent improvements across varied budgets and task complexities; metrics reported include task success rate, execution latency, and robustness.
medium positive When Should a Robot Think? Resource-Aware Reasoning via Rein... task success rate, execution latency, robustness under budget/task complexity va...