The Commonplace

Evidence (7953 claims)

Adoption: 5539 claims
Productivity: 4793 claims
Governance: 4333 claims
Human-AI Collaboration: 3326 claims
Labor Markets: 2657 claims
Innovation: 2510 claims
Org Design: 2469 claims
Skills & Training: 2017 claims
Inequality: 1378 claims

Evidence Matrix

Claim counts by outcome category and direction of finding.

Outcome Positive Negative Mixed Null Total
Other 402 112 67 480 1076
Governance & Regulation 402 192 122 62 790
Research Productivity 249 98 34 311 697
Organizational Efficiency 395 95 70 40 603
Technology Adoption Rate 321 126 73 39 564
Firm Productivity 306 39 70 12 432
Output Quality 256 66 25 28 375
AI Safety & Ethics 116 177 44 24 363
Market Structure 107 128 85 14 339
Decision Quality 177 76 38 20 315
Fiscal & Macroeconomic 89 58 33 22 209
Employment Level 77 34 80 9 202
Skill Acquisition 92 33 40 9 174
Innovation Output 120 12 23 12 168
Firm Revenue 98 34 22 154
Consumer Welfare 73 31 37 7 148
Task Allocation 84 16 33 7 140
Inequality Measures 25 77 32 5 139
Regulatory Compliance 54 63 13 3 133
Error Rate 44 51 6 101
Task Completion Time 88 5 4 3 100
Training Effectiveness 58 12 12 16 99
Worker Satisfaction 47 32 11 7 97
Wages & Compensation 53 15 20 5 93
Team Performance 47 12 15 7 82
Automation Exposure 24 22 9 6 62
Job Displacement 6 38 13 57
Hiring & Recruitment 41 4 6 3 54
Developer Productivity 34 4 3 1 42
Social Protection 22 10 6 2 40
Creative Output 16 7 5 1 29
Labor Share of Income 12 5 9 26
Skill Obsolescence 3 20 2 25
Worker Turnover 10 12 3 25
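One way to read the matrix above is to compute each outcome's share of positive findings against its reported total. A minimal sketch using two rows copied from the table (note that some row totals exceed the sum of the four listed directions, so shares are taken against the reported totals):

```python
# Direction counts copied from the evidence matrix above:
# (positive, negative, mixed, null, total)
matrix = {
    "Firm Productivity":   (306, 39, 70, 12, 432),
    "Inequality Measures": (25, 77, 32, 5, 139),
}

def positive_share(row):
    """Share of an outcome's claims whose finding is positive."""
    pos, _neg, _mixed, _null, total = row
    return pos / total

shares = {name: round(positive_share(row), 2) for name, row in matrix.items()}
print(shares)  # Firm Productivity 0.71, Inequality Measures 0.18
```

The contrast is the point of the matrix: findings on firm productivity skew heavily positive, while findings on inequality measures skew negative.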
The pattern improves legibility, procedural legitimacy, and actionability compared to systems without these elements (proposed as evaluation goals).
Evaluation agenda and proposed user-study metrics in the paper (legibility tests, perceived fairness surveys, contest effectiveness measures); no empirical results yet.
medium positive Designing for Disagreement: Front-End Guardrails for Assista... legibility (user comprehension), procedural legitimacy (perceived fairness), act...
Bounded calibration with contestability avoids opaque silent defaults that mask value choices and avoids wide-open user-configurable value sliders that offload moral choice under stress.
Normative rationale and argumentation in the paper; compared qualitatively against two alternative design approaches; no empirical comparison.
medium positive Designing for Disagreement: Front-End Guardrails for Assista... reduction in hidden value-skews and offloaded moral choice (qualitative assessme...
Bounded calibration with contestability is a viable design pattern for LLM-enabled robots that must allocate scarce, real-time assistance among multiple people.
Conceptual/design proposal in the paper; illustrated with a concrete public-concourse robot vignette; no empirical deployment or sample data reported.
medium positive Designing for Disagreement: Front-End Guardrails for Assista... feasibility/viability of the design pattern (qualitative)
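The "bounded calibration with contestability" pattern described in these claims can be sketched as two mechanisms: user preferences are honored only within designer-set bounds, and any allocation decision can be contested and logged for review. All names below (CalibrationBounds, AssistanceAllocator, contest) are hypothetical illustrations of the pattern, not the paper's API:

```python
# Hypothetical sketch of bounded calibration with contestability.
from dataclasses import dataclass, field

@dataclass
class CalibrationBounds:
    lo: float
    hi: float
    def clamp(self, value: float) -> float:
        # User preferences count only within designer-set bounds, avoiding
        # both opaque silent defaults and wide-open value sliders.
        return max(self.lo, min(self.hi, value))

@dataclass
class AssistanceAllocator:
    urgency_weight_bounds: CalibrationBounds
    urgency_weight: float = 0.5
    contests: list = field(default_factory=list)

    def calibrate(self, requested_weight: float) -> float:
        self.urgency_weight = self.urgency_weight_bounds.clamp(requested_weight)
        return self.urgency_weight

    def contest(self, decision_id: str, reason: str) -> None:
        # Contestability: every allocation decision can be challenged;
        # contests are logged for later review.
        self.contests.append((decision_id, reason))

alloc = AssistanceAllocator(CalibrationBounds(0.2, 0.8))
print(alloc.calibrate(1.0))   # request exceeds bounds, clamped to 0.8
alloc.contest("req-17", "wheelchair user deprioritized")
print(len(alloc.contests))    # 1
```

The clamp keeps value choices visible but bounded; the contest log is what the paper's legitimacy and actionability goals would be evaluated against.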
Modular strategy/execution architectures (like ESE) can materially improve the stability and efficiency of LLM-driven operational decision systems, increasing their attractiveness for deployment in retail, logistics, and supply-chain contexts.
Empirical improvements observed with ESE on RetailBench relative to monolithic baselines, coupled with analysis of deployment considerations and domain relevance discussed in the paper.
medium positive RetailBench: Evaluating Long-Horizon Autonomous Decision-Mak... operational stability and efficiency improvements as proxies for deployment attr...
ESE improves operational stability and efficiency relative to baselines that do not separate strategy from execution.
Empirical comparisons reported in the experiments: eight contemporary LLMs evaluated on multiple RetailBench environments, with ESE compared against monolithic LLM agents and other baselines using metrics of operational stability (e.g., variance or frequency of catastrophic failures) and efficiency (e.g., cost/profit/fulfillment).
medium positive RetailBench: Evaluating Long-Horizon Autonomous Decision-Mak... operational stability (variance/frequency of catastrophic failures) and efficien...
ESE enables interpretable and adaptive strategy updates intended to counteract error accumulation and environmental drift.
Design features of the strategy module (slower updates, interpretable strategy representation) and qualitative analysis in the paper linking these features to reduced error accumulation and strategy drift in experiments.
medium positive RetailBench: Evaluating Long-Horizon Autonomous Decision-Mak... interpretability of strategy updates and reduction in error accumulation/strateg...
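The separation these ESE claims describe can be sketched as a two-timescale loop: a slow strategy module holding an interpretable strategy that updates only every k steps, and a fast execution module acting under the current strategy each step. Everything here (the class names, the reorder rule, the toy demand signal) is a hypothetical illustration of the pattern, not the paper's implementation:

```python
# Hypothetical sketch of a strategy/execution split (ESE-style pattern).
class StrategyModule:
    """Slow module: holds an interpretable strategy, updated infrequently."""
    def __init__(self, review_every: int = 5):
        self.review_every = review_every
        self.strategy = {"reorder_point": 10}

    def maybe_update(self, step: int, recent_demand: list) -> None:
        if step % self.review_every == 0 and recent_demand:
            # Interpretable update: track recent average demand, which
            # counteracts error accumulation and environmental drift.
            self.strategy["reorder_point"] = round(sum(recent_demand) / len(recent_demand))

class ExecutionModule:
    """Fast module: acts every step under the current strategy."""
    def act(self, stock: int, strategy: dict) -> int:
        # Order up to the reorder point when stock falls below it.
        return max(0, strategy["reorder_point"] - stock)

strategy, executor = StrategyModule(), ExecutionModule()
stock, demand_history = 8, []
for step in range(1, 11):
    demand = 4 + (step % 3)          # toy demand signal
    demand_history.append(demand)
    strategy.maybe_update(step, demand_history[-5:])
    order = executor.act(stock, strategy.strategy)
    stock = stock + order - demand
print(strategy.strategy, stock)
```

The slow/fast split is the design choice the RetailBench results reward: the strategy stays inspectable and changes rarely, while execution reacts every step.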
The model provides three operating modes: a non-reasoning mode, Italian/English reasoning modes, and a concise 'turbo-reasoning' bullet-point mode intended for real-time use cases.
Model functionality described by authors: the paper documents multiple operating modes including a concise 'turbo' mode for low-latency outputs. The summary lists these modes but does not provide quantitative latency/quality tradeoff metrics.
medium positive EngGPT2: Sovereign, Efficient and Open Intelligence existence of distinct inference modes and their intended behavioral differences ...
EngGPT2 uses far less training data (and, by implication, training compute) than some large models—reported as about 1/10–1/6 of the data used by larger dense models (e.g., vs. Qwen3 or Llama3).
Comparison of reported token counts: EngGPT2 at ~2.5T tokens vs. stated baselines (Qwen3 36T, Llama3 15T); authors assert training-data reduction in the 1/10–1/6 range. The paper reports token counts but does not provide matched compute/FLOP or training-time comparisons.
medium positive EngGPT2: Sovereign, Efficient and Open Intelligence relative training-data volume (tokens) compared to named baseline models
On benchmarks (MMLU-Pro, GSM8K, IFEval, HumanEval) EngGPT2 matches or is comparable to dense models in the 8B–16B parameter range.
Evaluation reported on the named benchmarks; the paper states comparable benchmark performance to dense 8B–16B models. The summary does not include exact scores, standard deviations, prompt engineering details, dataset overlap checks, or sample sizes per benchmark.
medium positive EngGPT2: Sovereign, Efficient and Open Intelligence benchmark performance metrics (accuracy/score) on MMLU-Pro, GSM8K, IFEval, Human...
Model-merging and targeted continual pre-training were used to amplify limited compute and improve performance without full from-scratch pre-training.
Paper describes using model-merging and targeted continual pre-training to leverage existing strong weights and inject language/domain data efficiently.
medium positive Fanar 2.0: Arabic Generative AI Stack performance improvement attributable to model-merging/continual pre-training met...
Prioritizing data quality over raw scale (curated 120B tokens instead of maximizing token counts) produced better Arabic and cross-lingual performance for the resource budget used.
Paper emphasizes a 'data quality over brute-force scale' strategy and reports benchmark improvements from the curated corpus and targeted training; the causal link is asserted via these results.
medium positive Fanar 2.0: Arabic Generative AI Stack model performance relative to data curation strategy
Those benchmark gains were achieved using roughly one-eighth the pre-training tokens of Fanar 1.0.
Paper states the approach used approximately 1/8th the pre-training tokens of Fanar 1.0 while improving benchmarks; exact token counts for Fanar 1.0 not provided in the summary.
medium positive Fanar 2.0: Arabic Generative AI Stack relative pre-training token count (Fanar 2.0 vs Fanar 1.0)
Fanar-27B reports benchmark gains relative to Fanar 1.0: Arabic knowledge +9.1 points, language ability +7.3 points, dialect handling +3.5 points, and English capability +7.6 points.
Paper reports these specific numeric benchmark improvements across Arabic knowledge, general language ability, dialects, and English capability; evaluation suite names, sample sizes, and statistical details are not specified in the summary.
medium positive Fanar 2.0: Arabic Generative AI Stack benchmark scores (Arabic knowledge, language ability, dialect handling, English ...
Using entailment-based verifiers can reduce inference compute cost by over two orders of magnitude, lowering marginal compute cost per query compared to LLM-based scorers.
Measured FLOP comparisons between lightweight entailment models and LLM-based scoring in the paper, with reported >100× FLOP reduction.
medium positive Is Conformal Factuality for RAG-based LLMs Robust? Novel Met... compute cost (FLOPs) per verification/query
Lightweight entailment-based verifiers match or exceed LLM-based confidence scorers for scoring atomic claims while consuming >100× fewer FLOPs.
Empirical comparisons in the paper between entailment (NLI) models and LLM-based scoring approaches across the evaluated datasets, with measured FLOPs showing more than two orders of magnitude lower compute for the entailment models alongside equal-or-better scoring performance.
medium positive Is Conformal Factuality for RAG-based LLMs Robust? Novel Met... claim-scoring accuracy/performance and compute cost (FLOPs)
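The swap these claims describe, replacing an LLM confidence scorer with a lightweight entailment (NLI) verifier, amounts to a pluggable scoring interface over atomic claims. The keyword-overlap "verifier" below is a toy stand-in for a real NLI model, purely to show the interface; a real system would plug a small entailment model into the same slot:

```python
# Toy sketch: scoring atomic claims against retrieved evidence with a
# pluggable verifier. A lightweight NLI model would replace this
# keyword-overlap stand-in in a real system.
def toy_entailment_score(evidence: str, claim: str) -> float:
    """Stand-in verifier: fraction of claim words found in the evidence."""
    claim_words = set(claim.lower().split())
    evidence_words = set(evidence.lower().split())
    return len(claim_words & evidence_words) / len(claim_words)

def filter_claims(evidence, claims, verifier, threshold=0.5):
    # Keep only atomic claims the verifier judges supported by the evidence.
    return [c for c in claims if verifier(evidence, c) >= threshold]

evidence = "the model was trained on 2.5 trillion tokens of text"
claims = ["the model was trained on 2.5 trillion tokens",
          "the model outperforms all baselines"]
print(filter_claims(evidence, claims, toy_entailment_score))
```

Because the verifier is just a callable, the >100× FLOP saving the paper reports comes from what is plugged into that slot, not from changing the surrounding pipeline.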
Pretraining corpora must be broadened across temporal scales and domains (including high-frequency domains) to improve TSFM generalization.
Recommendation follows from observed poor transfer and fine-tuning results; paper argues for inclusion of high-frequency, domain-diverse data in pretraining. This is prescriptive and driven by the benchmarking observations rather than an experiment demonstrating improved outcomes after broadened pretraining.
medium positive Bridging the High-Frequency Data Gap: A Millisecond-Resoluti... expected improvement in model generalization (forecasting performance) if pretra...
FederatedFactory recovers centralized-model performance without pooling raw data or relying on a central dataset, thereby weakening dependence on foundation-model vendors and their pretrained priors.
Empirical claims that federated results match centralized upper bounds on tested datasets and methodological statement that no external pretrained priors are required; the economic interpretation is drawn from these empirical and methodological properties.
medium positive FederatedFactory: Generative One-Shot Learning for Extremely... performance gap vs. centralized model; dependence on external pretrained priors
FederatedFactory enables exact modular unlearning: deterministic deletion of a client's generative module exactly removes that client's contribution to synthesized datasets.
Design claim in the paper: generative modules are modular assets, and deleting a module deterministically prevents its use when synthesizing the balanced dataset; paper asserts exact modular unlearning and reports it as a property of the method. (No formal auditing metrics or proofs provided in the summary.)
medium positive FederatedFactory: Generative One-Shot Learning for Extremely... unlearning correctness (module-level removal effect on synthesized dataset compo...
Downstream discriminative models trained on the synthesized, balanced datasets avoid conflicting optimization trajectories that cause collapse in standard federated learning under mutually exclusive labels.
Methodological reasoning (balanced synthesized training data removes label heterogeneity across clients) plus empirical demonstrations where standard FL collapses under mutual exclusivity (e.g., CIFAR baseline) and FederatedFactory recovers performance.
medium positive FederatedFactory: Generative One-Shot Learning for Extremely... optimization stability / avoidance of collapsed training (measured indirectly vi...
Across diverse medical imagery benchmarks (including MedMNIST and ISIC2019), FederatedFactory matches centralized upper-bound performance.
Empirical comparisons reported in the paper: FederatedFactory results are compared against a centralized upper bound on the same datasets and reported to be matched. (Details of which datasets and exact numeric comparisons beyond ISIC2019 are not enumerated in the summary.)
medium positive FederatedFactory: Generative One-Shot Learning for Extremely... classification performance vs. centralized upper bound (accuracy/AUROC)
FederatedFactory restores ISIC2019 performance to AUROC = 90.57% under the tested regime.
Empirical experiment reported on ISIC2019 (dermatology images); paper reports AUROC value of 90.57% for FederatedFactory. (Exact train/test splits and client partitioning not specified in the summary.)
FederatedFactory operates without relying on external pretrained foundation models (zero-dependency).
Paper explicitly states the framework does not depend on pretrained foundation models; experiments are reported without using external pretraining (datasets: MedMNIST suite, ISIC2019, CIFAR-10).
medium positive FederatedFactory: Generative One-Shot Learning for Extremely... dependency on pretrained models (binary: uses / does not use)
By synthesizing class-balanced datasets locally from exchanged generative modules, FederatedFactory eliminates gradient conflict among clients' discriminative updates.
Mechanistic argument in the paper (training discriminative models on locally synthesized, balanced data avoids heterogeneity-induced conflicting gradients) supported by empirical recovery of performance in experiments where baselines collapse under label heterogeneity.
medium positive FederatedFactory: Generative One-Shot Learning for Extremely... reduction/elimination of gradient conflict (inferred via improved downstream per...
FederatedFactory reframes federated learning by exchanging generative modules (priors) instead of exchanging discriminative model weights.
Methodological description in the paper: design of FederatedFactory where each client trains/contributes generative modules (class-specific priors) and shares those modules rather than classifier weights. Evidence is the described protocol and experiments that implement that protocol on the reported datasets.
medium positive FederatedFactory: Generative One-Shot Learning for Extremely... unit of federation / protocol (generative modules vs. discriminative weights)
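The protocol described across these FederatedFactory claims, in which clients exchange per-class generative modules, each client synthesizes a class-balanced dataset locally, and deleting a module exactly removes that client's contribution, can be sketched with toy "generators". All names are hypothetical, and the canned-sample lambdas stand in for trained generative models:

```python
# Toy sketch of a FederatedFactory-style protocol: federate generative
# modules (per-class priors) instead of discriminative weights.
registry = {
    # client_id -> {class_label: generator}; toy generators emit canned
    # samples, standing in for trained per-class generative modules.
    "clinic_a": {"melanoma": lambda n: [("melanoma", "a") for _ in range(n)]},
    "clinic_b": {"nevus":    lambda n: [("nevus", "b") for _ in range(n)]},
}

def synthesize_balanced(registry, per_class: int):
    """Each client runs this locally: sample equally from every class module."""
    dataset = []
    for modules in registry.values():
        for label, gen in modules.items():
            dataset.extend(gen(per_class))
    return dataset

def unlearn(registry, client_id: str):
    # Exact modular unlearning: deleting the module deterministically
    # removes that client's contribution from any future synthesis.
    registry.pop(client_id, None)

data = synthesize_balanced(registry, per_class=2)
print(sorted({label for label, _ in data}))   # ['melanoma', 'nevus']
unlearn(registry, "clinic_a")
data = synthesize_balanced(registry, per_class=2)
print(sorted({label for label, _ in data}))   # ['nevus']
```

Because downstream classifiers train only on the balanced synthesized data, no raw data is pooled and the label heterogeneity that causes gradient conflict under mutually exclusive labels never reaches them.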
Practical recommendation: buyers and evaluators should demand contamination audits (triangulating lexical, paraphrase, and behavioral probes) and report both raw and contamination-adjusted scores, especially for high-stakes use.
Policy/recommendation section in paper motivated by experimental findings; recommended procedures follow the paper's triage methods (Experiments 1–3) applied to evaluations.
medium positive Are Large Language Models Truly Smarter Than Humans? improvement in evaluation reliability when contamination audits and adjusted rep...
Triangulation across methods reduces false positives and false negatives inherent to any single contamination-detection approach.
Methodological claim supported by design: use of lexical matching, paraphrase diagnostics, and behavioral probes to complement one another and offset single-method blind spots (as reported in robustness section).
medium positive Are Large Language Models Truly Smarter Than Humans? expected reduction in detection error (false positives/negatives) via multi-meth...
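Triangulation as described here can be sketched as combining three independent probes and flagging an item as contaminated only on majority agreement, so no single method's blind spot decides alone. The probe thresholds and field names below are illustrative assumptions, not the paper's procedure:

```python
# Toy sketch: majority-vote triangulation across three contamination probes.
def lexical_probe(item) -> bool:
    return item["ngram_overlap"] > 0.8      # near-verbatim benchmark match

def paraphrase_probe(item) -> bool:
    return item["paraphrase_sim"] > 0.9     # high similarity to a rewording

def behavioral_probe(item) -> bool:
    # Model answers the original correctly but fails a trivial perturbation.
    return item["orig_correct"] and not item["perturbed_correct"]

def flag_contaminated(item, probes=(lexical_probe, paraphrase_probe, behavioral_probe)):
    votes = sum(probe(item) for probe in probes)
    return votes >= 2   # majority vote offsets single-probe blind spots

item = {"ngram_overlap": 0.95, "paraphrase_sim": 0.5,
        "orig_correct": True, "perturbed_correct": False}
print(flag_contaminated(item))  # True: lexical and behavioral probes agree
```

A single probe firing leaves the item unflagged, which is exactly the false-positive control the triangulation claim is about.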
Estimated performance uplift from identified contamination ranges from +0.030 to +0.054 absolute accuracy points by category.
Experiment 1 translated contamination prevalence into estimated accuracy gains by simulating model behavior on known-exposed items (method described in paper; category-level simulations yield +0.030 to +0.054 point uplifts).
medium positive Are Large Language Models Truly Smarter Than Humans? estimated accuracy uplift (absolute accuracy points) attributable to contaminati...
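These uplift estimates translate directly into the contamination-adjusted reporting recommended above: subtract the per-category estimated uplift from the raw accuracy and report both numbers. A minimal sketch using the reported +0.030 to +0.054 range; which category gets which value, and the raw scores, are illustrative assumptions:

```python
# Adjust raw benchmark accuracy by the estimated contamination uplift.
# The uplift range (+0.030 to +0.054 absolute points) is from the paper;
# category assignments and raw scores are illustrative.
uplift_by_category = {"math": 0.054, "commonsense": 0.030}
raw_accuracy = {"math": 0.812, "commonsense": 0.775}

adjusted = {cat: round(raw_accuracy[cat] - uplift_by_category[cat], 3)
            for cat in raw_accuracy}
print(adjusted)  # {'math': 0.758, 'commonsense': 0.745}
```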
There is an economic case for funding access to quantum hardware, standardized benchmarking infrastructure, and shared datasets to reduce deployment uncertainty and enable credible claims of usefulness.
Policy and R&D recommendation inferred from the review's finding of heterogeneous benchmarking and missing hardware tests; argued as a mitigation to the identified deployment gap.
medium positive Generative AI for Quantum Circuits and Quantum Code: A Techn... recommendation for funding/hardware access and standardized benchmarking
Most of the surveyed systems address semantic correctness (Layer 2) to some degree.
The review's application of Layer 2 found that a majority of the 13 systems include semantic-level evaluations (e.g., unitary equivalence tests, functional tests, simulator-based correctness checks), though the depth varied.
medium positive Generative AI for Quantum Circuits and Quantum Code: A Techn... presence and extent of semantic-correctness evaluation
Across extensive simulations with realistic latency modeling, RARRL consistently yields higher task success, lower execution latency, and better robustness under varied resource budgets and task complexities.
Paper summarizes results from extensive experiments (including ablations and comparisons to baselines) claiming consistent improvements across varied budgets and task complexities; metrics reported include task success rate, execution latency, and robustness.
medium positive When Should a Robot Think? Resource-Aware Reasoning via Rein... task success rate, execution latency, robustness under budget/task complexity va...
RARRL increases robustness to resource constraints compared with fixed or heuristic policies (i.e., lower variance or better outcomes when compute/time budgets are constrained).
Paper reports robustness measures (variation in outcomes under constrained resources) and shows RARRL outperforming baselines and ablations across varied resource budgets in simulations with realistic latency modeling.
medium positive When Should a Robot Think? Resource-Aware Reasoning via Rein... robustness under constrained resources (e.g., outcome variance, success under bu...
RARRL reduces total execution latency compared with fixed or heuristic reasoning policies.
Experimental comparisons using ALFRED-derived latency profiles report that RARRL yields lower execution latency than baseline strategies; total execution latency is listed as a primary metric.
medium positive When Should a Robot Think? Resource-Aware Reasoning via Rein... total execution latency
RARRL improves task success rates compared with fixed or heuristic reasoning strategies in embodied robotic tasks (evaluated using ALFRED-derived latency profiles).
Empirical experiments reported in the paper compare RARRL to baselines (fixed strategies and heuristic triggers) using an embodied task suite based on ALFRED and empirical LLM latency profiles; results claimed to show higher task success across extensive experiments.
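The decision RARRL learns, when to invoke slow deliberate reasoning versus act reactively under a latency budget, can be sketched as a threshold policy. In the paper this policy is learned with reinforcement learning; the hand-written rule and the latency costs below are purely illustrative:

```python
# Toy sketch: choose a reasoning mode under a remaining-latency budget.
# RARRL learns this policy with RL; the threshold rule here is illustrative.
FAST_COST, DELIBERATE_COST = 0.2, 2.0   # seconds (illustrative latencies)

def choose_mode(uncertainty: float, remaining_budget: float) -> str:
    # Deliberate only when the situation is uncertain AND the budget allows.
    if uncertainty > 0.6 and remaining_budget >= DELIBERATE_COST:
        return "deliberate"
    return "fast"

budget, trace = 5.0, []
for uncertainty in [0.2, 0.9, 0.8, 0.7]:
    mode = choose_mode(uncertainty, budget)
    budget -= DELIBERATE_COST if mode == "deliberate" else FAST_COST
    trace.append(mode)
print(trace, round(budget, 1))
```

Note the final step: uncertainty is still high, but the depleted budget forces a fast action, which is the budget-robustness behavior the RARRL results measure.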
Policy instruments that can support shorter workweeks include tax incentives for firms that maintain pay while reducing hours, regulatory transition frameworks, and conditionality on AI subsidies or public procurement tied to job-preservation or reduced hours.
Policy-analytic argument drawing on standard policy toolkits and selected prior examples; no new policy pilot results presented.
medium positive A Shorter Workweek as a Policy Response to AI-Driven Labor D... adoption rate of shorter workweeks, preservation of pay, conditionality complian...
Shorter workweeks help sustain consumer purchasing power by reducing aggregate labor supply and thereby distributing automation gains more equitably.
Theoretical labour-supply reasoning plus historical case studies of work-time reductions; argument-based and normative rather than demonstrated with new macroeconomic empirical tests in AI-rich settings.
medium positive A Shorter Workweek as a Policy Response to AI-Driven Labor D... consumer purchasing power, distribution of productivity/earnings gains
A gradual, policy-driven reduction in the standard workweek can absorb labor displaced by automation, help maintain employment levels, and preserve wages per hour.
Synthesis of prior empirical findings on work-hour reductions and historical precedents (e.g., six-day to five-day transition); no new randomized or large-scale contemporary trials presented.
medium positive A Shorter Workweek as a Policy Response to AI-Driven Labor D... employment levels, hours worked per worker, hourly wages
Firms use layoffs strategically to signal efficiency and boost short-term stock prices, even when automation is not fully substitutive.
Organizational- and finance-literature synthesis on signaling and market reactions to cost-cutting; historical/case examples referenced rather than new econometric estimates.
medium positive A Shorter Workweek as a Policy Response to AI-Driven Labor D... short-term stock price/market reaction following layoffs; incidence of layoffs u...
Employers are increasingly demanding digital literacy, basic data competencies, and stronger communication and interpersonal skills.
Employer survey analysis tracking changes in required skills; descriptive summary of survey frequencies and employer-reported skill priorities. Survey sample size and representativeness not specified in summary.
medium positive The AI Transition: Assessing Vulnerability and Structural Re... frequency/intensity of employer-reported demand for specific skills (digital lit...
Some occupations experience efficiency and productivity gains where AI complements tasks, implying complementarity effects for those jobs.
Qualitative case studies of firms and employer survey reports documenting productivity/efficiency improvements in certain roles following AI adoption; descriptive analysis of sectoral/occupational outcomes. Quantitative magnitude not specified.
medium positive The AI Transition: Assessing Vulnerability and Structural Re... productivity or efficiency gains at job/occupation level (firm-reported producti...
Policymakers should prioritize retraining programs, strengthened social protection, and redistributive policies to mitigate automation-induced unemployment and inequality.
Policy recommendation based on the author's synthesis of risks and expert judgment; not based on an empirical intervention study in the paper.
medium positive DIGITAL TRANSFORMATION OF THE RUSSIAN FEDERATION’S SOCIOECON... mitigation of technological unemployment and inequality (employment rates, incom...
There has been progress in software import substitution, contributing to partial technological sovereignty in Russia.
Use of statistics on software import substitution (authors reference national statistics but do not report detailed numbers or methodology).
medium positive DIGITAL TRANSFORMATION OF THE RUSSIAN FEDERATION’S SOCIOECON... software import substitution rate / domestic share of software supply
Digitalization enables management optimization (improved management processes and decision-making) in Russian enterprises and public administration.
Qualitative analysis of policy documents and expert assessment by the author; no empirical evaluation or quantified effect sizes provided.
medium positive DIGITAL TRANSFORMATION OF THE RUSSIAN FEDERATION’S SOCIOECON... management efficiency/optimization (process improvements, decision-making qualit...
Digitalization has produced measurable labor productivity growth in segments of the Russian economy.
Author's interpretation drawing on national statistics and strategic documents; statistical details (period, sectors, sample sizes) not specified in the paper.
medium positive DIGITAL TRANSFORMATION OF THE RUSSIAN FEDERATION’S SOCIOECON... labor productivity (aggregate or sectoral productivity indicators)
Policy implication: prioritize large-scale, targeted reskilling and lifelong learning programs to enable workforce adaptability and capture AI complementarity gains.
Policy recommendations derived from the paper's findings (association between AI adoption and skill shifts, heterogeneous sectoral impacts) and the literature synthesis that links reskilling interventions to better labor outcomes; recommendation is prescriptive rather than empirically tested within the study.
medium positive AI-Driven Transformation of Labor Markets: Skill Shifts, Hyb... Policy effect is recommended but not empirically measured in the study (intended...
The paper provides empirical support for the complementarity hypothesis: AI tends to reconfigure jobs and create hybrid roles rather than eliminate employment wholesale.
Convergence of simulated sectoral employment patterns (some sectors showing net gains and hybrid-role growth), the strong correlation between AI adoption and skill shifts (r = 0.71), and corroborating studies from the literature synthesis emphasizing augmentation and hybridization mechanisms.
medium positive AI-Driven Transformation of Labor Markets: Skill Shifts, Hyb... Employment change and hybrid job share (evidence for complementarity vs. substit...
Institutional reskilling programs and governance frameworks markedly moderate labor-market outcomes: better frameworks correlate with more complementarities and lower net job loss.
Integration of literature-derived mechanisms with simulated empirical patterns; paper reports correlations/moderation-style comparisons across simulated sector-year cases incorporating policy/institutional variables (described in methods), supported by studies in the systematic review linking policy interventions to labor outcomes.
medium positive AI-Driven Transformation of Labor Markets: Skill Shifts, Hyb... Net employment change; measures of complementarity (e.g., hybrid share) conditio...
Healthcare and IT Services experienced net employment gains consistent with AI complementarity (augmented tasks and creation of new hybrid roles).
Simulated sectoral employment trends and net-change metrics for Healthcare and IT Services (2020–2024) presented in the paper, supported by literature synthesis examples showing human–AI complementarities in these sectors.
medium positive AI-Driven Transformation of Labor Markets: Skill Shifts, Hyb... Employment levels and net change by sector (Healthcare, IT Services)
The largest rises in hybrid jobs occurred in IT Services and Healthcare.
Sectoral decomposition of hybrid job share trends in the simulated dataset across the seven industries (2020–2024) and supporting qualitative/quantitative findings from the literature synthesis focused on IT Services and Healthcare.
medium positive AI-Driven Transformation of Labor Markets: Skill Shifts, Hyb... Hybrid job share by sector (IT Services, Healthcare)
Hybrid human–AI jobs increased substantially across all seven analyzed sectors between 2020 and 2024.
Descriptive trend analysis of the simulated dataset's hybrid job share metric (fraction of roles reclassified as human–AI hybrid) for the seven industries over 2020–2024, combined with corroborating examples from the literature synthesis (selected ACM/IEEE/Springer studies 2020–2024).
medium positive AI-Driven Transformation of Labor Markets: Skill Shifts, Hyb... Hybrid job share (sector-level, 2020–2024)
A matching/ranking algorithm that scores candidate-job pairs by skill fit, predicted remuneration, and proximity improves the alignment of workers to short-term gigs.
System incorporates a ranking algorithm combining inferred-skill fit, predicted wages, and proximity constraints; pilot comparison reported improved matches, but quantitative algorithmic performance metrics are not provided in the summary.
medium positive AI-Driven Skill Mapping and Gig Economy Matching Algorithm f... match alignment/fit metrics; placement rates
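The scoring this claim describes, combining skill fit, predicted remuneration, and proximity, can be sketched as a weighted sum over candidate-gig pairs. Jaccard overlap below is a stand-in for the system's inferred-skill fit, and the weights and normalizations are illustrative assumptions:

```python
# Toy sketch: rank candidate-gig pairs by skill fit, pay, and proximity.
def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

def score(candidate, gig, w_fit=0.6, w_pay=0.3, w_near=0.1):
    fit = jaccard(candidate["skills"], gig["skills"])
    pay = min(gig["wage"] / 50.0, 1.0)          # normalize wage to [0, 1]
    near = max(0.0, 1.0 - gig["km"] / 20.0)     # closer gigs score higher
    return w_fit * fit + w_pay * pay + w_near * near

candidate = {"skills": {"plumbing", "wiring"}}
gigs = [
    {"id": "g1", "skills": {"plumbing"}, "wage": 40, "km": 2},
    {"id": "g2", "skills": {"painting"}, "wage": 50, "km": 1},
]
ranked = sorted(gigs, key=lambda g: score(candidate, g), reverse=True)
print([g["id"] for g in ranked])  # ['g1', 'g2']: skill fit dominates
```

With fit weighted highest, the lower-paying but skill-matched gig outranks the better-paying mismatch, which is the alignment behavior the pilot comparison reported.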