The Commonplace

Evidence (3492 claims)

Adoption: 7395 claims
Productivity: 6507 claims
Governance: 5877 claims
Human-AI Collaboration: 5157 claims
Innovation: 3492 claims
Org Design: 3470 claims
Labor Markets: 3224 claims
Skills & Training: 2608 claims
Inequality: 1835 claims

Evidence Matrix

Claim counts by outcome category and direction of finding.

Outcome                      Positive  Negative  Mixed  Null  Total
Other                             609       159     77   736   1615
Governance & Regulation           664       329    160    99   1273
Organizational Efficiency         624       143    105    70    949
Technology Adoption Rate          502       176     98    78    861
Research Productivity             348       109     48   322    836
Output Quality                    391       120     44    40    595
Firm Productivity                 385        46     85    17    539
Decision Quality                  275       143     62    34    521
AI Safety & Ethics                183       241     59    30    517
Market Structure                  152       154    109    20    440
Task Allocation                   158        50     56    26    295
Innovation Output                 178        23     38    17    257
Skill Acquisition                 137        52     50    13    252
Fiscal & Macroeconomic            120        64     38    23    252
Employment Level                   93        46     96    12    249
Firm Revenue                      130        43     26     3    202
Consumer Welfare                   99        51     40    11    201
Inequality Measures                36       105     40     6    187
Task Completion Time              134        18      6     5    163
Worker Satisfaction                79        54     16    11    160
Error Rate                         64        78      8     1    151
Regulatory Compliance              69        64     14     3    150
Training Effectiveness             81        15     13    18    129
Wages & Compensation               70        25     22     6    123
Team Performance                   74        16     21     9    121
Automation Exposure                41        48     19     9    120
Job Displacement                   11        71     16     1     99
Developer Productivity             71        14      9     3     98
Hiring & Recruitment               49         7      8     3     67
Social Protection                  26        14      8     2     50
Creative Output                    26        14      6     2     49
Skill Obsolescence                  5        37      5     1     48
Labor Share of Income              12        13     12     -     37
Worker Turnover                    11        12      3     -     26
Industry                            1         -      -     -      1
Active filter: Innovation
Claim: iDaVIE accelerates inspection-driven parts of astronomy workflows (e.g., mask refinement, verification).
Evidence: Reported use cases where iDaVIE was used to refine masks and verify sources in real datasets; no measured time-per-task or throughput statistics provided.
Confidence: medium · Direction: positive · Source: iDaVIE v1.0: A virtual reality tool for interactive analysis... · Outcome: inspection throughput (time per cube inspected; masks corrected per hour)

Claim: iDaVIE has already been integrated into real pipelines (MeerKAT, ASKAP, APERTIF) and used to improve quality control, refine detection masks, and identify new sources.
Evidence: Author statement of integration and use cases citing verification of HI data cubes from MeerKAT, ASKAP and APERTIF; no quantitative deployment counts or independent validation provided in the text.
Confidence: medium · Direction: positive · Source: iDaVIE v1.0: A virtual reality tool for interactive analysis... · Outcome: integration into operational data-reduction/verification workflows; effects on Q...
Claim: There is a need for policies supporting workforce transitions (retraining, portability of skills) and safety/regulation for embodied agents operating in public spaces.
Evidence: Policy recommendation grounded in anticipated labor and safety risks; proposed but not empirically evaluated.
Confidence: medium · Direction: positive · Source: Why AI systems don't learn and what to do about it: Lessons ... · Outcome: policy adoption; retraining program coverage; safety/regulatory frameworks imple...

Claim: Benchmarks and tasks that mix observation and intervention (imitation with sparse feedback, active imitation, transfer under domain shift, continual learning streams) are required to evaluate the architecture.
Evidence: Proposal for evaluation tasks and benchmarks; not empirically validated in the paper.
Confidence: medium · Direction: positive · Source: Why AI systems don't learn and what to do about it: Lessons ... · Outcome: benchmark performance on mixed observation-intervention tasks

Claim: Embodied robotics experiments are necessary to evaluate real-world constraints such as sample efficiency, physical affordances, and motor learning.
Evidence: Methodological recommendation recognizing simulation-to-real gaps; no experiments reported.
Confidence: medium · Direction: positive · Source: Why AI systems don't learn and what to do about it: Lessons ... · Outcome: sample efficiency and performance in real-world embodied tasks

Claim: Simulated environments (procedural, nonstationary), multi-agent social domains, and open-world 3D simulators are appropriate for scalable iteration to test the proposed architecture.
Evidence: Methodological recommendation and suggested experimental approaches; not tested in the paper.
Confidence: medium · Direction: positive · Source: Why AI systems don't learn and what to do about it: Lessons ... · Outcome: suitability and scalability of simulation platforms for architecture evaluation

Claim: Neuromodulatory systems and meta-decision circuits in animals provide analogies for implementing meta-control (M) in artificial systems.
Evidence: Neuroscience analogy cited to motivate architectural choices; not empirically instantiated in the paper.
Confidence: medium · Direction: positive · Source: Why AI systems don't learn and what to do about it: Lessons ... · Outcome: effectiveness of biologically inspired gating/plasticity mechanisms on learning ...

Claim: Developmental trajectories can scaffold gradual competence (from observation to exploratory action) and should be reflected in training curricula.
Evidence: Argument from developmental biology and learning theory; proposed as a design principle rather than empirically tested here.
Confidence: medium · Direction: positive · Source: Why AI systems don't learn and what to do about it: Lessons ... · Outcome: learning progression speed; final competence given staged curricula

Claim: Evolution supplies inductive biases and slow structural priors that can be leveraged in artificial learners.
Evidence: Biological analogy and theoretical suggestion; no empirical experiments presented to quantify the effect in AI systems.
Confidence: medium · Direction: positive · Source: Why AI systems don't learn and what to do about it: Lessons ... · Outcome: effect of structural priors on learning speed and generalization
Claim: LLMs are more likely to complement human tacit skills than to replace explicit rule‑following jobs; value accrues to workers and firms that integrate model outputs with human judgment and tacit expertise.
Evidence: Labor‑economics style argument and theoretical reasoning; no empirical labor market analysis provided.
Confidence: medium · Direction: positive · Source: Why the Valuable Capabilities of LLMs Are Precisely the Unex... · Outcome: complementarity vs substitution of human labor (especially tacit-skill jobs)

Claim: Commoditization via rule extraction is limited; firms that can harness and deploy tacit LLM capabilities will retain economic rents.
Evidence: Theoretical economic argument based on non‑rule‑encodability; no empirical firm‑level data included.
Confidence: medium · Direction: positive · Source: Why the Valuable Capabilities of LLMs Are Precisely the Unex... · Outcome: ability to commoditize/replicate LLM capabilities via rule extraction

Claim: The highest‑value attributes of LLMs may be inherently non‑decomposable into simple, auditable rules, which increases the value of proprietary, black‑box models and strengthens economies of scale and scope for large model providers.
Evidence: Economic reasoning and theoretical implications drawn from the central thesis; no empirical market analyses provided.
Confidence: medium · Direction: positive · Source: Why the Valuable Capabilities of LLMs Are Precisely the Unex... · Outcome: value capture by model providers (proprietary rents/economies of scale)

Claim: Some LLM capabilities are tacit, practice‑derived, or 'insight'‑like, akin to the Chinese concept of Wu (sudden insight through practiced skill).
Evidence: Philosophical framing and analogy to the concept of tacit knowledge (Wu); argumentative rather than empirical support.
Confidence: medium · Direction: positive · Source: Why the Valuable Capabilities of LLMs Are Precisely the Unex... · Outcome: characterization of LLM competence as tacit/insight-like

Claim: The economically valuable capabilities of large language models are precisely those that cannot be fully encoded as a complete, human‑readable set of discrete rules.
Evidence: Formal, conceptual argument (proof by contradiction) plus qualitative historical case analysis comparing expert systems and LLMs; no new empirical datasets or experiments reported.
Confidence: medium · Direction: positive · Source: Why the Valuable Capabilities of LLMs Are Precisely the Unex... · Outcome: economic value / capability of LLMs (degree of rule‑encodability vs tacitness)
Claim: The paper reports quantitative improvements (registration accuracy and reduced inter-object penetration) and demonstrates generalization gains of the multi-object approach on multiple datasets.
Evidence: Cross-dataset experiments and quantitative metrics reported in the paper comparing MOD to baselines, showing improved registration and reduced penetration as well as transfer/generalization performance across datasets.
Confidence: medium · Direction: positive · Source: MessyKitchens: Contact-rich object-level 3D scene reconstruc... · Outcome: registration accuracy; inter-object penetration; cross-dataset generalization pe...

Claim: The dataset and MOD produce far less inter-object penetration than prior datasets and single-object methods, with consistent improvements demonstrated across three benchmarks.
Evidence: Reported empirical comparisons in the paper measuring inter-object penetration and showing substantially lower penetration for the proposed dataset+method relative to alternatives; experiments run on three benchmarks as stated in the paper.
Confidence: medium · Direction: positive · Source: MessyKitchens: Contact-rich object-level 3D scene reconstruc... · Outcome: inter-object penetration metrics (e.g., penetration depth/volume, collision coun...

Claim: MOD consistently improves multi-object reconstruction quality across three datasets/benchmarks compared to state-of-the-art baselines.
Evidence: Experimental results presented across three datasets/benchmarks showing consistent improvements of MOD over SOTA baselines on multi-object reconstruction metrics. (The summary does not list the names of the three benchmarks or the per-benchmark metrics/numbers.)
Confidence: medium · Direction: positive · Source: MessyKitchens: Contact-rich object-level 3D scene reconstruc... · Outcome: multi-object reconstruction quality (aggregate metrics used in paper across thre...

Claim: The MessyKitchens dataset and MOD together yield materially better registration accuracy than prior datasets and single-object methods.
Evidence: Quantitative evaluations in the paper report improved registration accuracy when using MessyKitchens and/or MOD relative to prior datasets and methods; comparisons performed across benchmarks. (Exact numeric gains and sample sizes not included in the provided summary.)
Confidence: medium · Direction: positive · Source: MessyKitchens: Contact-rich object-level 3D scene reconstruc... · Outcome: registration accuracy (pose alignment / object registration error metrics)

Claim: MOD (built on SAM 3D) produces fewer inter-object penetrations and more physically plausible object configurations than single-object monocular methods.
Evidence: Empirical evaluation reported in the paper comparing MOD against single-object baselines (including SAM 3D) on inter-object penetration metrics; results show reductions in measured penetrations. (Specific numeric reductions and dataset sizes are not provided in the supplied summary.)
Confidence: medium · Direction: positive · Source: MessyKitchens: Contact-rich object-level 3D scene reconstruc... · Outcome: inter-object penetration (penetration depth/volume or similar metric indicating ...
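The penetration metrics referenced above are not fully specified in the summaries; as an illustration only, a scene-level penetration score can be sketched with a sphere approximation of each object (the function names and the sphere simplification are assumptions here, not the paper's method, which operates on reconstructed meshes):

```python
import math

def penetration_depth(c1, r1, c2, r2):
    """Penetration depth of two spheres: overlap along the center line (0 if disjoint)."""
    d = math.dist(c1, c2)
    return max(0.0, (r1 + r2) - d)

def total_penetration(objects):
    """Sum pairwise penetration depths over a scene of (center, radius) objects."""
    total = 0.0
    for i in range(len(objects)):
        for j in range(i + 1, len(objects)):
            (c1, r1), (c2, r2) = objects[i], objects[j]
            total += penetration_depth(c1, r1, c2, r2)
    return total

# Two unit spheres whose centers are 1.5 apart overlap by 0.5.
scene = [((0.0, 0.0, 0.0), 1.0), ((1.5, 0.0, 0.0), 1.0)]
assert abs(total_penetration(scene) - 0.5) < 1e-9
```

A lower aggregate score indicates a more physically plausible configuration, which is the direction of improvement the MOD results report.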
Claim: Adoption will shift labor demand toward expertise in deterministic capture/replay tooling, trace analytics, and integration automation.
Evidence: Economic/organizational implication discussed in the summary; no employment-data analysis provided; stated as an expected change in skill demand.
Confidence: medium · Direction: positive · Source: ODIN-Based CPU-GPU Architecture with Replay-Driven Simulatio... · Outcome: change in required engineering skill sets and labor demand

Claim: The approach improves utilization and ROI of expensive emulation/simulation resources by enabling reuse of deterministic traces across platforms.
Evidence: Implication drawn from being able to replay identical traces on both simulator and emulator; no direct financial ROI calculation or utilization metrics provided in the summary.
Confidence: medium · Direction: positive · Source: ODIN-Based CPU-GPU Architecture with Replay-Driven Simulatio... · Outcome: emulation/simulation resource utilization and implied ROI (qualitative)

Claim: Using replay-driven validation markedly shortens integration and debug cycles for the demonstrated chiplet subsystem, enabling end-to-end system boot and workload execution within a single quarter.
Evidence: Reported outcome for the ODIN SoC building block: authors state they were able to reach full system boot and run workloads within one quarter of integration using the methodology. (Single-case timeline reported; no control/comparison group or statistical analysis provided.)
Confidence: medium · Direction: positive · Source: ODIN-Based CPU-GPU Architecture with Replay-Driven Simulatio... · Outcome: integration cycle time (time to end-to-end boot and workload execution, measured...

Claim: Replay-driven validation made previously hard-to-reproduce interactions and bugs deterministic and repeatable at system level, enabling more focused and efficient debug.
Evidence: Authors report that deterministic capture/replay converted non-deterministic protocol interactions and transient bugs into repeatable traces that could be inspected and debugged; examples include complex GPU workloads and protocol sequences reproduced end-to-end. (Qualitative/process-level evidence from the demonstrator; no numerical bug-count reduction provided.)
Confidence: medium · Direction: positive · Source: ODIN-Based CPU-GPU Architecture with Replay-Driven Simulatio... · Outcome: repeatability/determinism of intermittent interactions and bugs; debug focus/eff...

Claim: A replay-driven validation methodology using deterministic waveform capture and replay from a single design database enables reliable, repeatable system-level reproduction of complex GPU workloads and protocol sequences for tightly coupled CPU–GPU chiplet subsystems.
Evidence: Applied to a demonstrator SoC building block (ODIN chiplet architecture) integrating a CPU subsystem, multiple Intel Xe GPU cores, and a configurable NoC; deterministic waveform capture during execution and deterministic replay of those waveforms across targets was performed; the same design database was used to manage captures, traces, and replay sessions. (No large-sample statistical evaluation reported; demonstration limited to the described system.)
Confidence: medium · Direction: positive · Source: ODIN-Based CPU-GPU Architecture with Replay-Driven Simulatio... · Outcome: system-level reproducibility of GPU workloads and inter-chiplet protocol sequenc...
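The ODIN tooling itself is not shown in the summaries; the capture/replay pattern it depends on can be sketched in miniature (every name below is a hypothetical stand-in, not the actual Intel toolchain): record nondeterministic events into a trace once, then feed the trace back so every rerun observes identical behavior.

```python
import random

class Recorder:
    """Capture nondeterministic events (here: random stall lengths) into a trace."""
    def __init__(self, seed=None):
        self.trace = []
        self._rng = random.Random(seed)

    def sample_stall(self):
        value = self._rng.randint(0, 7)   # nondeterministic at capture time
        self.trace.append(value)
        return value

class Replayer:
    """Feed a captured trace back so every rerun observes identical events."""
    def __init__(self, trace):
        self._events = iter(trace)

    def sample_stall(self):
        return next(self._events)         # deterministic on every replay

def run_workload(source, steps=4):
    # Toy 'workload' whose behavior depends on the sampled stalls.
    return [source.sample_stall() for _ in range(steps)]

recorder = Recorder()
captured = run_workload(recorder)                  # live, nondeterministic run
replayed = run_workload(Replayer(recorder.trace))  # repeatable reproduction
assert replayed == captured
```

The same captured trace can be replayed on different targets (here, any object exposing `sample_stall`), which is the property the claims above attribute to cross-platform trace reuse.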
Claim: Standardized runtime governance frameworks could lower per-deployment compliance engineering costs and increase diffusion of agentic systems.
Evidence: Theoretical argument that standardization reduces transaction/engineering costs; suggested market dynamics; no empirical implementation evidence.
Confidence: medium · Direction: positive · Source: Runtime Governance for AI Agents: Policies on Paths · Outcome: per-deployment compliance cost and diffusion rate (adoption)

Claim: A market will develop for third-party governance tools, auditors, and insurers providing policy evaluators, risk calibration, and certification services.
Evidence: Economic argument and analogy to existing markets (governance-as-a-service, insurance); no empirical evidence presented.
Confidence: medium · Direction: positive · Source: Runtime Governance for AI Agents: Policies on Paths · Outcome: emergence of third-party governance services (market development; presence/size ...
Claim: The authors synthesized complex three-port pixelated output combiners that extend efficiency over back-off using fully symmetrical device implementations.
Evidence: Design novelty claimed in the paper; the resulting three-port pixelated combiner layouts were included in the optimization output and used in prototypes. Prototypes used symmetrical device implementations.
Confidence: medium · Direction: positive · Source: Deep Learning-Driven Black-Box Doherty Power Amplifier with ... · Outcome: combiner topology/layout complexity and achieved efficiency across back-off

Claim: The CNN EM surrogate enables orders-of-magnitude faster evaluations than full-wave EM simulation, enabling global search of the discrete pixel design space.
Evidence: Authors state the surrogate provides orders-of-magnitude speedups compared to full-wave EM, enabling global search; no quantitative speedup numbers or benchmarking details are provided in the provided summary.
Confidence: medium · Direction: positive · Source: Deep Learning-Driven Black-Box Doherty Power Amplifier with ... · Outcome: evaluation time per candidate layout (surrogate inference time vs full-wave EM s...

Claim: A deep convolutional neural network (CNN) trained as an electromagnetic (EM) surrogate can predict S-parameters of pixelated passive networks quickly and with sufficient accuracy to be used inside an optimizer loop.
Evidence: Paper reports development and use of a CNN surrogate mapping pixelated network layouts to S-parameters; the surrogate was embedded in the optimizer and used to evaluate candidate layouts during global search. (Note: exact training dataset size, architecture, and error metrics are not provided in the summary.)
Confidence: medium · Direction: positive · Source: Deep Learning-Driven Black-Box Doherty Power Amplifier with ... · Outcome: S-parameter prediction accuracy and inference runtime sufficient for optimizer u...
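The surrogate-in-the-loop idea can be sketched as follows. A placeholder scoring function stands in for the trained CNN, and random search stands in for the paper's global optimizer; every name and the toy objective are illustrative assumptions, not the paper's implementation:

```python
import random

def surrogate_score(layout):
    """Placeholder for the CNN EM surrogate: maps a binary pixel layout to a
    scalar figure of merit that would, in the real system, be derived from
    predicted S-parameters. Toy objective here: reward ~50% metal fill."""
    cells = len(layout) * len(layout[0])
    fill = sum(sum(row) for row in layout) / cells
    return -abs(fill - 0.5)

def random_layout(rng, n=8):
    """A random candidate in the discrete pixel design space."""
    return [[rng.randint(0, 1) for _ in range(n)] for _ in range(n)]

def global_search(iters=500, seed=0):
    """Random global search, feasible only because each surrogate call is
    orders of magnitude cheaper than a full-wave EM solve."""
    rng = random.Random(seed)
    best_layout, best_score = None, float("-inf")
    for _ in range(iters):
        cand = random_layout(rng)
        score = surrogate_score(cand)
        if score > best_score:
            best_layout, best_score = cand, score
    return best_layout, best_score

layout, score = global_search()
# score approaches 0.0 as the search finds layouts near 50% fill
```

Swapping the placeholder for a trained model and the random search for a smarter global optimizer preserves the structure: the surrogate is only queried, never re-simulated, inside the loop.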
Claim: Adopting this approach shifts required skills and organizational roles away from lengthy parametric modeling toward data engineering, controller integration, and monitoring.
Evidence: Authors' discussion of practical/organizational implications (qualitative); argument based on removal of the model-building step and increased emphasis on data infrastructure and online operations.
Confidence: medium · Direction: positive · Source: Data-driven generalized perimeter control: Zürich case study · Outcome: changes in required skills/organizational roles (qualitative workforce compositi...

Claim: DeePC outperforms baseline controllers (e.g., fixed-time and standard adaptive schemes) in the simulated experiments.
Evidence: Comparative simulation experiments reported in the paper where DeePC-controlled signals achieve superior system-level metrics relative to baseline controllers.
Confidence: medium · Direction: positive · Source: Data-driven generalized perimeter control: Zürich case study · Outcome: system-level outcomes (total travel time, CO2 emissions) compared across control...

Claim: The method was validated on a very large, high-fidelity microscopic closed-loop simulator of Zürich; the paper reports this as the largest such closed-loop urban-traffic simulation in the literature.
Evidence: Authors' description of the experimental environment: city-scale microscopic simulator of Zürich with the controller in the loop; explicit statement in the paper claiming it is the largest closed-loop urban-traffic simulation reported in the literature.
Confidence: medium · Direction: positive · Source: Data-driven generalized perimeter control: Zürich case study · Outcome: scale of validation (city-scale microscopic closed-loop simulation)

Claim: Regularization and the use of measured Hankel/data matrices make the method more robust to measurement noise and limited data.
Evidence: Method description includes regularization terms in the DeePC optimization and use of Hankel matrices built from measured trajectories; simulation experiments show continued performance under noisy / limited-data conditions.
Confidence: medium · Direction: positive · Source: Data-driven generalized perimeter control: Zürich case study · Outcome: robustness to measurement noise and limited data (performance degradation metric...
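The Hankel construction at the core of DeePC can be sketched directly. This is a generic illustration, not the paper's implementation: a real DeePC controller stacks past/future input and output Hankel blocks built from measured trajectories and solves a regularized optimization over them, which is where the noise-robustness terms enter.

```python
def hankel(signal, depth):
    """Block-Hankel matrix of a measured trajectory: column j holds the window
    signal[j : j + depth]. DeePC stacks such matrices built from past
    input/output data in place of a parametric model."""
    cols = len(signal) - depth + 1
    return [[signal[j + i] for j in range(cols)] for i in range(depth)]

# Toy measured input trajectory.
u = [1.0, 2.0, 3.0, 4.0, 5.0]
H = hankel(u, 3)
# H == [[1.0, 2.0, 3.0],
#       [2.0, 3.0, 4.0],
#       [3.0, 4.0, 5.0]]
```

Because the matrix is built directly from recorded data, no parametric model identification step is needed, which is the methodological point the claims above rest on.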
Claim: DeePC handles sparse or limited traffic measurements better than many machine-learning methods.
Evidence: Claims in the paper supported by experiments and methodological notes: use of Hankel structures and regularization in DeePC to operate with limited/sparse sensing; comparative statements versus generic ML methods (qualitative and simulation evidence).
Confidence: medium · Direction: positive · Source: Data-driven generalized perimeter control: Zürich case study · Outcome: controller performance (e.g., travel time, emissions) under sparse sensing / lim...

Claim: The DeePC-based approach avoids the expensive, time-consuming model-building step required by model-based control methods.
Evidence: Methodological argument and demonstration that the controller uses historical input–output trajectories directly rather than requiring separate parametric model identification; supported by a simulation implementation that bypasses model identification.
Confidence: medium · Direction: positive · Source: Data-driven generalized perimeter control: Zürich case study · Outcome: need for explicit parametric model identification (development time/effort proxy...
Claim: The model provides multi-mode reasoning: non-reasoning, Italian/English reasoning, and a 'turbo-reasoning' concise bullet-point mode intended for real‑time use cases.
Evidence: Model functionality described by authors: the paper documents multiple operating modes, including a concise 'turbo' mode for low-latency outputs. The summary lists these modes but does not provide quantitative latency/quality tradeoff metrics.
Confidence: medium · Direction: positive · Source: EngGPT2: Sovereign, Efficient and Open Intelligence · Outcome: existence of distinct inference modes and their intended behavioral differences ...

Claim: EngGPT2 uses far less training data (and, by implication, training compute) than some large models, reported as about 1/10–1/6 of the data used by larger dense models (e.g., vs. Qwen3 or Llama3).
Evidence: Comparison of reported token counts: EngGPT2 at ~2.5T tokens vs. stated baselines (Qwen3 36T, Llama3 15T); the authors assert a training-data reduction in the 1/10–1/6 range, though the stated counts work out to 1/6 vs. Llama3 and closer to 1/14 vs. Qwen3. The paper reports token counts but does not provide matched compute/FLOP or training-time comparisons.
Confidence: medium · Direction: positive · Source: EngGPT2: Sovereign, Efficient and Open Intelligence · Outcome: relative training-data volume (tokens) compared to named baseline models

Claim: On benchmarks (MMLU-Pro, GSM8K, IFEval, HumanEval) EngGPT2 matches or is comparable to dense models in the 8B–16B parameter range.
Evidence: Evaluation reported on the named benchmarks; the paper states comparable benchmark performance to dense 8B–16B models. The summary does not include exact scores, standard deviations, prompt engineering details, dataset overlap checks, or sample sizes per benchmark.
Confidence: medium · Direction: positive · Source: EngGPT2: Sovereign, Efficient and Open Intelligence · Outcome: benchmark performance metrics (accuracy/score) on MMLU-Pro, GSM8K, IFEval, Human...
Claim: Model-merging and targeted continual pre-training were used to amplify limited compute and improve performance without full from-scratch pre-training.
Evidence: Paper describes using model-merging and targeted continual pre-training to leverage existing strong weights and inject language/domain data efficiently.
Confidence: medium · Direction: positive · Source: Fanar 2.0: Arabic Generative AI Stack · Outcome: performance improvement attributable to model-merging/continual pre-training met...

Claim: Prioritizing data quality over raw scale (a curated 120B tokens instead of maximizing token counts) produced better Arabic and cross-lingual performance for the resource budget used.
Evidence: Paper emphasizes a 'data quality over brute-force scale' strategy and reports benchmark improvements from the curated corpus and targeted training; the causal link is asserted via these results.
Confidence: medium · Direction: positive · Source: Fanar 2.0: Arabic Generative AI Stack · Outcome: model performance relative to data curation strategy

Claim: Those benchmark gains were achieved using roughly 1/8th the pre-training tokens of Fanar 1.0 (i.e., about 8× fewer pre-training tokens).
Evidence: Paper states the approach used approximately 1/8th the pre-training tokens of Fanar 1.0 while improving benchmarks; exact token counts for Fanar 1.0 are not provided in the summary.
Confidence: medium · Direction: positive · Source: Fanar 2.0: Arabic Generative AI Stack · Outcome: relative pre-training token count (Fanar 2.0 vs Fanar 1.0)

Claim: Fanar-27B reports benchmark gains relative to Fanar 1.0: Arabic knowledge +9.1 points, language ability +7.3 points, dialect handling +3.5 points, and English capability +7.6 points.
Evidence: Paper reports these specific numeric benchmark improvements across Arabic knowledge, general language ability, dialects, and English capability; evaluation suite names, sample sizes, and statistical details are not specified in the summary.
Confidence: medium · Direction: positive · Source: Fanar 2.0: Arabic Generative AI Stack · Outcome: benchmark scores (Arabic knowledge, language ability, dialect handling, English ...
Claim: Pretraining corpora must be broadened across temporal scales and domains (including high-frequency domains) to improve TSFM generalization.
Evidence: Recommendation follows from observed poor transfer and fine-tuning results; the paper argues for inclusion of high-frequency, domain-diverse data in pretraining. This is prescriptive and driven by the benchmarking observations rather than an experiment demonstrating improved outcomes after broadened pretraining.
Confidence: medium · Direction: positive · Source: Bridging the High-Frequency Data Gap: A Millisecond-Resoluti... · Outcome: expected improvement in model generalization (forecasting performance) if pretra...
Claim: FederatedFactory recovers centralized-model performance without pooling raw data or relying on a central dataset, thereby weakening dependence on foundation-model vendors and their pretrained priors.
Evidence: Empirical claims that federated results match centralized upper bounds on tested datasets and methodological statement that no external pretrained priors are required; the economic interpretation is drawn from these empirical and methodological properties.
Confidence: medium · Direction: positive · Source: FederatedFactory: Generative One-Shot Learning for Extremely... · Outcome: performance gap vs. centralized model; dependence on external pretrained priors

Claim: FederatedFactory enables exact modular unlearning: deterministic deletion of a client's generative module exactly removes that client's contribution to synthesized datasets.
Evidence: Design claim in the paper: generative modules are modular assets, and deleting a module deterministically prevents its use when synthesizing the balanced dataset; the paper asserts exact modular unlearning and reports it as a property of the method. (No formal auditing metrics or proofs provided in the summary.)
Confidence: medium · Direction: positive · Source: FederatedFactory: Generative One-Shot Learning for Extremely... · Outcome: unlearning correctness (module-level removal effect on synthesized dataset compo...
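The deletion-based unlearning property can be sketched generically: if the synthesized dataset is built only by sampling retained modules, deleting a module removes that client's contribution by construction. The module names and lambda generators below are hypothetical stand-ins for trained per-client generative models.

```python
def synthesize_balanced(modules, per_module=100):
    """Assemble a synthetic training set by sampling each retained client
    module equally. Deleting a module before this step removes that client's
    contribution entirely (the 'exact modular unlearning' property)."""
    data = []
    for client_id, generator in modules.items():
        data.extend((generator(i), client_id) for i in range(per_module))
    return data

# Hypothetical per-client generative modules (stand-ins for trained generators).
modules = {
    "hospital_a": lambda i: f"a-sample-{i}",
    "hospital_b": lambda i: f"b-sample-{i}",
}

full = synthesize_balanced(modules, per_module=3)
del modules["hospital_b"]                       # deterministic module deletion
after = synthesize_balanced(modules, per_module=3)
assert all(cid != "hospital_b" for _, cid in after)
```

Downstream discriminative models would then be retrained on the regenerated dataset, so the deleted client influences neither the data nor the resulting weights.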
Claim: Downstream discriminative models trained on the synthesized, balanced datasets avoid the conflicting optimization trajectories that cause collapse in standard federated learning under mutually exclusive labels.
Evidence: Methodological reasoning (balanced synthesized training data removes label heterogeneity across clients) plus empirical demonstrations where standard FL collapses under mutual exclusivity (e.g., the CIFAR baseline) and FederatedFactory recovers performance.
Confidence: medium · Direction: positive · Source: FederatedFactory: Generative One-Shot Learning for Extremely... · Outcome: optimization stability / avoidance of collapsed training (measured indirectly vi...

Claim: Across diverse medical imagery benchmarks (including MedMNIST and ISIC2019), FederatedFactory matches centralized upper-bound performance.
Evidence: Empirical comparisons reported in the paper: FederatedFactory results are compared against a centralized upper bound on the same datasets and reported to be matched. (Details of which datasets and exact numeric comparisons beyond ISIC2019 are not enumerated in the summary.)
Confidence: medium · Direction: positive · Source: FederatedFactory: Generative One-Shot Learning for Extremely... · Outcome: classification performance vs. centralized upper bound (accuracy/AUROC)

Claim: FederatedFactory restores ISIC2019 performance to AUROC = 90.57% under the tested regime.
Evidence: Empirical experiment reported on ISIC2019 (dermatology images); the paper reports an AUROC value of 90.57% for FederatedFactory. (Exact train/test splits and client partitioning not specified in the summary.)

Claim: FederatedFactory operates without relying on external pretrained foundation models (zero-dependency).
Evidence: Paper explicitly states the framework does not depend on pretrained foundation models; experiments are reported without using external pretraining (datasets: MedMNIST suite, ISIC2019, CIFAR-10).
Confidence: medium · Direction: positive · Source: FederatedFactory: Generative One-Shot Learning for Extremely... · Outcome: dependency on pretrained models (binary: uses / does not use)

Claim: By synthesizing class-balanced datasets locally from exchanged generative modules, FederatedFactory eliminates gradient conflict among clients' discriminative updates.
Evidence: Mechanistic argument in the paper (training discriminative models on locally synthesized, balanced data avoids heterogeneity-induced conflicting gradients) supported by empirical recovery of performance in experiments where baselines collapse under label heterogeneity.
Confidence: medium · Direction: positive · Source: FederatedFactory: Generative One-Shot Learning for Extremely... · Outcome: reduction/elimination of gradient conflict (inferred via improved downstream per...