The Commonplace

Evidence (2215 claims)

Adoption: 5126 claims
Productivity: 4409 claims
Governance: 4049 claims
Human-AI Collaboration: 2954 claims
Labor Markets: 2432 claims
Org Design: 2273 claims
Innovation: 2215 claims
Skills & Training: 1902 claims
Inequality: 1286 claims

Evidence Matrix

Claim counts by outcome category and direction of finding.

Outcome                     Positive  Negative  Mixed  Null  Total
Other                            369       105     58   432    972
Governance & Regulation          365       171    113    54    713
Research Productivity            229        95     33   294    655
Organizational Efficiency        354        82     58    34    531
Technology Adoption Rate         277       115     63    27    486
Firm Productivity                273        33     68    10    389
AI Safety & Ethics               112       177     43    24    358
Output Quality                   228        61     23    25    337
Market Structure                 105       118     81    14    323
Decision Quality                 154        68     33    17    275
Employment Level                  68        32     74     8    184
Fiscal & Macroeconomic            74        52     32    21    183
Skill Acquisition                 85        31     38     9    163
Firm Revenue                      96        30     22           148
Innovation Output                100        11     20    11    143
Consumer Welfare                  66        29     35     7    137
Regulatory Compliance             51        61     13     3    128
Inequality Measures               24        66     31     4    125
Task Allocation                   64         6     28     6    104
Error Rate                        42        47      6            95
Training Effectiveness            55        12     10    16     93
Worker Satisfaction               42        32     11     6     91
Task Completion Time              71         5      3     1     80
Wages & Compensation              38        13     19     4     74
Team Performance                  41         8     15     7     72
Hiring & Recruitment              39         4      6     3     52
Automation Exposure               17        15      9     5     46
Job Displacement                   5        28     12            45
Social Protection                 18         8      6     1     33
Developer Productivity            25         1      2     1     29
Worker Turnover                   10        12      3            25
Creative Output                   15         5      3     1     24
Skill Obsolescence                 3        18      2            23
Labor Share of Income              7         4      9            20
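As a sanity check on the matrix above, the direction counts in a row can be tallied against the stated total; in some rows they do not sum exactly, which suggests claims with no listed direction. A minimal sketch using three rows transcribed from the table (illustrative only, not an official export format):

```python
# Tally claim counts by direction for a few matrix rows and compare the
# shown directions against the stated row total. A nonzero gap means some
# claims in that row have no listed direction.
rows = {
    "Firm Productivity":    {"positive": 273, "negative": 33, "mixed": 68, "null": 10, "total": 389},
    "Output Quality":       {"positive": 228, "negative": 61, "mixed": 23, "null": 25, "total": 337},
    "Task Completion Time": {"positive": 71,  "negative": 5,  "mixed": 3,  "null": 1,  "total": 80},
}

for outcome, counts in rows.items():
    shown = sum(v for k, v in counts.items() if k != "total")
    gap = counts["total"] - shown
    print(f"{outcome}: {shown} shown, {gap} with no listed direction")
```

For instance, Firm Productivity shows 384 claims across the four directions against a stated total of 389, leaving 5 unaccounted for.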
Active filter: Innovation
China's export competitiveness in digital services depends critically on participation in international rule‑making, stronger platform infrastructure, targeted support for firms going global, and improved data governance.
Synthesis of reviewed studies, institutional diagnosis, and comparative analysis (interpretive policy conclusion rather than empirically quantified effect sizes).
medium positive Analysis of Digital Services Trade and Export Competitivenes... China's digital services export competitiveness
Digital services have become a key indicator of a country's export competitiveness because they reshape global trade structure and labor specialization within global value chains.
Review of theoretical mechanisms and empirical literature in the integrative review; comparative policy analysis (qualitative synthesis rather than original quantification).
medium positive Analysis of Digital Services Trade and Export Competitivenes... export competitiveness; changes in trade structure and labor/task specialization
SlideFormer generalizes beyond a single GPU vendor (the design achieves high utilization on both NVIDIA and AMD GPUs).
Reported experiments and utilization measurements on both NVIDIA (RTX 4090) and AMD GPUs showing sustained >95% peak performance, implying cross-vendor applicability. The summary does not specify which AMD models or the breadth of tested kernels.
medium positive An Efficient Heterogeneous Co-Design for Fine-Tuning on a Si... sustained GPU utilization across different GPU vendors
Custom Triton kernels and advanced I/O integration remove key bottlenecks in single-GPU fine-tuning pipelines and contribute to the observed throughput gains.
Paper reports the use of custom Triton kernels for performance-critical primitives and improved I/O integration; throughput gains (1.40×–6.27×) are attributed in part to these optimizations. The summary does not isolate ablation results quantifying each optimization's contribution.
medium positive An Efficient Heterogeneous Co-Design for Fine-Tuning on a Si... throughput and end-to-end latency of fine-tuning pipeline
Heterogeneous memory management (multi-tier placement across GPU, CPU, and storage) materially reduces peak on-device memory requirements.
Authors describe an efficient memory layout and placement strategy across GPU, host RAM, and storage tiers and report lowered peak device memory use (≈2× reduction). The summary does not include low-level placement parameters or traces.
medium positive An Efficient Heterogeneous Co-Design for Fine-Tuning on a Si... peak on-device (GPU) memory usage and host memory usage
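The multi-tier placement idea can be sketched as a simple greedy policy: keep the most frequently accessed state in the fastest tier with free capacity and spill the rest toward host RAM and storage. Everything below (tier names, capacities, the `place` helper) is a hypothetical illustration, not SlideFormer's actual placement algorithm:

```python
# Greedy multi-tier placement sketch. Tensors are sorted hottest-first and
# assigned to the fastest tier that still has room; anything too large for
# the upper tiers spills down toward storage. Purely illustrative.
def place(tensors, tiers):
    """tensors: list of (name, size_gb, access_freq); tiers: list of (tier_name, capacity_gb), fastest first."""
    free = {name: cap for name, cap in tiers}
    order = [name for name, _ in tiers]
    placement = {}
    for name, size, _freq in sorted(tensors, key=lambda t: -t[2]):
        for tier in order:
            if free[tier] >= size:
                free[tier] -= size
                placement[name] = tier
                break
        else:
            raise MemoryError(f"no tier can hold {name}")
    return placement

tiers = [("gpu", 24), ("cpu", 128), ("nvme", 2000)]   # capacities in GB (hypothetical)
tensors = [("activations", 10, 100), ("weights", 60, 50), ("optimizer", 120, 10)]
print(place(tensors, tiers))  # hot activations stay on GPU; large, cold state spills down
```

The point of the sketch is the shape of the trade-off: peak on-device memory is bounded by the GPU tier's capacity, while colder state rides the slower tiers.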
SlideFormer sustains >95% peak performance (high utilization) on both NVIDIA and AMD GPUs.
Reported sustained peak utilization measurements on experiments run on NVIDIA (e.g., RTX 4090) and AMD GPUs; the summary states >95% peak performance but does not give per-workload/utilization measurement methodology.
medium positive An Efficient Heterogeneous Co-Design for Fine-Tuning on a Si... sustained peak GPU utilization / percent of theoretical peak performance
SlideFormer supports up to 8× larger batch sizes and up to 6× larger models on the same GPU relative to prior single-GPU baselines.
Reported comparisons to prior single-GPU baselines measuring achievable batch size and model-size capacity on the same GPU; exact baselines, workloads, and experimental configurations are not detailed in the summary.
medium positive An Efficient Heterogeneous Co-Design for Fine-Tuning on a Si... achievable batch size and maximum model size on a given GPU
SlideFormer reduces peak CPU and GPU memory usage by approximately 2× (roughly halving memory requirements).
Authors report peak memory measurements showing about a 2× reduction in both GPU and CPU memory compared to baselines; memory accounting method and baselines are not fully specified in the summary.
medium positive An Efficient Heterogeneous Co-Design for Fine-Tuning on a Si... peak GPU memory usage and peak CPU (host) memory usage
SlideFormer achieves 1.40×–6.27× higher throughput versus baseline systems.
Quantitative evaluation comparing throughput (reported as tokens/sec or updates/sec) against state-of-the-art single-GPU and multi-GPU fine-tuning pipelines (baselines are unnamed in the summary). Measurements reported across single-GPU experiments (hardware includes RTX 4090 and AMD GPUs).
medium positive An Efficient Heterogeneous Co-Design for Fine-Tuning on a Si... throughput (tokens/sec or updates/sec)
SlideFormer enables fine-tuning very large LLMs (reported up to 123B+ parameters) on a single GPU (e.g., RTX 4090).
Authors report experiments and capability claims for single-GPU setups including an NVIDIA RTX 4090; model size stated as 123B+ in the paper summary. Details on exact model family, sequence length, or batch size used for the 123B+ claim are not enumerated in the summary.
medium positive An Efficient Heterogeneous Co-Design for Fine-Tuning on a Si... maximum model size (parameters) that can be fine-tuned on a single GPU
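A rough budget makes clear why multi-tier offloading is needed at that scale. The numbers below are generic assumptions (bf16 weights and gradients, two fp32 Adam moments, activations and fragmentation ignored), not the paper's accounting:

```python
# Back-of-envelope fine-tuning memory for a dense 123B-parameter model.
# Illustrative assumptions only: bf16 weights/gradients, fp32 Adam moments.
params = 123e9
weights_gb   = params * 2 / 1e9        # bf16 weights: 2 bytes/param
grads_gb     = params * 2 / 1e9        # bf16 gradients
optimizer_gb = params * 2 * 4 / 1e9    # two fp32 Adam moments: 8 bytes/param
total_gb = weights_gb + grads_gb + optimizer_gb
print(f"~{total_gb:.0f} GB of training state vs. 24 GB on an RTX 4090")  # ~1476 GB
```

Under these assumptions the training state alone is roughly 60× the card's memory, so nearly all of it must live off-device at any given moment.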
Clear agent identity and provenance simplify liability attribution and enable markets for certified components, attestation services, and compliance tooling.
Legal/economic reasoning about traceability and liability plus systems design suggestions; no legal case analysis or market data presented.
medium positive The Internet of Physical AI Agents: Interoperability, Longev... ease of liability attribution, size of markets for certification/attestation too...
Lifecycle service models (leasing, 'agent as a service', update/maintenance contracts) will become economically important to manage long‑lived physical assets with fast‑moving AI stacks.
Business model reasoning and analogy to service models in other capital‑intensive sectors; no market empirical study or business case analysis provided.
medium positive The Internet of Physical AI Agents: Interoperability, Longev... prevalence and economic importance of lifecycle service models
Observability and attestation reduce uncertainty for insurers and regulators, lowering risk premia and insurance costs for agent deployments.
Argument from information economics/insurance theory and analogy to fields where observability reduces asymmetric information; no empirical insurance cost data or pilot programs reported.
medium positive The Internet of Physical AI Agents: Interoperability, Longev... insurance premiums/risk premia; insurer uncertainty
Open interoperability standards and agent identities can lower entry barriers, increase competition, and accelerate complementary innovation.
Economic and policy reasoning referencing benefits of standards/open ecosystems; no empirical intervention or controlled comparison provided.
medium positive The Internet of Physical AI Agents: Interoperability, Longev... entry barriers, competition intensity, rate of complementary innovation
Design choices will shape capital intensity and replacement cycles; architectures that support upgradeability and modularity lower expected upgrade costs and stranded‑asset risk.
Economic reasoning and analogy to modular design benefits in other industries; conceptual argument without empirical capital‑allocation data or simulations.
medium positive The Internet of Physical AI Agents: Interoperability, Longev... expected upgrade cost, capital intensity, probability of stranded assets
Architectural components such as agentic identity and attestation, secure communication protocols, semantic layers and interchange formats, policy engines, and observability pipelines are necessary to enable safe, economic multi‑agent ecosystems.
Architectural blueprint proposed via conceptual systems design; justification by analogy to existing security/identity/semantic frameworks; no empirical testing reported.
medium positive The Internet of Physical AI Agents: Interoperability, Longev... presence/implementation of architectural components and resulting ecosystem safe...
Design principles — modularity, clear agentic identity, secure agent‑to‑agent communication, policy‑governed runtimes, semantic interoperability, and observability/governance frameworks — will mitigate the architectural risks identified.
Normative systems design proposition grounded in systems engineering reasoning and historical lessons; no experimental validation or deployment studies provided.
medium positive The Internet of Physical AI Agents: Interoperability, Longev... mitigation of interoperability, security, governance, and upgradeability risks
New capabilities (edge hardware, sensing, connectivity, and AI) now enable agents that not only sense/report but also perceive, reason, and act autonomously and cooperatively in real time.
Technological trend synthesis and systems reasoning; examples of mature edge hardware and advances in real‑time ML are used illustratively; no experimental validation provided.
medium positive The Internet of Physical AI Agents: Interoperability, Longev... capability of agents for real‑time perception, reasoning, autonomous action, and...
Treating evolution, trust, and interoperability as first‑class requirements (rather than afterthoughts) is essential to avoid costly lock‑in, premature ossification, fragmentation, and negative externalities observed with IoT.
Normative prescription motivated by historical/comparative analysis of Internet and IoT (qualitative examples of fragmentation and lock‑in); no controlled study or quantitative validation presented.
medium positive The Internet of Physical AI Agents: Interoperability, Longev... incidence of lock‑in, ossification, fragmentation, and negative externalities
The next phase of the Internet will be the "Internet of Physical AI Agents" — distributed, long-lived, embodied systems that perceive, reason, and act autonomously in real time.
Predictive/conceptual argument based on observed technological trends (advances in edge hardware, sensing, connectivity, and AI). Position paper with historical/comparative reasoning and illustrative examples; no primary empirical dataset or quantified projection.
medium positive The Internet of Physical AI Agents: Interoperability, Longev... emergence/adoption of embodied autonomous agent systems
Open-source orchestration and evaluation harnesses plus a self-contained evaluation pipeline improve reproducibility for the Speedrunning Track.
Paper claims and documents the release of orchestration and evaluation code and describes the self-contained pipeline designed for deterministic reproducible evaluation.
medium positive The PokeAgent Challenge: Competitive and Long-Context Learni... reproducibility capability via released code and self-contained pipelines
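Determinism of the kind the pipeline claims usually comes down to pinning every source of randomness to an explicit seed. A generic sketch of the pattern (a toy episode, not the challenge's actual harness):

```python
import random

def evaluate(episode_seed):
    """Toy 'episode' whose trajectory depends only on its seed."""
    rng = random.Random(episode_seed)   # private RNG: no hidden global state
    return [rng.randint(0, 100) for _ in range(5)]

# Re-running with the same seed reproduces the trajectory exactly,
# which is what makes an evaluation pipeline deterministic.
run_a = evaluate(1234)
run_b = evaluate(1234)
print(run_a == run_b)  # True
```

Using a private `random.Random` instance per episode (rather than the module-level RNG) keeps runs independent of execution order, a common prerequisite for reproducible evaluation.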
Version 1.0 marks integration into operational workflows and establishes a base for future capabilities.
Authors report that v1.0 has been used in verification and mask-refinement loops for real datasets (MeerKAT, ASKAP, APERTIF); no detailed deployment metrics provided.
medium positive iDaVIE v1.0: A virtual reality tool for interactive analysis... operational integration status of v1.0
Immersive inspection tools like iDaVIE are complements to automated ML pipelines by helping generate higher-quality labels and curated training examples.
Paper argues conceptual complementarity and cites iDaVIE's use for mask refinement and curated subcube export; no experimental comparison of label quality or downstream ML performance provided.
medium positive iDaVIE v1.0: A virtual reality tool for interactive analysis... label quality and availability of curated training examples
iDaVIE accelerates inspection-driven parts of astronomy workflows (e.g., mask refinement, verification).
Reported use cases where iDaVIE was used to refine masks and verify sources in real datasets; no measured time-per-task or throughput statistics provided.
medium positive iDaVIE v1.0: A virtual reality tool for interactive analysis... inspection throughput (time per cube inspected; masks corrected per hour)
iDaVIE has already been integrated into real pipelines (MeerKAT, ASKAP, APERTIF) and used to improve quality control, refine detection masks, and identify new sources.
Author statement of integration and use cases citing verification of HI data cubes from MeerKAT, ASKAP and APERTIF; no quantitative deployment counts or independent validation provided in the text.
medium positive iDaVIE v1.0: A virtual reality tool for interactive analysis... integration into operational data-reduction/verification workflows; effects on Q...
There is a need for policies supporting workforce transitions (retraining, portability of skills) and safety/regulation for embodied agents operating in public spaces.
Policy recommendation grounded in anticipated labor and safety risks; proposed but not empirically evaluated.
medium positive Why AI systems don't learn and what to do about it: Lessons ... policy adoption; retraining program coverage; safety/regulatory frameworks imple...
Benchmarks and tasks that mix observation and intervention (imitation with sparse feedback, active imitation, transfer under domain shift, continual learning streams) are required to evaluate the architecture.
Proposal for evaluation tasks and benchmarks; not empirically validated in the paper.
medium positive Why AI systems don't learn and what to do about it: Lessons ... benchmark performance on mixed observation-intervention tasks
Embodied robotics experiments are necessary to evaluate real-world constraints such as sample efficiency, physical affordances, and motor learning.
Methodological recommendation recognizing simulation-to-real gaps; no experiments reported.
medium positive Why AI systems don't learn and what to do about it: Lessons ... sample efficiency and performance in real-world embodied tasks
Simulated environments (procedural, nonstationary), multi-agent social domains, and open-world 3D simulators are appropriate for scalable iteration to test the proposed architecture.
Methodological recommendation and suggested experimental approaches; not tested in the paper.
medium positive Why AI systems don't learn and what to do about it: Lessons ... suitability and scalability of simulation platforms for architecture evaluation
Neuromodulatory systems and meta-decision circuits in animals provide analogies for implementing meta-control (M) in artificial systems.
Neuroscience analogy cited to motivate architectural choices; not empirically instantiated in the paper.
medium positive Why AI systems don't learn and what to do about it: Lessons ... effectiveness of biologically inspired gating/plasticity mechanisms on learning ...
Developmental trajectories can scaffold gradual competence (from observation to exploratory action) and should be reflected in training curricula.
Argument from developmental biology and learning theory; proposed as a design principle rather than empirically tested here.
medium positive Why AI systems don't learn and what to do about it: Lessons ... learning progression speed; final competence given staged curricula
Evolution supplies inductive biases and slow structural priors that can be leveraged in artificial learners.
Biological analogy and theoretical suggestion; no empirical experiments presented to quantify effect in AI systems.
medium positive Why AI systems don't learn and what to do about it: Lessons ... effect of structural priors on learning speed and generalization
LLMs are more likely to complement human tacit skills than to replace explicit rule‑following jobs; value accrues to workers and firms that integrate model outputs with human judgment and tacit expertise.
Labor‑economics style argument and theoretical reasoning; no empirical labor market analysis provided.
medium positive Why the Valuable Capabilities of LLMs Are Precisely the Unex... complementarity vs substitution of human labor (especially tacit-skill jobs)
Commoditization via rule extraction is limited; firms that can harness and deploy tacit LLM capabilities will retain economic rents.
Theoretical economic argument based on non‑rule‑encodability; no empirical firm‑level data included.
medium positive Why the Valuable Capabilities of LLMs Are Precisely the Unex... ability to commoditize/replicate LLM capabilities via rule extraction
The highest‑value attributes of LLMs may be inherently non‑decomposable into simple, auditable rules, which increases the value of proprietary, black‑box models and strengthens economies of scale and scope for large model providers.
Economic reasoning and theoretical implications drawn from the central thesis; no empirical market analyses provided.
medium positive Why the Valuable Capabilities of LLMs Are Precisely the Unex... value capture by model providers (proprietary rents/economies of scale)
Some LLM capabilities are tacit, practice‑derived, or 'insight'‑like, akin to the Chinese concept of Wu (sudden insight through practiced skill).
Philosophical framing and analogy to the concept of tacit knowledge (Wu); argumentative rather than empirical support.
medium positive Why the Valuable Capabilities of LLMs Are Precisely the Unex... characterization of LLM competence as tacit/insight-like
The economically valuable capabilities of large language models are precisely those that cannot be fully encoded as a complete, human‑readable set of discrete rules.
Formal, conceptual argument (proof by contradiction) plus qualitative historical case analysis comparing expert systems and LLMs; no new empirical datasets or experiments reported.
medium positive Why the Valuable Capabilities of LLMs Are Precisely the Unex... economic value / capability of LLMs (degree of rule‑encodability vs tacitness)
The paper reports quantitative improvements (registration accuracy and reduced inter-object penetration) and demonstrates generalization gains of the multi-object approach on multiple datasets.
Cross-dataset experiments and quantitative metrics reported in the paper comparing MOD to baselines, showing improved registration and reduced penetration as well as transfer/generalization performance across datasets.
medium positive MessyKitchens: Contact-rich object-level 3D scene reconstruc... registration accuracy; inter-object penetration; cross-dataset generalization pe...
The dataset and MOD produce far less inter-object penetration than prior datasets and single-object methods, with consistent improvements demonstrated across three benchmarks.
Reported empirical comparisons in the paper measuring inter-object penetration and showing substantially lower penetration for the proposed dataset+method relative to alternatives; experiments run on three benchmarks as stated in the paper.
medium positive MessyKitchens: Contact-rich object-level 3D scene reconstruc... inter-object penetration metrics (e.g., penetration depth/volume, collision coun...
MOD consistently improves multi-object reconstruction quality across three datasets/benchmarks compared to state-of-the-art baselines.
Experimental results presented across three datasets/benchmarks showing consistent improvements of MOD over SOTA baselines on multi-object reconstruction metrics. (The summary does not list the names of the three benchmarks or the per-benchmark metrics/numbers.)
medium positive MessyKitchens: Contact-rich object-level 3D scene reconstruc... multi-object reconstruction quality (aggregate metrics used in paper across thre...
The MessyKitchens dataset and MOD together yield materially better registration accuracy than prior datasets and single-object methods.
Quantitative evaluations in paper report improved registration accuracy when using MessyKitchens and/or MOD relative to prior datasets and methods; comparisons performed across benchmarks. (Exact numeric gains and sample sizes not included in the provided summary.)
medium positive MessyKitchens: Contact-rich object-level 3D scene reconstruc... registration accuracy (pose alignment / object registration error metrics)
MOD (built on SAM 3D) produces fewer inter-object penetrations and more physically plausible object configurations than single-object monocular methods.
Empirical evaluation reported in paper comparing MOD against single-object baselines (including SAM 3D) on inter-object penetration metrics; results show reductions in measured penetrations. (Specific numeric reductions and dataset sizes are not provided in the supplied summary.)
medium positive MessyKitchens: Contact-rich object-level 3D scene reconstruc... inter-object penetration (penetration depth/volume or similar metric indicating ...
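The inter-object penetration measured in claims like these can be illustrated with a toy pairwise metric; the sphere proxy below is a hypothetical simplification, not the paper's actual penetration measure:

```python
import math

def sphere_penetration(c1, r1, c2, r2):
    """Penetration depth of two spheres: how far the radii overlap
    beyond the center distance (0.0 if the spheres are disjoint)."""
    d = math.dist(c1, c2)
    return max(0.0, r1 + r2 - d)

# Two unit spheres with centers 1.5 apart interpenetrate by 0.5.
print(sphere_penetration((0, 0, 0), 1.0, (1.5, 0, 0), 1.0))  # 0.5
```

Real mesh-level metrics aggregate depths or volumes over object pairs, but the intuition is the same: a physically plausible reconstruction drives this quantity toward zero.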
Adoption will shift labor demand toward expertise in deterministic capture/replay tooling, trace analytics, and integration automation.
Economic/organizational implication discussed in the summary; no employment-data analysis provided—stated as an expected change in skill demand.
medium positive ODIN-Based CPU-GPU Architecture with Replay-Driven Simulatio... change in required engineering skill sets and labor demand
The approach improves utilization and ROI of expensive emulation/simulation resources by enabling reuse of deterministic traces across platforms.
Implication drawn from being able to replay identical traces on both simulator and emulator; no direct financial ROI calculation or utilization metrics provided in the summary.
medium positive ODIN-Based CPU-GPU Architecture with Replay-Driven Simulatio... emulation/simulation resource utilization and implied ROI (qualitative)
Using replay-driven validation markedly shortens integration and debug cycles for the demonstrated chiplet subsystem, enabling end-to-end system boot and workload execution within a single quarter.
Reported outcome for the ODIN SoC building block: authors state they were able to reach full system boot and run workloads within one quarter of integration using the methodology. (Single-case timeline reported; no control/comparison group or statistical analysis provided.)
medium positive ODIN-Based CPU-GPU Architecture with Replay-Driven Simulatio... integration cycle time (time to end-to-end boot and workload execution, measured...
Replay-driven validation made previously hard-to-reproduce interactions and bugs deterministic and repeatable at system level, enabling more focused and efficient debug.
Authors report that deterministic capture/replay converted non-deterministic protocol interactions and transient bugs into repeatable traces that could be inspected and debugged; examples include complex GPU workloads and protocol sequences reproduced end-to-end. (Qualitative/process-level evidence from the demonstrator; no numerical bug-count reduction provided.)
medium positive ODIN-Based CPU-GPU Architecture with Replay-Driven Simulatio... repeatability/determinism of intermittent interactions and bugs; debug focus/eff...
A replay-driven validation methodology using deterministic waveform capture and replay from a single design database enables reliable, repeatable system-level reproduction of complex GPU workloads and protocol sequences for tightly coupled CPU–GPU chiplet subsystems.
Applied to a demonstrator SoC building block (ODIN chiplet architecture) integrating a CPU subsystem, multiple Intel Xe GPU cores, and a configurable NoC; deterministic waveform capture during execution and deterministic replay of those waveforms across targets was performed; same design database used to manage captures, traces, and replay sessions. (No large-sample statistical evaluation reported; demonstration limited to the described system.)
medium positive ODIN-Based CPU-GPU Architecture with Replay-Driven Simulatio... system-level reproducibility of GPU workloads and inter-chiplet protocol sequenc...
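The capture/replay idea can be sketched generically: record every nondeterministic input the first time through, then feed the recorded values back on replay so the run becomes bit-identical. This is a toy model of the pattern, not the ODIN flow or its tooling:

```python
import random

class Recorder:
    """Capture nondeterministic events on a live run; replay them verbatim later."""
    def __init__(self, trace=None):
        self.replaying = trace is not None
        self.trace = list(trace) if trace else []
        self._pos = 0

    def event(self, produce):
        if self.replaying:            # deterministic replay: reuse the trace
            value = self.trace[self._pos]
            self._pos += 1
        else:                         # live capture: record values as produced
            value = produce()
            self.trace.append(value)
        return value

def run(recorder):
    # stand-in for nondeterministic behavior, e.g. bus transaction latencies
    return [recorder.event(lambda: random.randint(1, 9)) for _ in range(4)]

live = Recorder()
first = run(live)                     # nondeterministic capture run
replayed = run(Recorder(live.trace))  # replay reproduces it exactly
print(first == replayed)              # True
```

This is what turns an intermittent bug into a repeatable one: once the offending interaction is in the trace, every replay exercises it identically.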
Standardized runtime governance frameworks could lower per-deployment compliance engineering costs and increase diffusion of agentic systems.
Theoretical argument that standardization reduces transaction/engineering costs; suggested market dynamics; no empirical implementation evidence.
medium positive Runtime Governance for AI Agents: Policies on Paths per-deployment compliance cost and diffusion rate (adoption)
A market will develop for third-party governance tools, auditors, and insurers providing policy evaluators, risk calibration, and certification services.
Economic argument and analogy to existing markets (governance-as-a-service, insurance); no empirical evidence presented.
medium positive Runtime Governance for AI Agents: Policies on Paths emergence of third-party governance services (market development; presence/size ...
The authors synthesized complex three-port pixelated output combiners that extend high-efficiency operation across output power back-off, using fully symmetrical device implementations.
Design novelty claimed in paper; resulting three-port pixelated combiner layouts were included in the optimization output and used in prototypes. Prototypes used symmetrical device implementations.
medium positive Deep Learning-Driven Black-Box Doherty Power Amplifier with ... combiner topology/layout complexity and achieved efficiency across back-off