The Commonplace

Evidence (7953 claims)

Adoption: 5539 claims
Productivity: 4793 claims
Governance: 4333 claims
Human-AI Collaboration: 3326 claims
Labor Markets: 2657 claims
Innovation: 2510 claims
Org Design: 2469 claims
Skills & Training: 2017 claims
Inequality: 1378 claims

Evidence Matrix

Claim counts by outcome category and direction of finding.

Outcome Positive Negative Mixed Null Total
Other 402 112 67 480 1076
Governance & Regulation 402 192 122 62 790
Research Productivity 249 98 34 311 697
Organizational Efficiency 395 95 70 40 603
Technology Adoption Rate 321 126 73 39 564
Firm Productivity 306 39 70 12 432
Output Quality 256 66 25 28 375
AI Safety & Ethics 116 177 44 24 363
Market Structure 107 128 85 14 339
Decision Quality 177 76 38 20 315
Fiscal & Macroeconomic 89 58 33 22 209
Employment Level 77 34 80 9 202
Skill Acquisition 92 33 40 9 174
Innovation Output 120 12 23 12 168
Firm Revenue 98 34 22 154
Consumer Welfare 73 31 37 7 148
Task Allocation 84 16 33 7 140
Inequality Measures 25 77 32 5 139
Regulatory Compliance 54 63 13 3 133
Error Rate 44 51 6 101
Task Completion Time 88 5 4 3 100
Training Effectiveness 58 12 12 16 99
Worker Satisfaction 47 32 11 7 97
Wages & Compensation 53 15 20 5 93
Team Performance 47 12 15 7 82
Automation Exposure 24 22 9 6 62
Job Displacement 6 38 13 57
Hiring & Recruitment 41 4 6 3 54
Developer Productivity 34 4 3 1 42
Social Protection 22 10 6 2 40
Creative Output 16 7 5 1 29
Labor Share of Income 12 5 9 26
Skill Obsolescence 3 20 2 25
Worker Turnover 10 12 3 25

The project demonstrates that high-skill, knowledge-intensive tasks (formal mathematics) can be substantially automated with a heterogeneous AI toolchain, reducing human coding labor while retaining supervisory oversight.
Inference from project outcomes: AI tools produced formal Lean code and discharged lemmas while the reported human supervisor wrote no code. Single-project evidence (n=1); qualitative and quantitative logs support partial automation.
medium positive Semi-Autonomous Formalization of the Vlasov-Maxwell-Landau E... degree of automation in formal mathematics work (reduction in human coding effor...
The formalization finished prior to the final draft of the corresponding informal math paper.
Timing claim reported in the paper comparing formalization completion date to the final draft date of the related math paper (self-reported for the single project).
medium positive Semi-Autonomous Formalization of the Vlasov-Maxwell-Landau E... relative completion timing (formalization finished before final draft of math pa...
Effective practices included splitting proofs into abstract (high-level reasoning) and concrete (formalization) parts, having agents perform adversarial self-review, and targeting human review to key definitions and theorem statements.
Process-level recommendations drawn from the project's workflow; paper reports these practices as successful for this single development (n=1 project) based on qualitative assessment.
medium positive Semi-Autonomous Formalization of the Vlasov-Maxwell-Landau E... process practices associated with smoother formalization (binary presence/use of...
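To make the abstract/concrete split concrete, a minimal Lean 4 sketch (illustrative only; not taken from the project's actual development): a general lemma is proved once at the abstract level, and the concrete obligation is discharged by instantiating it.

```lean
-- Hypothetical illustration of the abstract/concrete split;
-- not from the project's actual Lean development.

-- Abstract part: a reusable lemma, stated at the level a human
-- reviewer would check (the definition and statement, not proof details).
theorem le_add_right' (n k : Nat) : n ≤ n + k :=
  Nat.le_add_right n k

-- Concrete part: the specific obligation an agent discharges by
-- instantiating the abstract lemma.
example : 5 ≤ 5 + 37 := le_add_right' 5 37
```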
One mathematician supervised the process over approximately 10 days, reported a human cost of about $200, and wrote no code.
Self-reported human-role summary in the paper: single supervisor, ~10 days supervision time, reported monetary cost ≈ $200, and assertion that the human wrote no code (n=1 human supervisor for the project).
medium positive Semi-Autonomous Formalization of the Vlasov-Maxwell-Landau E... human supervision time (≈10 days), monetary supervision cost (≈$200), human codi...
Clear agent identity and provenance simplify liability attribution and enable markets for certified components, attestation services, and compliance tooling.
Legal/economic reasoning about traceability and liability plus systems design suggestions; no legal case analysis or market data presented.
medium positive The Internet of Physical AI Agents: Interoperability, Longev... ease of liability attribution, size of markets for certification/attestation too...
Lifecycle service models (leasing, 'agent as a service', update/maintenance contracts) will become economically important to manage long‑lived physical assets with fast‑moving AI stacks.
Business model reasoning and analogy to service models in other capital‑intensive sectors; no market empirical study or business case analysis provided.
medium positive The Internet of Physical AI Agents: Interoperability, Longev... prevalence and economic importance of lifecycle service models
Observability and attestation reduce uncertainty for insurers and regulators, lowering risk premia and insurance costs for agent deployments.
Argument from information economics/insurance theory and analogy to fields where observability reduces asymmetric information; no empirical insurance cost data or pilot programs reported.
medium positive The Internet of Physical AI Agents: Interoperability, Longev... insurance premiums/risk premia; insurer uncertainty
Open interoperability standards and agent identities can lower entry barriers, increase competition, and accelerate complementary innovation.
Economic and policy reasoning referencing benefits of standards/open ecosystems; no empirical intervention or controlled comparison provided.
medium positive The Internet of Physical AI Agents: Interoperability, Longev... entry barriers, competition intensity, rate of complementary innovation
Design choices will shape capital intensity and replacement cycles; architectures that support upgradeability and modularity lower expected upgrade costs and stranded‑asset risk.
Economic reasoning and analogy to modular design benefits in other industries; conceptual argument without empirical capital‑allocation data or simulations.
medium positive The Internet of Physical AI Agents: Interoperability, Longev... expected upgrade cost, capital intensity, probability of stranded assets
Architectural components such as agentic identity and attestation, secure communication protocols, semantic layers and interchange formats, policy engines, and observability pipelines are necessary to enable safe, economic multi‑agent ecosystems.
Architectural blueprint proposed via conceptual systems design; justification by analogy to existing security/identity/semantic frameworks; no empirical testing reported.
medium positive The Internet of Physical AI Agents: Interoperability, Longev... presence/implementation of architectural components and resulting ecosystem safe...
Design principles — modularity, clear agentic identity, secure agent‑to‑agent communication, policy‑governed runtimes, semantic interoperability, and observability/governance frameworks — will mitigate the architectural risks identified.
Normative systems design proposition grounded in systems engineering reasoning and historical lessons; no experimental validation or deployment studies provided.
medium positive The Internet of Physical AI Agents: Interoperability, Longev... mitigation of interoperability, security, governance, and upgradeability risks
New capabilities (edge hardware, sensing, connectivity, and AI) now enable agents that not only sense/report but also perceive, reason, and act autonomously and cooperatively in real time.
Technological trend synthesis and systems reasoning; examples of mature edge hardware and advances in real‑time ML are used illustratively; no experimental validation provided.
medium positive The Internet of Physical AI Agents: Interoperability, Longev... capability of agents for real‑time perception, reasoning, autonomous action, and...
Treating evolution, trust, and interoperability as first‑class requirements (rather than afterthoughts) is essential to avoid costly lock‑in, premature ossification, fragmentation, and negative externalities observed with IoT.
Normative prescription motivated by historical/comparative analysis of Internet and IoT (qualitative examples of fragmentation and lock‑in); no controlled study or quantitative validation presented.
medium positive The Internet of Physical AI Agents: Interoperability, Longev... incidence of lock‑in, ossification, fragmentation, and negative externalities
The next phase of the Internet will be the "Internet of Physical AI Agents" — distributed, long-lived, embodied systems that perceive, reason, and act autonomously in real time.
Predictive/conceptual argument based on observed technological trends (advances in edge hardware, sensing, connectivity, and AI). Position paper with historical/comparative reasoning and illustrative examples; no primary empirical dataset or quantified projection.
medium positive The Internet of Physical AI Agents: Interoperability, Longev... emergence/adoption of embodied autonomous agent systems
Governance should be hybrid and structured: legal/regulatory frameworks (e.g., EU AI Act), technical standards (ISO safety norms), and crisis-management practices must be combined to allocate responsibilities and intervention authority.
Policy and standards synthesis drawing on EU AI Act, ISO standards, and crisis-management literature; prescriptive argument without empirical testing.
medium positive Resilience Meets Autonomy: Governing Embodied AI in Critical... degree to which governance arrangements allocate responsibility and intervention...
Robust resilience stems from 'bounded autonomy': constraining what an AI may decide and when humans must intervene.
Normative proposal grounded in synthesis of safety standards, crisis-management practices, and conceptual arguments; specification of autonomy dimensions (authority scope, temporal limits, performance envelopes, fail-safes).
medium positive Resilience Meets Autonomy: Governing Embodied AI in Critical... system resilience metrics (ability to avoid cascades, graceful degradation, cont...
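The autonomy dimensions named above (authority scope, temporal limits, performance envelopes, fail-safes) can be pictured as a small policy structure. The encoding below is a hypothetical sketch, not a mechanism from the paper:

```python
from dataclasses import dataclass

@dataclass
class AutonomyEnvelope:
    """Hypothetical encoding of 'bounded autonomy': what the AI may
    decide, for how long, under what performance floor, and what
    happens when a bound is violated."""
    allowed_actions: set[str]        # authority scope
    max_unattended_seconds: float    # temporal limit before human check-in
    min_confidence: float            # performance envelope
    failsafe_action: str             # fail-safe when any bound is breached

def decide(env: AutonomyEnvelope, action: str,
           confidence: float, seconds_unattended: float) -> str:
    """Permit the AI to act only inside its envelope; otherwise
    fall back to the fail-safe (e.g., hand off to a human)."""
    within = (action in env.allowed_actions
              and confidence >= env.min_confidence
              and seconds_unattended <= env.max_unattended_seconds)
    return action if within else env.failsafe_action
```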
Human–AI chat logs contain more explicit strategy commitments (stated rules) than human–human chats.
Content analysis / coding of natural-language chat logs from the human–AI experiment (human–AI n = 126) and the human–human benchmark (n = 108); coding counts show higher frequency of explicit commitments/statements of rules in human–AI messages.
medium positive Playing Against the Machine: Cooperation, Communication, and... frequency/count of explicit strategy-commitment messages in chat logs
Human–human subjects converge to Tit‑for‑Tat under one condition and to unconditional cooperation under the repeated-communication condition.
Strategy-estimation and behavioral trajectory analysis from the human–human benchmark (Dvorak & Fehrler 2024; n = 108) reported in the paper, showing condition-dependent convergence to Tit‑for‑Tat and to unconditional cooperation under repeated communication.
medium positive Playing Against the Machine: Cooperation, Communication, and... prevalent strategy type over time in human–human pairs (Tit‑for‑Tat vs unconditi...
Strategy estimation indicates human–AI subjects tend to favor Grim Trigger when allowed pre-play communication.
Strategy-estimation/classification applied to subjects' choices in the human–AI condition with pre-play chat (subset of the human–AI n = 126); inferred strategy prevalence shows elevated assignment to Grim Trigger-type rules.
medium positive Playing Against the Machine: Cooperation, Communication, and... prevalence/frequency of Grim Trigger strategy classification among subjects
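For orientation, strategy estimation in this literature typically fits a finite mixture of candidate strategies to observed play by maximum likelihood. The sketch below substitutes a naive best-match classifier over a truncated strategy set, purely to illustrate the idea; it is not the paper's estimator.

```python
# Naive strategy classifier for repeated prisoner's dilemma play;
# an illustrative stand-in for maximum-likelihood strategy estimation.

def tit_for_tat(opp_history):
    """Cooperate first, then mirror the opponent's previous move."""
    return "C" if not opp_history else opp_history[-1]

def grim_trigger(opp_history):
    """Cooperate until the opponent defects once, then defect forever."""
    return "D" if "D" in opp_history else "C"

CANDIDATES = {"tit_for_tat": tit_for_tat,
              "grim_trigger": grim_trigger,
              "always_cooperate": lambda opp_history: "C"}

def classify(own_moves, opp_moves):
    """Assign the candidate strategy whose round-by-round predictions
    match the subject's observed moves most often."""
    def hits(rule):
        return sum(rule(opp_moves[:t]) == own_moves[t]
                   for t in range(len(own_moves)))
    return max(CANDIDATES, key=lambda name: hits(CANDIDATES[name]))

# A subject who stops cooperating after the first defection and never
# returns to cooperation is classified as Grim Trigger:
print(classify(own_moves=list("CCCDDD"), opp_moves=list("CCDCCC")))
```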
Extensive simulation experiments across different network topologies and attacker/defense scenarios validate both the FJ modeling of LLM-MAS and the effectiveness of the trust-adaptive defense.
Multiple simulation studies reported in the paper that vary network density, trust matrices, attacker stubbornness/persuasiveness, and defense strategies; validation claims stem from consistent patterns observed across these simulated settings. (The summary does not list the number of experimental runs or statistical reporting.)
medium positive Don't Trust Stubborn Neighbors: A Security Framework for Age... agreement between model predictions and simulation outcomes; effectiveness metri...
A trust-adaptive defense that dynamically reduces trust in agents suspected of adversarial behavior can limit adversarial influence while preserving cooperative performance better than static trust-lowering strategies.
Implemented a trust-adaptive mechanism and evaluated it in simulation experiments across multiple network topologies and attack/defense scenarios, reporting reductions in adversarial sway with preserved task performance compared to naïve trust reduction. (Exact experimental counts and numeric effect sizes not provided in the summary.)
medium positive Don't Trust Stubborn Neighbors: A Security Framework for Age... reduction in adversarial influence and retention of cooperative task performance...
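A minimal sketch of such a mechanism under Friedkin–Johnsen dynamics; the suspicion signal, decay rate, and parameters below are illustrative assumptions, as the summary does not specify the paper's update rule.

```python
import numpy as np

def fj_step(x, s, W, lam):
    """One Friedkin-Johnsen update: x' = lam * (W @ x) + (1 - lam) * s."""
    return lam * (W @ x) + (1.0 - lam) * s

def adapt_trust(W, x, threshold=0.4, decay=0.5):
    """Dynamically cut trust in agents whose opinion deviates strongly
    from the group median (a stand-in suspicion signal), then
    re-normalize rows so W stays row-stochastic."""
    suspects = np.abs(x - np.median(x)) > threshold
    W = W.copy()
    W[:, suspects] *= decay
    return W / W.sum(axis=1, keepdims=True)

# 6 benign agents plus one fully stubborn adversary (index 6, lam = 0)
rng = np.random.default_rng(0)
n = 7
W = rng.random((n, n)); W /= W.sum(axis=1, keepdims=True)
s = np.append(rng.normal(0.2, 0.05, n - 1), 1.0)   # innate opinions
lam = np.append(np.full(n - 1, 0.8), 0.0)          # adversary never updates
x = s.copy()
for _ in range(100):
    x = fj_step(x, s, W, lam)
    W = adapt_trust(W, x)
print(x.round(3))  # benign opinions stay near 0.2 despite the adversary
```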
Increasing the number of benign agents dilutes an adversary's relative influence and thereby reduces the probability and magnitude of persuasion cascades.
Simulation experiments varying the count of benign agents in networks while measuring adversarial sway and collective opinion outcomes across different topologies. (Summary does not report exact counts or statistical summaries.)
medium positive Don't Trust Stubborn Neighbors: A Security Framework for Age... adversarial sway (magnitude of shift in collective opinion) and final consensus ...
The Friedkin–Johnsen opinion-dynamics model (innate opinions + interpersonal influence weights + stubbornness) closely captures LLM-MAS behavior across settings, both theoretically and empirically.
Modeling: derivation of FJ dynamics for LLM-MAS; Empirical: simulation experiments comparing FJ model predictions to observed LLM-MAS opinion trajectories and final consensus under varied topologies and trust matrices. (Exact goodness-of-fit metrics and sample counts not provided in the summary.)
medium positive Don't Trust Stubborn Neighbors: A Security Framework for Age... fit between model-predicted opinion trajectories/fixed points and simulated LLM-...
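For reference, the textbook Friedkin–Johnsen update and its fixed point (notation assumed here, not quoted from the paper): with innate opinions s, a row-stochastic influence matrix W, and a diagonal susceptibility matrix Λ (stubbornness = I − Λ),

```latex
x(t+1) = \Lambda W \, x(t) + (I - \Lambda)\, s,
\qquad
x^{*} = (I - \Lambda W)^{-1} (I - \Lambda)\, s
```

The fixed point exists whenever the spectral radius of ΛW is below one. A fully stubborn agent (Λ_ii = 0) pins x_i = s_i forever, which is how an adversary exerts persistent sway; adding benign agents shrinks its share of row weight in W, the dilution effect described above.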
Open-source orchestration and evaluation harnesses plus a self-contained evaluation pipeline improve reproducibility for the Speedrunning Track.
Paper claims and documents the release of orchestration and evaluation code and describes the self-contained pipeline designed for deterministic reproducible evaluation.
medium positive The PokeAgent Challenge: Competitive and Long-Context Learni... reproducibility capability via released code and self-contained pipelines
Version 1.0 marks integration into operational workflows and establishes a base for future capabilities.
Authors report that v1.0 has been used in verification and mask-refinement loops for real datasets (MeerKAT, ASKAP, APERTIF); no detailed deployment metrics provided.
medium positive iDaVIE v1.0: A virtual reality tool for interactive analysis... operational integration status of v1.0
Immersive inspection tools like iDaVIE complement automated ML pipelines by helping generate higher-quality labels and curated training examples.
Paper argues conceptual complementarity and cites iDaVIE's use for mask refinement and curated subcube export; no experimental comparison of label quality or downstream ML performance provided.
medium positive iDaVIE v1.0: A virtual reality tool for interactive analysis... label quality and availability of curated training examples
iDaVIE accelerates inspection-driven parts of astronomy workflows (e.g., mask refinement, verification).
Reported use cases where iDaVIE was used to refine masks and verify sources in real datasets; no measured time-per-task or throughput statistics provided.
medium positive iDaVIE v1.0: A virtual reality tool for interactive analysis... inspection throughput (time per cube inspected; masks corrected per hour)
iDaVIE has already been integrated into real pipelines (MeerKAT, ASKAP, APERTIF) and used to improve quality control, refine detection masks, and identify new sources.
Author statement of integration and use cases citing verification of HI data cubes from MeerKAT, ASKAP and APERTIF; no quantitative deployment counts or independent validation provided in the text.
medium positive iDaVIE v1.0: A virtual reality tool for interactive analysis... integration into operational data-reduction/verification workflows; effects on Q...
There is a need for policies supporting workforce transitions (retraining, portability of skills) and safety/regulation for embodied agents operating in public spaces.
Policy recommendation grounded in anticipated labor and safety risks; proposed but not empirically evaluated.
medium positive Why AI systems don't learn and what to do about it: Lessons ... policy adoption; retraining program coverage; safety/regulatory frameworks imple...
Benchmarks and tasks that mix observation and intervention (imitation with sparse feedback, active imitation, transfer under domain shift, continual learning streams) are required to evaluate the architecture.
Proposal for evaluation tasks and benchmarks; not empirically validated in the paper.
medium positive Why AI systems don't learn and what to do about it: Lessons ... benchmark performance on mixed observation-intervention tasks
Embodied robotics experiments are necessary to evaluate real-world constraints such as sample efficiency, physical affordances, and motor learning.
Methodological recommendation recognizing simulation-to-real gaps; no experiments reported.
medium positive Why AI systems don't learn and what to do about it: Lessons ... sample efficiency and performance in real-world embodied tasks
Simulated environments (procedural, nonstationary), multi-agent social domains, and open-world 3D simulators are appropriate for scalable iteration to test the proposed architecture.
Methodological recommendation and suggested experimental approaches; not tested in the paper.
medium positive Why AI systems don't learn and what to do about it: Lessons ... suitability and scalability of simulation platforms for architecture evaluation
Neuromodulatory systems and meta-decision circuits in animals provide analogies for implementing meta-control (M) in artificial systems.
Neuroscience analogy cited to motivate architectural choices; not empirically instantiated in the paper.
medium positive Why AI systems don't learn and what to do about it: Lessons ... effectiveness of biologically inspired gating/plasticity mechanisms on learning ...
Developmental trajectories can scaffold gradual competence (from observation to exploratory action) and should be reflected in training curricula.
Argument from developmental biology and learning theory; proposed as a design principle rather than empirically tested here.
medium positive Why AI systems don't learn and what to do about it: Lessons ... learning progression speed; final competence given staged curricula
Evolution supplies inductive biases and slow structural priors that can be leveraged in artificial learners.
Biological analogy and theoretical suggestion; no empirical experiments presented to quantify effect in AI systems.
medium positive Why AI systems don't learn and what to do about it: Lessons ... effect of structural priors on learning speed and generalization
The taxonomy and measurement approach provide operational metrics to quantify empathic communication for economic analyses (productivity, customer satisfaction, retention).
Authors propose that their data-driven taxonomy and automated/coding measures can be used as metrics; the paper demonstrates derivation and use in trial outcomes but does not present direct economic outcome measurements.
medium positive Practicing with Language Models Cultivates Human Empathic Co... operational empathic communication metrics (taxonomy-derived measures)
LLM-generated responses frequently score as more empathic than human-written responses in blinded evaluations.
Blinded evaluations comparing LLM-generated replies to human-written replies using recipient/judge ratings of perceived empathy, as reported in the blinded tests described in the paper. Exact blinded-test sample sizes are not specified in the summary; the ratings derive from the study's evaluation procedures.
medium positive Practicing with Language Models Cultivates Human Empathic Co... blinded empathy judgments (perceived empathy ratings)
LLMs are more likely to complement human tacit skills than to replace explicit rule‑following jobs; value accrues to workers and firms that integrate model outputs with human judgment and tacit expertise.
Labor‑economics style argument and theoretical reasoning; no empirical labor market analysis provided.
medium positive Why the Valuable Capabilities of LLMs Are Precisely the Unex... complementarity vs substitution of human labor (especially tacit-skill jobs)
Commoditization via rule extraction is limited; firms that can harness and deploy tacit LLM capabilities will retain economic rents.
Theoretical economic argument based on non‑rule‑encodability; no empirical firm‑level data included.
medium positive Why the Valuable Capabilities of LLMs Are Precisely the Unex... ability to commoditize/replicate LLM capabilities via rule extraction
The highest‑value attributes of LLMs may be inherently non‑decomposable into simple, auditable rules, which increases the value of proprietary, black‑box models and strengthens economies of scale and scope for large model providers.
Economic reasoning and theoretical implications drawn from the central thesis; no empirical market analyses provided.
medium positive Why the Valuable Capabilities of LLMs Are Precisely the Unex... value capture by model providers (proprietary rents/economies of scale)
Some LLM capabilities are tacit, practice‑derived, or 'insight'‑like, akin to the Chinese concept of Wu (sudden insight through practiced skill).
Philosophical framing and analogy to the concept of tacit knowledge (Wu); argumentative rather than empirical support.
medium positive Why the Valuable Capabilities of LLMs Are Precisely the Unex... characterization of LLM competence as tacit/insight-like
The economically valuable capabilities of large language models are precisely those that cannot be fully encoded as a complete, human‑readable set of discrete rules.
Formal, conceptual argument (proof by contradiction) plus qualitative historical case analysis comparing expert systems and LLMs; no new empirical datasets or experiments reported.
medium positive Why the Valuable Capabilities of LLMs Are Precisely the Unex... economic value / capability of LLMs (degree of rule‑encodability vs tacitness)
The paper reports quantitative improvements (higher registration accuracy and reduced inter-object penetration) and demonstrates generalization gains for the multi-object approach on multiple datasets.
Cross-dataset experiments and quantitative metrics reported in the paper comparing MOD to baselines, showing improved registration and reduced penetration as well as transfer/generalization performance across datasets.
medium positive MessyKitchens: Contact-rich object-level 3D scene reconstruc... registration accuracy; inter-object penetration; cross-dataset generalization pe...
The dataset and MOD produce far less inter-object penetration than prior datasets and single-object methods, with consistent improvements demonstrated across three benchmarks.
Reported empirical comparisons in the paper measuring inter-object penetration and showing substantially lower penetration for the proposed dataset+method relative to alternatives; experiments run on three benchmarks as stated in the paper.
medium positive MessyKitchens: Contact-rich object-level 3D scene reconstruc... inter-object penetration metrics (e.g., penetration depth/volume, collision coun...
MOD consistently improves multi-object reconstruction quality across three datasets/benchmarks compared to state-of-the-art baselines.
Experimental results presented across three datasets/benchmarks showing consistent improvements of MOD over SOTA baselines on multi-object reconstruction metrics. (The summary does not list the names of the three benchmarks or the per-benchmark metrics/numbers.)
medium positive MessyKitchens: Contact-rich object-level 3D scene reconstruc... multi-object reconstruction quality (aggregate metrics used in paper across thre...
The MessyKitchens dataset and MOD together yield materially better registration accuracy than prior datasets and single-object methods.
Quantitative evaluations in paper report improved registration accuracy when using MessyKitchens and/or MOD relative to prior datasets and methods; comparisons performed across benchmarks. (Exact numeric gains and sample sizes not included in the provided summary.)
medium positive MessyKitchens: Contact-rich object-level 3D scene reconstruc... registration accuracy (pose alignment / object registration error metrics)
MOD (built on SAM 3D) produces fewer inter-object penetrations and more physically plausible object configurations than single-object monocular methods.
Empirical evaluation reported in paper comparing MOD against single-object baselines (including SAM 3D) on inter-object penetration metrics; results show reductions in measured penetrations. (Specific numeric reductions and dataset sizes are not provided in the supplied summary.)
medium positive MessyKitchens: Contact-rich object-level 3D scene reconstruc... inter-object penetration (penetration depth/volume or similar metric indicating ...
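The summary does not state how penetration is measured; as a rough illustration of the kind of metric involved, a bounding-sphere proxy (actual evaluations would use mesh-level distances):

```python
import numpy as np

def sphere_penetration(centers, radii):
    """Pairwise penetration depth for objects approximated by bounding
    spheres: max(0, r_i + r_j - ||c_i - c_j||). Zero means no overlap;
    larger values mean a physically implausible reconstruction."""
    depths = []
    n = len(centers)
    for i in range(n):
        for j in range(i + 1, n):
            gap = np.linalg.norm(centers[i] - centers[j])
            depths.append(max(0.0, radii[i] + radii[j] - gap))
    return np.array(depths)

# Two objects 0.15 m apart with 0.1 m radii interpenetrate by 0.05 m;
# the third object is clear of both.
centers = np.array([[0.0, 0.0, 0.0], [0.15, 0.0, 0.0], [1.0, 0.0, 0.0]])
radii = np.array([0.1, 0.1, 0.1])
pen = sphere_penetration(centers, radii)
print(pen.sum(), pen.max())   # aggregate and worst-case penetration depth
```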
Distilling corrected decision trajectories into the model via supervised fine-tuning produces better recovery behavior than relying solely on reward signals or final-outcome optimization.
Comparative training setup where LEAFE uses supervised fine-tuning on corrected trajectories and is empirically compared to outcome-driven methods (e.g., GRPO) that optimize rewards; improved Pass@k reported.
medium positive Internalizing Agency from Reflective Experience Recovery behavior performance reflected in Pass@k (success rates) after training
LEAFE's gains occur across diverse interactive coding and agentic tasks with limited interaction budget.
Reported evaluation across a suite of long-horizon tasks (examples include multi-step coding problems and agentic tasks with rich feedback channels) with consistent improvements claimed.
medium positive Internalizing Agency from Reflective Experience Pass@k across multiple task types (interactive coding and agentic tasks)
LEAFE uses the same environmental interactions more effectively, improving sample efficiency under fixed interaction budgets.
Experimental regime with fixed interaction budgets demonstrating higher Pass@k for LEAFE relative to baselines given the same number of environment interactions; paper argues LEAFE converts richer feedback into targeted training signals rather than only final rewards.
medium positive Internalizing Agency from Reflective Experience Sample efficiency operationalized as Pass@k achieved under fixed interaction bud...
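A schematic of the distillation loop as described, failed rollout → reflective correction → per-step supervised pairs; the interfaces below are assumptions, since the summary gives neither the data format nor the reflection model's API.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Step:
    observation: str   # environment feedback visible at this step
    action: str        # the action the agent actually emitted

def reflect_and_correct(failed: list[Step],
                        critique: Callable[[str, str], Optional[str]]) -> list[Step]:
    """Rewrite the steps a reflection pass flags as mistakes.
    `critique(observation, action)` returns a corrected action, or None
    to keep the original; a hypothetical interface for the reflection model."""
    return [Step(s.observation, critique(s.observation, s.action) or s.action)
            for s in failed]

def to_sft_examples(trajectory: list[Step]) -> list[dict]:
    """Serialize a corrected trajectory into per-step (prompt, completion)
    pairs, so fine-tuning receives a dense, step-level signal instead of
    a single end-of-episode reward."""
    return [{"prompt": s.observation, "completion": s.action}
            for s in trajectory]

# failed rollout -> reflective correction -> supervised training pairs
rollout = [Step("tests failing: off-by-one in loop", "resubmit same patch")]
fixed = reflect_and_correct(rollout,
                            lambda obs, act: "adjust loop bound, rerun tests")
print(to_sft_examples(fixed))
```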