The Commonplace

Evidence (3016 claims)

Adoption: 5187 claims
Productivity: 4472 claims
Governance: 4082 claims
Human-AI Collaboration: 3016 claims
Labor Markets: 2450 claims
Org Design: 2305 claims
Innovation: 2290 claims
Skills & Training: 1920 claims
Inequality: 1286 claims

Evidence Matrix

Claim counts by outcome category and direction of finding.

Outcome | Positive | Negative | Mixed | Null | Total
Other | 373 | 105 | 59 | 437 | 982
Governance & Regulation | 366 | 172 | 114 | 55 | 717
Research Productivity | 237 | 95 | 34 | 294 | 664
Organizational Efficiency | 364 | 82 | 62 | 34 | 545
Technology Adoption Rate | 290 | 115 | 66 | 27 | 502
Firm Productivity | 274 | 33 | 68 | 10 | 390
AI Safety & Ethics | 116 | 177 | 44 | 24 | 363
Output Quality | 231 | 61 | 23 | 25 | 340
Market Structure | 107 | 121 | 85 | 14 | 332
Decision Quality | 158 | 68 | 33 | 17 | 279
Employment Level | 70 | 32 | 74 | 8 | 186
Fiscal & Macroeconomic | 74 | 52 | 32 | 21 | 183
Skill Acquisition | 88 | 31 | 38 | 9 | 166
Firm Revenue | 96 | 34 | 22 | 152 *
Innovation Output | 105 | 12 | 21 | 11 | 150
Consumer Welfare | 66 | 29 | 35 | 7 | 137
Regulatory Compliance | 52 | 61 | 13 | 3 | 129
Inequality Measures | 24 | 66 | 31 | 4 | 125
Task Allocation | 68 | 8 | 28 | 6 | 110
Error Rate | 42 | 47 | 6 | 95 *
Training Effectiveness | 55 | 12 | 11 | 16 | 94
Worker Satisfaction | 42 | 32 | 11 | 6 | 91
Task Completion Time | 74 | 5 | 4 | 1 | 84
Team Performance | 44 | 9 | 15 | 7 | 76
Wages & Compensation | 38 | 13 | 19 | 4 | 74
Hiring & Recruitment | 39 | 4 | 6 | 3 | 52
Automation Exposure | 18 | 15 | 9 | 5 | 47
Job Displacement | 5 | 29 | 12 | 46 *
Developer Productivity | 27 | 2 | 3 | 1 | 33
Social Protection | 18 | 8 | 6 | 1 | 33
Worker Turnover | 10 | 12 | 3 | 25 *
Creative Output | 15 | 5 | 3 | 1 | 24
Skill Obsolescence | 3 | 18 | 2 | 23 *
Labor Share of Income | 8 | 4 | 9 | 21 *

* Rows marked with an asterisk list only four counts in the source; one direction cell is missing, and which column it belongs to cannot be recovered.
Active filter: Human-AI Collaboration
Isolating sensitive logic (scoring rubrics, adaptive difficulty rules) from free-text generation reduces the attack surface.
Design principle implemented in the architecture (separation of concerns between agents); claimed benefit in the paper. Empirical validation details (quantitative reduction in successful attacks) are not provided in the summary.
medium positive CoMAI: A Collaborative Multi-Agent Framework for Robust and ... attack surface for adversarial manipulation of scoring/adaptive rules
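For concreteness, a minimal sketch of the isolation principle (our illustration, not CoMAI's code; all names are hypothetical): the scoring rubric lives in deterministic code and consumes only structured signals, while the generator's free text never flows back into scoring.

```python
# Illustrative only: rubric scoring as deterministic code, separated from
# free-text generation. Names and structure are ours, not CoMAI's.
from dataclasses import dataclass

@dataclass(frozen=True)
class Rubric:
    """Immutable scoring rules that no prompt text can rewrite."""
    weights: dict[str, float]

    def score(self, signals: dict[str, float]) -> float:
        # Only structured, pre-validated signals reach the rubric.
        return sum(w * signals.get(k, 0.0) for k, w in self.weights.items())

def generate_feedback(llm, signals: dict[str, float]) -> str:
    # The generator consumes structured signals; its free-text output is
    # never fed back into Rubric.score, keeping the attack surface small.
    return llm(f"Write brief interview feedback for: {signals}")
```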
CoMAI implements multi-layered defenses against prompt-injection and other prompt-level attacks via a dedicated security agent and constrained state transitions.
System design (a dedicated security/validation agent and a finite-state machine enforcing information flow) and reported security testing that included prompt-injection/adversarial inputs to probe defenses.
medium positive CoMAI: A Collaborative Multi-Agent Framework for Robust and ... robustness to prompt-injection and prompt-level adversarial attacks
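A toy sketch of the constrained-state-transition idea; the state set and allowed transitions are invented, since the paper's actual agent states are not given in the summary.

```python
# Toy finite-state machine in the spirit of CoMAI's constrained state
# transitions; states and the allowed set are invented for illustration.
ALLOWED = {
    ("greeting", "question"),
    ("question", "answer_review"),
    ("answer_review", "question"),
    ("answer_review", "closing"),
}

class InterviewFSM:
    def __init__(self) -> None:
        self.state = "greeting"

    def transition(self, target: str) -> None:
        # An injected instruction cannot force an unlisted transition,
        # e.g. jumping straight from "greeting" to "closing".
        if (self.state, target) not in ALLOWED:
            raise PermissionError(f"blocked: {self.state} -> {target}")
        self.state = target
```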
Candidate satisfaction with CoMAI was 84.41%.
Reported experimental metric in the paper summary; likely derived from post-interview surveys, but survey design, sample size, and response rates are not specified in the summary.
medium positive CoMAI: A Collaborative Multi-Agent Framework for Robust and ... candidate satisfaction (survey-based)
In experiments CoMAI achieved 83.33% recall.
Reported experimental metric in the paper summary; no information provided on how recall was computed (e.g., per-class vs. overall), sample sizes, or confidence intervals.
medium positive CoMAI: A Collaborative Multi-Agent Framework for Robust and ... recall (sensitivity) of target class(es)
In experiments CoMAI achieved 90.47% accuracy.
Reported experimental metric in the paper summary. The underlying dataset size, class balance, and baseline comparison details are not provided in the summary.
medium positive CoMAI: A Collaborative Multi-Agent Framework for Robust and ... overall accuracy
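As the recall entry above flags, "recall" is ambiguous between per-class and overall computation; a small worked example (toy labels, not the paper's data) shows how the two can diverge.

```python
# Toy labels only (not the paper's data): micro (overall) recall and
# macro (per-class) recall can differ on the same predictions.
def micro_recall(y_true, y_pred):
    # In single-label classification this equals overall accuracy.
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def macro_recall(y_true, y_pred):
    recalls = []
    for c in sorted(set(y_true)):
        idx = [i for i, t in enumerate(y_true) if t == c]
        recalls.append(sum(y_pred[i] == c for i in idx) / len(idx))
    return sum(recalls) / len(recalls)

y_true = ["pass", "pass", "pass", "fail"]
y_pred = ["pass", "pass", "fail", "fail"]
print(micro_recall(y_true, y_pred))  # 0.75
print(macro_recall(y_true, y_pred))  # ≈ 0.833
```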
CoMAI outperforms monolithic LLM-based assessments on robustness, fairness, and interpretability.
Comparative framing and reported experiments in the paper claiming improved robustness, fairness, and interpretability relative to single-agent LLM baselines; however, baseline specifics, dataset sizes, and statistical tests are not disclosed in the provided summary.
medium positive CoMAI: A Collaborative Multi-Agent Framework for Robust and ... robustness; fairness (subjective bias reduction); interpretability/auditability
The clarification protocol elicits missing premises or confirms intent rather than producing an ill-aligned response.
Paper describes structured clarification templates (binary checks, multi-choice scaffolds, short clarifying questions) intended to elicit missing information; this is a design assertion without reported user-study evidence.
medium positive A Context Alignment Pre-processor for Enhancing the Coherenc... rate of resolved ambiguities after clarification / reduction in ill-aligned resp...
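One possible encoding of the three template types as data, purely illustrative (the field names are ours, not the paper's):

```python
# Field names are ours, not the paper's.
from dataclasses import dataclass, field

@dataclass
class Clarification:
    kind: str                       # "binary" | "multi_choice" | "open"
    question: str
    options: list[str] = field(default_factory=list)

def clarify(missing_premise: str) -> Clarification:
    # Prefer the cheapest template that can resolve the ambiguity:
    # a yes/no check before a multi-choice scaffold or open question.
    return Clarification("binary", f"Did you mean: {missing_premise}? (yes/no)")
```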
There are potential welfare gains from improved decision quality and trust in automation, particularly where human oversight remains required.
Conceptual welfare analysis; no welfare quantification or simulations provided.
medium positive Argumentative Human-AI Decision-Making: Toward AI Agents Tha... welfare indicators (decision quality gains, trust levels, social surplus) from a...
Structured AFs can reduce information asymmetry by making reasoning traceable, thereby lowering search and verification costs in transactions and contracting.
Economic reasoning drawing on information-asymmetry theory; no empirical transaction-cost measurements given.
medium positive Argumentative Human-AI Decision-Making: Toward AI Agents Tha... reduction in transaction/search/verification costs attributable to traceable AFs
Firms offering argumentatively transparent AI can obtain competitive advantage and charge premium prices for verifiability and auditability.
Economic reasoning and market-structure inference; no empirical pricing or demand elasticity studies provided.
medium positive Argumentative Human-AI Decision-Making: Toward AI Agents Tha... price premium and competitive advantage metrics for transparent-AI providers
Demand will shift toward AI systems that provide verifiable, contestable reasoning in regulated/high‑stakes sectors (healthcare, law, finance, public policy).
Economic argument and market prediction in the paper; speculative without market data or forecasting models presented.
medium positive Argumentative Human-AI Decision-Making: Toward AI Agents Tha... market demand share for verifiable/contestable AI systems in regulated sectors
This approach supports collaborative reasoning ('with' humans) rather than opaque automation 'for' humans, improving uptake in high‑stakes settings.
Conceptual argument about human-in-the-loop workflows and collaborative roles; no empirical uptake or deployment data presented.
medium positive Argumentative Human-AI Decision-Making: Toward AI Agents Tha... human adoption/uplift in uptake for high-stakes decision systems
Framing decisions as contestable and revisable (via dialectical challenge and update) increases robustness and trust in AI-supported decision-making.
Conceptual claim arguing that contestability/revision improve robustness and trust; no experimental evidence or user studies provided.
medium positive Argumentative Human-AI Decision-Making: Toward AI Agents Tha... measures of robustness (resilience to error) and human trust in decisions
Running formal dialectical/acceptability semantics and dialogue protocols over AFs enables agents that reason with humans through structured debates and revisions.
Conceptual integration of formal semantics (Dung-style, bipolar, weighted) and dialogue protocols; no human-subject studies or system evaluations reported.
medium positive Argumentative Human-AI Decision-Making: Toward AI Agents Tha... capacity for structured debate/revision (dialogue performance, acceptability out...
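To make the formal machinery concrete, here is a minimal sketch of one standard acceptability semantics: the grounded extension of a Dung-style AF, computed by iterating the characteristic function to a fixpoint. This is textbook argumentation theory, not code from the paper.

```python
# Textbook Dung semantics, not code from the paper: the grounded
# extension is the least fixpoint of the characteristic function
# F(S) = { a | every attacker of a is attacked by some member of S }.
def grounded_extension(args: set[str], attacks: set[tuple[str, str]]) -> set[str]:
    attackers = {a: {x for (x, y) in attacks if y == a} for a in args}

    def defended(a: str, s: set[str]) -> bool:
        return all(any((d, b) in attacks for d in s) for b in attackers[a])

    s: set[str] = set()
    while True:
        nxt = {a for a in args if defended(a, s)}
        if nxt == s:
            return s
        s = nxt

# a attacks b, b attacks c: a is unattacked and defends c, so the
# grounded extension is {a, c} and b is rejected.
print(grounded_extension({"a", "b", "c"}, {("a", "b"), ("b", "c")}))
```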
Argumentation Framework Synthesis: mined fragments can be combined into coherent formal argumentation frameworks (AFs) with explicit semantics enabling verification and automated inference.
Conceptual algorithmic proposal (graph synthesis, canonicalization, formal semantics); no empirical synthesis results or benchmarks presented.
medium positive Argumentative Human-AI Decision-Making: Toward AI Agents Tha... coherence and correctness of synthesized AFs and verifiability of derived infere...
Argumentation Framework Mining: LLMs and NLP pipelines can be used to extract claims, premises, relations (attack/support), and provenance from text corpora.
Proposed methodological pipeline (fine-tuning/prompting LLMs and IE pipelines); conceptual proposal without implementation details or experimental results.
medium positive Argumentative Human-AI Decision-Making: Toward AI Agents Tha... accuracy/fidelity of extracted argument elements (claims, premises, relations, p...
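A plausible target schema for such a mining pipeline, with illustrative record types (ours, not the paper's) covering claims, premises, relations, and provenance:

```python
# Illustrative record types (ours, not the paper's) that a mining
# pipeline would populate from text.
from dataclasses import dataclass
from typing import Literal

@dataclass
class ArgumentUnit:
    claim: str
    premises: list[str]
    provenance: str                      # source document and text span

@dataclass
class Relation:
    source: ArgumentUnit
    target: ArgumentUnit
    kind: Literal["attack", "support"]
```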
Combining formal argument structures with LLMs’ ability to mine and generate rich, contextual arguments from unstructured text promises human-aware, verifiable, and trustworthy AI for high‑stakes domains.
Conceptual synthesis of computational argumentation (formal AFs) and LLM capabilities; no empirical validation or quantified metrics provided.
medium positive Argumentative Human-AI Decision-Making: Toward AI Agents Tha... trustworthiness/verifiability of AI outputs in high-stakes decision contexts
Integrating computational argumentation with large language models (LLMs) creates a new paradigm—Argumentative Human-AI Decision‑Making—where AI agents participate in dialectical, contestable, and revisable decision processes with humans.
Conceptual / design argument presented in the paper; no empirical implementation or sample; draws on prior work in computational argumentation and capabilities of LLMs.
medium positive Argumentative Human-AI Decision-Making: Toward AI Agents Tha... degree of human-AI dialectical participation (ability to engage in contestable, ...
There will likely be growth in complementary markets for model verification, provenance tracking, legal-AI audits, and human-in-the-loop workflow services.
Market foresight based on identified unmet needs (explainability, verification) and illustrative examples; no market-sizing data.
medium positive Why Avoid Generative Legal AI Systems? Hallucination, Overre... market size and growth rates for verification/audit and related services
The project demonstrates that high-skill, knowledge-intensive tasks (formal mathematics) can be substantially automated with a heterogeneous AI toolchain, reducing human coding labor while retaining supervisory oversight.
Inference from project outcomes: AI tools produced formal Lean code and discharged lemmas while the reported human supervisor did not write code; single-project evidence (n = 1), with qualitative and quantitative logs supporting partial automation.
medium positive Semi-Autonomous Formalization of the Vlasov-Maxwell-Landau E... degree of automation in formal mathematics work (reduction in human coding effor...
The formalization finished prior to the final draft of the corresponding informal math paper.
Timing claim reported in the paper comparing formalization completion date to the final draft date of the related math paper (self-reported for the single project).
medium positive Semi-Autonomous Formalization of the Vlasov-Maxwell-Landau E... relative completion timing (formalization finished before final draft of math pa...
Effective practices included splitting proofs into abstract (high-level reasoning) and concrete (formalization) parts, having agents perform adversarial self-review, and targeting human review to key definitions and theorem statements.
Process-level recommendations drawn from the project's workflow; paper reports these practices as successful for this single development (n=1 project) based on qualitative assessment.
medium positive Semi-Autonomous Formalization of the Vlasov-Maxwell-Landau E... process practices associated with smoother formalization (binary presence/use of...
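A toy Lean illustration of the abstract/concrete split (ours, not the project's code): the high-level step is stated once as a general lemma, and the concrete formalization only has to discharge instance-level facts.

```lean
-- Toy illustration (not the project's code) of the abstract/concrete split.
-- Abstract part: the high-level reasoning step, stated once in general form.
theorem chain {a b c : Nat} (h₁ : a ≤ b) (h₂ : b ≤ c) : a ≤ c :=
  Nat.le_trans h₁ h₂

-- Concrete part: the formalization only discharges instance-level facts.
example : 2 ≤ 5 := chain (by decide : (2 : Nat) ≤ 3) (by decide : (3 : Nat) ≤ 5)
```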
One mathematician supervised the process over approximately 10 days, reported a human cost of about $200, and wrote no code.
Self-reported human-role summary in the paper: single supervisor, ~10 days supervision time, reported monetary cost ≈ $200, and assertion that the human wrote no code (n=1 human supervisor for the project).
medium positive Semi-Autonomous Formalization of the Vlasov-Maxwell-Landau E... human supervision time (≈10 days), monetary supervision cost (≈$200), human codi...
Governance should be hybrid and structured: legal/regulatory frameworks (e.g., EU AI Act), technical standards (ISO safety norms), and crisis-management practices must be combined to allocate responsibilities and intervention authority.
Policy and standards synthesis drawing on EU AI Act, ISO standards, and crisis-management literature; prescriptive argument without empirical testing.
medium positive Resilience Meets Autonomy: Governing Embodied AI in Critical... degree to which governance arrangements allocate responsibility and intervention...
Robust resilience stems from 'bounded autonomy': constraining what an AI may decide and when humans must intervene.
Normative proposal grounded in synthesis of safety standards, crisis-management practices, and conceptual arguments; specification of autonomy dimensions (authority scope, temporal limits, performance envelopes, fail-safes).
medium positive Resilience Meets Autonomy: Governing Embodied AI in Critical... system resilience metrics (ability to avoid cascades, graceful degradation, cont...
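One way to read "bounded autonomy" as a machine-checkable policy; the dimensions paraphrase the paper (authority scope, temporal limits, performance envelopes, fail-safes), but the encoding and values are hypothetical.

```python
# The dimensions paraphrase the paper; the encoding and values are ours.
from dataclasses import dataclass

@dataclass(frozen=True)
class AutonomyEnvelope:
    allowed_actions: frozenset[str]    # authority scope
    max_unsupervised_s: float          # temporal limit
    min_confidence: float              # performance envelope

    def permits(self, action: str, elapsed_s: float, confidence: float) -> bool:
        # Anything outside the envelope is a fail-safe: escalate to a human.
        return (action in self.allowed_actions
                and elapsed_s <= self.max_unsupervised_s
                and confidence >= self.min_confidence)

envelope = AutonomyEnvelope(frozenset({"reroute", "slow_down"}), 300.0, 0.9)
print(envelope.permits("shutdown_grid", 10.0, 0.99))  # False -> human decides
```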
Human–AI chat logs contain more explicit strategy commitments (stated rules) than human–human chats.
Content analysis / coding of natural-language chat logs from the human–AI experiment (human–AI n = 126) and the human–human benchmark (n = 108); coding counts show higher frequency of explicit commitments/statements of rules in human–AI messages.
medium positive Playing Against the Machine: Cooperation, Communication, and... frequency/count of explicit strategy-commitment messages in chat logs
Human–human subjects converge to Tit‑for‑Tat in one communication condition and to unconditional cooperation in the repeated-communication condition.
Strategy-estimation and behavioral trajectory analysis from the human–human benchmark (Dvorak & Fehrler 2024; n = 108) reported in the paper, showing condition-dependent convergence to Tit‑for‑Tat and to unconditional cooperation under repeated communication.
medium positive Playing Against the Machine: Cooperation, Communication, and... prevalent strategy type over time in human–human pairs (Tit‑for‑Tat vs unconditi...
Strategy estimation indicates human–AI subjects tend to favor Grim Trigger when allowed pre-play communication.
Strategy-estimation/classification applied to subjects' choices in the human–AI condition with pre-play chat (subset of the human–AI n = 126); inferred strategy prevalence shows elevated assignment to Grim Trigger-type rules.
medium positive Playing Against the Machine: Cooperation, Communication, and... prevalence/frequency of Grim Trigger strategy classification among subjects
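For reference, the two strategy rules named in this cluster, in iterated-prisoner's-dilemma form; toy definitions, not the paper's estimation procedure.

```python
# Toy definitions, not the paper's estimation procedure (C = cooperate,
# D = defect; each rule maps the opponent's history to the next move).
def tit_for_tat(opp_hist: list[str]) -> str:
    # Cooperate first, then mirror the opponent's last move.
    return "C" if not opp_hist else opp_hist[-1]

def grim_trigger(opp_hist: list[str]) -> str:
    # Cooperate until the opponent defects once, then defect forever.
    return "D" if "D" in opp_hist else "C"
```

Grim Trigger's irreversibility is what makes a stated rule a credible commitment; one reading of the chat-log finding above is that subjects lean on explicit, verifiable rules more when facing an AI.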
Version 1.0 marks integration into operational workflows and establishes a base for future capabilities.
Authors report that v1.0 has been used in verification and mask-refinement loops for real datasets (MeerKAT, ASKAP, APERTIF); no detailed deployment metrics provided.
medium positive iDaVIE v1.0: A virtual reality tool for interactive analysis... operational integration status of v1.0
Immersive inspection tools like iDaVIE complement automated ML pipelines by helping generate higher-quality labels and curated training examples.
Paper argues conceptual complementarity and cites iDaVIE's use for mask refinement and curated subcube export; no experimental comparison of label quality or downstream ML performance provided.
medium positive iDaVIE v1.0: A virtual reality tool for interactive analysis... label quality and availability of curated training examples
iDaVIE accelerates inspection-driven parts of astronomy workflows (e.g., mask refinement, verification).
Reported use cases where iDaVIE was used to refine masks and verify sources in real datasets; no measured time-per-task or throughput statistics provided.
medium positive iDaVIE v1.0: A virtual reality tool for interactive analysis... inspection throughput (time per cube inspected; masks corrected per hour)
iDaVIE has already been integrated into real pipelines (MeerKAT, ASKAP, APERTIF) and used to improve quality control, refine detection masks, and identify new sources.
Author statement of integration and use cases citing verification of HI data cubes from MeerKAT, ASKAP and APERTIF; no quantitative deployment counts or independent validation provided in the text.
medium positive iDaVIE v1.0: A virtual reality tool for interactive analysis... integration into operational data-reduction/verification workflows; effects on Q...
There is a need for policies supporting workforce transitions (retraining, portability of skills) and safety/regulation for embodied agents operating in public spaces.
Policy recommendation grounded in anticipated labor and safety risks; proposed but not empirically evaluated.
medium positive Why AI systems don't learn and what to do about it: Lessons ... policy adoption; retraining program coverage; safety/regulatory frameworks imple...
Benchmarks and tasks that mix observation and intervention (imitation with sparse feedback, active imitation, transfer under domain shift, continual learning streams) are required to evaluate the architecture.
Proposal for evaluation tasks and benchmarks; not empirically validated in the paper.
medium positive Why AI systems don't learn and what to do about it: Lessons ... benchmark performance on mixed observation-intervention tasks
Embodied robotics experiments are necessary to evaluate real-world constraints such as sample efficiency, physical affordances, and motor learning.
Methodological recommendation recognizing simulation-to-real gaps; no experiments reported.
medium positive Why AI systems don't learn and what to do about it: Lessons ... sample efficiency and performance in real-world embodied tasks
Simulated environments (procedural, nonstationary), multi-agent social domains, and open-world 3D simulators are appropriate for scalable iteration to test the proposed architecture.
Methodological recommendation and suggested experimental approaches; not tested in the paper.
medium positive Why AI systems don't learn and what to do about it: Lessons ... suitability and scalability of simulation platforms for architecture evaluation
Neuromodulatory systems and meta-decision circuits in animals provide analogies for implementing meta-control (M) in artificial systems.
Neuroscience analogy cited to motivate architectural choices; not empirically instantiated in the paper.
medium positive Why AI systems don't learn and what to do about it: Lessons ... effectiveness of biologically inspired gating/plasticity mechanisms on learning ...
Developmental trajectories can scaffold gradual competence (from observation to exploratory action) and should be reflected in training curricula.
Argument from developmental biology and learning theory; proposed as a design principle rather than empirically tested here.
medium positive Why AI systems don't learn and what to do about it: Lessons ... learning progression speed; final competence given staged curricula
Evolution supplies inductive biases and slow structural priors that can be leveraged in artificial learners.
Biological analogy and theoretical suggestion; no empirical experiments presented to quantify effect in AI systems.
medium positive Why AI systems don't learn and what to do about it: Lessons ... effect of structural priors on learning speed and generalization
The taxonomy and measurement approach provide operational metrics to quantify empathic communication for economic analyses (productivity, customer satisfaction, retention).
Authors propose that their data-driven taxonomy and automated/coding measures can be used as metrics; the paper demonstrates derivation and use in trial outcomes but does not present direct economic outcome measurements.
medium positive Practicing with Language Models Cultivates Human Empathic Co... operational empathic communication metrics (taxonomy-derived measures)
LLM-generated responses frequently score as more empathic than human-written responses in blinded evaluations.
Blinded evaluations comparing LLM-generated replies with human-written replies, using recipient/judge ratings of perceived empathy. Exact sample sizes are not specified in the summary; the ratings derive from the study's evaluation procedures.
medium positive Practicing with Language Models Cultivates Human Empathic Co... blinded empathy judgments (perceived empathy ratings)
LLMs are more likely to complement human tacit skills than to replace explicit rule‑following jobs; value accrues to workers and firms that integrate model outputs with human judgment and tacit expertise.
Labor‑economics style argument and theoretical reasoning; no empirical labor market analysis provided.
medium positive Why the Valuable Capabilities of LLMs Are Precisely the Unex... complementarity vs substitution of human labor (especially tacit-skill jobs)
Commoditization via rule extraction is limited; firms that can harness and deploy tacit LLM capabilities will retain economic rents.
Theoretical economic argument based on non‑rule‑encodability; no empirical firm‑level data included.
medium positive Why the Valuable Capabilities of LLMs Are Precisely the Unex... ability to commoditize/replicate LLM capabilities via rule extraction
The highest‑value attributes of LLMs may be inherently non‑decomposable into simple, auditable rules, which increases the value of proprietary, black‑box models and strengthens economies of scale and scope for large model providers.
Economic reasoning and theoretical implications drawn from the central thesis; no empirical market analyses provided.
medium positive Why the Valuable Capabilities of LLMs Are Precisely the Unex... value capture by model providers (proprietary rents/economies of scale)
Some LLM capabilities are tacit, practice‑derived, or 'insight'‑like, akin to the Chinese concept of Wu (sudden insight through practiced skill).
Philosophical framing and analogy to the concept of tacit knowledge (Wu); argumentative rather than empirical support.
medium positive Why the Valuable Capabilities of LLMs Are Precisely the Unex... characterization of LLM competence as tacit/insight-like
The economically valuable capabilities of large language models are precisely those that cannot be fully encoded as a complete, human‑readable set of discrete rules.
Formal, conceptual argument (proof by contradiction) plus qualitative historical case analysis comparing expert systems and LLMs; no new empirical datasets or experiments reported.
medium positive Why the Valuable Capabilities of LLMs Are Precisely the Unex... economic value / capability of LLMs (degree of rule‑encodability vs tacitness)
Open dataset and code improve reproducibility and lower barriers for follow-up work on applied LLM tools and economic impact studies.
Release of SlideRL dataset (288 rollouts) and code repository; general statement about reproducibility benefits.
medium positive Learning to Present: Inverse Specification Rewards for Agent... Availability of artifacts that can be used to reproduce/extend the work
Parameter-efficient RL fine-tuning (0.5% of params) can yield large quality gains, implying a potentially high ROI for targeted fine-tuning versus full-model scaling.
Observed empirical gain of +33.1% for the tuned 7B over its untuned base and the 91.2% relative performance vs Claude Opus 4.6; implication drawn about cost-effectiveness of tuning few parameters rather than scaling model size.
medium positive Learning to Present: Inverse Specification Rewards for Agent... Quality gains after parameter-efficient fine-tuning and implied cost-effectivene...
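A back-of-envelope check on the parameter-efficiency figure, assuming a LoRA-style adapter (the paper's exact adapter configuration is not given here; the shapes below are invented): a rank-r adapter on a d_in x d_out weight trains r(d_in + d_out) parameters.

```python
# Shapes are invented, not the actual 7B configuration from the paper.
def lora_trainable(d_in: int, d_out: int, rank: int) -> int:
    # A rank-r adapter adds matrices A (d_in x r) and B (r x d_out).
    return rank * (d_in + d_out)

full_params = 7_000_000_000                   # ~7B base model
layers, d_model, rank = 28, 3584, 64          # hypothetical placement
adapted_mats_per_layer = 4                    # e.g. q/k/v/o projections
trainable = layers * adapted_mats_per_layer * lora_trainable(d_model, d_model, rank)
print(f"{trainable / full_params:.2%} of parameters trained")  # ≈ 0.73%
```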
The inverse-specification reward—where an LLM attempts to recover the original brief from generated slides—provides a holistic fidelity signal.
Reward design: inverse-specification component implemented and used as part of composite reward; claimed to measure fidelity via recovery accuracy.
medium positive Learning to Present: Inverse Specification Rewards for Agent... Accuracy of recovering original brief from generated slides (used as fidelity si...
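A hypothetical shape for such a reward, with a stand-in judge call and a crude term-overlap scorer; the paper's actual prompt and scoring function are not specified in the summary.

```python
# `judge` and the overlap scorer are stand-ins; the paper's prompt and
# scoring function are not given in the summary.
def inverse_spec_reward(brief: str, slides: str, judge) -> float:
    recovered = judge(f"Infer the original brief from these slides:\n{slides}")
    brief_terms = set(brief.lower().split())
    hits = sum(term in recovered.lower() for term in brief_terms)
    return hits / max(len(brief_terms), 1)    # fidelity signal in [0, 1]
```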
Performance on this agentic slide-generation task is driven more by instruction adherence and tool-use compliance than by raw model parameter count.
Cross-model comparison across six models on the 48-task benchmark, with analyses showing instruction adherence and tool-use compliance better predict agent performance than parameter count.
medium positive Learning to Present: Inverse Specification Rewards for Agent... Predictive strength (correlation/importance) of instruction adherence and tool-u...
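The analysis pattern in miniature (invented numbers, not the paper's six-model results): correlate each candidate predictor with benchmark score and compare. Requires Python 3.10+ for statistics.correlation.

```python
# Invented numbers, not the paper's six-model results (Python 3.10+).
from statistics import correlation

score     = [55, 60, 72, 78, 85, 91]   # benchmark score per model
adherence = [50, 58, 70, 80, 84, 92]   # instruction-adherence rating
params_b  = [7, 70, 8, 34, 14, 72]     # parameter count, billions

print(correlation(score, adherence))   # ≈ 0.99: tracks performance
print(correlation(score, params_b))    # ≈ 0.23: size alone predicts less
```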