Evidence (7953 claims)
Claim counts by topic (topics overlap, so the per-topic counts sum to more than the total):
- Adoption: 5539 claims
- Productivity: 4793 claims
- Governance: 4333 claims
- Human-AI Collaboration: 3326 claims
- Labor Markets: 2657 claims
- Innovation: 2510 claims
- Org Design: 2469 claims
- Skills & Training: 2017 claims
- Inequality: 1378 claims
Evidence Matrix
Claim counts by outcome category and direction of finding.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 402 | 112 | 67 | 480 | 1076 |
| Governance & Regulation | 402 | 192 | 122 | 62 | 790 |
| Research Productivity | 249 | 98 | 34 | 311 | 697 |
| Organizational Efficiency | 395 | 95 | 70 | 40 | 603 |
| Technology Adoption Rate | 321 | 126 | 73 | 39 | 564 |
| Firm Productivity | 306 | 39 | 70 | 12 | 432 |
| Output Quality | 256 | 66 | 25 | 28 | 375 |
| AI Safety & Ethics | 116 | 177 | 44 | 24 | 363 |
| Market Structure | 107 | 128 | 85 | 14 | 339 |
| Decision Quality | 177 | 76 | 38 | 20 | 315 |
| Fiscal & Macroeconomic | 89 | 58 | 33 | 22 | 209 |
| Employment Level | 77 | 34 | 80 | 9 | 202 |
| Skill Acquisition | 92 | 33 | 40 | 9 | 174 |
| Innovation Output | 120 | 12 | 23 | 12 | 168 |
| Firm Revenue | 98 | 34 | 22 | — | 154 |
| Consumer Welfare | 73 | 31 | 37 | 7 | 148 |
| Task Allocation | 84 | 16 | 33 | 7 | 140 |
| Inequality Measures | 25 | 77 | 32 | 5 | 139 |
| Regulatory Compliance | 54 | 63 | 13 | 3 | 133 |
| Error Rate | 44 | 51 | 6 | — | 101 |
| Task Completion Time | 88 | 5 | 4 | 3 | 100 |
| Training Effectiveness | 58 | 12 | 12 | 16 | 99 |
| Worker Satisfaction | 47 | 32 | 11 | 7 | 97 |
| Wages & Compensation | 53 | 15 | 20 | 5 | 93 |
| Team Performance | 47 | 12 | 15 | 7 | 82 |
| Automation Exposure | 24 | 22 | 9 | 6 | 62 |
| Job Displacement | 6 | 38 | 13 | — | 57 |
| Hiring & Recruitment | 41 | 4 | 6 | 3 | 54 |
| Developer Productivity | 34 | 4 | 3 | 1 | 42 |
| Social Protection | 22 | 10 | 6 | 2 | 40 |
| Creative Output | 16 | 7 | 5 | 1 | 29 |
| Labor Share of Income | 12 | 5 | 9 | — | 26 |
| Skill Obsolescence | 3 | 20 | 2 | — | 25 |
| Worker Turnover | 10 | 12 | — | 3 | 25 |
Claims and Evidence
Each claim below is followed by a note on the basis and limits of its supporting evidence.

Claim: The Model Context Protocol (MCP) is widely adopted: >10,000 active MCP servers and 97 million monthly SDK downloads as of early 2026.
Evidence: Protocol-adoption metrics reported in the paper; presumably aggregated server and SDK-download statistics, time-stamped to early 2026.
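For readers unfamiliar with MCP, exposing a tool via an MCP server takes only a few lines with the official Python SDK (`pip install mcp`); the server name and `word_count` tool below are illustrative, not taken from the paper.

```python
# Minimal MCP server sketch using the official Python SDK.
# The server name and the tool are illustrative examples only.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("demo-server")

@mcp.tool()
def word_count(text: str) -> int:
    """Count whitespace-separated words in the given text."""
    return len(text.split())

if __name__ == "__main__":
    # Serves over stdio by default, so any MCP-capable client can connect.
    mcp.run()
```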
Claim: Agents learn from one another without curricula (agent-to-agent learning occurs organically in the ecosystem).
Evidence: Naturalistic daily observations across platforms noting peer-to-peer agent interactions and apparent transfer of behaviors/knowledge; no controlled tests of learning or counterfactuals.

Claim: Agents form idea cascades and quality hierarchies without any centrally designed curriculum or intervention (emergent peer learning and spontaneous knowledge diffusion).
Evidence: Observed interaction patterns across platforms showing cascades, hierarchies, and diffusion among agents in the qualitative dataset; documentation is comparative and observational rather than experimental.

Claim: A rapidly growing ecosystem of autonomous AI agents is producing organic, multi-agent learning dynamics that go beyond dyadic human–AI interactions.
Evidence: Naturalistic, qualitative daily observations over one month across multiple agent platforms (reported platforms: Moltbook, The Colony, 4claw); reported coverage of >167,000 agents interacting as peers; comparative observational documentation rather than controlled experimentation.
Claim: Historical institutional publication records encode an extractable evaluative signal ("taste") that can be learned by models and used for scalable triage, screening, and curation of submissions.
Evidence: Empirical results showing improved predictive accuracy after fine-tuning on accept/reject records, plus demonstration of transfer tasks and a cross-field (economics) result; implications for applications (triage, screening) are drawn from these empirical findings rather than directly deployed field experiments.

Claim: Models show well-calibrated confidence: their highest-confidence predictions are 100% accurate.
Evidence: Calibration analysis of fine-tuned models comparing predicted-confidence levels to actual accuracy; the paper reports that the examples to which the model assigned its highest confidence were 100% accurate. (The number of highest-confidence examples and the calibration buckets are not reported in the provided text.)
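A calibration analysis of this kind can be approximated with a simple binning routine; the function and bin count below are our own sketch, not the paper's protocol.

```python
import numpy as np

def calibration_table(conf, correct, n_bins=10):
    """Per-bin accuracy vs. mean confidence for equal-width bins.
    A well-calibrated model has accuracy ~= mean confidence in each bin."""
    conf = np.asarray(conf, dtype=float)
    correct = np.asarray(correct, dtype=float)
    # Bin index 0..n_bins-1; clip so conf == 1.0 falls in the top bin.
    bins = np.clip((conf * n_bins).astype(int), 0, n_bins - 1)
    rows = []
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            rows.append((b / n_bins, (b + 1) / n_bins,
                         conf[mask].mean(), correct[mask].mean(), int(mask.sum())))
    return rows  # (bin_lo, bin_hi, mean_conf, accuracy, count)
```

The "highest-confidence predictions are 100% accurate" claim corresponds to the accuracy entry of the top bin; without the bin counts, it is impossible to tell how many examples that bin contains.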
Claim: The learned evaluative signal transfers to untrained tasks such as pairwise comparisons and one-sentence summaries.
Evidence: Fine-tuned models were evaluated on related, untrained evaluative tasks (pairwise comparisons of pitches and one-sentence summary evaluations) and showed positive transfer performance relative to baselines. (Specific metrics, effect sizes, and sample sizes for these transfer tasks are not provided in the supplied text.)
Claim: There is an economic rationale for disclosure mandates, certification of model properties (e.g., hallucination rates), and liability rules to internalize externalities from conversational AI.
Evidence: Policy recommendation based on economic analysis of information asymmetries and externalities; no empirical testing of these policies in this paper.

Claim: Natural conversational interfaces lower search and transaction costs, increasing demand for AI services and expanding markets.
Evidence: Economic reasoning and literature synthesis; the paper frames this as an implication rather than presenting empirical demand measurements.

Claim: Design interventions are necessary but not sufficient on their own; institutional measures (standards, certification, liability rules) are also needed to address harms and market failures.
Evidence: Economic and policy analysis within the paper arguing for combined design and institutional responses; no empirical evidence demonstrating the comparative effectiveness of these measures.

Claim: Controls for personalization, data retention, opt-out, and escalation to human assistance are important interface affordances for mitigating risks in conversational AI.
Evidence: Design heuristics and normative arguments from the paper and related literature; no empirical evaluation of these controls provided.

Claim: Real-time uncertainty/credibility signals and easy access to provenance (citations) should be provided to users to improve trust calibration.
Evidence: Design recommendation grounded in literature review and suggested best practices; the paper recommends A/B tests and lab/field experiments as future work rather than reporting results.

Claim: Ethical front-end design (explicit disclosure of AI identity, capability limits, uncertainty cues, provenance, user controls, and escalation paths) can reduce harms and mitigate important market failures in AI-enabled interactions.
Evidence: Normative and design-oriented recommendation supported by design heuristics and prior literature; no empirical trials reported showing quantified harm reduction.

Claim: Natural conversational style lowers friction and raises engagement and productivity.
Evidence: Argument derived from literature synthesis and comparative analysis of conversational norms vs. human dialogue; no original empirical measurements reported in the paper.
Claim: SlideFormer generalizes beyond a single GPU vendor (the design achieves high utilization on both NVIDIA and AMD GPUs).
Evidence: Reported experiments and utilization measurements on both NVIDIA (RTX 4090) and AMD GPUs showing sustained >95% of peak performance, implying cross-vendor applicability. The summary does not specify which AMD models or the breadth of tested kernels.

Claim: Custom Triton kernels and advanced I/O integration remove key bottlenecks in single-GPU fine-tuning pipelines and contribute to the observed throughput gains.
Evidence: The paper reports the use of custom Triton kernels for performance-critical primitives and improved I/O integration; throughput gains (1.40×–6.27×) are attributed in part to these optimizations. The summary does not isolate ablation results quantifying each optimization's contribution.
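For context, a custom Triton kernel for a performance-critical primitive looks like the fused add+ReLU below; this is a generic illustration of the technique, not one of SlideFormer's kernels.

```python
# Generic fused elementwise Triton kernel (illustrative; not from SlideFormer).
import torch
import triton
import triton.language as tl

@triton.jit
def fused_add_relu_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK: tl.constexpr):
    # Each program instance handles one BLOCK-sized tile of the tensors.
    pid = tl.program_id(axis=0)
    offs = pid * BLOCK + tl.arange(0, BLOCK)
    mask = offs < n_elements
    x = tl.load(x_ptr + offs, mask=mask)
    y = tl.load(y_ptr + offs, mask=mask)
    # Fusing add + ReLU into one kernel avoids a round trip through device memory.
    tl.store(out_ptr + offs, tl.maximum(x + y, 0.0), mask=mask)

def fused_add_relu(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    out = torch.empty_like(x)
    n = x.numel()
    grid = (triton.cdiv(n, 1024),)
    fused_add_relu_kernel[grid](x, y, out, n, BLOCK=1024)
    return out
```

Because Triton compiles to both NVIDIA and AMD backends, kernels written this way are one plausible route to the cross-vendor portability claimed above.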
Claim: Heterogeneous memory management (multi-tier placement across GPU, CPU, and storage) materially reduces peak on-device memory requirements.
Evidence: Authors describe an efficient memory layout and placement strategy across GPU, host RAM, and storage tiers and report lowered peak device memory use (≈2× reduction). The summary does not include low-level placement parameters or traces.
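A toy version of multi-tier placement in PyTorch keeps weights in pinned host memory and stages them onto the GPU only for the forward pass; this illustrates the general idea only, since the paper's placement policy and storage tier are not specified in the summary.

```python
# Toy GPU/CPU tiering sketch (illustrative; not SlideFormer's placement policy).
import torch

class TieredLinear(torch.nn.Linear):
    """Weights live in pinned host memory and are staged onto the GPU
    on demand, trading PCIe transfers for lower peak device memory."""
    def forward(self, x):
        w = self.weight.to(x.device, non_blocking=True)
        b = self.bias.to(x.device, non_blocking=True) if self.bias is not None else None
        return torch.nn.functional.linear(x, w, b)

layer = TieredLinear(4096, 4096)                     # parameters created on CPU
layer.weight.data = layer.weight.data.pin_memory()   # pinned for async H2D copies
if layer.bias is not None:
    layer.bias.data = layer.bias.data.pin_memory()

x = torch.randn(8, 4096, device="cuda")  # requires a CUDA device
y = layer(x)                             # weights staged to GPU per call
```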
Claim: SlideFormer sustains >95% of peak performance (high utilization) on both NVIDIA and AMD GPUs.
Evidence: Reported sustained peak-utilization measurements from experiments run on NVIDIA (e.g., RTX 4090) and AMD GPUs; the summary states >95% of peak performance but does not give the per-workload utilization measurement methodology.

Claim: SlideFormer supports up to 8× larger batch sizes and up to 6× larger models on the same GPU relative to prior single-GPU baselines.
Evidence: Reported comparisons to prior single-GPU baselines measuring achievable batch size and model-size capacity on the same GPU; exact baselines, workloads, and experimental configurations are not detailed in the summary.

Claim: SlideFormer reduces peak CPU and GPU memory usage by approximately 2× (roughly halving memory requirements).
Evidence: Authors report peak memory measurements showing about a 2× reduction in both GPU and CPU memory compared to baselines; the memory accounting method and baselines are not fully specified in the summary.

Claim: SlideFormer achieves 1.40×–6.27× higher throughput versus baseline systems.
Evidence: Quantitative evaluation comparing throughput (reported as tokens/sec or updates/sec) against state-of-the-art single-GPU and multi-GPU fine-tuning pipelines (baselines are unnamed in the summary). Measurements reported across single-GPU experiments (hardware includes RTX 4090 and AMD GPUs).

Claim: SlideFormer enables fine-tuning very large LLMs (reported up to 123B+ parameters) on a single GPU (e.g., RTX 4090).
Evidence: Authors report experiments and capability claims for single-GPU setups including an NVIDIA RTX 4090; model size stated as 123B+ in the paper summary. Details on the exact model family, sequence length, or batch size used for the 123B+ claim are not enumerated in the summary.
Claim: Combining negative constraints with sparse preference signals yields better tradeoffs (safety plus helpfulness) than preference-only training.
Evidence: Conceptual claim supported by qualitative comparisons and references to hybrid approaches in the literature (some constitutional/hybrid methods); the paper advocates this as a practical strategy and cites limited empirical indications.

Claim: Training primarily on negative constraints can reduce sycophancy and produce more stable adherence to rules compared to preference-only training.
Evidence: The paper combines theoretical reasoning with cited empirical instances (e.g., constraint-based or constitutional methods) that report improved harmlessness/constraint adherence. The claim is stated as both a theoretical expectation and one supported by selected empirical reports rather than a comprehensive controlled comparison.

Claim: Negative constraints (explicit prohibitions or dispreferred labels) are often discrete, finitely specifiable, and independently verifiable, enabling models to converge to stable boundaries via falsification-style learning.
Evidence: Theoretical/epistemological argument drawing on Popperian falsification and the paper's constructed structural model contrasting constraint and preference spaces. Empirical support is indirectly cited via methods like Constitutional AI that operationalize rule-like constraints.

Claim: Negative-only feedback (training on dispreferred or negative samples) can match or exceed preference-based RLHF (e.g., PPO/RLHF) on downstream tasks such as mathematical reasoning and harmlessness benchmarks.
Evidence: Synthesis of recent empirical methods cited in the paper (examples named: Negative Sample Reinforcement, Distributional Dispreference Optimization, Constitutional AI) reporting parity or improvements versus PPO/RLHF on tasks like math reasoning and harmlessness. The paper aggregates published results rather than presenting a single new large-scale controlled experiment; specific sample sizes and exact experimental protocols vary by cited work and are not uniformly reported in the paper.
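As a concrete illustration of training on dispreferred samples, an unlikelihood-style objective penalizes the probability mass a model assigns to negative tokens alongside standard NLL on positive tokens. This is a generic sketch of the family of techniques, not necessarily the exact loss of any method cited above; in practice the preferred and dispreferred continuations would come from separate forward passes.

```python
import torch
import torch.nn.functional as F

def negative_constraint_loss(logits, pos_targets, neg_targets, alpha=1.0):
    """NLL on preferred tokens plus an unlikelihood term that pushes
    probability mass away from dispreferred tokens.
    logits: (batch, seq, vocab); targets: (batch, seq) token ids."""
    logp = F.log_softmax(logits, dim=-1)
    # Standard likelihood term on preferred continuations.
    nll = F.nll_loss(logp.transpose(1, 2), pos_targets)
    # Unlikelihood term: -log(1 - p(dispreferred token)).
    p_neg = logp.gather(-1, neg_targets.unsqueeze(-1)).squeeze(-1).exp()
    unlikelihood = -torch.log1p(-p_neg.clamp(max=1 - 1e-6)).mean()
    return nll + alpha * unlikelihood
```

The negative term is bounded and verifiable per token, which is the discreteness property the falsification argument above relies on.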
Claim: The core findings (harm from ToM order mismatches and benefits from A-ToM) are robust to partners beyond LLM-driven agents.
Evidence: The paper reports robustness checks testing generalization to non-LLM agent classes (details summarized in the robustness section); comparisons use the same coordination metrics.

Claim: A-ToM recovers coordination performance by aligning its effective ToM depth with partners across a range of multiagent tasks.
Evidence: Experimental results showing A-ToM achieves coordination levels closer to matched fixed-order pairings across the repeated matrix game, grid navigation tasks, and Overcooked when facing partners with different fixed ToM depths.

Claim: An adaptive ToM (A-ToM) agent that infers its partner's ToM order from prior interactions and conditions its predictions and actions on that estimate restores alignment and improves coordination.
Evidence: The authors implemented A-ToM (estimation from interaction history plus conditioning of partner-action predictions) and evaluated it against fixed-order agents in the four environments; they report improvements in coordination metrics when A-ToM is paired with partners of varying ToM orders.
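The "estimate partner depth, then condition" loop can be realized as a Bayesian posterior over a discrete set of candidate ToM orders; the class below is a generic sketch under that assumption, not the paper's implementation.

```python
import numpy as np

class AdaptiveToM:
    """Maintain a posterior over the partner's ToM depth and act on the
    most likely depth (generic sketch; names are hypothetical)."""
    def __init__(self, depth_models):
        # depth_models[k](history, action) -> P(partner action | ToM depth k)
        self.depth_models = depth_models
        self.log_post = np.zeros(len(depth_models))  # uniform prior

    def observe(self, history, partner_action):
        # Bayesian update: each depth hypothesis scores the observed action.
        for k, model in enumerate(self.depth_models):
            self.log_post[k] += np.log(model(history, partner_action) + 1e-12)
        self.log_post -= self.log_post.max()  # numerical stability

    def estimated_depth(self) -> int:
        return int(np.argmax(self.log_post))
```

The agent then conditions its partner-action predictions (and hence its own policy) on `estimated_depth()`, which is what aligns its effective ToM order with the partner's.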
Claim: Security testing included prompt-injection/adversarial inputs to probe the security agent and layered defenses.
Evidence: The paper reports conducting prompt-injection/adversarial tests as part of the security evaluation; the summary does not include the number, nature, or success/failure rates of these tests.

Claim: Rubric-based, structured scoring promotes consistent, auditable judgments and reduces subjective assessor bias.
Evidence: The system implements rubric-based, multi-dimensional scoring, and the paper asserts this improves consistency and auditability; no inter-rater reliability statistics or controlled comparisons to human/monolithic baselines are reported in the summary.

Claim: Isolating sensitive logic (scoring rubrics, adaptive difficulty rules) from free-text generation reduces the attack surface.
Evidence: Design principle implemented in the architecture (separation of concerns between agents); the benefit is claimed in the paper. Empirical validation details (quantitative reduction in successful attacks) are not provided in the summary.

Claim: CoMAI implements multi-layered defenses against prompt injection and other prompt-level attacks via a dedicated security agent and constrained state transitions.
Evidence: System design (a dedicated security/validation agent and a finite-state machine enforcing information flow) and reported security testing that included prompt-injection/adversarial inputs to probe the defenses.
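At its simplest, the constrained-state-transition idea reduces to a whitelist of legal transitions checked before any agent output can advance the interview, so a prompt-injected instruction cannot move the system into an unintended state. The states and transitions below are hypothetical, not CoMAI's actual protocol.

```python
# Illustrative FSM gate; states/transitions are hypothetical, not CoMAI's.
ALLOWED = {
    "greeting":     {"ask_question"},
    "ask_question": {"score_answer", "clarify"},
    "clarify":      {"score_answer"},
    "score_answer": {"ask_question", "wrap_up"},
    "wrap_up":      set(),
}

class InterviewFSM:
    def __init__(self, state: str = "greeting"):
        self.state = state

    def transition(self, next_state: str) -> None:
        # Reject any move not on the whitelist, regardless of what an
        # LLM agent's (possibly injected) output requests.
        if next_state not in ALLOWED[self.state]:
            raise ValueError(f"illegal transition {self.state} -> {next_state}")
        self.state = next_state
```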
Claim: Candidate satisfaction with CoMAI was 84.41%.
Evidence: Reported experimental metric in the paper summary; likely derived from post-interview surveys, but the survey design, sample size, and response rates are not specified in the summary.

Claim: In experiments, CoMAI achieved 83.33% recall.
Evidence: Reported experimental metric in the paper summary; no information is provided on how recall was computed (e.g., per-class vs. overall), sample sizes, or confidence intervals.

Claim: In experiments, CoMAI achieved 90.47% accuracy.
Evidence: Reported experimental metric in the paper summary. The underlying dataset size, class balance, and baseline comparison details are not provided in the summary.

Claim: CoMAI outperforms monolithic LLM-based assessments on robustness, fairness, and interpretability.
Evidence: Comparative framing and reported experiments in the paper claiming improved robustness, fairness, and interpretability relative to single-agent LLM baselines; baseline specifics, dataset sizes, and statistical tests are not disclosed in the provided summary.
Claim: The clarification protocol elicits missing premises or confirms intent rather than producing an ill-aligned response.
Evidence: The paper describes structured clarification templates (binary checks, multi-choice scaffolds, short clarifying questions) intended to elicit missing information; this is a design assertion without reported user-study evidence.

Claim: There are potential welfare gains from improved decision quality and trust in automation, particularly where human oversight remains required.
Evidence: Conceptual welfare analysis; no welfare quantification or simulations provided.

Claim: Structured AFs can reduce information asymmetry by making reasoning traceable, thereby lowering search and verification costs in transactions and contracting.
Evidence: Economic reasoning drawing on information-asymmetry theory; no empirical transaction-cost measurements given.

Claim: Firms offering argumentatively transparent AI can obtain competitive advantage and charge premium prices for verifiability and auditability.
Evidence: Economic reasoning and market-structure inference; no empirical pricing or demand-elasticity studies provided.

Claim: Demand will shift toward AI systems that provide verifiable, contestable reasoning in regulated/high-stakes sectors (healthcare, law, finance, public policy).
Evidence: Economic argument and market prediction in the paper; speculative without market data or forecasting models presented.

Claim: This approach supports collaborative reasoning ("with" humans) rather than opaque automation "for" humans, improving uptake in high-stakes settings.
Evidence: Conceptual argument about human-in-the-loop workflows and collaborative roles; no empirical uptake or deployment data presented.

Claim: Framing decisions as contestable and revisable (via dialectical challenge and update) increases robustness and trust in AI-supported decision-making.
Evidence: Conceptual claim arguing that contestability and revision improve robustness and trust; no experimental evidence or user studies provided.
Claim: Running formal dialectical/acceptability semantics and dialogue protocols over AFs enables agents that reason with humans through structured debates and revisions.
Evidence: Conceptual integration of formal semantics (Dung-style, bipolar, weighted) and dialogue protocols; no human-subject studies or system evaluations reported.

Claim: Argumentation Framework Synthesis: mined fragments can be combined into coherent formal argumentation frameworks (AFs) with explicit semantics, enabling verification and automated inference.
Evidence: Conceptual algorithmic proposal (graph synthesis, canonicalization, formal semantics); no empirical synthesis results or benchmarks presented.
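As a reference point for what "explicit semantics enabling verification" means here, Dung's grounded extension (the least fixed point of the characteristic function over an attack relation) is computable in a few lines:

```python
def grounded_extension(args, attacks):
    """Grounded semantics for a Dung AF: the least fixed point of the
    characteristic function. args: set of argument ids;
    attacks: set of (attacker, target) pairs."""
    attackers_of = {a: {x for (x, y) in attacks if y == a} for a in args}
    accepted, changed = set(), True
    while changed:
        # An argument is defended if every attacker is itself attacked
        # by an already-accepted argument (unattacked args are trivially in).
        defended = {
            a for a in args
            if all(any((d, b) in attacks for d in accepted) for b in attackers_of[a])
        }
        changed = defended != accepted
        accepted = defended
    return accepted

# Example: a attacks b, b attacks c  =>  grounded extension is {a, c}.
print(grounded_extension({"a", "b", "c"}, {("a", "b"), ("b", "c")}))
```

Semantics like this give a decidable, inspectable notion of which mined claims survive challenge, which is what underwrites the verification and auditability claims in the pairs above.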
Claim: Argumentation Framework Mining: LLMs and NLP pipelines can be used to extract claims, premises, relations (attack/support), and provenance from text corpora.
Evidence: Proposed methodological pipeline (fine-tuning/prompting LLMs and IE pipelines); conceptual proposal without implementation details or experimental results.

Claim: Combining formal argument structures with LLMs' ability to mine and generate rich, contextual arguments from unstructured text promises human-aware, verifiable, and trustworthy AI for high-stakes domains.
Evidence: Conceptual synthesis of computational argumentation (formal AFs) and LLM capabilities; no empirical validation or quantified metrics provided.

Claim: Integrating computational argumentation with large language models (LLMs) creates a new paradigm, Argumentative Human-AI Decision-Making, in which AI agents participate in dialectical, contestable, and revisable decision processes with humans.
Evidence: Conceptual/design argument presented in the paper; no empirical implementation or sample; draws on prior work in computational argumentation and the capabilities of LLMs.

Claim: There will likely be growth in complementary markets for model verification, provenance tracking, legal-AI audits, and human-in-the-loop workflow services.
Evidence: Market foresight based on identified unmet needs (explainability, verification) and illustrative examples; no market-sizing data.