Evidence (5586 claims)
Claim counts by topic:
- Adoption: 5586
- Productivity: 4857
- Governance: 4381
- Human-AI Collaboration: 3417
- Labor Markets: 2685
- Innovation: 2581
- Org Design: 2499
- Skills & Training: 2031
- Inequality: 1382
Evidence Matrix
Claim counts by outcome category and direction of finding.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 417 | 113 | 67 | 480 | 1091 |
| Governance & Regulation | 419 | 202 | 124 | 64 | 823 |
| Research Productivity | 261 | 100 | 34 | 303 | 703 |
| Organizational Efficiency | 406 | 96 | 71 | 40 | 616 |
| Technology Adoption Rate | 323 | 128 | 74 | 38 | 568 |
| Firm Productivity | 307 | 38 | 70 | 12 | 432 |
| Output Quality | 260 | 71 | 27 | 29 | 387 |
| AI Safety & Ethics | 118 | 179 | 45 | 24 | 368 |
| Market Structure | 107 | 128 | 85 | 14 | 339 |
| Decision Quality | 177 | 75 | 37 | 19 | 312 |
| Fiscal & Macroeconomic | 89 | 58 | 33 | 22 | 209 |
| Employment Level | 74 | 34 | 78 | 9 | 197 |
| Skill Acquisition | 98 | 36 | 40 | 9 | 183 |
| Innovation Output | 121 | 12 | 24 | 13 | 171 |
| Firm Revenue | 98 | 35 | 24 | — | 157 |
| Consumer Welfare | 73 | 31 | 37 | 7 | 148 |
| Task Allocation | 87 | 16 | 34 | 7 | 144 |
| Inequality Measures | 25 | 76 | 32 | 5 | 138 |
| Regulatory Compliance | 54 | 61 | 13 | 3 | 131 |
| Task Completion Time | 89 | 7 | 4 | 3 | 103 |
| Error Rate | 44 | 51 | 6 | — | 101 |
| Training Effectiveness | 58 | 12 | 12 | 16 | 99 |
| Worker Satisfaction | 47 | 33 | 11 | 7 | 98 |
| Wages & Compensation | 54 | 15 | 20 | 5 | 94 |
| Team Performance | 47 | 12 | 15 | 7 | 82 |
| Automation Exposure | 27 | 26 | 10 | 6 | 72 |
| Job Displacement | 6 | 39 | 13 | — | 58 |
| Hiring & Recruitment | 40 | 4 | 6 | 3 | 53 |
| Developer Productivity | 34 | 4 | 3 | 1 | 42 |
| Social Protection | 22 | 11 | 6 | 2 | 41 |
| Creative Output | 16 | 7 | 5 | 1 | 29 |
| Labor Share of Income | 12 | 6 | 9 | — | 27 |
| Skill Obsolescence | 3 | 20 | 2 | — | 25 |
| Worker Turnover | 10 | 12 | — | 3 | 25 |
Adoption (filtered claims)
The probe's discriminating power scales with system capability: it exposes subtler failures as models get stronger.
Observed increased discrimination in stronger models using a 'ceiling discrimination' probe and independent judges (Gemini Pro, Copilot Pro); comparisons across 13 systems and ceiling runs indicate the instrument revealed subtler failures in higher-capability systems.
Adoption of AI feedback could lower marginal costs of delivering high-quality feedback and change fixed vs. variable cost structures for instruction delivery.
Economic implication discussed by workshop participants (50 scholars) as a theoretical possibility; no quantitative cost estimates in the report.
Generative AI can enable new feedback modalities (text, hints, worked examples, formative prompts) adaptable to content and learner needs.
Thematic conclusions from the interdisciplinary meeting of 50 scholars, describing possible modality generation capabilities of current generative models; no empirical modality-comparison data provided.
Immediate AI-generated feedback may sustain learner momentum and improve formative assessment cycles (timeliness & engagement).
Expert-opinion synthesis from structured workshop (50 scholars) identifying timely feedback as a potential pedagogical benefit; no empirical trials reported.
Large language and generative models can tailor explanations, scaffolding, and practice to learners' current states and preferences (personalization).
Workshop expert consensus and thematic synthesis from 50 interdisciplinary scholars; illustrative examples discussed rather than empirical evaluation.
Generative AI can produce real-time, individualized feedback at scale, potentially reducing per-student feedback costs and increasing feedback frequency.
Synthesis of expert perspectives from an interdisciplinary workshop of 50 scholars (educational psychology, computer science, learning sciences); qualitative small-group activities and thematic extraction. No primary experimental or quantitative cost data presented.
SERF (Structured Error Recovery Framework) defines structured, machine-readable failure semantics to enable deterministic agent self-correction and automated recovery strategies.
Design and formal specification of SERF in the paper; formalized as a testable hypothesis with reproducible experimental methodology.
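The summary does not reproduce SERF's schema; as a minimal sketch of the underlying idea, machine-readable failure semantics driving a deterministic recovery policy, something like the following could be used (the FailureClass values, field names, and recovery rules are illustrative assumptions, not the paper's specification):

```python
from dataclasses import dataclass
from enum import Enum

class FailureClass(Enum):
    TRANSIENT = "transient"          # e.g. timeout or rate limit: safe to retry
    INVALID_INPUT = "invalid_input"  # arguments rejected: re-plan the call
    PERMISSION = "permission"        # not recoverable by the agent: escalate

@dataclass
class ToolFailure:
    tool: str
    failure_class: FailureClass
    detail: str
    retry_after_s: float | None = None

def recover(failure: ToolFailure) -> dict:
    """Deterministic recovery strategy keyed purely on the structured failure,
    so the same failure always yields the same next action."""
    if failure.failure_class is FailureClass.TRANSIENT:
        return {"action": "retry", "delay_s": failure.retry_after_s or 1.0}
    if failure.failure_class is FailureClass.INVALID_INPUT:
        return {"action": "replan", "hint": failure.detail}
    return {"action": "escalate", "reason": failure.detail}
```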
ATBA (Adaptive Timeout Budget Allocation) frames sequential tool invocation as a budget-allocation problem over heterogeneous latency distributions to improve end-to-end latency and reliability.
Algorithmic formulation and formalization provided in the paper; ATBA is presented as a testable hypothesis with reproducible benchmarks and latency/error models.
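The paper's allocation rule is not given in the summary; a minimal sketch of the budget-allocation framing, assuming a simple proportional split over per-tool tail-latency estimates (the proportional rule and the p95 inputs are assumptions for illustration):

```python
def allocate_timeouts(total_budget_s: float, p95_latencies_s: list[float]) -> list[float]:
    """Split an end-to-end latency budget across sequential tool calls in
    proportion to each tool's estimated tail latency."""
    total = sum(p95_latencies_s)
    return [total_budget_s * p / total for p in p95_latencies_s]

# Example: a 10 s end-to-end budget over three tools with p95 latencies of
# 1 s, 3 s and 1 s yields per-call timeouts of [2.0, 6.0, 2.0] seconds.
print(allocate_timeouts(10.0, [1.0, 3.0, 1.0]))
```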
The MCP (Model Context Protocol) is widely adopted: >10,000 active MCP servers and 97 million monthly SDK downloads as of early 2026.
Protocol-adoption metrics reported in the paper; presumably aggregated server counts and SDK-download statistics, time-stamped to early 2026.
Agents learn from one another without curricula (agent-to-agent learning occurs organically in the ecosystem).
Naturalistic daily observations across platforms noting peer-to-peer agent interactions and apparent transfer of behaviors/knowledge; no controlled tests of learning or counterfactuals.
Agents form idea cascades and quality hierarchies without any centrally designed curriculum or intervention (emergent peer learning and spontaneous knowledge diffusion).
Observed interaction patterns across platforms showing cascades, hierarchies, and diffusion among agents in the qualitative dataset; documentation is comparative and observational rather than experimental.
A rapidly growing ecosystem of autonomous AI agents is producing organic, multi-agent learning dynamics that go beyond dyadic human–AI interactions.
Naturalistic, qualitative daily observations over one month across multiple agent platforms (reported platforms: Moltbook, The Colony, 4claw); coverage reported of >167,000 agents interacting as peers; comparative observational documentation rather than controlled experimentation.
There is an economic rationale for disclosure mandates, certification of model properties (e.g., hallucination rates), and liability rules to internalize externalities from conversational AI.
Policy recommendation based on economic analysis of information asymmetries and externalities; no empirical testing of these policies in this paper.
Natural conversational interfaces lower search and transaction costs, increasing demand for AI services and expanding markets.
Economic reasoning and literature synthesis; the paper frames this as an implication rather than presenting empirical demand measurements.
Design interventions alone are necessary but not sufficient; institutional measures (standards, certification, liability rules) are also important to address harms and market failures.
Economic and policy analysis within the paper arguing for combined design and institutional responses; no empirical evidence demonstrating the comparative effectiveness of these measures.
Controls for personalization, data retention, opt-out, and escalation to human assistance are important interface affordances to mitigate risks in conversational AI.
Design heuristics and normative arguments from the paper and related literature; no empirical evaluation of these controls provided.
Real-time uncertainty/credibility signals and easy access to provenance (citations) should be provided to users to improve trust calibration.
Design recommendation grounded in literature review and suggested best practices; the paper recommends A/B tests and lab/field experiments as future work rather than reporting results.
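No interface is specified in the paper; a hypothetical sketch of a reply payload that carries a calibrated confidence score, source provenance, and an AI-identity disclosure flag (all field and type names here are assumptions):

```python
from dataclasses import dataclass, field

@dataclass
class Citation:
    source_url: str
    snippet: str

@dataclass
class AssistantReply:
    text: str
    confidence: float                 # calibrated score in [0, 1], shown to the user
    citations: list[Citation] = field(default_factory=list)
    ai_disclosed: bool = True         # explicit AI-identity disclosure flag

def render(reply: AssistantReply) -> str:
    """Surface the uncertainty signal and provenance alongside the answer text."""
    sources = "; ".join(c.source_url for c in reply.citations) or "no sources"
    return f"{reply.text} [confidence {reply.confidence:.0%}] (sources: {sources})"
```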
Ethical front-end design (explicit disclosure of AI identity, capability limits, uncertainty cues, provenance, user controls, and escalation paths) can reduce harms and address important market failures in AI-enabled interactions.
Normative and design-oriented recommendation supported by design heuristics and prior literature; no empirical trials reported showing quantified harm reduction.
Natural conversational style lowers friction and raises engagement and productivity.
Argument derived from literature synthesis and comparative analysis of conversational norms vs. human dialogue; no original empirical measurements reported in the paper.
SlideFormer generalizes beyond a single GPU vendor (the design achieves high utilization on both NVIDIA and AMD GPUs).
Reported experiments and utilization measurements on both NVIDIA (RTX 4090) and AMD GPUs showing sustained >95% of peak performance, implying cross-vendor applicability. The summary does not specify which AMD models were used or the breadth of tested kernels.
Custom Triton kernels and advanced I/O integration remove key bottlenecks in single-GPU fine-tuning pipelines and contribute to the observed throughput gains.
Paper reports the use of custom Triton kernels for performance-critical primitives and improved I/O integration; throughput gains (1.40×–6.27×) are attributed in part to these optimizations. The summary does not isolate ablation results quantifying each optimization's contribution.
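SlideFormer's kernels are not shown in the summary; as a generic illustration of the Triton programming model that such custom kernels build on, here is a standard element-wise kernel (not the paper's code):

```python
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    # Each program instance handles one contiguous block of elements.
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

def elementwise_add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    out = torch.empty_like(x)
    n = x.numel()
    grid = lambda meta: (triton.cdiv(n, meta["BLOCK_SIZE"]),)
    add_kernel[grid](x, y, out, n, BLOCK_SIZE=1024)
    return out
```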
Heterogeneous memory management (multi-tier placement across GPU, CPU, and storage) materially reduces peak on-device memory requirements.
Authors describe an efficient memory layout and placement strategy across GPU, host RAM, and storage tiers and report lowered peak device memory use (≈2× reduction). The summary does not include low-level placement parameters or traces.
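The placement policy itself is not detailed; a minimal sketch in PyTorch of the basic mechanism multi-tier placement relies on, staging a device tensor into pinned host memory and streaming it back on demand (function names are illustrative):

```python
import torch

def offload_to_host(t: torch.Tensor) -> torch.Tensor:
    """Copy a GPU tensor into pinned host memory so the device copy can be
    freed; pinned memory allows fast, asynchronous copies back to the GPU."""
    host = torch.empty(t.shape, dtype=t.dtype, device="cpu", pin_memory=True)
    host.copy_(t)
    return host

def reload_to_device(host: torch.Tensor, device: str = "cuda") -> torch.Tensor:
    # non_blocking works because the source is pinned; synchronize before use.
    return host.to(device, non_blocking=True)
```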
SlideFormer sustains >95% of peak performance (high utilization) on both NVIDIA and AMD GPUs.
Reported sustained peak-utilization measurements from experiments run on NVIDIA (e.g., RTX 4090) and AMD GPUs; the summary states >95% of peak performance but does not give the per-workload utilization measurement methodology.
SlideFormer supports up to 8× larger batch sizes and up to 6× larger models on the same GPU relative to prior single-GPU baselines.
Reported comparisons to prior single-GPU baselines measuring achievable batch size and model-size capacity on the same GPU; exact baselines, workloads, and experimental configurations are not detailed in the summary.
SlideFormer reduces peak CPU and GPU memory usage by approximately 2× (roughly halving memory requirements).
Authors report peak memory measurements showing about a 2× reduction in both GPU and CPU memory compared to baselines; memory accounting method and baselines are not fully specified in the summary.
SlideFormer achieves 1.40×–6.27× higher throughput versus baseline systems.
Quantitative evaluation comparing throughput (reported as tokens/sec or updates/sec) against state-of-the-art single-GPU and multi-GPU fine-tuning pipelines (baselines are unnamed in the summary). Measurements reported across single-GPU experiments (hardware includes RTX 4090 and AMD GPUs).
SlideFormer enables fine-tuning very large LLMs (reported up to 123B+ parameters) on a single GPU (e.g., RTX 4090).
Authors report experiments and capability claims for single-GPU setups including an NVIDIA RTX 4090; model size stated as 123B+ in the paper summary. Details on exact model family, sequence length, or batch size used for the 123B+ claim are not enumerated in the summary.
The core findings (harm from ToM order mismatches and benefits from A-ToM) are robust to partners beyond LLM-driven agents.
Paper reports robustness checks testing generalization to non-LLM agent classes (details summarized in robustness section); comparisons use the same coordination metrics.
A-ToM recovers coordination performance by aligning its effective ToM depth with partners across a range of multiagent tasks.
Experimental results showing A-ToM achieves coordination levels closer to matched fixed-order pairings across the repeated matrix game, grid navigation tasks, and Overcooked when facing partners with different fixed ToM depths.
An adaptive ToM (A-ToM) agent that infers its partner's ToM order from prior interactions and conditions its predictions and actions on that estimate restores alignment and improves coordination.
Implemented A-ToM (estimation from interaction history + conditioning of partner-action predictions) and evaluated it against fixed-order agents in the four environments; reported improvements in coordination metrics when A-ToM paired with partners of varying ToM orders.
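The estimator's exact form is not given in the summary; one straightforward reading of "infers its partner's ToM order from prior interactions" is a Bayesian posterior over candidate fixed-depth partner models, sketched below under that assumption (the interface is hypothetical):

```python
import numpy as np

def infer_tom_depth(history, candidate_models):
    """Posterior over candidate partner ToM depths.

    history: list of (state, partner_action) pairs observed so far.
    candidate_models: {depth: callable state -> {action: probability}} giving
    the partner-action distribution predicted by a fixed-depth ToM model.
    """
    log_post = {d: 0.0 for d in candidate_models}
    for state, action in history:
        for depth, model in candidate_models.items():
            log_post[depth] += np.log(model(state)[action] + 1e-12)
    shift = max(log_post.values())                       # for numerical stability
    unnorm = {d: np.exp(v - shift) for d, v in log_post.items()}
    z = sum(unnorm.values())
    return {d: p / z for d, p in unnorm.items()}
```

The agent would then condition its own partner-action predictions on the most probable, or posterior-weighted, depth.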
There are potential welfare gains from improved decision quality and trust in automation, particularly where human oversight remains required.
Conceptual welfare analysis; no welfare quantification or simulations provided.
Structured AFs can reduce information asymmetry by making reasoning traceable, thereby lowering search and verification costs in transactions and contracting.
Economic reasoning drawing on information-asymmetry theory; no empirical transaction-cost measurements given.
Firms offering argumentatively transparent AI can obtain competitive advantage and charge premium prices for verifiability and auditability.
Economic reasoning and market-structure inference; no empirical pricing or demand elasticity studies provided.
Demand will shift toward AI systems that provide verifiable, contestable reasoning in regulated/high‑stakes sectors (healthcare, law, finance, public policy).
Economic argument and market prediction in the paper; speculative without market data or forecasting models presented.
This approach supports collaborative reasoning ('with' humans) rather than opaque automation 'for' humans, improving uptake in high‑stakes settings.
Conceptual argument about human-in-the-loop workflows and collaborative roles; no empirical uptake or deployment data presented.
Framing decisions as contestable and revisable (via dialectical challenge and update) increases robustness and trust in AI-supported decision-making.
Conceptual claim arguing that contestability/revision improve robustness and trust; no experimental evidence or user studies provided.
Running formal dialectical/acceptability semantics and dialogue protocols over AFs enables agents that reason with humans through structured debates and revisions.
Conceptual integration of formal semantics (Dung-style, bipolar, weighted) and dialogue protocols; no human-subject studies or system evaluations reported.
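For concreteness, the grounded semantics mentioned above has a simple constructive definition: it is the least fixed point of the characteristic function. A minimal sketch for a finite Dung-style AF, with attacks given as ordered pairs:

```python
def grounded_extension(arguments: set, attacks: set) -> set:
    """Least fixed point of F(S) = {a | every attacker of a is attacked by S}."""
    attackers = {a: {x for (x, y) in attacks if y == a} for a in arguments}

    def defended(s: set) -> set:
        return {a for a in arguments
                if all(any((d, b) in attacks for d in s) for b in attackers[a])}

    extension: set = set()
    while True:
        nxt = defended(extension)
        if nxt == extension:
            return extension
        extension = nxt

# Example: c attacks b and b attacks a, so the grounded extension is {a, c}.
print(grounded_extension({"a", "b", "c"}, {("c", "b"), ("b", "a")}))
```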
Argumentation Framework Synthesis: mined fragments can be combined into coherent formal argumentation frameworks (AFs) with explicit semantics enabling verification and automated inference.
Conceptual algorithmic proposal (graph synthesis, canonicalization, formal semantics); no empirical synthesis results or benchmarks presented.
Argumentation Framework Mining: LLMs and NLP pipelines can be used to extract claims, premises, relations (attack/support), and provenance from text corpora.
Proposed methodological pipeline (fine-tuning/prompting LLMs and IE pipelines); conceptual proposal without implementation details or experimental results.
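No implementation is provided; a hypothetical sketch of a single extraction step in such a pipeline, where `complete` stands in for any LLM text-completion callable and the output schema is an illustrative assumption rather than the paper's:

```python
import json

EXTRACTION_INSTRUCTIONS = (
    "Extract the argumentative structure of the passage. Return JSON with keys "
    "'claims', 'premises', 'relations' (each relation: source, target, type in "
    "{attack, support}), and 'provenance' (character spans for each unit)."
)

def mine_fragment(passage: str, complete) -> dict:
    """Run one extraction call; `complete` is any text-completion callable.
    json.loads raises if the model output is malformed, so a real pipeline
    would validate or repair the response before synthesis."""
    raw = complete(EXTRACTION_INSTRUCTIONS + "\n\nPassage:\n" + passage)
    return json.loads(raw)
```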
Combining formal argument structures with LLMs’ ability to mine and generate rich, contextual arguments from unstructured text promises human-aware, verifiable, and trustworthy AI for high-stakes domains.
Conceptual synthesis of computational argumentation (formal AFs) and LLM capabilities; no empirical validation or quantified metrics provided.
Integrating computational argumentation with large language models (LLMs) creates a new paradigm—Argumentative Human-AI Decision‑Making—where AI agents participate in dialectical, contestable, and revisable decision processes with humans.
Conceptual / design argument presented in the paper; no empirical implementation or sample; draws on prior work in computational argumentation and capabilities of LLMs.
There will likely be growth in complementary markets for model verification, provenance tracking, legal-AI audits, and human-in-the-loop workflow services.
Market foresight based on identified unmet needs (explainability, verification) and illustrative examples; no market-sizing data.
The project demonstrates that high-skill, knowledge-intensive tasks (formal mathematics) can be substantially automated with a heterogeneous AI toolchain, reducing human coding labor while retaining supervisory oversight.
Inference from project outcomes: AI tools produced formal Lean code and discharged lemmas while the reported human supervisor did not write code; single-project evidence (n=1) with qualitative and quantitative logs supporting partial automation.
The formalization finished prior to the final draft of the corresponding informal math paper.
Timing claim reported in the paper comparing formalization completion date to the final draft date of the related math paper (self-reported for the single project).
Effective practices included splitting proofs into abstract (high-level reasoning) and concrete (formalization) parts, having agents perform adversarial self-review, and targeting human review to key definitions and theorem statements.
Process-level recommendations drawn from the project's workflow; paper reports these practices as successful for this single development (n=1 project) based on qualitative assessment.
One mathematician supervised the process over approximately 10 days, reported a human cost of about $200, and wrote no code.
Self-reported human-role summary in the paper: single supervisor, ~10 days supervision time, reported monetary cost ≈ $200, and assertion that the human wrote no code (n=1 human supervisor for the project).
Clear agent identity and provenance simplify liability attribution and enable markets for certified components, attestation services, and compliance tooling.
Legal/economic reasoning about traceability and liability plus systems design suggestions; no legal case analysis or market data presented.
Lifecycle service models (leasing, 'agent as a service', update/maintenance contracts) will become economically important to manage long‑lived physical assets with fast‑moving AI stacks.
Business model reasoning and analogy to service models in other capital‑intensive sectors; no market empirical study or business case analysis provided.
Observability and attestation reduce uncertainty for insurers and regulators, lowering risk premia and insurance costs for agent deployments.
Argument from information economics/insurance theory and analogy to fields where observability reduces asymmetric information; no empirical insurance cost data or pilot programs reported.
Open interoperability standards and agent identities can lower entry barriers, increase competition, and accelerate complementary innovation.
Economic and policy reasoning referencing benefits of standards/open ecosystems; no empirical intervention or controlled comparison provided.