Evidence (4333 claims)
- Adoption: 5539 claims
- Productivity: 4793 claims
- Governance: 4333 claims
- Human-AI Collaboration: 3326 claims
- Labor Markets: 2657 claims
- Innovation: 2510 claims
- Org Design: 2469 claims
- Skills & Training: 2017 claims
- Inequality: 1378 claims
Evidence Matrix
Claim counts by outcome category and direction of finding.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 402 | 112 | 67 | 480 | 1076 |
| Governance & Regulation | 402 | 192 | 122 | 62 | 790 |
| Research Productivity | 249 | 98 | 34 | 311 | 697 |
| Organizational Efficiency | 395 | 95 | 70 | 40 | 603 |
| Technology Adoption Rate | 321 | 126 | 73 | 39 | 564 |
| Firm Productivity | 306 | 39 | 70 | 12 | 432 |
| Output Quality | 256 | 66 | 25 | 28 | 375 |
| AI Safety & Ethics | 116 | 177 | 44 | 24 | 363 |
| Market Structure | 107 | 128 | 85 | 14 | 339 |
| Decision Quality | 177 | 76 | 38 | 20 | 315 |
| Fiscal & Macroeconomic | 89 | 58 | 33 | 22 | 209 |
| Employment Level | 77 | 34 | 80 | 9 | 202 |
| Skill Acquisition | 92 | 33 | 40 | 9 | 174 |
| Innovation Output | 120 | 12 | 23 | 12 | 168 |
| Firm Revenue | 98 | 34 | 22 | — | 154 |
| Consumer Welfare | 73 | 31 | 37 | 7 | 148 |
| Task Allocation | 84 | 16 | 33 | 7 | 140 |
| Inequality Measures | 25 | 77 | 32 | 5 | 139 |
| Regulatory Compliance | 54 | 63 | 13 | 3 | 133 |
| Error Rate | 44 | 51 | 6 | — | 101 |
| Task Completion Time | 88 | 5 | 4 | 3 | 100 |
| Training Effectiveness | 58 | 12 | 12 | 16 | 99 |
| Worker Satisfaction | 47 | 32 | 11 | 7 | 97 |
| Wages & Compensation | 53 | 15 | 20 | 5 | 93 |
| Team Performance | 47 | 12 | 15 | 7 | 82 |
| Automation Exposure | 24 | 22 | 9 | 6 | 62 |
| Job Displacement | 6 | 38 | 13 | — | 57 |
| Hiring & Recruitment | 41 | 4 | 6 | 3 | 54 |
| Developer Productivity | 34 | 4 | 3 | 1 | 42 |
| Social Protection | 22 | 10 | 6 | 2 | 40 |
| Creative Output | 16 | 7 | 5 | 1 | 29 |
| Labor Share of Income | 12 | 5 | 9 | — | 26 |
| Skill Obsolescence | 3 | 20 | 2 | — | 25 |
| Worker Turnover | 10 | 12 | — | 3 | 25 |
Governance
Design interventions are necessary but not sufficient on their own; institutional measures (standards, certification, liability rules) are also needed to address harms and market failures.
Economic and policy analysis within the paper arguing for combined design and institutional responses; no empirical evidence demonstrating the comparative effectiveness of these measures.
Controls for personalization, data retention, opt-out, and escalation to human assistance are important interface affordances to mitigate risks in conversational AI.
Design heuristics and normative arguments from the paper and related literature; no empirical evaluation of these controls provided.
Real-time uncertainty/credibility signals and easy access to provenance (citations) should be provided to users to improve trust calibration.
Design recommendation grounded in literature review and suggested best practices; the paper recommends A/B tests and lab/field experiments as future work rather than reporting results.
Ethical front-end design—explicit disclosure of AI identity, capability limits, uncertainty cues, provenance, user controls, and escalation paths—can reduce harms and mitigate important market failures in AI-enabled interactions.
Normative and design-oriented recommendation supported by design heuristics and prior literature; no empirical trials reported showing quantified harm reduction.
Natural conversational style lowers friction and raises engagement and productivity.
Argument derived from literature synthesis and comparative analysis of conversational norms vs. human dialogue; no original empirical measurements reported in the paper.
Combining negative constraints with sparse preference signals yields better tradeoffs (safety plus helpfulness) than preference-only training.
Conceptual claim supported by qualitative comparisons and references to hybrid approaches in the literature (some constitutional/hybrid methods); the paper advocates this as a practical strategy and cites limited empirical indications.
Training primarily on negative constraints can reduce sycophancy and produce more stable adherence to rules compared to preference-only training.
Paper combines theoretical reasoning with cited empirical instances (e.g., constraint-based or constitutional methods) that report improved harmlessness/constraint adherence. The claim is stated as both theoretical expectation and supported by selected empirical reports rather than a comprehensive controlled comparison.
Negative constraints (explicit prohibitions or dispreferred labels) are often discrete, finitely specifiable, and independently verifiable, enabling models to converge to stable boundaries via falsification-style learning.
Theoretical/epistemological argument drawing on Popperian falsification and the paper's constructed structural model contrasting constraint and preference spaces. Empirical support is indirectly cited via methods like Constitutional AI that operationalize rule-like constraints.
Negative-only feedback (training on dispreferred or negative samples) can match or exceed preference-based RLHF (e.g., PPO/RLHF) on downstream tasks such as mathematical reasoning and harmlessness benchmarks.
Synthesis of recent empirical methods cited in the paper (examples named: Negative Sample Reinforcement, Distributional Dispreference Optimization, Constitutional AI) reporting parity or improvements versus PPO/RLHF on tasks like math reasoning and harmlessness. The paper aggregates published results rather than presenting a single new large-scale controlled experiment; specific sample sizes and exact experimental protocols vary by cited work and are not uniformly reported in the paper.
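A minimal sketch can make the contrast concrete: a DPO-style pairwise preference term combined with hinge penalties on explicitly prohibited completions. All function names, the margin, and the constraint weight below are illustrative assumptions, not any cited paper's method.

```python
import math

# Toy objective (illustrative, not any cited paper's method): a sparse
# pairwise preference term plus dense hinge penalties for responses
# that violate explicit negative constraints.

def preference_loss(logp_chosen: float, logp_rejected: float) -> float:
    """DPO-style pairwise term: push the chosen response above the rejected one."""
    return -math.log(1.0 / (1.0 + math.exp(-(logp_chosen - logp_rejected))))

def constraint_penalty(logp_violating: float, logp_safe: float,
                       margin: float = 2.0) -> float:
    """Hinge penalty: the safe completion must beat the prohibited one by
    at least `margin` log-prob. The boundary is discrete and independently
    verifiable, which is the stability property claimed above."""
    return max(0.0, margin - (logp_safe - logp_violating))

def combined_objective(pairs, violations, lam: float = 5.0) -> float:
    """pairs: [(logp_chosen, logp_rejected)];
    violations: [(logp_violating, logp_safe)]."""
    pref = sum(preference_loss(c, r) for c, r in pairs)
    cons = sum(constraint_penalty(v, s) for v, s in violations)
    return pref + lam * cons

# One preference pair plus one prohibited/safe contrast.
print(combined_objective(pairs=[(-3.1, -4.0)], violations=[(-2.0, -3.5)]))
```

The constraint term has a fixed, falsifiable boundary while the preference term only orders samples, which is the asymmetry the claims above turn on.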
There are potential welfare gains from improved decision quality and trust in automation, particularly where human oversight remains required.
Conceptual welfare analysis; no welfare quantification or simulations provided.
Structured argumentation frameworks (AFs) can reduce information asymmetry by making reasoning traceable, thereby lowering search and verification costs in transactions and contracting.
Economic reasoning drawing on information-asymmetry theory; no empirical transaction-cost measurements given.
Firms offering argumentatively transparent AI can obtain competitive advantage and charge premium prices for verifiability and auditability.
Economic reasoning and market-structure inference; no empirical pricing or demand elasticity studies provided.
Demand will shift toward AI systems that provide verifiable, contestable reasoning in regulated/high‑stakes sectors (healthcare, law, finance, public policy).
Economic argument and market prediction in the paper; speculative without market data or forecasting models presented.
The argumentation-based approach supports collaborative reasoning ('with' humans) rather than opaque automation 'for' humans, improving uptake in high‑stakes settings.
Conceptual argument about human-in-the-loop workflows and collaborative roles; no empirical uptake or deployment data presented.
Framing decisions as contestable and revisable (via dialectical challenge and update) increases robustness and trust in AI-supported decision-making.
Conceptual claim arguing that contestability/revision improve robustness and trust; no experimental evidence or user studies provided.
Running formal dialectical/acceptability semantics and dialogue protocols over AFs enables agents that reason with humans through structured debates and revisions.
Conceptual integration of formal semantics (Dung-style, bipolar, weighted) and dialogue protocols; no human-subject studies or system evaluations reported.
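For concreteness, here is a minimal sketch of Dung-style grounded semantics, computed as the least fixed point of the characteristic function F(S) = {a : every attacker of a is attacked by some member of S}. The implementation is illustrative, not drawn from the paper.

```python
# Minimal sketch of Dung-style grounded semantics over an abstract AF
# (illustrative implementation, not the paper's): the grounded extension
# is the least fixed point of F(S) = {a : S attacks every attacker of a}.

def grounded_extension(arguments, attacks):
    """arguments: set of ids; attacks: set of (attacker, target) pairs."""
    attackers = {a: {x for (x, y) in attacks if y == a} for a in arguments}

    def defended(s, a):
        # a is acceptable w.r.t. s if every attacker of a is attacked by s
        return all(any((d, b) in attacks for d in s) for b in attackers[a])

    extension = set()
    while True:
        new = {a for a in arguments if defended(extension, a)}
        if new == extension:
            return extension
        extension = new

# c attacks b, b attacks a: grounded extension is {a, c}, since c is
# unattacked and c defends a by attacking a's only attacker b.
print(sorted(grounded_extension({"a", "b", "c"}, {("c", "b"), ("b", "a")})))
```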
Argumentation Framework Synthesis: mined fragments can be combined into coherent formal argumentation frameworks (AFs) with explicit semantics enabling verification and automated inference.
Conceptual algorithmic proposal (graph synthesis, canonicalization, formal semantics); no empirical synthesis results or benchmarks presented.
Argumentation Framework Mining: LLMs and NLP pipelines can be used to extract claims, premises, relations (attack/support), and provenance from text corpora.
Proposed methodological pipeline (fine-tuning/prompting LLMs and IE pipelines); conceptual proposal without implementation details or experimental results.
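A hedged sketch of such a pipeline, including the canonicalization step from the synthesis claim above: `call_llm` is a hypothetical stand-in for any chat-completion client, and the JSON schema is an assumption rather than the paper's specification.

```python
import json

# Illustrative mining sketch: `call_llm` is a hypothetical stand-in for
# any chat-completion client, and the JSON schema is an assumption, not
# the paper's specification.

EXTRACTION_PROMPT = (
    "Extract argument units from the text below. Return JSON with keys "
    '"claims" (a list of {"id", "text"}) and "relations" (a list of '
    '{"src", "dst", "type"} where type is "attack" or "support").\n\n'
)

def call_llm(prompt: str) -> str:
    raise NotImplementedError  # hypothetical: plug in any LLM client

def mine_af(document: str):
    """Mine claims and relations, then canonicalize: merge claims whose
    normalized text coincides, so the synthesized AF has one node per
    proposition and relations point at canonical nodes."""
    raw = json.loads(call_llm(EXTRACTION_PROMPT + document))
    canon, node_of = {}, {}  # normalized text -> canonical id; raw id -> canonical id
    for c in raw["claims"]:
        key = " ".join(c["text"].lower().split())
        canon.setdefault(key, c["id"])
        node_of[c["id"]] = canon[key]
    arguments = set(canon.values())
    attacks = {(node_of[r["src"]], node_of[r["dst"]])
               for r in raw["relations"] if r["type"] == "attack"}
    supports = {(node_of[r["src"]], node_of[r["dst"]])
                for r in raw["relations"] if r["type"] == "support"}
    return arguments, attacks, supports  # `attacks` can feed grounded_extension above
```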
Combining formal argument structures with LLMs’ ability to mine and generate rich, contextual arguments from unstructured text promises human-aware, verifiable, and trustable AI for high‑stakes domains.
Conceptual synthesis of computational argumentation (formal AFs) and LLM capabilities; no empirical validation or quantified metrics provided.
Integrating computational argumentation with large language models (LLMs) creates a new paradigm—Argumentative Human-AI Decision‑Making—where AI agents participate in dialectical, contestable, and revisable decision processes with humans.
Conceptual / design argument presented in the paper; no empirical implementation or sample; draws on prior work in computational argumentation and capabilities of LLMs.
There will likely be growth in complementary markets for model verification, provenance tracking, legal-AI audits, and human-in-the-loop workflow services.
Market foresight based on identified unmet needs (explainability, verification) and illustrative examples; no market-sizing data.
The project demonstrates that high-skill, knowledge-intensive tasks (formal mathematics) can be substantially automated with a heterogeneous AI toolchain, reducing human coding labor while retaining supervisory oversight.
Inference from project outcomes: AI tools produced formal Lean code and discharged lemmas while the human supervisor reportedly wrote no code; single-project evidence (n=1), with qualitative and quantitative logs supporting partial automation.
The formalization was completed before the final draft of the corresponding informal mathematics paper.
Timing claim reported in the paper comparing formalization completion date to the final draft date of the related math paper (self-reported for the single project).
Effective practices included splitting proofs into abstract (high-level reasoning) and concrete (formalization) parts, having agents perform adversarial self-review, and targeting human review to key definitions and theorem statements.
Process-level recommendations drawn from the project's workflow; paper reports these practices as successful for this single development (n=1 project) based on qualitative assessment.
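A toy Lean fragment illustrates the abstract/concrete split (an invented example, not code from the project): the abstract lemma is what the human reviews; the concrete instances are what agents discharge.

```lean
-- Invented toy, not code from the project: the abstract part is the
-- general lemma a human reviewer checks once; the concrete part is the
-- routine instantiation an agent can generate and self-review.

-- Abstract (high-level reasoning): reviewed by the human supervisor.
theorem add_zero_right (n : Nat) : n + 0 = n := rfl

-- Concrete (formalization detail): discharged by agents.
example : 41 + 0 = 41 := add_zero_right 41
```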
One mathematician supervised the process over approximately 10 days, reported a human cost of about $200, and wrote no code.
Self-reported human-role summary in the paper: single supervisor, ~10 days supervision time, reported monetary cost ≈ $200, and assertion that the human wrote no code (n=1 human supervisor for the project).
Clear agent identity and provenance simplify liability attribution and enable markets for certified components, attestation services, and compliance tooling.
Legal/economic reasoning about traceability and liability plus systems design suggestions; no legal case analysis or market data presented.
Lifecycle service models (leasing, 'agent as a service', update/maintenance contracts) will become economically important to manage long‑lived physical assets with fast‑moving AI stacks.
Business model reasoning and analogy to service models in other capital‑intensive sectors; no market empirical study or business case analysis provided.
Observability and attestation reduce uncertainty for insurers and regulators, lowering risk premia and insurance costs for agent deployments.
Argument from information economics/insurance theory and analogy to fields where observability reduces asymmetric information; no empirical insurance cost data or pilot programs reported.
Open interoperability standards and agent identities can lower entry barriers, increase competition, and accelerate complementary innovation.
Economic and policy reasoning referencing benefits of standards/open ecosystems; no empirical intervention or controlled comparison provided.
Design choices will shape capital intensity and replacement cycles; architectures that support upgradeability and modularity lower expected upgrade costs and stranded‑asset risk.
Economic reasoning and analogy to modular design benefits in other industries; conceptual argument without empirical capital‑allocation data or simulations.
Architectural components such as agentic identity and attestation, secure communication protocols, semantic layers and interchange formats, policy engines, and observability pipelines are necessary to enable safe, economic multi‑agent ecosystems.
Architectural blueprint proposed via conceptual systems design; justification by analogy to existing security/identity/semantic frameworks; no empirical testing reported.
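A minimal sketch of the identity/attestation and policy-engine pieces, under stated assumptions: HMAC with a shared key stands in for PKI-based attestation, and all identifiers and the policy format are illustrative.

```python
import hashlib, hmac, json

# Assumed design sketch, not a specified standard: an agent presents a
# signed manifest of identity and capabilities; a policy engine admits a
# request only if the attestation verifies and the capabilities are
# allowed. HMAC with a shared key stands in for PKI-based attestation.

def attest(manifest: dict, key: bytes) -> str:
    payload = json.dumps(manifest, sort_keys=True).encode()
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def admit(manifest: dict, tag: str, key: bytes, policy: dict) -> bool:
    if not hmac.compare_digest(attest(manifest, key), tag):
        return False  # identity/attestation check failed
    allowed = policy.get(manifest["agent_id"], set())
    return set(manifest["capabilities"]) <= allowed  # policy-governed runtime

KEY = b"demo-shared-secret"  # illustration only; real deployments need PKI
m = {"agent_id": "forklift-07", "capabilities": ["move", "lift"]}
print(admit(m, attest(m, KEY), KEY, {"forklift-07": {"move", "lift", "halt"}}))
```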
Design principles — modularity, clear agentic identity, secure agent‑to‑agent communication, policy‑governed runtimes, semantic interoperability, and observability/governance frameworks — will mitigate the architectural risks identified.
Normative systems design proposition grounded in systems engineering reasoning and historical lessons; no experimental validation or deployment studies provided.
New capabilities (edge hardware, sensing, connectivity, and AI) now enable agents that not only sense/report but also perceive, reason, and act autonomously and cooperatively in real time.
Technological trend synthesis and systems reasoning; examples of mature edge hardware and advances in real‑time ML are used illustratively; no experimental validation provided.
Treating evolution, trust, and interoperability as first‑class requirements (rather than afterthoughts) is essential to avoid costly lock‑in, premature ossification, fragmentation, and negative externalities observed with IoT.
Normative prescription motivated by historical/comparative analysis of Internet and IoT (qualitative examples of fragmentation and lock‑in); no controlled study or quantitative validation presented.
The next phase of the Internet will be the "Internet of Physical AI Agents" — distributed, long-lived, embodied systems that perceive, reason, and act autonomously in real time.
Predictive/conceptual argument based on observed technological trends (advances in edge hardware, sensing, connectivity, and AI). Position paper with historical/comparative reasoning and illustrative examples; no primary empirical dataset or quantified projection.
Governance should be hybrid and structured: legal/regulatory frameworks (e.g., EU AI Act), technical standards (ISO safety norms), and crisis-management practices must be combined to allocate responsibilities and intervention authority.
Policy and standards synthesis drawing on EU AI Act, ISO standards, and crisis-management literature; prescriptive argument without empirical testing.
Robust resilience stems from 'bounded autonomy': constraining what an AI may decide and when humans must intervene.
Normative proposal grounded in synthesis of safety standards, crisis-management practices, and conceptual arguments; specification of autonomy dimensions (authority scope, temporal limits, performance envelopes, fail-safes).
Extensive simulation experiments across different network topologies and attacker/defense scenarios validate both the Friedkin–Johnsen (FJ) modeling of LLM-based multi-agent systems (LLM-MAS) and the effectiveness of the trust-adaptive defense.
Multiple simulation studies reported in the paper that vary network density, trust matrices, attacker stubbornness/persuasiveness, and defense strategies; validation claims stem from consistent patterns observed across these simulated settings. (The summary does not list the number of experimental runs or statistical reporting.)
A trust-adaptive defense that dynamically reduces trust in agents suspected of adversarial behavior can limit adversarial influence while preserving cooperative performance better than static trust-lowering strategies.
Implemented a trust-adaptive mechanism and evaluated it in simulation experiments across multiple network topologies and attack/defense scenarios, reporting reductions in adversarial sway with preserved task performance compared to naïve trust reduction. (Exact experimental counts and numeric effect sizes not provided in the summary.)
Increasing the number of benign agents dilutes an adversary's relative influence and thereby reduces the probability and magnitude of persuasion cascades.
Simulation experiments varying the count of benign agents in networks while measuring adversarial sway and collective opinion outcomes across different topologies. (Summary does not report exact counts or statistical summaries.)
The Friedkin–Johnsen opinion-dynamics model (innate opinions + interpersonal influence weights + stubbornness) closely captures LLM-MAS behavior across settings, both theoretically and empirically.
Modeling: derivation of FJ dynamics for LLM-MAS; Empirical: simulation experiments comparing FJ model predictions to observed LLM-MAS opinion trajectories and final consensus under varied topologies and trust matrices. (Exact goodness-of-fit metrics and sample counts not provided in the summary.)
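A compact simulation sketch of the FJ update x(t+1) = L x(0) + (I - L) W x(t), with a trust-adaptive defense; the topology, stubbornness values, and down-weighting factor are illustrative assumptions, not the paper's settings.

```python
import numpy as np

# FJ update: x(t+1) = L @ x0 + (I - L) @ W @ x(t), with L = diag(stubbornness)
# and W a row-stochastic trust matrix. Topology, stubbornness values, and
# the down-weighting factor below are illustrative, not the paper's settings.

def fj_converge(W, stubborn, x0, iters=200):
    L, x = np.diag(stubborn), x0.copy()
    for _ in range(iters):
        x = L @ x0 + (np.eye(len(x0)) - L) @ W @ x
    return x

def adaptive_defense(W, suspect, factor=0.1):
    """Down-weight trust placed in a suspected adversary, then renormalize,
    instead of lowering all trust uniformly (the static strategy)."""
    W = W.copy()
    W[:, suspect] *= factor
    return W / W.sum(axis=1, keepdims=True)

n, adv = 5, 0
rng = np.random.default_rng(0)
W = rng.random((n, n)); W /= W.sum(axis=1, keepdims=True)
stubborn = np.full(n, 0.2); stubborn[adv] = 0.99  # adversary barely updates
x0 = np.zeros(n); x0[adv] = 1.0                   # adversarial innate opinion

print("no defense:  ", fj_converge(W, stubborn, x0).round(3))
print("with defense:", fj_converge(adaptive_defense(W, adv), stubborn, x0).round(3))
```

With the defense applied, benign agents' final opinions sit closer to their innate opinions because the adversary's column of the trust matrix carries less weight.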
LLMs are more likely to complement human tacit skills than to replace explicit rule‑following jobs; value accrues to workers and firms that integrate model outputs with human judgment and tacit expertise.
Labor‑economics style argument and theoretical reasoning; no empirical labor market analysis provided.
Commoditization via rule extraction is limited; firms that can harness and deploy tacit LLM capabilities will retain economic rents.
Theoretical economic argument based on non‑rule‑encodability; no empirical firm‑level data included.
The highest‑value attributes of LLMs may be inherently non‑decomposable into simple, auditable rules, which increases the value of proprietary, black‑box models and strengthens economies of scale and scope for large model providers.
Economic reasoning and theoretical implications drawn from the central thesis; no empirical market analyses provided.
Some LLM capabilities are tacit, practice‑derived, or 'insight'‑like, akin to the Chinese concept of Wu (sudden insight through practiced skill).
Philosophical framing and analogy to the concept of tacit knowledge (Wu); argumentative rather than empirical support.
The economically valuable capabilities of large language models are precisely those that cannot be fully encoded as a complete, human‑readable set of discrete rules.
Formal, conceptual argument (proof by contradiction) plus qualitative historical case analysis comparing expert systems and LLMs; no new empirical datasets or experiments reported.
Standardized runtime governance frameworks could lower per-deployment compliance engineering costs and increase diffusion of agentic systems.
Theoretical argument that standardization reduces transaction/engineering costs; suggested market dynamics; no empirical implementation evidence.
A market will develop for third-party governance tools, auditors, and insurers providing policy evaluators, risk calibration, and certification services.
Economic argument and analogy to existing markets (governance-as-a-service, insurance); no empirical evidence presented.
Benchmarking time-sensitivity (via V-DyKnow) can inform procurement decisions: buyers should assess models on their ability to handle temporally sensitive information, not just static benchmarks.
Paper's recommendations and implications section arguing for procurement practices informed by V-DyKnow evaluations.
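V-DyKnow's actual format is not described here, so the following harness is only a generic sketch of time-sensitivity scoring in that spirit; the fields, the hypothetical entity, and the three-way scoring are assumptions.

```python
from datetime import date

# Generic harness in the spirit of the recommendation; V-DyKnow's actual
# format is not described here, so the fields, the hypothetical entity,
# and the three-way scoring are assumptions.

FACTS = [  # (question, [(answer, valid_from, valid_to)])
    ("Who is the CEO of ExampleCorp?",
     [("A. Founder", date(2015, 1, 1), date(2022, 6, 30)),
      ("B. Successor", date(2022, 7, 1), date.max)]),
]

def current_answer(history, on: date) -> str:
    return next(a for a, start, end in history if start <= on <= end)

def score(model_answer: str, history, on: date) -> str:
    if model_answer == current_answer(history, on):
        return "correct"
    if any(model_answer == a for a, _, _ in history):
        return "stale"  # a once-correct value: a time-sensitivity failure
    return "wrong"

_question, history = FACTS[0]
print(score("A. Founder", history, date(2024, 1, 1)))  # -> "stale"
```

Distinguishing "stale" from "wrong" is the point of a time-sensitive benchmark: static benchmarks score both the same way.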
The authors provide an operational inventory and conversation-analysis tool (the 28-code instrument) that can be reused for monitoring and mitigation by researchers, firms, and regulators.
Paper includes the codebook and describes its application as a re-usable monitoring/analysis instrument; proposed adoption discussed in implications.