Evidence (3016 claims)

Claims by category:
- Adoption: 5187
- Productivity: 4472
- Governance: 4082
- Human-AI Collaboration: 3016
- Labor Markets: 2450
- Org Design: 2305
- Innovation: 2290
- Skills & Training: 1920
- Inequality: 1286
Evidence Matrix
Claim counts by outcome category and direction of finding.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 373 | 105 | 59 | 437 | 982 |
| Governance & Regulation | 366 | 172 | 114 | 55 | 717 |
| Research Productivity | 237 | 95 | 34 | 294 | 664 |
| Organizational Efficiency | 364 | 82 | 62 | 34 | 545 |
| Technology Adoption Rate | 290 | 115 | 66 | 27 | 502 |
| Firm Productivity | 274 | 33 | 68 | 10 | 390 |
| AI Safety & Ethics | 116 | 177 | 44 | 24 | 363 |
| Output Quality | 231 | 61 | 23 | 25 | 340 |
| Market Structure | 107 | 121 | 85 | 14 | 332 |
| Decision Quality | 158 | 68 | 33 | 17 | 279 |
| Employment Level | 70 | 32 | 74 | 8 | 186 |
| Fiscal & Macroeconomic | 74 | 52 | 32 | 21 | 183 |
| Skill Acquisition | 88 | 31 | 38 | 9 | 166 |
| Firm Revenue | 96 | 34 | 22 | — | 152 |
| Innovation Output | 105 | 12 | 21 | 11 | 150 |
| Consumer Welfare | 66 | 29 | 35 | 7 | 137 |
| Regulatory Compliance | 52 | 61 | 13 | 3 | 129 |
| Inequality Measures | 24 | 66 | 31 | 4 | 125 |
| Task Allocation | 68 | 8 | 28 | 6 | 110 |
| Error Rate | 42 | 47 | 6 | — | 95 |
| Training Effectiveness | 55 | 12 | 11 | 16 | 94 |
| Worker Satisfaction | 42 | 32 | 11 | 6 | 91 |
| Task Completion Time | 74 | 5 | 4 | 1 | 84 |
| Team Performance | 44 | 9 | 15 | 7 | 76 |
| Wages & Compensation | 38 | 13 | 19 | 4 | 74 |
| Hiring & Recruitment | 39 | 4 | 6 | 3 | 52 |
| Automation Exposure | 18 | 15 | 9 | 5 | 47 |
| Job Displacement | 5 | 29 | 12 | — | 46 |
| Developer Productivity | 27 | 2 | 3 | 1 | 33 |
| Social Protection | 18 | 8 | 6 | 1 | 33 |
| Worker Turnover | 10 | 12 | — | 3 | 25 |
| Creative Output | 15 | 5 | 3 | 1 | 24 |
| Skill Obsolescence | 3 | 18 | 2 | — | 23 |
| Labor Share of Income | 8 | 4 | 9 | — | 21 |
Active filter: Human-AI Collaboration
Isolating sensitive logic (scoring rubrics, adaptive difficulty rules) from free-text generation reduces the attack surface.
Design principle implemented in the architecture (separation of concerns between agents); claimed benefit in the paper. Empirical validation details (quantitative reduction in successful attacks) are not provided in the summary.
CoMAI implements multi-layered defenses against prompt-injection and other prompt-level attacks via a dedicated security agent and constrained state transitions.
System design (a dedicated security/validation agent and a finite-state machine enforcing information flow) and reported security testing that included prompt-injection/adversarial inputs to probe defenses.
Candidate satisfaction with CoMAI was 84.41%.
Reported experimental metric in the paper summary; likely derived from post-interview surveys, but survey design, sample size, and response rates are not specified in the summary.
In experiments CoMAI achieved 83.33% recall.
Reported experimental metric in the paper summary; no information provided on how recall was computed (e.g., per-class vs. overall), sample sizes, or confidence intervals.
In experiments CoMAI achieved 90.47% accuracy.
Reported experimental metric in the paper summary. The underlying dataset size, class balance, and baseline comparison details are not provided in the summary.
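The summary does not say how CoMAI's 90.47% accuracy and 83.33% recall were computed (per-class vs. overall). For reference, a minimal stdlib sketch distinguishing overall accuracy from per-class and macro-averaged recall; the function names are illustrative, not from the paper:

```python
from collections import Counter

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true label."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def recall_per_class(y_true, y_pred):
    """Recall for each class: TP / (TP + FN)."""
    tp, support = Counter(), Counter()
    for t, p in zip(y_true, y_pred):
        support[t] += 1
        if t == p:
            tp[t] += 1
    return {c: tp[c] / support[c] for c in support}

def macro_recall(y_true, y_pred):
    """Unweighted mean of the per-class recalls."""
    per_class = recall_per_class(y_true, y_pred)
    return sum(per_class.values()) / len(per_class)
```

With imbalanced classes, overall accuracy and macro recall can diverge substantially, which is why the omission matters.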
CoMAI outperforms monolithic LLM-based assessments on robustness, fairness, and interpretability.
Comparative framing and reported experiments in the paper claiming improved robustness, fairness, and interpretability relative to single-agent LLM baselines; however, baseline specifics, dataset sizes, and statistical tests are not disclosed in the provided summary.
The clarification protocol elicits missing premises or confirms intent rather than producing an ill-aligned response.
Paper describes structured clarification templates (binary checks, multi-choice scaffolds, short clarifying questions) intended to elicit missing information; this is a design assertion without reported user-study evidence.
There are potential welfare gains from improved decision quality and trust in automation, particularly where human oversight remains required.
Conceptual welfare analysis; no welfare quantification or simulations provided.
Structured AFs can reduce information asymmetry by making reasoning traceable, thereby lowering search and verification costs in transactions and contracting.
Economic reasoning drawing on information-asymmetry theory; no empirical transaction-cost measurements given.
Firms offering argumentatively transparent AI can obtain competitive advantage and charge premium prices for verifiability and auditability.
Economic reasoning and market-structure inference; no empirical pricing or demand elasticity studies provided.
Demand will shift toward AI systems that provide verifiable, contestable reasoning in regulated/high‑stakes sectors (healthcare, law, finance, public policy).
Economic argument and market prediction in the paper; speculative without market data or forecasting models presented.
This approach supports collaborative reasoning ('with' humans) rather than opaque automation 'for' humans, improving uptake in high‑stakes settings.
Conceptual argument about human-in-the-loop workflows and collaborative roles; no empirical uptake or deployment data presented.
Framing decisions as contestable and revisable (via dialectical challenge and update) increases robustness and trust in AI-supported decision-making.
Conceptual claim arguing that contestability/revision improve robustness and trust; no experimental evidence or user studies provided.
Running formal dialectical/acceptability semantics and dialogue protocols over AFs enables agents that reason with humans through structured debates and revisions.
Conceptual integration of formal semantics (Dung-style, bipolar, weighted) and dialogue protocols; no human-subject studies or system evaluations reported.
Argumentation Framework Synthesis: mined fragments can be combined into coherent formal argumentation frameworks (AFs) with explicit semantics enabling verification and automated inference.
Conceptual algorithmic proposal (graph synthesis, canonicalization, formal semantics); no empirical synthesis results or benchmarks presented.
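As background on the Dung-style acceptability semantics these proposals invoke (an illustration, not the authors' implementation): the grounded extension of an attack graph is the least fixed point of the defense operator, computable by simple iteration from the empty set:

```python
def grounded_extension(arguments, attacks):
    """Least fixed point of the defense operator F(S) = {a | every
    attacker of a is itself attacked by some member of S}.
    `attacks` is a set of (attacker, target) pairs."""
    attackers = {a: {x for (x, y) in attacks if y == a} for a in arguments}
    s = set()
    while True:
        defended = {
            a for a in arguments
            if all(any((d, b) in attacks for d in s) for b in attackers[a])
        }
        if defended == s:
            return s
        s = defended
```

For `attacks = {("a", "b"), ("b", "c")}`, argument `a` is unattacked and defends `c` against `b`, so the grounded extension is `{a, c}`.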
Argumentation Framework Mining: LLMs and NLP pipelines can be used to extract claims, premises, relations (attack/support), and provenance from text corpora.
Proposed methodological pipeline (fine-tuning/prompting LLMs and IE pipelines); conceptual proposal without implementation details or experimental results.
Combining formal argument structures with LLMs’ ability to mine and generate rich, contextual arguments from unstructured text promises human-aware, verifiable, and trustable AI for high‑stakes domains.
Conceptual synthesis of computational argumentation (formal AFs) and LLM capabilities; no empirical validation or quantified metrics provided.
Integrating computational argumentation with large language models (LLMs) creates a new paradigm, Argumentative Human-AI Decision-Making, in which AI agents participate in dialectical, contestable, and revisable decision processes with humans.
Conceptual / design argument presented in the paper; no empirical implementation or sample; draws on prior work in computational argumentation and capabilities of LLMs.
There will likely be growth in complementary markets for model verification, provenance tracking, legal-AI audits, and human-in-the-loop workflow services.
Market foresight based on identified unmet needs (explainability, verification) and illustrative examples; no market-sizing data.
The project demonstrates that high-skill, knowledge-intensive tasks (formal mathematics) can be substantially automated with a heterogeneous AI toolchain, reducing human coding labor while retaining supervisory oversight.
Inference from project outcomes: AI tools produced formal Lean code and discharged lemmas while the reported human supervisor did not write code; single-project evidence (n=1), qualitative and quantitative logs support partial automation.
The formalization finished prior to the final draft of the corresponding informal math paper.
Timing claim reported in the paper comparing formalization completion date to the final draft date of the related math paper (self-reported for the single project).
Effective practices included splitting proofs into abstract (high-level reasoning) and concrete (formalization) parts, having agents perform adversarial self-review, and targeting human review to key definitions and theorem statements.
Process-level recommendations drawn from the project's workflow; paper reports these practices as successful for this single development (n=1 project) based on qualitative assessment.
One mathematician supervised the process over approximately 10 days, reported a human cost of about $200, and wrote no code.
Self-reported human-role summary in the paper: single supervisor, ~10 days supervision time, reported monetary cost ≈ $200, and assertion that the human wrote no code (n=1 human supervisor for the project).
Governance should be hybrid and structured: legal/regulatory frameworks (e.g., EU AI Act), technical standards (ISO safety norms), and crisis-management practices must be combined to allocate responsibilities and intervention authority.
Policy and standards synthesis drawing on EU AI Act, ISO standards, and crisis-management literature; prescriptive argument without empirical testing.
Robust resilience stems from 'bounded autonomy': constraining what an AI may decide and when humans must intervene.
Normative proposal grounded in synthesis of safety standards, crisis-management practices, and conceptual arguments; specification of autonomy dimensions (authority scope, temporal limits, performance envelopes, fail-safes).
Human–AI chat logs contain more explicit strategy commitments (stated rules) than human–human chats.
Content analysis / coding of natural-language chat logs from the human–AI experiment (human–AI n = 126) and the human–human benchmark (n = 108); coding counts show higher frequency of explicit commitments/statements of rules in human–AI messages.
Human–human subjects converge to Tit‑for‑Tat under one condition and to unconditional cooperation under the repeated-communication condition.
Strategy-estimation and behavioral trajectory analysis from the human–human benchmark (Dvorak & Fehrler 2024; n = 108) reported in the paper, showing condition-dependent convergence to Tit‑for‑Tat and to unconditional cooperation under repeated communication.
Strategy estimation indicates human–AI subjects tend to favor Grim Trigger when allowed pre-play communication.
Strategy-estimation/classification applied to subjects' choices in the human–AI condition with pre-play chat (subset of the human–AI n = 126); inferred strategy prevalence shows elevated assignment to Grim Trigger-type rules.
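The strategy labels above have precise definitions: Tit-for-Tat forgives once the opponent resumes cooperating, whereas Grim Trigger punishes a single defection forever. A minimal illustrative sketch (not the paper's strategy-estimation procedure):

```python
def tit_for_tat(history):
    """Cooperate in round 1, then copy the opponent's previous move."""
    return "C" if not history else history[-1]

def grim_trigger(history):
    """Cooperate until the opponent defects once, then defect forever."""
    return "D" if "D" in history else "C"

def play(strategy, opponent_moves):
    """Feed a fixed sequence of opponent moves to a strategy, round by round."""
    return [strategy(opponent_moves[:t]) for t in range(len(opponent_moves))]
```

Against the opponent sequence C, D, C, C, Tit-for-Tat plays C, C, D, C while Grim Trigger plays C, C, D, D; this divergence after a single defection is what a strategy classifier exploits.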
Version 1.0 marks integration into operational workflows and establishes a base for future capabilities.
Authors report that v1.0 has been used in verification and mask-refinement loops for real datasets (MeerKAT, ASKAP, APERTIF); no detailed deployment metrics provided.
Immersive inspection tools like iDaVIE are complements to automated ML pipelines by helping generate higher-quality labels and curated training examples.
Paper argues conceptual complementarity and cites iDaVIE's use for mask refinement and curated subcube export; no experimental comparison of label quality or downstream ML performance provided.
iDaVIE accelerates inspection-driven parts of astronomy workflows (e.g., mask refinement, verification).
Reported use cases where iDaVIE was used to refine masks and verify sources in real datasets; no measured time-per-task or throughput statistics provided.
iDaVIE has already been integrated into real pipelines (MeerKAT, ASKAP, APERTIF) and used to improve quality control, refine detection masks, and identify new sources.
Author statement of integration and use cases citing verification of HI data cubes from MeerKAT, ASKAP and APERTIF; no quantitative deployment counts or independent validation provided in the text.
There is a need for policies supporting workforce transitions (retraining, portability of skills) and safety/regulation for embodied agents operating in public spaces.
Policy recommendation grounded in anticipated labor and safety risks; proposed but not empirically evaluated.
Benchmarks and tasks that mix observation and intervention (imitation with sparse feedback, active imitation, transfer under domain shift, continual learning streams) are required to evaluate the architecture.
Proposal for evaluation tasks and benchmarks; not empirically validated in the paper.
Embodied robotics experiments are necessary to evaluate real-world constraints such as sample efficiency, physical affordances, and motor learning.
Methodological recommendation recognizing simulation-to-real gaps; no experiments reported.
Simulated environments (procedural, nonstationary), multi-agent social domains, and open-world 3D simulators are appropriate for scalable iteration to test the proposed architecture.
Methodological recommendation and suggested experimental approaches; not tested in the paper.
Neuromodulatory systems and meta-decision circuits in animals provide analogies for implementing meta-control (M) in artificial systems.
Neuroscience analogy cited to motivate architectural choices; not empirically instantiated in the paper.
Developmental trajectories can scaffold gradual competence (from observation to exploratory action) and should be reflected in training curricula.
Argument from developmental biology and learning theory; proposed as a design principle rather than empirically tested here.
Evolution supplies inductive biases and slow structural priors that can be leveraged in artificial learners.
Biological analogy and theoretical suggestion; no empirical experiments presented to quantify effect in AI systems.
The taxonomy and measurement approach provide operational metrics to quantify empathic communication for economic analyses (productivity, customer satisfaction, retention).
Authors propose that their data-driven taxonomy and automated/coding measures can be used as metrics; the paper demonstrates derivation and use in trial outcomes but does not present direct economic outcome measurements.
LLM-generated responses frequently score as more empathic than human-written responses in blinded evaluations.
Blinded evaluations comparing LLM-generated replies to human-written replies, using recipient/judge ratings of perceived empathy. Exact sample sizes for the blinded tests are not specified in the summary; the result derives from the study's evaluation procedures.
LLMs are more likely to complement human tacit skills than to replace explicit rule‑following jobs; value accrues to workers and firms that integrate model outputs with human judgment and tacit expertise.
Labor‑economics style argument and theoretical reasoning; no empirical labor market analysis provided.
Commoditization via rule extraction is limited; firms that can harness and deploy tacit LLM capabilities will retain economic rents.
Theoretical economic argument based on non‑rule‑encodability; no empirical firm‑level data included.
The highest‑value attributes of LLMs may be inherently non‑decomposable into simple, auditable rules, which increases the value of proprietary, black‑box models and strengthens economies of scale and scope for large model providers.
Economic reasoning and theoretical implications drawn from the central thesis; no empirical market analyses provided.
Some LLM capabilities are tacit, practice‑derived, or 'insight'‑like, akin to the Chinese concept of Wu (sudden insight through practiced skill).
Philosophical framing and analogy to the concept of tacit knowledge (Wu); argumentative rather than empirical support.
The economically valuable capabilities of large language models are precisely those that cannot be fully encoded as a complete, human‑readable set of discrete rules.
Formal, conceptual argument (proof by contradiction) plus qualitative historical case analysis comparing expert systems and LLMs; no new empirical datasets or experiments reported.
Open dataset and code improve reproducibility and lower barriers for follow-up work on applied LLM tools and economic impact studies.
Release of SlideRL dataset (288 rollouts) and code repository; general statement about reproducibility benefits.
Parameter-efficient RL fine-tuning (0.5% of params) can yield large quality gains, implying a potentially high ROI for targeted fine-tuning versus full-model scaling.
Observed empirical gains: +33.1% for the tuned 7B model over its untuned base, and 91.2% relative performance versus Claude Opus 4.6; the cost-effectiveness claim (tuning few parameters rather than scaling model size) is an inference from these results, not a measured ROI.
The inverse-specification reward—where an LLM attempts to recover the original brief from generated slides—provides a holistic fidelity signal.
Reward design: inverse-specification component implemented and used as part of composite reward; claimed to measure fidelity via recovery accuracy.
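The summary does not specify how recovery accuracy is scored. A hedged sketch of the idea, with token-set overlap standing in for the LLM-based comparison between the original and recovered briefs; all names here (`inverse_spec_reward`, `composite_reward`, the `weight` mix) are hypothetical, not from the paper:

```python
def inverse_spec_reward(original_brief, recovered_brief):
    """Stand-in fidelity score: Jaccard similarity between the token
    sets of the original brief and the brief an LLM reconstructed from
    the generated slides. The paper's actual recovery-accuracy metric
    is not given in the summary."""
    a = set(original_brief.lower().split())
    b = set(recovered_brief.lower().split())
    return len(a & b) / len(a | b) if a | b else 1.0

def composite_reward(fidelity, format_ok, weight=0.7):
    """Hypothetical composite: weighted mix of the fidelity signal and
    a binary formatting/tool-use compliance check."""
    return weight * fidelity + (1 - weight) * float(format_ok)
```

The design intuition is that a brief recoverable from the slides implies the slides preserved the brief's content, giving a holistic signal without hand-written per-slide rubrics.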
Performance on this agentic slide-generation task is driven more by instruction adherence and tool-use compliance than by raw model parameter count.
Cross-model comparison across six models on the 48-task benchmark, with analyses showing instruction adherence and tool-use compliance better predict agent performance than parameter count.