Evidence (7953 claims)
| Category | Claims |
|---|---|
| Adoption | 5539 |
| Productivity | 4793 |
| Governance | 4333 |
| Human-AI Collaboration | 3326 |
| Labor Markets | 2657 |
| Innovation | 2510 |
| Org Design | 2469 |
| Skills & Training | 2017 |
| Inequality | 1378 |
Evidence Matrix
Claim counts by outcome category and direction of finding.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 402 | 112 | 67 | 480 | 1076 |
| Governance & Regulation | 402 | 192 | 122 | 62 | 790 |
| Research Productivity | 249 | 98 | 34 | 311 | 697 |
| Organizational Efficiency | 395 | 95 | 70 | 40 | 603 |
| Technology Adoption Rate | 321 | 126 | 73 | 39 | 564 |
| Firm Productivity | 306 | 39 | 70 | 12 | 432 |
| Output Quality | 256 | 66 | 25 | 28 | 375 |
| AI Safety & Ethics | 116 | 177 | 44 | 24 | 363 |
| Market Structure | 107 | 128 | 85 | 14 | 339 |
| Decision Quality | 177 | 76 | 38 | 20 | 315 |
| Fiscal & Macroeconomic | 89 | 58 | 33 | 22 | 209 |
| Employment Level | 77 | 34 | 80 | 9 | 202 |
| Skill Acquisition | 92 | 33 | 40 | 9 | 174 |
| Innovation Output | 120 | 12 | 23 | 12 | 168 |
| Firm Revenue | 98 | 34 | 22 | — | 154 |
| Consumer Welfare | 73 | 31 | 37 | 7 | 148 |
| Task Allocation | 84 | 16 | 33 | 7 | 140 |
| Inequality Measures | 25 | 77 | 32 | 5 | 139 |
| Regulatory Compliance | 54 | 63 | 13 | 3 | 133 |
| Error Rate | 44 | 51 | 6 | — | 101 |
| Task Completion Time | 88 | 5 | 4 | 3 | 100 |
| Training Effectiveness | 58 | 12 | 12 | 16 | 99 |
| Worker Satisfaction | 47 | 32 | 11 | 7 | 97 |
| Wages & Compensation | 53 | 15 | 20 | 5 | 93 |
| Team Performance | 47 | 12 | 15 | 7 | 82 |
| Automation Exposure | 24 | 22 | 9 | 6 | 62 |
| Job Displacement | 6 | 38 | 13 | — | 57 |
| Hiring & Recruitment | 41 | 4 | 6 | 3 | 54 |
| Developer Productivity | 34 | 4 | 3 | 1 | 42 |
| Social Protection | 22 | 10 | 6 | 2 | 40 |
| Creative Output | 16 | 7 | 5 | 1 | 29 |
| Labor Share of Income | 12 | 5 | 9 | — | 26 |
| Skill Obsolescence | 3 | 20 | 2 | — | 25 |
| Worker Turnover | 10 | 12 | — | 3 | 25 |
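A quick way to read the matrix is the share of positive findings per outcome. A minimal sketch using three rows transcribed from the table above (treating "—" cells as zero):

```python
# Direction counts per outcome: (positive, negative, mixed, null),
# transcribed from the Evidence Matrix rows above.
rows = {
    "Task Completion Time": (88, 5, 4, 3),
    "Inequality Measures": (25, 77, 32, 5),
    "Job Displacement": (6, 38, 13, 0),
}

def positive_share(counts):
    """Fraction of claims with a positive direction of finding."""
    pos, neg, mixed, null = counts
    return pos / (pos + neg + mixed + null)

for outcome, counts in rows.items():
    print(f"{outcome}: {positive_share(counts):.0%} positive")
```

The contrast is stark: time-savings outcomes skew heavily positive, while inequality and displacement outcomes skew negative.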
API providers need request-level monetization with programmatic spend governance.
Normative recommendation in the paper (argumentation rather than empirical evidence).
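As an illustration of what "request-level monetization with programmatic spend governance" could look like, here is a minimal toy sketch; the class, method names, and prices are hypothetical, not drawn from the paper:

```python
class SpendGovernor:
    """Toy request-level metering with a programmatic spend cap (illustrative)."""

    def __init__(self, budget_usd):
        self.budget_usd = budget_usd
        self.spent_usd = 0.0

    def authorize(self, request_cost_usd):
        """Approve a request only if it fits within the remaining budget."""
        if self.spent_usd + request_cost_usd > self.budget_usd:
            return False
        self.spent_usd += request_cost_usd
        return True

gov = SpendGovernor(budget_usd=1.00)
results = [gov.authorize(0.30) for _ in range(4)]
print(results, f"spent=${gov.spent_usd:.2f}")  # the fourth request exceeds the $1 cap
```

An agent invoking a metered API would call `authorize` before each request, turning budget policy into an enforceable, per-request check.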
Autonomous agents are moving beyond simple retrieval tasks to become economic actors that invoke APIs, sequence workflows, and make real-time decisions.
Framing statement / literature-motivated claim in the paper's introduction (qualitative argumentation, no experimental sample reported).
The future of transformative transformer-based AI is fundamentally many, not one.
Concluding synthesis and normative prediction based on the paper's theoretical arguments and literature synthesis; no empirical data or quantified projection provided in the excerpt.
Developing diverse AI teams addresses critics' concerns that current models are constrained by past data and lack the creative insight required for innovation.
Argumentative claim drawing on conceptual critique of current models and the proposed remedy of diverse AI teams; supported by referenced disciplinary literatures but no empirical validation provided in the excerpt.
Having a diverse team broadens the search for solutions, delays premature consensus, and allows for the pursuit of unconventional approaches.
Theoretical/argumentative claim referencing literature in complex systems and organizational behavior as support; no quantitative evidence or sample reported in the excerpt.
Deep intellectual breakthroughs should be expected to come from epistemically diverse groups of AI agents working together rather than singular superintelligent agents.
Predictive/theoretical claim motivated by referenced research and formal results in complex systems, organizational behavior, and philosophy of science; no empirical experiment or sample size given in the excerpt.
We should abandon the individual approach if we're hoping for AI to support groundbreaking innovation and scientific discovery.
Normative prescription based on theoretical argument and synthesis of literature from complex systems, organizational behavior, and philosophy of science; no empirical trial or quantified evaluation reported in the excerpt.
AI innovation achieves corporate low-carbon development by reorienting investment toward green assets.
Mechanism analysis reported in the paper (mediation/path analysis) using the same 21,428 firm-year observations; investment reorientation toward green assets identified as a mediation path.
AI innovation achieves corporate low-carbon development by upgrading emission-reducing production processes.
Mechanism analysis reported in the paper (mediation/path analysis) on the 21,428 firm-year sample; production-process upgrades identified as a mediation path.
AI innovation achieves corporate low-carbon development by optimizing low-carbon organizational governance.
Mechanism analysis reported in the paper (mediation/path analysis) using the same sample of 21,428 firm-year observations; paper identifies organizational governance optimization as one of three mediation paths.
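The mediation logic behind these three paths follows the standard two-regression setup: the treatment predicts the mediator (path a), and the mediator predicts the outcome controlling for the treatment (path b), with indirect effect a*b. A synthetic sketch; the variable names and the simple OLS setup are mine, not the paper's specification:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Synthetic firm-year data: AI innovation -> green investment -> low-carbon outcome.
ai = rng.normal(size=n)
green_invest = 0.5 * ai + rng.normal(size=n)                      # path a
low_carbon = 0.4 * green_invest + 0.1 * ai + rng.normal(size=n)   # paths b and c'

def ols(y, X):
    """OLS slope coefficients, with an intercept column prepended."""
    X = np.column_stack([np.ones(len(y)), *X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]

a = ols(green_invest, [ai])[0]                     # AI -> mediator
b, c_prime = ols(low_carbon, [green_invest, ai])   # mediator effect, direct effect
indirect = a * b                                   # mediated (indirect) effect
print(f"a={a:.2f}, b={b:.2f}, indirect={indirect:.2f}, direct={c_prime:.2f}")
```

The same template applies to the other two mediators (production-process upgrades, governance optimization), each tested as its own path.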
With further development, this approach may exceed traditional methods in risk-assessment accuracy and help drive innovation in the insurance industry.
Forward-looking claim by the authors extrapolating from current prototype results and potential improvements; no empirical evidence provided that it already exceeds traditional methods.
ARQuest shows great potential to improve user satisfaction and streamline insurance processes.
Interpretation based on experimental findings (fewer questions, user preference) and the proposed framework; forward-looking claim rather than a fully established empirical result.
Adaptive versions were preferred by users for their more fluid and engaging experience.
User preference reported from the experiments (qualitative/user feedback or preference metric); specific measures and sample size not provided in excerpt.
Adaptive versions powered by GPT models required fewer questions.
Experimental result reported in paper comparing question counts between adaptive GPT-powered questionnaires and traditional questionnaires; no numeric counts or sample sizes provided in the excerpt.
Techniques such as social media image analysis, geographic data categorization, and Retrieval Augmented Generation (RAG) are used to extract meaningful user insights and guide targeted follow-up questions.
Described methods/techniques used within the ARQuest system implementation in the paper.
The ARQuest framework introduces a new approach to underwriting by using Large Language Models (LLMs) and alternative data sources to create personalized and adaptive questionnaires.
Methodological contribution described in the paper (framework design); description of components and intended function rather than a quantified outcome.
Achieving near-perfect success rates at this minimally sufficient quality level or comparable success rates at superior quality would require several additional years.
Authors' forecast/commentary on timeline beyond the 2029 projection; conditional expectation based on historical pace of improvements.
If recent trends in AI capability growth persist, LLMs will be able to complete most text-related tasks with success rates of, on average, 80%-95% by 2029 at a minimally sufficient quality level.
Longer-term projection contingent on continuation of recent capability growth trends (model-based forecast stated by the authors).
AI success rates for those tasks increase to about 65% by 2025-Q3.
Short-term projection / trend extrapolation reported in the paper (from the ongoing evaluation data).
In 2024-Q2, AI models successfully complete tasks that take humans approximately 3-4 hours with about a 50% success rate.
Empirical measurement/estimate from the ongoing evaluation (reported temporal snapshot for 2024-Q2); based on tasks mapped to human completion time and observed model success rates from the >17,000 evaluations.
AI performance is high and improving rapidly across a wide range of tasks.
Empirical results from the ongoing evaluation of >3,000 tasks and >17,000 evaluations showing high and increasing success/performance metrics.
Substantial evidence indicates that "rising tides" are the primary form of AI automation.
Patterns observed in the same large-scale evaluation across tasks and human judgments indicating broad-based, continuous capability improvements across many tasks.
Only interventions that reshape risk allocation can plausibly shift stable system-level behaviour.
Argument based on the paper's game-theoretic reasoning and stylised example (theoretical claim; no empirical testing reported in the abstract).
Artificial intelligence (AI) is widely promoted as a promising technological response to healthcare capacity and productivity pressures.
Author assertion in the paper's introduction/abstract, based on literature/policy discourse (no empirical sample or quantitative analysis reported in the abstract).
We open-source the complete benchmark, including scenario specifications, ground truth templates, tool implementations, and evaluation scripts.
Paper statement committing to open-sourcing the benchmark components and artifacts.
We evaluated leading agent frameworks (ReAct, Cursor Agent, Claude Code) paired with frontier LLMs (Claude Sonnet 4.0, GPT-4o, Granite-3.0-8B).
Paper reports extensive evaluations using the listed agent frameworks and LLM models paired together to run the benchmark scenarios.
Execution-based evaluators were implemented with task-commensurate metrics: MAE/RMSE for regression, F1-score for classification, and categorical matching for health assessments.
Paper statement describing the evaluation methodology and the specific metrics used for regression, classification and health-assessment tasks.
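These task-commensurate metrics have standard definitions; a minimal self-contained sketch (function names are mine):

```python
import math

def mae(y_true, y_pred):
    """Mean absolute error for regression tasks (e.g., RUL prediction)."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def rmse(y_true, y_pred):
    """Root mean squared error for regression tasks."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def f1_binary(y_true, y_pred, positive=1):
    """F1-score for classification tasks (e.g., fault classification)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if p == positive and t != positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    if tp == 0:
        return 0.0
    precision, recall = tp / (tp + fp), tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def categorical_match(expected, predicted):
    """Exact categorical matching for health assessments."""
    return expected == predicted
```

Execution-based evaluation means the agent's produced artifacts are run and scored with these functions, rather than graded by string comparison against a reference answer.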
We construct 65 specialized tools across two MCP servers to enable agent interactions for the benchmark.
Paper statement reporting the number of specialized tools (65) and that they are deployed across two MCP servers as part of the benchmark implementation.
The benchmark encompasses 75 expert-curated scenarios spanning 7 industrial asset classes (turbofan engines, bearings, electric motors, gearboxes, aero-engines) across 5 core task categories: Remaining Useful Life (RUL) Prediction, Fault Classification, Engine Health Analysis, Cost-Benefit Analysis, and Safety/Policy Evaluation.
Explicit statement in paper listing the number of scenarios (75), number of asset classes (7) and enumerating the 5 task categories; benchmark construction described by authors.
PHMForge is the first comprehensive benchmark specifically designed to evaluate LLM agents on Prognostics and Health Management (PHM) tasks through realistic interactions with domain-specific MCP servers.
Paper statement introducing PHMForge as a benchmark and describing its construction to evaluate LLM agents via MCP servers; benchmark implementation is presented in the manuscript.
Improvements in operational resilience enhance firms' capacity for sustainable development.
Further analysis in the paper showing a positive relationship between OR improvements and indicators of firms' sustainable development capacity.
The enabling effect of AI on operational resilience is more pronounced for capital-intensive enterprises.
Heterogeneity/subsample analysis showing larger AI effects on OR for capital-intensive firms.
The enabling effect of AI on operational resilience is more pronounced for technology-intensive enterprises.
Heterogeneity/subsample tests reported in the paper indicating stronger AI effects on OR for technology-intensive firms.
The enabling effect of AI on operational resilience is more pronounced for enterprises in the growth stage.
Heterogeneity/subsample analysis showing larger AI-induced OR gains among firms classified as in the growth stage.
The enabling effect of AI on operational resilience is more pronounced for enterprises located in the coastal eastern region.
Heterogeneity/subsample analysis reported in the paper showing larger AI effects for firms in the coastal eastern region compared to other regions.
AI promotes operational resilience by optimizing supply chain allocation performance.
Mechanism tests in the paper linking AI adoption to improved supply chain allocation/performance metrics, which are associated with higher OR.
Application of AI significantly enhances corporate operational resilience (OR).
Staggered DID estimation exploiting AIIAPZ policy as quasi-natural experiment on Chinese A-share listed manufacturing firms (2012–2023); main regression results reported as significant.
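A staggered DID design of this kind can be sketched with a toy two-way fixed-effects estimator on synthetic panel data. This is illustrative only, not the paper's AIIAPZ specification; the adoption years, effect size, and noise level are invented:

```python
import numpy as np

rng = np.random.default_rng(1)
n_firms, years = 200, list(range(2012, 2024))

# Staggered rollout: treated firms adopt in 2015/2018/2021; 10**9 marks never-treated.
adopt_year = {i: int(rng.choice([2015, 2018, 2021, 10**9])) for i in range(n_firms)}
firm_fe = rng.normal(size=n_firms)

rows, y = [], []
for i in range(n_firms):
    for t in years:
        treated = 1.0 if t >= adopt_year[i] else 0.0
        # Outcome: resilience index = firm effect + year trend + 0.3 * treatment + noise.
        y.append(firm_fe[i] + 0.05 * (t - 2012) + 0.3 * treated + 0.1 * rng.normal())
        rows.append((i, t, treated))

# Two-way fixed effects via firm and year dummies; the last column is the treatment.
X = np.zeros((len(rows), n_firms + len(years) + 1))
for r, (i, t, treated) in enumerate(rows):
    X[r, i] = 1.0
    X[r, n_firms + years.index(t)] = 1.0
    X[r, -1] = treated
beta, *_ = np.linalg.lstsq(X, np.asarray(y), rcond=None)
print(f"estimated treatment effect: {beta[-1]:.3f}")  # recovers roughly 0.3
```

The never-treated firms serve as controls; the treatment coefficient is identified even though the full dummy matrix is rank-deficient, because the collinearity lies entirely in the fixed-effect columns.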
Voluntary safety commitments can sustain cooperative (higher-quality) outcomes when they are observable and credible.
Theoretical analysis of an equilibrium with voluntary, observable commitments: when commitments are binding/credible and observable, firms can coordinate to avoid preemption and achieve cooperative outcomes.
Minimum quality standards can implement the first-best outcome.
Theoretical policy analysis within the model: imposing a minimum quality threshold for release is shown to align private incentives with the social optimum, implementing the first-best.
Employment reallocation exerted a narrowing influence on the gender wage gap, particularly in 2005–2010.
Dynamic shift-share decomposition attributing a portion of changes in the gender wage gap to employment reallocation effects, with a notable equalizing contribution in 2005–2010.
Displaced women reallocated substantially toward non-routine interpersonal roles (occupational upgrading).
Observed occupational transition patterns in decomposition results showing female movement into non-routine interpersonal occupations; authors interpret this as occupational upgrading.
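The decomposition behind these two claims follows a standard shift-share identity: the change in the aggregate gap splits into a reallocation (composition) term, a within-occupation term, and an interaction. A toy sketch with made-up numbers that mimic reallocation toward interpersonal roles (not the paper's data):

```python
# Per occupation: (share_2005, share_2010, gap_2005, gap_2010). Illustrative only.
occs = {
    "routine manual":            (0.40, 0.30, 0.25, 0.24),
    "non-routine interpersonal": (0.35, 0.45, 0.10, 0.11),
    "non-routine analytic":      (0.25, 0.25, 0.15, 0.15),
}

def decompose(occs):
    """Shift-share split of the aggregate gap change into three components."""
    realloc = sum((s1 - s0) * g0 for s0, s1, g0, g1 in occs.values())
    within = sum(s0 * (g1 - g0) for s0, s1, g0, g1 in occs.values())
    interaction = sum((s1 - s0) * (g1 - g0) for s0, s1, g0, g1 in occs.values())
    return realloc, within, interaction

realloc, within, interaction = decompose(occs)
total = sum(s1 * g1 - s0 * g0 for s0, s1, g0, g1 in occs.values())
print(f"reallocation={realloc:+.4f}, within={within:+.4f}, interaction={interaction:+.4f}")
```

Here the reallocation term is negative (gap-narrowing) because employment shifts out of the high-gap routine occupation into a low-gap interpersonal one, which is exactly the equalizing mechanism the claim describes.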
Design implication: adaptive AI coaching systems should align support intensity with individual readiness, rather than assuming universal effectiveness.
Authors' design recommendation derived from experimental results showing heterogeneous effects by personality profile.
The system is in production, serving 21 industry verticals with 650+ agents.
Deployment claim reported in paper (production system metrics: number of verticals and agents).
We propose a framework for output-side ontological validation (response validation, reasoning verification, compliance checking).
Proposed framework described in paper (conceptual/procedural proposal; not described as empirically validated in abstract).
We introduce ontology-constrained tool discovery via SQL-pushdown scoring.
Methodological/implementation contribution described in the paper (technical mechanism introduced).
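"SQL-pushdown scoring" suggests ranking candidate tools against ontology constraints inside the database rather than in application code. A toy sqlite3 illustration; the schema, tags, and scoring rule are my assumptions, not the paper's implementation:

```python
import sqlite3

# Toy ontology-constrained tool discovery: tools are tagged with ontology
# concepts, and a query's required concepts are matched and scored in SQL.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE tool_tags (tool TEXT, concept TEXT);
INSERT INTO tool_tags VALUES
  ('get_balance',   'banking'), ('get_balance',  'account'),
  ('score_credit',  'banking'), ('score_credit', 'risk'),
  ('book_shipment', 'logistics');
""")

def discover(required_concepts, limit=2):
    """Rank tools by how many required ontology concepts they cover, in SQL."""
    placeholders = ",".join("?" * len(required_concepts))
    query = f"""
        SELECT tool, COUNT(*) AS score
        FROM tool_tags
        WHERE concept IN ({placeholders})
        GROUP BY tool
        ORDER BY score DESC, tool
        LIMIT ?
    """
    return conn.execute(query, (*required_concepts, limit)).fetchall()

print(discover(["banking", "risk"]))  # score_credit covers both concepts
```

The point of the pushdown is that filtering and scoring happen in one query over the ontology tables, so the agent never sees (or reasons over) tools outside the permitted concept set.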
Improvements from ontology coupling are greatest where LLM parametric knowledge is weakest—particularly in Vietnam-localized domains.
Observed pattern reported from the controlled experiment across the five industries, with stronger improvements in Vietnam-localized domains (no per-industry sample sizes reported in abstract).
Ontology-coupled agents significantly outperform ungrounded agents on Role Consistency (p < .001, W = .614).
Controlled experiment with 600 runs; statistical test reported (p-value and W statistic provided in abstract).
Ontology-coupled agents significantly outperform ungrounded agents on Regulatory Compliance (p = .003, W = .318).
Controlled experiment with 600 runs; statistical test reported (p-value and W statistic provided in abstract).
Ontology-coupled agents significantly outperform ungrounded agents on Metric Accuracy (p < .001, W = .460).
Controlled experiment with 600 runs; statistical test reported (p-value and W statistic provided in abstract).
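The abstract reports p-values and W effect sizes without naming the exact test. As a generic way to compare run-level scores between ontology-coupled and ungrounded conditions, a two-sided permutation test on the difference in means can be sketched; the scores below are invented for illustration:

```python
import random
import statistics

random.seed(0)

# Illustrative run-level scores (not the paper's data).
coupled = [0.82, 0.79, 0.88, 0.91, 0.85, 0.80, 0.87, 0.83]
ungrounded = [0.71, 0.66, 0.74, 0.69, 0.77, 0.70, 0.72, 0.68]

def permutation_pvalue(a, b, n_perm=10_000):
    """Two-sided permutation test on the difference in group means."""
    observed = abs(statistics.mean(a) - statistics.mean(b))
    pooled = a + b
    hits = 0
    for _ in range(n_perm):
        random.shuffle(pooled)
        diff = abs(statistics.mean(pooled[:len(a)]) - statistics.mean(pooled[len(a):]))
        if diff >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)

p = permutation_pvalue(coupled, ungrounded)
print(f"p = {p:.4f}")
```

On this toy data the two groups barely overlap, so the p-value comes out very small, mirroring the pattern of significant coupled-versus-ungrounded differences reported above.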
We formalize the concept of asymmetric neurosymbolic coupling, wherein symbolic ontological knowledge constrains agent inputs (context assembly, tool discovery, governance thresholds), and we propose mechanisms for extending this coupling to constrain agent outputs (response validation, reasoning verification, compliance checking).
Theoretical/formalization contribution described in the paper (conceptual and methodological development).