## Evidence (8501 claims)

Claim counts by topic (a single claim may be tagged with multiple topics, so the counts below sum to more than the total):

- Adoption: 5831 claims
- Productivity: 5043 claims
- Governance: 4561 claims
- Human-AI Collaboration: 3605 claims
- Labor Markets: 2749 claims
- Innovation: 2697 claims
- Org Design: 2653 claims
- Skills & Training: 2112 claims
- Inequality: 1429 claims
## Evidence Matrix

Claim counts by outcome category and direction of finding. Row totals can exceed the sum of the four direction columns; the remainder are claims whose direction was not classified into these categories.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 440 | 117 | 68 | 507 | 1148 |
| Governance & Regulation | 458 | 216 | 125 | 67 | 883 |
| Research Productivity | 270 | 101 | 34 | 303 | 713 |
| Organizational Efficiency | 441 | 106 | 76 | 43 | 670 |
| Technology Adoption Rate | 347 | 130 | 76 | 45 | 603 |
| Firm Productivity | 324 | 39 | 73 | 13 | 454 |
| Output Quality | 272 | 75 | 27 | 30 | 404 |
| AI Safety & Ethics | 122 | 188 | 46 | 27 | 385 |
| Market Structure | 119 | 134 | 86 | 14 | 358 |
| Decision Quality | 182 | 79 | 41 | 20 | 326 |
| Fiscal & Macroeconomic | 95 | 58 | 34 | 22 | 216 |
| Employment Level | 78 | 37 | 80 | 9 | 206 |
| Skill Acquisition | 104 | 37 | 41 | 9 | 191 |
| Innovation Output | 124 | 12 | 26 | 13 | 176 |
| Firm Revenue | 101 | 38 | 24 | — | 163 |
| Consumer Welfare | 77 | 38 | 37 | 7 | 159 |
| Task Allocation | 93 | 17 | 36 | 8 | 156 |
| Inequality Measures | 29 | 81 | 33 | 6 | 149 |
| Regulatory Compliance | 54 | 61 | 13 | 3 | 131 |
| Task Completion Time | 92 | 8 | 4 | 3 | 107 |
| Error Rate | 45 | 53 | 6 | — | 104 |
| Worker Satisfaction | 48 | 36 | 12 | 8 | 104 |
| Training Effectiveness | 60 | 13 | 12 | 16 | 102 |
| Wages & Compensation | 56 | 16 | 20 | 5 | 97 |
| Team Performance | 50 | 13 | 15 | 8 | 87 |
| Automation Exposure | 28 | 29 | 12 | 7 | 79 |
| Job Displacement | 7 | 45 | 13 | — | 65 |
| Hiring & Recruitment | 42 | 4 | 7 | 3 | 56 |
| Developer Productivity | 38 | 4 | 4 | 3 | 49 |
| Social Protection | 22 | 12 | 7 | 2 | 43 |
| Creative Output | 17 | 8 | 6 | 1 | 32 |
| Skill Obsolescence | 3 | 26 | 2 | — | 31 |
| Labor Share of Income | 12 | 7 | 10 | — | 29 |
| Worker Turnover | 10 | 12 | — | 3 | 25 |
**Claim.** To mitigate risks and realize benefits, finance and tax deployments should pair generative AI with human-in-the-loop controls and clear escalation paths.
**Evidence.** Prescriptive recommendation grounded in case lessons and the literature on safe AI deployment; presented as a best-practice guideline rather than a tested intervention.

**Claim.** Technical building blocks leveraged in these deployments include large language models (LLMs), OCR plus structured information extraction, retrieval-augmented generation (RAG) with knowledge bases, and process automation/RPA.
**Evidence.** The paper's explicit technical-characteristics section and case descriptions identify these components as core to the implementations.

**Claim.** Generative AI is used for risk control and audit functions, including real-time monitoring, fraud detection, KYC/AML screening, and automated exception reporting.
**Evidence.** Reported use cases in the two case organizations, corroborated by industry reports discussed in the paper's literature review.

**Claim.** For tax declaration, generative AI enables extraction of tax-relevant facts from invoices and contracts, drafting of tax returns, compliance checks, and scenario simulations.
**Evidence.** Case examples and literature synthesis describing OCR + information extraction and LLM-assisted drafting workflows used in practice.

**Claim.** Generative AI is applied to fund-management tasks such as cashflow forecasting, anomaly detection, and automated workflows for payments and collections.
**Evidence.** Case descriptions and technical mapping in the paper showing implementations at the shared-services-center and professional-services-firm level.

**Claim.** Accounting-automation use cases include automated bookkeeping, reconciliations, journal-entry suggestion, and error detection using LLMs and document understanding.
**Evidence.** Detailed scope mapping and case examples from Xiaomi and Deloitte illustrating these accounting applications, supported by a literature review of technical capabilities.
**Claim.** Realizing AI-driven gains in Vietnam requires legal and institutional redesign.
**Evidence.** Close reading of Vietnam's constitutional provisions, administrative statutes, procedural rules, and judicial doctrine (doctrinal legal analysis), combined with comparative lessons from other jurisdictions; no quantitative data.
**Claim.** A supplemental theological-differentiator probe achieved perfect rank-order agreement between the two ceiling judges (Spearman rs = 1.00), supporting judge reliability for the ceiling probe.
**Evidence.** Reported Spearman rank correlation rs = 1.00 between Gemini Pro and Copilot Pro on the theological-differentiator probe used as a reliability check.
**Claim.** Rigorous research priorities include randomized controlled trials with long-run follow-ups, cost-effectiveness studies, structural adoption models, and validated metrics for feedback quality and learning durability.
**Evidence.** Actionable research recommendations produced by the 50-scholar interdisciplinary meeting; prescriptive synthesis rather than empirical results.
**Claim.** CABP (Context-Aware Broker Protocol) extends JSON-RPC with identity-scoped request routing via a six-stage broker pipeline to ensure correct identity and policy propagation.
**Evidence.** Design and protocol specification included in the paper; the formal description and broker-pipeline semantics are documented as a deliverable.
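The summary names only the protocol's shape: JSON-RPC requests passed through a six-stage broker pipeline that propagates identity and policy. The sketch below illustrates that general staged-brokering pattern; the stage names, envelope fields, and routing rule are all assumptions, not CABP's actual specification:

```python
import json

# Hypothetical six-stage pipeline; the stage names and logic are illustrative
# stand-ins, not CABP's specification.
def authenticate(env):
    assert env["request"]["params"].get("identity"), "missing identity"
    return env

def resolve_identity(env):
    env["identity"] = env["request"]["params"]["identity"]
    return env

def attach_policy(env):
    env["policy"] = {"allow": env["identity"].startswith("agent:")}
    return env

def route(env):
    env["target"] = "ledger-service" if env["policy"]["allow"] else "reject"
    return env

def propagate_context(env):
    # Identity travels with the request so downstream services see the same scope.
    env["request"]["params"]["_ctx"] = {"identity": env["identity"]}
    return env

def dispatch(env):
    env["response"] = {"jsonrpc": "2.0", "id": env["request"]["id"],
                       "result": {"routed_to": env["target"]}}
    return env

PIPELINE = [authenticate, resolve_identity, attach_policy, route,
            propagate_context, dispatch]

def broker(request):
    env = {"request": request}
    for stage in PIPELINE:   # each stage inspects and enriches the envelope
        env = stage(env)
    return env["response"]

req = {"jsonrpc": "2.0", "id": 1, "method": "ledger.read",
       "params": {"identity": "agent:alice"}}
print(json.dumps(broker(req)))
```

The point of the staged design is that identity resolution and policy checks happen exactly once, in a fixed order, before any routing decision is made.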
**Claim.** Different model families (Sonnet 4.6 vs. Opus 4.6) exhibit stable, systematic differences in methodological preferences and choice patterns: distinct empirical "styles".
**Evidence.** Comparison of choice patterns and methodological decisions across agents instantiated with Sonnet 4.6 versus Opus 4.6 within the 150-agent experiment, showing consistent between-family differences in measure selection and estimation procedures.

**Claim.** Agents split on measure choice (e.g., autocorrelation vs. variance-ratio tests; dollar-volume vs. share-volume measures), producing different substantive estimates from the same raw data and hypotheses.
**Evidence.** Observed categorical divergences in measure selection across the 150 agents during independent analyses of SPY TAQ data (2015–2024); documented alternative test/measure families and the corresponding divergent effect estimates for the six hypotheses.

**Claim.** AI-to-AI variation (nonstandard errors, NSEs) across autonomous coding agents produces substantial uncertainty in empirical results, analogous to human researcher heterogeneity.
**Evidence.** Experimental results from 150 autonomous Claude Code agents (two model families: Sonnet 4.6 and Opus 4.6) independently analyzing the same SPY TAQ data (NYSE TAQ, 2015–2024) on six pre-specified hypotheses; recorded agent-to-agent variation in methodological choices and resulting effect estimates (dispersion measured via IQR and related diagnostics).
**Claim.** Observations span multiple agent platforms (Moltbook, The Colony, 4claw) with more than 167,000 agents interacting as peers.
**Evidence.** Author-reported coverage from naturalistic observations across the named platforms during the one-month observation window; the count is reported as ≈167k agents.
**Claim.** The mechanism generalizes to another field: models trained on economics publication records reach ~70% accuracy on a similar benchmark.
**Evidence.** Analogue of the management experiment performed in economics: models fine-tuned on economics journal publication records were evaluated on an economics benchmark and achieved approximately 70% accuracy. (Exact dataset sizes, benchmarks, and train/test splits are not specified in the provided text.)

**Claim.** Fine-tuned models trained on publication records each outperform every frontier model and the expert panel; the best single model achieves 59% accuracy on the benchmark.
**Evidence.** Language models fine-tuned on historical journal accept/reject records were evaluated on the held-out four-tier benchmark; reported performance shows each fine-tuned model exceeding the frontier-model average and the human-panel baseline, with the best model at 59% accuracy. (Exact training-set size and benchmark sample count are not specified here.)

**Claim.** Panels of journal editors and editorial board members reach 42% accuracy by majority vote on the same four-tier benchmark.
**Evidence.** Human baseline obtained by soliciting judgments from journal editors and editorial board members on the held-out benchmark and computing majority-vote accuracy (reported as 42%). (The number of human raters and the benchmark size are not given in the supplied text.)

**Claim.** Fine-tuning language models on historical journal publication decisions recovers an evaluative "scientific taste" that frontier (zero-shot) models and expert editor panels cannot reliably reproduce.
**Evidence.** Fine-tuned models were trained on years of journal publication decisions (institutional accept/reject records) and evaluated on a held-out four-tier benchmark of management research pitches; performance was compared to zero-shot evaluations of frontier models and to panels of journal editors (majority vote). (Sample sizes for the training records and held-out benchmark are not specified in the provided text.)
**Claim.** An asynchronous sliding-window engine treats the GPU as a sliding compute window and overlaps GPU computation with CPU-side parameter updates and multi-tier I/O, hiding data-movement and synchronization overheads.
**Evidence.** System design and implementation described in the paper: an asynchronous runtime that coordinates GPU kernels, CPU updates, and multi-tier I/O. This is a design/implementation claim rather than a measured outcome; the summary links the design to performance improvements.
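The overlap idea can be sketched with plain threads and bounded queues standing in for the real components: an I/O loader prefetching windows, a compute stage standing in for GPU kernels, and a CPU-side update stage, all running concurrently. Everything below is an illustrative stand-in, not the paper's runtime:

```python
import queue
import threading

def io_loader(n_windows, compute_q):
    """Stands in for multi-tier I/O: prefetch windows ahead of compute."""
    for k in range(n_windows):
        compute_q.put(f"window-{k}")
    compute_q.put(None)  # end-of-stream sentinel

def compute(compute_q, update_q, log):
    """Stands in for the GPU compute window."""
    while (item := compute_q.get()) is not None:
        log.append(("compute", item))
        update_q.put(item)   # hand the window off for CPU-side update
    update_q.put(None)

def cpu_update(update_q, log):
    """CPU-side parameter update, overlapped with later windows' compute."""
    while (item := update_q.get()) is not None:
        log.append(("update", item))

log = []
# maxsize bounds how far ahead each stage may run: the "sliding window".
compute_q, update_q = queue.Queue(maxsize=2), queue.Queue(maxsize=2)
threads = [
    threading.Thread(target=io_loader, args=(4, compute_q)),
    threading.Thread(target=compute, args=(compute_q, update_q, log)),
    threading.Thread(target=cpu_update, args=(update_q, log)),
]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(log))  # 4 windows computed + 4 updates = 8 events
```

The bounded queues are the essential trick: each stage proceeds as soon as its neighbor has capacity, so I/O, compute, and updates for different windows execute at the same time instead of in lockstep.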
**Claim.** The A-ToM mechanism operates by estimating a partner's likely ToM order from interaction history and using that estimate to predict the partner's next action, which then informs the agent's policy choices.
**Evidence.** Method description and implementation details provided in the paper: an estimator over ToM orders based on past interactions, plus conditional action prediction feeding into decision-making; validated in the reported experiments.

**Claim.** Empirical evaluation was performed across four coordination environments: a repeated matrix game, two grid-navigation tasks, and an Overcooked task.
**Evidence.** The methods section describes these four benchmark environments, used for all reported comparisons between fixed-order agents and A-ToM agents; evaluation metrics were joint payoffs and task-specific success measures.
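The estimate-then-predict loop can be sketched with toy order-0/order-1 partner models: score each candidate order by how well it retrodicts the partner's past moves, predict the next move under the best-scoring order, then best-respond. The partner models, payoff matrix, and history below are illustrative assumptions, not the paper's estimator:

```python
ACTIONS = ["A", "B"]

def model_order0(history):
    """Order-0 partner: repeats its own most frequent past action."""
    past = [partner for _, partner in history] or ["A"]
    return max(ACTIONS, key=past.count)

def model_order1(history):
    """Order-1 partner: reasons about us, mirroring our last action."""
    return history[-1][0] if history else "A"

MODELS = {0: model_order0, 1: model_order1}

def estimate_order(history):
    """Score each ToM order by how often it retrodicts the partner's moves."""
    scores = {
        k: sum(model(history[:t]) == history[t][1] for t in range(len(history)))
        for k, model in MODELS.items()
    }
    return max(scores, key=scores.get)

def act(history, payoff):
    k = estimate_order(history)
    predicted = MODELS[k](history)          # partner's likely next action
    return max(ACTIONS, key=lambda a: payoff[(a, predicted)])

# Coordination payoff: matching the partner's action pays 1, mismatching 0.
payoff = {(a, b): int(a == b) for a in ACTIONS for b in ACTIONS}
history = [("A", "A"), ("B", "A"), ("A", "A")]   # (our move, partner's move)
print(act(history, payoff))  # partner looks order-0 and plays "A" -> match "A"
```

Here the order-0 model retrodicts all three partner moves while the order-1 model misses one, so the agent treats the partner as order-0 and coordinates on its predicted action.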
**Claim.** Modular outputs (question histories, security checks, rubric scores, summaries) enable post-hoc review and explainability.
**Evidence.** Architectural design and output artifacts described in the paper (logs and structured outputs per agent); these artifacts provide material for explanation and audit.

**Claim.** Adaptive difficulty and multidimensional evaluation allow dynamic tailoring of questions to candidate performance.
**Evidence.** Implementation of adaptive-testing logic within the workflow described in the paper, with experiments involving dynamic difficulty adjustment; detailed metrics of adaptation effectiveness are not provided in the summary.
**Claim.** Operating as a pre-processor (rather than modifying the generator) enables modular integration with existing LLMs and provides an explicit decision point for clarification.
**Evidence.** Novelty/architecture claim in the paper explaining that C.A.P. runs before generation and can therefore be plugged into existing LLM pipelines; described design rationale (no empirical integration study presented).

**Claim.** C.A.P. verifies semantic alignment between the current expanded prompt and the weighted history, and triggers a structured clarification protocol when similarity falls below a threshold.
**Evidence.** Component-level description: alignment verification via semantic embeddings (cosine similarity) or learned classifiers, with threshold-based decision branching to initiate clarification; described protocol templates (no empirical validation provided).
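A minimal sketch of the threshold branch, using bag-of-words cosine similarity in place of the semantic embeddings the paper envisions; the threshold value and clarification template are assumptions:

```python
from collections import Counter
from math import sqrt

def embed(text):
    """Toy bag-of-words 'embedding'; a stand-in for real sentence embeddings."""
    return Counter(text.lower().split())

def cosine(u, v):
    dot = sum(u[w] * v[w] for w in u)
    nu = sqrt(sum(c * c for c in u.values()))
    nv = sqrt(sum(c * c for c in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def verify_alignment(expanded_prompt, weighted_history, threshold=0.3):
    """Pass the prompt through if aligned; otherwise branch to clarification."""
    sim = cosine(embed(expanded_prompt), embed(weighted_history))
    if sim >= threshold:
        return {"action": "generate", "prompt": expanded_prompt,
                "similarity": sim}
    return {"action": "clarify",
            "question": ("Your last message seems to change topic - should I "
                         "continue with the previous request or start fresh?"),
            "similarity": sim}

history = "book a flight to tokyo next friday economy class"
print(verify_alignment("confirm the tokyo flight booking for friday",
                       history)["action"])   # aligned -> generate
print(verify_alignment("what is the boiling point of ethanol",
                       history)["action"])   # off-topic -> clarify
```

Because the check runs before generation, a failed alignment never reaches the LLM; the structured clarification question is the explicit decision point the claim describes.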
**Claim.** C.A.P. retrieves dialogue history using a time-weighted decay so that recent context is prioritized (approximating human conversational focus).
**Evidence.** Design description of a "time-weighted context retrieval" component; the authors propose temporal decay functions (e.g., exponential decay with a half-life parameter) applied to dialogue-turn embeddings or metadata (no empirical results reported).
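The proposed exponential half-life decay can be written directly; the half-life value and turn contents below are assumptions for illustration:

```python
from math import exp, log

def decay_weight(age_turns, half_life=4.0):
    """Weight halves every `half_life` dialogue turns: w = 2^(-age/half_life)."""
    return exp(-log(2) * age_turns / half_life)

def weighted_history(turns, half_life=4.0):
    """Attach a recency weight to each turn; the latest turn has age 0."""
    n = len(turns)
    return [(turn, decay_weight(n - 1 - i, half_life))
            for i, turn in enumerate(turns)]

turns = ["greeting", "stated goal", "constraint", "latest question"]
for text, w in weighted_history(turns):
    print(f"{w:.2f}  {text}")
```

The half-life parameter directly controls how sharply the window of conversational focus narrows: a small half-life approximates a human attending mostly to the last few exchanges.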
**Claim.** C.A.P. is a pre-generation module that expands user utterances to recover omitted premises and implications.
**Evidence.** Architecture and methods description in the paper specifying a "semantic expansion" component; suggested implementations via knowledge bases or small LLM prompts that generate premises, paraphrases, and implications (no empirical evaluation reported).
**Claim.** Structured argumentation frameworks make chains of inference inspectable and machine-checkable, improving transparency and verifiability of AI outputs.
**Evidence.** Argument from the formal properties of AFs and their representation; no empirical user studies, but relies on known formal semantics.

**Claim.** Computational argumentation offers formal, verifiable reasoning representations (argumentation frameworks with attack/support relations).
**Evidence.** Established literature on formal argumentation (e.g., Dung-style AFs) and the paper's conceptual description; no new empirical data reported.
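For concreteness, here is a Dung-style argumentation framework and its grounded extension, computed as the least fixed point of the characteristic function; the three-argument example is hypothetical:

```python
def acceptable(arg, S, attacks):
    """arg is acceptable w.r.t. S if S attacks every attacker of arg."""
    attackers = {a for (a, b) in attacks if b == arg}
    return all(any((s, att) in attacks for s in S) for att in attackers)

def grounded_extension(args, attacks):
    """Iterate the characteristic function F(S) from the empty set to a fixpoint."""
    S = set()
    while True:
        nxt = {a for a in args if acceptable(a, S, attacks)}
        if nxt == S:
            return S
        S = nxt

args = {"a", "b", "c"}
attacks = {("a", "b"), ("b", "c")}   # a attacks b, b attacks c
print(sorted(grounded_extension(args, attacks)))  # -> ['a', 'c']
```

The result is machine-checkable in exactly the sense the claim describes: "a" is in because it is unattacked, "c" is in because its only attacker "b" is defeated by "a", and that defence chain can be verified mechanically.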
**Claim.** The development artifacts are fully transparent and reproducible: the repository includes an archive of 229 human prompts and a git history with 213 commits.
**Evidence.** The paper reports counts of prompts (229) and git commits (213) and states these archives are public; these are concrete repository metrics (n = 1 development repository).

**Claim.** The Lean kernel provided full machine verification of all formalized statements in the development.
**Evidence.** The paper reports "full verification by the Lean kernel" for the Lean 4 development; supported by the availability of the Lean 4 repository and verified theorem artifacts (n = 1 project).

**Claim.** A specialized prover (Aristotle) automatically closed 111 lemmas during the development.
**Evidence.** Quantitative verification metric reported in the paper: 111 lemmas automatically closed by Aristotle; the claim is tied to the Lean development and prover logs (single-project count).

**Claim.** The AI-assisted pipeline combined an AI reasoning model (Gemini DeepThink) to generate the proof, an agentic coding tool (Claude Code) to translate the proof to Lean, a specialized automated prover (Aristotle) that closed 111 lemmas, and the Lean kernel to fully verify the result.
**Evidence.** Project workflow description and verification metrics in the paper; reported counts and named components (Gemini DeepThink, Claude Code, Aristotle, Lean kernel); the repository and logs purportedly document toolchain usage (n = 1 project; 111 lemmas closed by Aristotle reported).

**Claim.** A complete formalization in Lean 4 of the equilibrium characterization for the Vlasov–Maxwell–Landau (VML) system was produced through an AI-assisted pipeline.
**Evidence.** Single-project artifact: a Lean 4 development containing formal statements, proof scripts, and verified theorems reported by the paper (n = 1 project); the authors report full machine verification by the Lean kernel and provide the repository as public evidence.
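For readers unfamiliar with kernel-checked developments, a toy Lean 4 lemma illustrates the kind of statement such a pipeline produces at scale: the proof term is checked by the Lean kernel, so acceptance implies correctness. This example uses only core Lean and is not taken from the VML development:

```lean
-- Illustrative only: a small lemma of the sort a specialized prover
-- might close automatically, here proved by hand from core-library facts.
theorem le_succ_of_le {m n : Nat} (h : m ≤ n) : m ≤ n + 1 :=
  Nat.le_trans h (Nat.le_succ n)
```

The VML result is the same mechanism applied to far harder statements: once the 111 Aristotle-closed lemmas and the remaining hand-translated proofs elaborate to kernel-accepted terms, the whole development is machine-verified.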
**Claim.** In the human–human benchmark, repeated pre-play communication substantially increases cooperation.
**Evidence.** Reference benchmark data from Dvorak & Fehrler (2024), human–human sample n = 108, showing higher cooperation under repeated communication relative to less frequent communication; comparison reported in the paper.
**Claim.** Evaluation metrics for the benchmark include task-specific metrics, such as win-rate for battling and completion time for speedruns, as well as strategic-robustness measures.
**Evidence.** The paper's evaluation section lists the metrics used (win-rate, completion time, strategic robustness) and describes how they are computed and used to compare agents.
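The two task-specific metrics reduce to simple aggregations over match records; the record format below is a hypothetical stand-in, since the benchmark's actual log schema is not given in the summary:

```python
def win_rate(results):
    """Fraction of decided battles won ('W' = win, 'L' = loss)."""
    decided = [r for r in results if r in ("W", "L")]
    return sum(r == "W" for r in decided) / len(decided) if decided else 0.0

def mean_completion_time(run_seconds):
    """Average speedrun completion time over successful runs, in seconds."""
    return sum(run_seconds) / len(run_seconds)

battles = ["W", "L", "W", "W", "L"]          # hypothetical battle outcomes
print(win_rate(battles))                      # 3 wins / 5 battles = 0.6
print(mean_completion_time([3600, 4200, 3900]))  # 3900.0 seconds
```

Strategic-robustness measures are not reducible to a one-liner like these; per the summary they are computed separately to compare how agents hold up against varied opponents.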
**Claim.** The Speedrunning Track includes an open-source multi-agent orchestration system and standardized evaluation scenarios for reproducible multi-agent comparisons.
**Evidence.** The paper describes and releases an open-source orchestration harness for LLMs/agents and provides standardized scenarios and evaluation tools meant for reproducibility.

**Claim.** Community interest in the benchmark was validated by a NeurIPS 2025 competition with 100+ teams and published analyses of winning submissions.
**Evidence.** The paper reports organization/validation via a NeurIPS 2025 competition, states participation of 100+ teams, and includes documentation/analyses of top submissions.

**Claim.** The project is a living benchmark: the Battling Track has a live leaderboard, and the Speedrunning Track uses self-contained evaluation to ensure reproducibility.
**Evidence.** The paper/documentation notes a live leaderboard for Battling and provides self-contained evaluation pipelines/orchestration for Speedrunning intended to support reproducible runs.

**Claim.** Baselines include heuristic rule-based agents, reinforcement-learning (RL) agents trained for specialist play, and LLM-based agents/harnesses for generalist approaches.
**Evidence.** The paper presents baseline implementations and experiments spanning heuristic, RL, and LLM-based agents and describes the training procedures and architectures used for each baseline category.

**Claim.** The benchmark is split into two complementary tracks: a Battling Track (competitive, partial-observability battles) and a Speedrunning Track (long-horizon RPG tasks with a multi-agent orchestration harness).
**Evidence.** The paper structure and dataset descriptions specify the two tracks, their scopes, and the inclusion of a multi-agent orchestration system for the Speedrunning Track.

**Claim.** The Battling Track dataset contains more than 20 million recorded battle trajectories.
**Evidence.** The paper reports a Battling Track dataset of >20M recorded battle trajectories collected from simulated/match play; the size is reported explicitly in the dataset and methods section.

**Claim.** The PokeAgent Challenge is a large, realistic multi-agent benchmark built on Pokemon that simultaneously stresses partial observability, game-theoretic reasoning, and long-horizon planning.
**Evidence.** The paper describes the design and motivation of the benchmark, detailing two tracks (Battling and Speedrunning) intended to capture partial observability, adversarial/game-theoretic interactions, and long-horizon sequential planning; the benchmark is built on a Pokemon simulator with described task specifications.
**Claim.** iDaVIE's modular architecture supports extensibility (planned features include subcube loading, advanced render modes, video scripting, and collaborative VR sessions).
**Evidence.** The paper describes the modular architecture and lists planned/possible future features; this is a software design claim rather than an empirical result.

**Claim.** Because iDaVIE is open-source and extensible, software licensing costs are low and marginal adoption costs fall over time.
**Evidence.** The paper states iDaVIE is open-source and designed for community-driven enhancements; the economic claim rests on general properties of open-source software rather than empirical cost accounting.

**Claim.** iDaVIE includes interaction features such as selection, cropping/subcube tools, catalogue overlays, and export back to existing pipelines.
**Evidence.** Feature list in the paper describing selection, cropping, overlays, in-VR metrics, and export functionality; demonstrated integration via export of edited masks/subcubes.

**Claim.** Streaming and downsampling pipelines, implemented as Unity plug-ins, make large volumes interactively viewable in VR while preserving the detail needed for inspection.
**Evidence.** Technical description of custom Unity plug-ins for streaming/downsampling and on-the-fly statistics; tested on HI cubes (from the telescopes listed) per the paper.
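Block-mean downsampling is the simplest form of the volume reduction such a plug-in performs before rendering. A pure-Python toy over nested lists (the real plug-ins operate on native volumetric buffers inside Unity; this sketch only illustrates the general technique and assumes dimensions divisible by the factor):

```python
def downsample(cube, f):
    """Mean-pool a 3D nested list by factor f along each axis.

    Assumes each dimension of `cube` is divisible by f.
    """
    nz, ny, nx = len(cube), len(cube[0]), len(cube[0][0])
    out = []
    for z in range(0, nz, f):
        plane = []
        for y in range(0, ny, f):
            row = []
            for x in range(0, nx, f):
                block = [cube[z + dz][y + dy][x + dx]
                         for dz in range(f) for dy in range(f)
                         for dx in range(f)]
                row.append(sum(block) / len(block))
            plane.append(row)
        out.append(plane)
    return out

# A 4x4x4 cube of ones with one bright voxel; downsample by 2 -> 2x2x2.
cube = [[[1.0] * 4 for _ in range(4)] for _ in range(4)]
cube[0][0][0] = 9.0
small = downsample(cube, 2)
print(len(small), len(small[0]), len(small[0][0]))  # 2 2 2
print(small[0][0][0])  # (9 + 7*1) / 8 = 2.0
```

Mean pooling preserves total flux per block, which is why a downsampled preview remains useful for inspection: bright features survive, diluted in proportion to the pooling factor.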
**Claim.** iDaVIE (v1.0) is a working VR software suite that lets astronomers import, render, inspect, and interactively edit very large 3D data cubes in real time.
**Evidence.** Described implementation of iDaVIE v1.0, built on Unity/SteamVR with custom plug-ins for parsing/downsampling and real-time rendering; tested on large 3D spectral (HI) cubes from radio telescopes (MeerKAT, ASKAP, APERTIF) as reported in the paper.
**Claim.** Personalized LLM coaching produced a statistically significant increase in alignment with the normative empathic taxonomy relative to both the video-based non-personalized feedback arm and the control arm.
**Evidence.** Pre-registered randomized experiment with three arms; the pre-registered analysis reported statistically significant differences favoring personalized coaching on the primary alignment outcome.

**Claim.** A brief, personalized coaching intervention delivered by a large language model significantly improves participants' alignment with normative, idiomatic empathic communication patterns.
**Evidence.** Pre-registered randomized controlled trial with three arms (personalized LLM coaching, video-based non-personalized feedback, control). The outcome was measured as alignment to a data-driven normative taxonomy via coding/automated measures. Overall corpus and sample: 968 participants, 2,904 conversations, 33,938 messages used in the study.