The Commonplace

Evidence (8501 claims)

Adoption: 5831 claims
Productivity: 5043 claims
Governance: 4561 claims
Human-AI Collaboration: 3605 claims
Labor Markets: 2749 claims
Innovation: 2697 claims
Org Design: 2653 claims
Skills & Training: 2112 claims
Inequality: 1429 claims

Evidence Matrix

Claim counts by outcome category and direction of finding. Rows showing only three direction counts omit a zero cell (the visible counts sum to the row total); where a five-count row's total exceeds the sum of its four direction counts, the remainder likely reflects claims whose direction falls outside these four labels.

Outcome Positive Negative Mixed Null Total
Other 440 117 68 507 1148
Governance & Regulation 458 216 125 67 883
Research Productivity 270 101 34 303 713
Organizational Efficiency 441 106 76 43 670
Technology Adoption Rate 347 130 76 45 603
Firm Productivity 324 39 73 13 454
Output Quality 272 75 27 30 404
AI Safety & Ethics 122 188 46 27 385
Market Structure 119 134 86 14 358
Decision Quality 182 79 41 20 326
Fiscal & Macroeconomic 95 58 34 22 216
Employment Level 78 37 80 9 206
Skill Acquisition 104 37 41 9 191
Innovation Output 124 12 26 13 176
Firm Revenue 101 38 24 163
Consumer Welfare 77 38 37 7 159
Task Allocation 93 17 36 8 156
Inequality Measures 29 81 33 6 149
Regulatory Compliance 54 61 13 3 131
Task Completion Time 92 8 4 3 107
Error Rate 45 53 6 104
Worker Satisfaction 48 36 12 8 104
Training Effectiveness 60 13 12 16 102
Wages & Compensation 56 16 20 5 97
Team Performance 50 13 15 8 87
Automation Exposure 28 29 12 7 79
Job Displacement 7 45 13 65
Hiring & Recruitment 42 4 7 3 56
Developer Productivity 38 4 4 3 49
Social Protection 22 12 7 2 43
Creative Output 17 8 6 1 32
Skill Obsolescence 3 26 2 31
Labor Share of Income 12 7 10 29
Worker Turnover 10 12 3 25
To mitigate risks and realize benefits, AI deployments in finance and tax should combine model outputs with human-in-the-loop controls and clear escalation paths.
Prescriptive recommendation grounded in case lessons and literature on safe AI deployment; presented as a best-practice guideline rather than tested intervention.
high positive Explore the Impact of Generative AI on Finance and Taxation safety/accuracy of outputs, reduction in erroneous autonomous actions
Technical building blocks leveraged in these deployments include large language models (LLMs), OCR plus structured information extraction, retrieval-augmented generation (RAG) and knowledge bases, and process automation/RPA.
Explicit technical characteristics section and case descriptions in the paper identify these components as core to implementations.
high positive Explore the Impact of Generative AI on Finance and Taxation capability enabling: natural language understanding, document extraction accurac...
Generative AI is used for risk control and audit functions, including real-time monitoring, fraud detection, KYC/AML screening, and automated exception reporting.
Reported use-cases in the two case organizations and corroborating industry reports discussed in the literature review portion of the paper.
high positive Explore the Impact of Generative AI on Finance and Taxation timeliness of monitoring, fraud detection rate, KYC/AML screening coverage, exce...
For tax declaration, generative AI enables extraction of tax-relevant facts from invoices and contracts, drafting of tax returns, compliance checks, and scenario simulations.
Case examples and literature synthesis describing OCR + information extraction and LLM-assisted drafting workflows used in practice.
high positive Explore the Impact of Generative AI on Finance and Taxation accuracy and speed of tax fact extraction, draft return quality, compliance-chec...
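The extraction step described above can be approximated, for illustration, with simple pattern rules. The invoice text and field patterns below are hypothetical stand-ins; the paper's deployments pair OCR with LLM-based structured extraction rather than regexes alone.

```python
import re

# Toy OCR'd invoice text (illustrative, not from the paper's cases):
invoice = """Invoice No: INV-2024-0117
Vendor VAT ID: DE811234567
Net amount: 1,250.00 EUR
VAT (19%): 237.50 EUR"""

# Rule-based extraction of tax-relevant facts:
facts = {
    "invoice_no": re.search(r"Invoice No:\s*(\S+)", invoice).group(1),
    "vat_id":     re.search(r"VAT ID:\s*([A-Z]{2}\w+)", invoice).group(1),
    "net":        re.search(r"Net amount:\s*([\d.,]+)", invoice).group(1),
    "vat_rate":   re.search(r"VAT \((\d+)%\)", invoice).group(1),
}
print(facts["vat_id"], facts["vat_rate"])  # DE811234567 19
```

An LLM-assisted pipeline would replace the fixed patterns with a model prompt that returns the same structured fields, which is what makes drafting and compliance checks downstream possible.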
Generative AI is applied to fund management tasks such as cashflow forecasting, anomaly detection, and automated workflows for payments and collections.
Case descriptions and technical mapping in the paper showing implementations at the sharing center and professional services firm level.
high positive Explore the Impact of Generative AI on Finance and Taxation cashflow forecast accuracy, anomaly detection precision/recall, automation rate ...
Accounting automation use-cases include automated bookkeeping, reconciliations, journal entry suggestion, and error detection using LLMs and document understanding.
Detailed scope mapping and case examples in Xiaomi and Deloitte illustrating these accounting applications; supported by literature review of technical capabilities.
high positive Explore the Impact of Generative AI on Finance and Taxation functionality/performance in accounting tasks: bookkeeping accuracy, reconciliat...
Realizing AI-driven gains in administrative governance in Vietnam requires legal and institutional redesign.
Close reading of Vietnam's constitutional provisions, administrative statutes, procedural rules and judicial doctrine (doctrinal legal analysis) combined with comparative lessons from other jurisdictions; no quantitative data.
high positive ARTIFICIAL INTELLIGENCE AND ADMINISTRATIVE GOVERNANCE: A CRI... feasibility of AI deployment (legal/institutional compatibility enabling efficie...
A supplemental theological differentiator probe achieved perfect rank-order agreement between the two ceiling judges (Spearman rs = 1.00), supporting judge reliability for the ceiling probe.
Reported Spearman rank correlation rs = 1.00 between Gemini Pro and Copilot Pro on the theological differentiator probe used as a reliability check.
high positive Literary Narrative as Moral Probe : A Cross-System Framework... Spearman rank-order agreement (rs) between the two ceiling judges on the theolog...
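Perfect rank-order agreement (rs = 1.00) means the two judges ordered the probe items identically, regardless of score scale. A minimal, tie-free Spearman computation with hypothetical judge scores (not the paper's data):

```python
def rank(xs):
    # Rank positions (1 = smallest); assumes no ties, as with strict orderings.
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0] * len(xs)
    for r, i in enumerate(order, start=1):
        ranks[i] = r
    return ranks

def spearman(a, b):
    # Spearman's rs for tie-free data: 1 - 6*sum(d^2) / (n*(n^2 - 1)).
    ra, rb = rank(a), rank(b)
    n = len(a)
    d2 = sum((x - y) ** 2 for x, y in zip(ra, rb))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical scores from two judges on five probe items:
gemini  = [4.5, 3.0, 8.0, 6.5, 2.0]
copilot = [5.0, 4.0, 9.0, 7.0, 3.0]  # same ranking, different scale
print(spearman(gemini, copilot))  # identical rank order -> 1.0
```

Note that rs = 1.00 certifies agreement on ordering only; the judges' absolute scores can still differ.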
Rigorous research priorities include randomized controlled trials with long-run follow-ups, cost-effectiveness studies, structural adoption models, and validated metrics for feedback quality and learning durability.
Actionable research recommendations produced by the 50-scholar interdisciplinary meeting; prescriptive synthesis rather than empirical results.
high positive The Future of Feedback: How Can AI Help Transform Feedback t... existence and quality of RCTs and long-run studies; availability of validated me...
CABP (Context-Aware Broker Protocol) extends JSON-RPC with identity-scoped request routing via a six-stage broker pipeline to ensure correct identity and policy propagation.
Design and protocol specification included in the paper; formal description and broker-pipeline semantics documented as a deliverable.
high positive Bridging Protocol and Production: Design Patterns for Deploy... correctness of identity and policy propagation across broker pipeline (as define...
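As a rough illustration of identity-scoped routing layered on JSON-RPC, the sketch below threads an identity envelope through a stub pipeline. The field names (`_identity`, `scope`), stage behavior, and policy/route tables are assumptions for illustration, not CABP's actual specification.

```python
import json

# Hypothetical JSON-RPC 2.0 request extended with an identity envelope:
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "ledger.read",
    "params": {"account": "acme"},
    "_identity": {"subject": "agent-7", "scope": "tenant:acme"},
}

POLICIES = {"tenant:acme": {"ledger.read"}}  # scope -> allowed methods
ROUTES = {"tenant:acme": "broker-shard-2"}   # scope -> backend

def broker(req):
    scope = req["_identity"]["scope"]                 # authenticate + resolve identity
    if req["method"] not in POLICIES.get(scope, ()):  # attach/enforce policy
        return {"jsonrpc": "2.0", "id": req["id"],
                "error": {"code": -32001, "message": "policy denied"}}
    backend = ROUTES[scope]                           # identity-scoped routing
    result = {"handled_by": backend}                  # invoke (stubbed)
    print(json.dumps({"audit": [req["_identity"]["subject"], req["method"]]}))
    return {"jsonrpc": "2.0", "id": req["id"], "result": result}

print(broker(request)["result"]["handled_by"])  # broker-shard-2
```

The point of routing on identity scope rather than method alone is that the same method name can land on different backends, under different policies, depending on who is asking.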
Different model families (Sonnet 4.6 vs. Opus 4.6) exhibit stable, systematic differences in methodological preferences and choice patterns—distinct empirical 'styles'.
Comparison of choice patterns and methodological decisions across agents instantiated with Sonnet 4.6 versus Opus 4.6 within the 150-agent experiment, showing consistent between-family differences in measure selection and estimation procedures.
high positive Nonstandard Errors in AI Agents frequency/distribution of methodological choices by model family (categorical ch...
Agents split on measure choice (e.g., autocorrelation vs. variance-ratio tests; dollar-volume vs. share-volume measures), producing different substantive estimates from the same raw data and hypotheses.
Observed categorical divergences in measure selection across the 150 agents during independent analyses of SPY TAQ (2015–2024); documented alternative test/measure families and corresponding divergent effect estimates for the six hypotheses.
high positive Nonstandard Errors in AI Agents measure selection (categorical) and resulting substantive effect estimates (cont...
AI-to-AI variation (nonstandard errors, NSEs) across autonomous coding agents produces substantial uncertainty in empirical results analogous to human researcher heterogeneity.
Experimental results from 150 autonomous Claude Code agents (two model families: Sonnet 4.6 and Opus 4.6) independently analyzing the same SPY TAQ data (NYSE TAQ, 2015–2024) on six pre-specified hypotheses; recorded agent-to-agent variation in methodological choices and resulting effect estimates (dispersion measured via IQR and related diagnostics).
high positive Nonstandard Errors in AI Agents agent-to-agent variation in methodological choices and effect estimates (dispers...
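The dispersion diagnostic mentioned above (IQR across agents' effect estimates) can be sketched in a few lines; the per-agent estimates below are hypothetical, not the paper's data.

```python
def iqr(xs):
    # Interquartile range via linearly interpolated sorted quartiles.
    s = sorted(xs)
    def quantile(q):
        pos = q * (len(s) - 1)
        lo, hi = int(pos), min(int(pos) + 1, len(s) - 1)
        return s[lo] + (pos - lo) * (s[hi] - s[lo])
    return quantile(0.75) - quantile(0.25)

# Hypothetical effect estimates from nine agents for one hypothesis:
estimates = [0.12, 0.15, 0.09, 0.22, 0.11, 0.18, 0.14, 0.05, 0.20]
print(round(iqr(estimates), 3))  # 0.07
```

A wide IQR here plays the same role as dispersion across human research teams in the original nonstandard-errors literature: same data and hypothesis, materially different answers.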
Observations span multiple agent platforms (Moltbook, The Colony, 4claw) with more than 167,000 agents interacting as peers.
Author-reported coverage from naturalistic observations across the named platforms during the one-month observation window; count reported as ≈167k agents.
high positive When Openclaw Agents Learn from Each Other: Insights from Em... number of agents observed interacting as peers
The mechanism generalizes to another field: models trained on economics publication records reach ~70% accuracy on a similar benchmark.
Analogue of the management experiment performed in economics: models fine-tuned on economics journal publication records were evaluated on an economics benchmark and achieved approximately 70% accuracy. (Exact dataset sizes, benchmarks, and train/test splits not specified in the provided text.)
high positive Machines acquire scientific taste from institutional traces Accuracy on an economics research-pitch benchmark
Fine-tuned models trained on publication records each outperform every frontier model and the expert panel; the best single model achieves 59% accuracy on the benchmark.
Language models fine-tuned on historical journal accept/reject records were evaluated on the held-out four-tier benchmark; reported performance shows each fine-tuned model exceeds the frontier-model average and the human-panel baseline, with the best model at 59% accuracy. (Exact training set size and benchmark sample count not specified here.)
high positive Machines acquire scientific taste from institutional traces Accuracy on the four-tier management research-pitch benchmark
Panels of journal editors and editorial board members reach 42% accuracy by majority vote on the same four-tier benchmark.
Human baseline obtained by soliciting judgments from journal editors and editorial board members on the held-out benchmark and computing majority-vote accuracy (reported as 42%). (Number of human raters and benchmark size not given in supplied text.)
high positive Machines acquire scientific taste from institutional traces Majority-vote accuracy on the four-tier management research-pitch benchmark
Fine-tuning language models on historical journal publication decisions recovers an evaluative "scientific taste" that frontier (zero-shot) models and expert editor panels cannot reliably reproduce.
Fine-tuned models were trained on years of journal publication decisions (institutional accept/reject records) and evaluated on a held-out four-tier benchmark of management research pitches; performance compared to zero-shot evaluations of frontier models and to panels of journal editors (majority-vote). (Sample sizes for training records and held-out benchmark not specified in the provided text.)
high positive Machines acquire scientific taste from institutional traces Ability to predict publication-worthiness as measured by tier prediction accurac...
An asynchronous sliding-window engine treats the GPU as a sliding compute window and overlaps GPU computation with CPU-side parameter updates and multi-tier I/O to hide data movement and synchronization overheads.
System design and implementation described in the paper: an asynchronous runtime that coordinates GPU kernels, CPU updates, and multi-tier I/O. This is a design/implementation claim rather than a measured outcome; the summary links the design to performance improvements.
high positive An Efficient Heterogeneous Co-Design for Fine-Tuning on a Si... system behavior (overlap of compute and I/O / synchronization)
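The overlap idea, hiding I/O behind compute by prefetching the next shard while the current window is processed, can be caricatured as follows. This is a toy stand-in (simulated load and compute on a single host), not the paper's heterogeneous GPU/CPU runtime.

```python
from concurrent.futures import ThreadPoolExecutor

def load_shard(i):
    # Stand-in for multi-tier I/O: fetch the i-th parameter shard.
    return list(range(i * 4, i * 4 + 4))

def process(shard):
    # Stand-in for the GPU compute window.
    return sum(shard)

def pipeline(n_shards):
    results = []
    with ThreadPoolExecutor(max_workers=1) as io:
        future = io.submit(load_shard, 0)              # prime the pipeline
        for i in range(n_shards):
            shard = future.result()                    # waits only if I/O lagged
            if i + 1 < n_shards:
                future = io.submit(load_shard, i + 1)  # overlap next load
            results.append(process(shard))             # compute while I/O runs
    return results

print(pipeline(3))  # [6, 22, 38]
```

When load and compute times are comparable, this double-buffered pattern roughly halves end-to-end latency, which is the effect the sliding-window engine generalizes across GPU, CPU, and storage tiers.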
The A-ToM mechanism estimates a partner's likely ToM order from interaction history and uses that estimate to predict the partner's next action, which in turn informs the agent's policy choices.
Method description and implementation details provided in the paper: estimator over ToM orders based on past interactions + conditional action prediction feeding into decision-making; validated in the reported experiments.
high positive Adaptive Theory of Mind for LLM-based Multi-Agent Coordinati... accuracy/usefulness of inferred ToM order for partner-action prediction and subs...
Empirical evaluation was performed across four coordination environments: a repeated matrix game, two grid navigation tasks, and an Overcooked task.
Methods section describes these four benchmark environments used for all reported comparisons between fixed-order agents and A-ToM agents; evaluation metrics were joint payoffs and task-specific success measures.
high positive Adaptive Theory of Mind for LLM-based Multi-Agent Coordinati... coordination performance (joint payoff, success rate) as used in experiments
Modular outputs (question histories, security checks, rubric scores, summaries) enable post-hoc review and explainability.
Architectural design and output artifacts described in the paper (logs and structured outputs per agent); these artifacts provide material for explanation and audit.
high positive CoMAI: A Collaborative Multi-Agent Framework for Robust and ... interpretability and auditability (availability of logs and structured outputs)
Adaptive difficulty and multidimensional evaluation allow dynamic tailoring of questions to candidate performance.
Implementation of adaptive testing logic within the workflow described in the paper, with experiments involving dynamic difficulty adjustment; detailed metrics of adaptation effectiveness are not provided in the summary.
high positive CoMAI: A Collaborative Multi-Agent Framework for Robust and ... ability to adapt question difficulty and evaluate multiple skill dimensions
Operating as a pre-processor (rather than modifying the generator) enables modular integration with existing LLMs and provides an explicit decision point for clarification.
Novelty/architecture claim in the paper explaining that C.A.P. runs before generation and therefore can be plugged into existing LLM pipelines; described design rationale (no empirical integration study presented).
high positive A Context Alignment Pre-processor for Enhancing the Coherenc... ease of integration / ability to attach to existing generation pipelines
C.A.P. verifies semantic alignment between the current expanded prompt and the weighted history and triggers a structured clarification protocol when similarity is below a threshold.
Component-level description: alignment verification via semantic embeddings (cosine similarity) or learned classifiers and threshold-based decision branching to initiate clarification; described protocol templates (no empirical validation provided).
high positive A Context Alignment Pre-processor for Enhancing the Coherenc... alignment detection (similarity score) and number/rate of triggered clarificatio...
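A minimal sketch of the threshold-based branching: compute cosine similarity between the expanded prompt's embedding and the weighted-history embedding, and fall into the clarification branch when it drops below a cutoff. The threshold value and toy vectors are assumptions; real systems would use sentence-embedding vectors.

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

THRESHOLD = 0.75  # illustrative cutoff; the paper does not fix a value here

def needs_clarification(prompt_vec, history_vec, threshold=THRESHOLD):
    # Below-threshold alignment triggers the structured clarification protocol.
    return cosine(prompt_vec, history_vec) < threshold

# Toy embeddings: one aligned pair, one that has drifted off-topic.
aligned = needs_clarification([1.0, 0.2, 0.0], [0.9, 0.3, 0.1])  # False
drifted = needs_clarification([1.0, 0.0, 0.0], [0.0, 1.0, 0.0])  # True
print(aligned, drifted)
```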
C.A.P. retrieves dialogue history using a time-weighted decay so recent context is prioritized (approximating human conversational focus).
Design description of a 'time-weighted context retrieval' component; authors propose temporal decay functions (e.g., exponential decay, half-life parameter) applied to dialogue-turn embeddings or metadata (no empirical results reported).
high positive A Context Alignment Pre-processor for Enhancing the Coherenc... recency-weighted relevance of retrieved context / retrieval precision for recent...
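The proposed temporal decay can be sketched with a half-life weighting over dialogue turns; the half-life value and turn ages below are illustrative, since the paper proposes the decay family without fixing parameters.

```python
def recency_weights(ages_in_turns, half_life=4.0):
    # Exponential decay with a half-life: a turn half_life turns old gets weight 0.5.
    return [0.5 ** (age / half_life) for age in ages_in_turns]

# Ages (in turns) of five stored dialogue turns, most recent first:
ages = [0, 1, 2, 4, 8]
weights = recency_weights(ages)
print([round(w, 3) for w in weights])  # [1.0, 0.841, 0.707, 0.5, 0.25]
```

These weights would then scale each turn's embedding (or its retrieval score) so that recent context dominates the history representation.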
C.A.P. is a pre-generation module that expands user utterances to recover omitted premises and implications.
Architecture and methods description in the paper specifying a 'semantic expansion' component; suggested implementations via knowledge-bases or small LLM prompts to generate premises, paraphrases, and implications (no empirical evaluation reported).
high positive A Context Alignment Pre-processor for Enhancing the Coherenc... recovered implicit premises / coverage of implied goals in expanded prompt
Structured argumentation frameworks make chains of inference inspectable and machine-checkable, improving transparency and verifiability of AI outputs.
Argument from formal properties of AFs and representation; no empirical user studies but relies on known formal semantics.
high positive Argumentative Human-AI Decision-Making: Toward AI Agents Tha... inspectability/traceability of inference chains (auditability)
Computational argumentation offers formal, verifiable reasoning representations (argumentation frameworks, attack/support relations).
Established literature on formal argumentation (e.g., Dung-style AFs) and the paper's conceptual description; no new empirical data reported.
high positive Argumentative Human-AI Decision-Making: Toward AI Agents Tha... existence and machine-checkability of formal inferential chains (inspectability/...
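For concreteness, the machine-checkability of Dung-style frameworks can be illustrated by computing the grounded extension of a toy attack graph: unattacked arguments are accepted, arguments they attack are defeated, and the process iterates to a fixed point. The framework below is a made-up example, not from the paper.

```python
def grounded_extension(args, attacks):
    # attacks: set of (attacker, target) pairs.
    accepted, defeated = set(), set()
    changed = True
    while changed:
        changed = False
        for a in args:
            if a in accepted or a in defeated:
                continue
            attackers = {x for (x, t) in attacks if t == a}
            if attackers <= defeated:  # every attacker already defeated -> accept
                accepted.add(a)
                defeated |= {t for (x, t) in attacks if x == a}
                changed = True
    return accepted

# Toy framework: a attacks b, b attacks c. Accepting a defeats b, which defends c.
print(sorted(grounded_extension({"a", "b", "c"}, {("a", "b"), ("b", "c")})))  # ['a', 'c']
```

Because acceptance is computed from the explicit attack relation, every step of the inference chain is inspectable, which is the transparency property the claim describes.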
The development artifacts are fully transparent and reproducible: the repository includes an archive of 229 human prompts and a git history with 213 commits.
Paper reports counts of prompts (229) and git commits (213) and states these archives are public; these are concrete repository metrics (n=1 development repository).
high positive Semi-Autonomous Formalization of the Vlasov-Maxwell-Landau E... number of human prompts archived (229); number of git commits (213); public avai...
The Lean kernel provided full machine verification of all formalized statements in the development.
Paper reports 'Full verification by the Lean kernel' for the Lean 4 development; supported by availability of the Lean 4 repository and verified theorem artifacts (n=1 project).
high positive Semi-Autonomous Formalization of the Vlasov-Maxwell-Landau E... machine-checked verification status of formalized statements (verified/unverifie...
A specialized prover (Aristotle) automatically closed 111 lemmas during the development.
Quantitative verification metric reported in the paper: 111 lemmas automatically closed by Aristotle; claim tied to the Lean development and prover logs (single project count).
high positive Semi-Autonomous Formalization of the Vlasov-Maxwell-Landau E... number of lemmas automatically discharged by Aristotle (111)
The AI-assisted pipeline combined an AI reasoning model (Gemini DeepThink) to generate the proof, an agentic coding tool (Claude Code) to translate the proof to Lean, a specialized automated prover (Aristotle) that closed 111 lemmas, and the Lean kernel to fully verify the result.
Project workflow description and verification metrics in the paper; reported counts and named components (Gemini DeepThink, Claude Code, Aristotle, Lean kernel); repository and logs purportedly document toolchain usage (n=1 project; 111 lemmas closed by Aristotle reported).
high positive Semi-Autonomous Formalization of the Vlasov-Maxwell-Landau E... composition of toolchain and number of lemmas automatically discharged (111)
A complete formalization in Lean 4 of the equilibrium characterization for the Vlasov–Maxwell–Landau (VML) system was produced through an AI-assisted pipeline.
Single-project artifact: a Lean 4 development containing formal statements, proof scripts and verified theorems reported by the paper (n=1 project); authors report full machine verification by the Lean kernel and provide the repository as public evidence.
high positive Semi-Autonomous Formalization of the Vlasov-Maxwell-Landau E... completeness of formalization / machine-checked verification of the VML equilibr...
In the human–human benchmark, repeated pre-play communication substantially increases cooperation.
Reference benchmark data from Dvorak & Fehrler (2024), human–human sample n = 108, showing higher cooperation under repeated communication relative to less frequent communication; comparison reported in the paper.
high positive Playing Against the Machine: Cooperation, Communication, and... change in cooperation rate associated with repeated communication in human–human...
Evaluation metrics for the benchmark include task-specific metrics such as win-rate for battling and completion time for speedruns, as well as strategic robustness measures.
Paper's evaluation section lists metrics used: win-rate, completion time, strategic robustness; describes how they are computed and used to compare agents.
high positive The PokeAgent Challenge: Competitive and Long-Context Learni... evaluation metrics used (win-rate, completion time, strategic robustness)
Speedrunning Track includes an open-source multi-agent orchestration system and standardized evaluation scenarios for reproducible multi-agent comparisons.
Paper describes and releases an open-source orchestration harness for orchestrating LLMs/agents and provides standardized scenarios and evaluation tools meant for reproducibility.
high positive The PokeAgent Challenge: Competitive and Long-Context Learni... availability of open-source orchestration code and standardized evaluation scena...
Community interest in the benchmark was validated by a NeurIPS 2025 competition with 100+ teams and published analyses of winning submissions.
Paper reports organization/validation via a NeurIPS 2025 competition, states participation of 100+ teams, and includes documentation/analyses of top submissions.
high positive The PokeAgent Challenge: Competitive and Long-Context Learni... number of competing teams (100+), availability of competition analyses/winning s...
The project is a living benchmark: the Battling Track has a live leaderboard and the Speedrunning Track uses self-contained evaluation to ensure reproducibility.
Paper/documentation notes a live leaderboard for Battling and provides self-contained evaluation pipelines/orchestration for Speedrunning intended to support reproducible runs.
high positive The PokeAgent Challenge: Competitive and Long-Context Learni... presence of live leaderboard and self-contained evaluation pipelines
Baselines include heuristic rule-based agents, reinforcement-learning (RL) agents trained for specialist play, and LLM-based agents/harnesses for generalist approaches.
Paper presents baseline implementations and experiments spanning heuristic, RL, and LLM-based agents and describes training procedures and architectures used for each baseline category.
high positive The PokeAgent Challenge: Competitive and Long-Context Learni... presence and types of baseline agents (heuristic, RL, LLM)
The benchmark is split into two complementary tracks: a Battling Track (competitive, partial-observability battles) and a Speedrunning Track (long-horizon RPG tasks with a multi-agent orchestration harness).
Paper structure and dataset descriptions specify two tracks, their scopes, and the inclusion of a multi-agent orchestration system for the Speedrunning Track.
high positive The PokeAgent Challenge: Competitive and Long-Context Learni... benchmark partitioning (presence of Battling and Speedrunning tracks)
The Battling Track dataset contains more than 20 million recorded battle trajectories.
Paper reports a Battling Track dataset of >20M recorded battle trajectories collected from simulated/match play; size reported explicitly in dataset and methods section.
high positive The PokeAgent Challenge: Competitive and Long-Context Learni... number of recorded battle trajectories (>20,000,000)
PokeAgent Challenge is a large, realistic multi-agent benchmark built on Pokemon that stresses partial observability, game-theoretic reasoning, and long-horizon planning simultaneously.
Paper describes design and motivation of the benchmark, detailing two tracks (Battling and Speedrunning) intended to capture partial observability, adversarial/game-theoretic interactions, and long-horizon sequential planning; benchmark implementation built on Pokemon simulator and described task specifications.
high positive The PokeAgent Challenge: Competitive and Long-Context Learni... benchmark task characteristics (partial observability, game-theoretic complexity...
iDaVIE's modular architecture supports extensibility (planned features include subcube loading, advanced render modes, video scripting, and collaborative VR sessions).
Paper describes modular architecture and lists planned/possible future features; this is a software design claim rather than an empirical result.
high positive iDaVIE v1.0: A virtual reality tool for interactive analysis... software extensibility and planned feature set
Because iDaVIE is open-source and extensible, software licensing costs are low and marginal adoption costs fall over time.
Paper states iDaVIE is open-source and designed for community-driven enhancements; economic claim based on general properties of open-source software rather than empirical cost accounting.
high positive iDaVIE v1.0: A virtual reality tool for interactive analysis... licensing cost implication and marginal adoption costs
iDaVIE includes interaction features such as selection, cropping/subcube tools, catalogue overlays, and export back to existing pipelines.
Feature list in paper describing selection, cropping, overlays, in-VR metrics and export functionality; demonstrated integration to export edited masks/subcubes.
high positive iDaVIE v1.0: A virtual reality tool for interactive analysis... availability and functionality of in-VR interaction and export tools
Streaming and downsampling pipelines implemented as Unity plug-ins make large volumes interactively viewable in VR while preserving needed detail for inspection.
Technical description of custom Unity plug-ins for streaming/downsampling and on-the-fly statistics; tested on HI cubes from the radio telescopes listed in the paper.
high positive iDaVIE v1.0: A virtual reality tool for interactive analysis... interactive rendering performance and retention of inspection-relevant detail
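Downsampling for interactive viewing can be illustrated with a factor-of-two mean-pool over a flat voxel array. iDaVIE's actual plug-ins are native Unity components, so this pure-Python sketch conveys only the idea, not the implementation.

```python
def downsample2(cube, nx, ny, nz):
    # Mean-pool a flat nx*ny*nz voxel cube by a factor of 2 per axis
    # (assumes all three dimensions are even).
    out = []
    for z in range(0, nz, 2):
        for y in range(0, ny, 2):
            for x in range(0, nx, 2):
                s = 0.0
                for dz in (0, 1):
                    for dy in (0, 1):
                        for dx in (0, 1):
                            s += cube[(z + dz) * ny * nx + (y + dy) * nx + (x + dx)]
                out.append(s / 8.0)
    return out

# A 2x2x2 cube of voxel values collapses to one averaged voxel:
print(downsample2([1, 2, 3, 4, 5, 6, 7, 8], 2, 2, 2))  # [4.5]
```

Each halving step cuts voxel count eightfold, which is why a streamed, progressively refined cube can stay interactive in VR while full resolution is fetched only for the region under inspection.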
iDaVIE (v1.0) is a working VR software suite that lets astronomers import, render, inspect, and interactively edit very large 3D data cubes in real time.
Described implementation of iDaVIE v1.0 built on Unity/SteamVR with custom plug-ins for parsing/downsampling and real-time rendering; tested on large 3D spectral (HI) cubes from radio telescopes (MeerKAT, ASKAP, APERTIF) as reported in the paper.
high positive iDaVIE v1.0: A virtual reality tool for interactive analysis... ability to import/render/inspect/edit large 3D data cubes in real time (interact...
Personalized LLM coaching produced a statistically significant increase in alignment with the normative empathic taxonomy relative to both the video-based non-personalized feedback and control arms.
Pre-registered randomized experiment with three arms; pre-registered analysis reported statistically significant differences favoring personalized coaching on the primary alignment outcome.
high positive Practicing with Language Models Cultivates Human Empathic Co... statistical difference in alignment to normative empathic patterns (primary outc...
A brief, personalized coaching intervention delivered by a large language model significantly improves participants' alignment with normative, idiomatic empathic communication patterns.
Pre-registered randomized controlled trial with three arms (personalized LLM coaching, video-based non-personalized feedback, control). Outcome measured as alignment to a data-driven normative taxonomy via coding/automated measures. Overall corpus and sample context: 968 participants, 2,904 conversations, 33,938 messages used in the study.
high positive Practicing with Language Models Cultivates Human Empathic Co... alignment with normative empathic patterns (coding/automated alignment metrics)