The Commonplace

Evidence (2320 claims)

Adoption — 5227 claims
Productivity — 4503 claims
Governance — 4100 claims
Human-AI Collaboration — 3062 claims
Labor Markets — 2480 claims
Innovation — 2320 claims
Org Design — 2305 claims
Skills & Training — 1920 claims
Inequality — 1311 claims

Evidence Matrix

Claim counts by outcome category and direction of finding.

Outcome                      Positive  Negative  Mixed  Null  Total
Other                             373       105     59   439    984
Governance & Regulation           366       172    115    55    718
Research Productivity             237        95     34   294    664
Organizational Efficiency         364        82     62    34    545
Technology Adoption Rate          293       118     66    30    511
Firm Productivity                 274        33     68    10    390
AI Safety & Ethics                117       178     44    24    365
Output Quality                    231        61     23    25    340
Market Structure                  107       123     85    14    334
Decision Quality                  158        68     33    17    279
Fiscal & Macroeconomic             75        52     32    21    187
Employment Level                   70        32     74     8    186
Skill Acquisition                  88        31     38     9    166
Firm Revenue                       96        34     22     –    152
Innovation Output                 105        12     21    11    150
Consumer Welfare                   68        29     35     7    139
Regulatory Compliance              52        61     13     3    129
Inequality Measures                24        68     31     4    127
Task Allocation                    71        10     29     6    116
Worker Satisfaction                46        38     12     9    105
Error Rate                         42        47      6     –     95
Training Effectiveness             55        12     11    16     94
Task Completion Time               76         5      4     2     87
Wages & Compensation               46        13     19     5     83
Team Performance                   44         9     15     7     76
Hiring & Recruitment               39         4      6     3     52
Automation Exposure                18        16      9     5     48
Job Displacement                    5        29     12     –     46
Social Protection                  19         8      6     1     34
Developer Productivity             27         2      3     1     33
Worker Turnover                    10        12      3     –     25
Creative Output                    15         5      3     1     24
Skill Obsolescence                  3        18      2     –     23
Labor Share of Income               8         4      9     –     21
("–" marks cells left blank in the source.)
Active filter: Innovation
Applying this methodology to the U.S. equity market, long-short portfolios formed on the simple linear combination of signals deliver an annualized Sharpe ratio of 3.11.
Empirical backtest/application to the U.S. equity market reported in the paper; specific performance metric (annualized Sharpe) is provided. Sample period, universe, and number of observations not stated in the excerpt.
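As a rough illustration of the reported metric, an annualized Sharpe ratio for a long-short return series can be computed as below. The return series, the zero risk-free rate, and the 252-trading-day convention are assumptions for this sketch, not the paper's data or methodology.

```python
import math

def annualized_sharpe(daily_returns, periods_per_year=252):
    """Annualized Sharpe ratio of a return series (risk-free rate assumed zero)."""
    n = len(daily_returns)
    mean = sum(daily_returns) / n
    var = sum((r - mean) ** 2 for r in daily_returns) / (n - 1)  # sample variance
    return (mean / math.sqrt(var)) * math.sqrt(periods_per_year)

# Hypothetical daily returns of a long-short portfolio
returns = [0.002, -0.001, 0.003, 0.001, -0.002, 0.004, 0.000, 0.002]
sharpe = annualized_sharpe(returns)
```

The square-root-of-time scaling assumes i.i.d. returns; on serially correlated strategy returns the annualized figure can be optimistic.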
To mitigate data snooping biases, the closed-loop system imposes strict empirical discipline through out-of-sample validation and economic rationale requirements.
Description of model validation protocol in the paper (use of out-of-sample validation and economic rationale filters); supports claim that these steps are used to reduce data-snooping risk.
high positive Beyond Prompting: An Autonomous Framework for Systematic Fac... mitigation of data-snooping bias (robustness of signals)
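The out-of-sample discipline described above is commonly implemented with walk-forward splits, where each test window strictly follows its training window. A minimal sketch, with arbitrary window sizes (the paper's actual validation protocol is not specified in the excerpt):

```python
def walk_forward_splits(n_obs, train_size, test_size):
    """Yield (train_indices, test_indices) pairs in which the test window
    never precedes the training window -- a basic guard against
    look-ahead bias and data snooping."""
    start = 0
    while start + train_size + test_size <= n_obs:
        train = list(range(start, start + train_size))
        test = list(range(start + train_size, start + train_size + test_size))
        yield train, test
        start += test_size  # roll both windows forward

splits = list(walk_forward_splits(10, train_size=4, test_size=2))
for train, test in splits:
    assert max(train) < min(test)  # test data is always strictly later
```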
The approach operationalizes the model as a self-directed engine that endogenously formulates interpretable trading signals (rather than relying on sequential manual prompts).
Methodological description and implementation details in the paper describing how the model generates signals autonomously and interpretable outputs; empirical example applied to U.S. equity market is referenced to illustrate operation.
high positive Beyond Prompting: An Autonomous Framework for Systematic Fac... interpretability and autonomy of generated trading signals
We develop an autonomous framework for systematic factor investing via agentic AI.
Statement of methodological contribution in the paper (framework description); no sample size or empirical test required for the descriptive claim.
high positive Beyond Prompting: An Autonomous Framework for Systematic Fac... autonomy of investment framework (methodological capability)
Through a comparative analysis of Pax Romana, Pax Britannica, Pax Americana, and the emerging U.S. techno-security architecture, the article demonstrates continuity in the logic of hegemonic control centered on infrastructures.
Comparative historical analysis of four hegemonic/regime examples as described in the paper; methodological approach is comparative and qualitative (no quantitative sample size given).
high positive The Logistics of Hegemony: Semiconductor Chokepoints, Global... continuity of hegemonic logic across historical regimes
Hegemonic orders can be conceptualized as historically specific logistical regimes — the material basis of hegemony evolves but the underlying logic remains constant: control over the infrastructures that organize global circulation.
Conceptual claim grounded in synthesis of structural power theory, global value chain analysis, and infrastructure studies and illustrated through comparative historical examples (Pax Romana, Pax Britannica, Pax Americana, emerging U.S. techno-security architecture).
high positive The Logistics of Hegemony: Semiconductor Chokepoints, Global... persistence of strategic logic (control over infrastructures) across historical ...
The article develops a theoretical framework of logistical hegemony to explain how infrastructures, chokepoints, and global production networks structure the exercise of power in the world economy.
Primary claim of the paper: theoretical development drawing on structural power theory, global value chain analysis, and infrastructure studies; conceptual/theoretical argumentation rather than empirical sample-based evidence.
high positive The Logistics of Hegemony: Semiconductor Chokepoints, Global... control over infrastructures and organization of global circulation
Experiments highlight a reward structure that balances income, profit, efficiency, fairness, and customer retention, moving beyond income-only goals.
Experimental design / reward engineering reported in paper; claim supported by experiments (no quantitative metrics or sample size given in excerpt).
high positive The Application of Adaptive Reinforcement Learning in Dynami... reward structure balancing multiple objectives (income, profit, efficiency, fair...
Training performance is validated by benchmarking against fixed rule-based and cost-plus pricing models in controlled experiments.
Paper reports controlled experiments benchmarking ARL models against fixed/rule-based and cost-plus baselines; specific experimental design and sample sizes not provided in excerpt.
high positive The Application of Adaptive Reinforcement Learning in Dynami... relative performance of ARL training vs. baselines (validation/benchmarking outc...
Inventory challenges are addressed with a curated dataset enhanced through feature engineering, transformation, and systematic cleaning, providing reliable inputs for training.
Methodological claim about dataset curation and preprocessing used to train ARL agents; no dataset size or quantitative validation reported in excerpt.
high positive The Application of Adaptive Reinforcement Learning in Dynami... quality/reliability of training inputs with respect to inventory representation
Profitability in a dynamic marketplace is enhanced through an Adaptive Reinforcement Learning (ARL)-based pricing framework that utilizes Q-Learning and Deep Q-Networks (DQN) for real-time optimization in response to changing market conditions, competition, and inventory levels.
Paper proposes and experiments with an ARL-based pricing framework (methods include Q-Learning and DQN); validation claimed via benchmarking/controlled experimentation against baselines (details not provided in excerpt).
high positive The Application of Adaptive Reinforcement Learning in Dynami... profitability and pricing optimization in dynamic markets
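A tabular Q-Learning loop on a toy single-state pricing problem can illustrate the general approach; the price grid, demand curve, and hyperparameters below are hypothetical and not taken from the paper.

```python
import random

random.seed(0)

PRICES = [8.0, 10.0, 12.0]          # discrete price actions
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.2   # learning rate, discount, exploration rate

Q = {p: 0.0 for p in PRICES}        # single-state Q-table for simplicity

def demand(price):
    """Toy stochastic demand curve (hypothetical, not the paper's environment)."""
    return max(0.0, 20.0 - 1.5 * price + random.gauss(0, 1))

for _ in range(5000):
    # epsilon-greedy action selection
    if random.random() < EPS:
        price = random.choice(PRICES)
    else:
        price = max(Q, key=Q.get)
    reward = price * demand(price)  # revenue as reward
    # Q-learning update (single state, so the bootstrap target uses max(Q))
    Q[price] += ALPHA * (reward + GAMMA * max(Q.values()) - Q[price])

best_price = max(Q, key=Q.get)
```

With this demand curve the expected revenue is highest at the lowest price, so the learned Q-values should come to favor it; a DQN replaces the table with a neural network over a richer state (inventory, competition), as the paper describes.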
Dynamic pricing is crucial for maximizing revenue and maintaining competitiveness in markets with fluctuating demand, perishable goods, and diverse customer preferences.
Conceptual claim stated in paper's introduction/motivation; no empirical sample or experiment specified in the statement.
high positive The Application of Adaptive Reinforcement Learning in Dynami... maximizing revenue and maintaining competitiveness
In the long term, big data promotes sustained improvements in individuals’ welfare.
Theoretical long-run growth analysis in the model showing that sustained data sharing leads to long-run welfare improvements (analytic/model-based, no empirical/sample data).
high positive Study on the impact of big data sharing on individuals’ welf... long-term growth of individuals' welfare
There exists an optimal level of data (big data) sharing that achieves the best balance between economic development and privacy, thereby maximizing individuals' welfare.
Analytical optimization within the theoretical macro model: model yields an interior optimum for data-sharing intensity that trades off economic gains and privacy costs (derivation/analytical result; no empirical test).
high positive Study on the impact of big data sharing on individuals’ welf... individuals' welfare maximization via optimal data-sharing level
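The interior optimum described above can be illustrated with a stylized welfare function of sharing intensity s, concave in economic gains and convex in privacy costs. The functional form and parameters are assumptions for the sketch, not the paper's model.

```python
# Illustrative welfare: concave economic benefit minus convex privacy cost.
A, B = 4.0, 1.0

def welfare(s):
    return A * s ** 0.5 - B * s ** 2

# Grid search for the interior optimum on s in (0, 2]
grid = [i / 1000 for i in range(1, 2001)]
s_star = max(grid, key=welfare)

# Analytic first-order condition: A / (2*sqrt(s)) = 2*B*s  =>  s* = (A/(4B))**(2/3)
s_analytic = (A / (4 * B)) ** (2 / 3)
```

Because the benefit term flattens while the privacy cost accelerates, welfare peaks at an interior sharing level rather than at zero or maximal sharing, which is the qualitative result claimed.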
The Institutional Scaling Law predicts that the next phase transition will be driven not by larger models but by better-orchestrated systems of domain-specific models adapted to specific institutional niches.
Predictive conclusion derived from the Institutional Scaling Law and theoretical analysis in the paper. No empirical validation or sample size reported in the excerpt.
high positive The Institutional Scaling Law: Non-Monotonic Fitness, Capabi... drivers of the next phase transition in AI (orchestration of domain-specific sys...
A Symbiogenetic Scaling correction demonstrates that orchestrated systems of domain-specific models can outperform frontier generalists in their native deployment environments.
Theoretical correction/derivation and comparative analysis within the paper (no empirical sample or quantitative benchmark reported in the excerpt).
high positive The Institutional Scaling Law: Non-Monotonic Fitness, Capabi... performance of orchestrated domain-specific model systems versus frontier genera...
A mixed-methods empirical research agenda is presented, proposing a future PLS-SEM approach to test the mediating role of the cognitive flywheel and the moderating effect of fractal governance on organizational resilience.
Methodological proposal described in the paper (research design and proposed analytic approach); no executed empirical study or sample reported.
high positive Governing Human–AI Co-Evolution: Intelligentization Capabili... organizational_resilience (as mediator/moderator relationships to be tested)
Fractal governance architecture is proposed to mitigate systemic vulnerabilities such as automation bias.
Conceptual proposal of a governance design in the paper; no empirical test or sample provided.
high positive Governing Human–AI Co-Evolution: Intelligentization Capabili... reduction_in_automation_bias / improvement_in_decision_quality
The cognitive flywheel is the central mechanism of this dynamic capability and can be operationalized (the paper operationalizes the cognitive flywheel).
Theoretical operationalization within the paper (concept definition and proposed operational measures); no empirical measurement or sample reported.
high positive Governing Human–AI Co-Evolution: Intelligentization Capabili... mechanism_operationalization (cognitive_flywheel)
The co-evolutionary dynamic is formalized using coupled non-linear differential equations and time decay integrals.
Mathematical formalization reported in the paper (modeling methods described); no empirical parameter estimation or sample provided.
high positive Governing Human–AI Co-Evolution: Intelligentization Capabili... existence_of_mathematical_model/formal_framework
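A minimal sketch of what coupled non-linear dynamics with a time-decay integral can look like, using forward-Euler integration; the specific equations, coefficients, and initial values are illustrative assumptions, not the paper's formalization.

```python
import math

DT, STEPS = 0.01, 1000          # step size and horizon (t = 0 .. 10)
A, B, DECAY = 0.5, 0.3, 1.0     # assumed coupling and decay coefficients

H, M = 1.0, 1.0                 # human and machine capability levels
memory = 0.0                    # discretized time-decay integral of past machine output

for _ in range(STEPS):
    # exponentially decaying memory of machine output (time-decay integral)
    memory = memory * math.exp(-DECAY * DT) + M * DT
    dH = A * M * H / (1 + H) - 0.1 * H    # saturating non-linear gain from M
    dM = B * H + 0.05 * memory - 0.1 * M  # growth driven by H and by accumulated history
    H += dH * DT
    M += dM * DT
```

The point of the sketch is structural: each variable's growth depends non-linearly on the other and on a decaying integral of the past, which is the co-evolutionary coupling the claim describes.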
Dynamic cognitive advantage arises from the historical, recursive, structural coupling of human semantic intent and machine syntactic processing (a co-evolutionary dynamic).
Conceptual theory introduced and argued in the paper (mechanism-level proposition); formalization provided but no empirical validation.
high positive Governing Human–AI Co-Evolution: Intelligentization Capabili... competitive_differentiation/innovation_output
Conceptualizing the enterprise as a complex adaptive system operating far from thermodynamic equilibrium provides a more appropriate framing for organizations integrating AI and enables the theory of dynamic cognitive advantage.
Theoretical development and conceptual argumentation within the paper; formal framing rather than empirical test; no sample reported.
high positive Governing Human–AI Co-Evolution: Intelligentization Capabili... competitive_differentiation/innovation_output
We propose a multi-agent discussion framework wherein specialized agents collaboratively process extensive product information, distributing cognitive load to alleviate single-agent attention bottlenecks and capturing critical decision factors through structured dialogue.
Method description: multi-agent discussion architecture described and implemented; claimed to distribute cognitive load and reduce single-agent attention bottlenecks (design + reported behavior).
high positive MALLES: A Multi-agent LLMs-based Economic Sandbox with Consu... reduction of single-agent attention bottlenecks / distributed processing of prod...
To enhance simulation stability, we implement a mean-field mechanism designed to model the dynamic interactions between the product environment and customer populations, effectively stabilizing sampling processes within high-dimensional decision spaces.
Method description: implementation of a mean-field mechanism within the simulator; paper asserts this design stabilizes sampling in high-dimensional decision spaces (method + reported simulation behavior).
high positive MALLES: A Multi-agent LLMs-based Economic Sandbox with Consu... simulation stability / stabilized sampling processes
We introduce a preference learning paradigm in which LLMs are economically aligned via post-training on extensive, heterogeneous transaction records across diverse product categories.
Method description: post-training LLMs on heterogeneous transaction records across product categories to align preferences (methodological / training procedure described).
high positive MALLES: A Multi-agent LLMs-based Economic Sandbox with Consu... ability of models to internalize consumer preferences via post-training
This paper introduces a Multi-Agent Large Language Model-based Economic Sandbox (MALLES) as a unified simulation framework applicable to cross-domain and cross-category scenarios.
Paper description: design and implementation of MALLES, presented as a unified framework leveraging large-scale LLM generalization for cross-domain/cross-category simulation (methodological contribution).
high positive MALLES: A Multi-agent LLMs-based Economic Sandbox with Consu... existence and applicability of MALLES as a unified simulation framework
SOL-ExecBench reframes GPU kernel benchmarking from beating a mutable software baseline to closing the remaining gap to hardware Speed-of-Light.
Conceptual/positioning claim made by the authors about the intended shift in benchmarking perspective enabled by SOL-ExecBench.
high positive SOL-ExecBench: Speed-of-Light Benchmarking for Real-World GP... benchmarking_objective_shift_toward_hardware_efficiency
To support robust evaluation of agentic optimizers, we provide a sandboxed harness with GPU clock locking, L2 cache clearing, isolated subprocess execution, and static analysis-based checks against common reward-hacking strategies.
Method/tool claim in paper describing the provided evaluation harness and its engineered controls (list of features included).
high positive SOL-ExecBench: Speed-of-Light Benchmarking for Real-World GP... evaluation_robustness_and_integrity_of_benchmarking
We report a SOL Score that quantifies how much of the gap between a release-defined scoring baseline and the hardware SOL bound a candidate kernel closes.
Paper defines the SOL Score metric and states its interpretive meaning (fraction of gap closed between baseline and hardware SOL bound).
high positive SOL-ExecBench: Speed-of-Light Benchmarking for Real-World GP... fraction_of_gap_closed_to_hardware_bound
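One plausible formulation of such a gap-closure metric is sketched below; the exact definition used by SOL-ExecBench may differ.

```python
def sol_score(t_baseline, t_candidate, t_sol):
    """Fraction of the gap between a scoring baseline and the hardware
    Speed-of-Light bound that a candidate kernel closes (times in ms).
    Hypothetical formulation, not necessarily the benchmark's exact metric."""
    assert t_sol <= t_baseline, "the SOL bound is the fastest achievable time"
    gap = t_baseline - t_sol          # total improvement available
    closed = t_baseline - t_candidate # improvement actually delivered
    return closed / gap

# Baseline kernel: 10 ms, hardware SOL bound: 4 ms, candidate: 5.5 ms
score = sol_score(10.0, 5.5, 4.0)  # candidate closes 4.5 of the 6 ms gap
```

Unlike a speedup over a mutable software baseline, this score is anchored to a fixed hardware bound: 1.0 means the kernel runs at Speed-of-Light, regardless of how the baseline evolves.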
SOL-ExecBench measures performance against analytically derived Speed-of-Light (SOL) bounds computed by SOLAR, our pipeline for deriving hardware-grounded SOL bounds, yielding a fixed target for hardware-efficient optimization.
Methodological claim: introduction of SOLAR pipeline to compute analytic hardware-grounded SOL bounds and use of those bounds as benchmark targets, as described in the paper.
high positive SOL-ExecBench: Speed-of-Light Benchmarking for Real-World GP... proximity_to_hardware_speed_of_light_bounds
The benchmark covers forward and backward workloads across BF16, FP8, and NVFP4, including kernels whose best performance is expected to rely on Blackwell-specific capabilities.
Paper description of benchmark coverage (workload direction and data types; inclusion of kernels tied to Blackwell hardware features).
high positive SOL-ExecBench: Speed-of-Light Benchmarking for Real-World GP... coverage_of_workloads_and_datatypes
We present SOL-ExecBench, a benchmark of 235 CUDA kernel optimization problems extracted from 124 production and emerging AI models spanning language, diffusion, vision, audio, video, and hybrid architectures, targeting NVIDIA Blackwell GPUs.
Paper reports construction of the benchmark with counts: 235 CUDA kernel problems and 124 source models; descriptive dataset claim in the manuscript.
high positive SOL-ExecBench: Speed-of-Light Benchmarking for Real-World GP... benchmark_problem_count_and_coverage
End-to-end verified pipelines can produce provably correct code from informal specifications.
The paper surveys early research demonstrating pipelines that go from informal specifications to formally verified code; the provided text does not include experimental sample sizes or benchmarks.
high positive Intent Formalization: A Grand Challenge for Reliable Coding ... provable correctness of generated code
AI-generated postconditions catch real-world bugs missed by prior methods.
Surveyed early research asserted by the paper indicating empirical instances where AI-generated postconditions found bugs that other methods missed; no numeric details provided in the excerpt.
high positive Intent Formalization: A Grand Challenge for Reliable Coding ... bugs detected / error detection rate
Interactive test-driven formalization improves program correctness.
Paper surveys early research that reportedly demonstrates this effect (described as 'interactive test-driven formalization that improves program correctness'); the excerpt does not include specific study details or sample sizes.
The central bottleneck is validating specifications: since there is no oracle for specification correctness other than the user, we need semi-automated metrics that can assess specification quality with or without code, through lightweight user interaction and proxy artifacts such as tests.
Analytical claim and research agenda item in the paper; motivates need for new metrics and interaction designs. No empirical validation or sample size reported in the excerpt.
high positive Intent Formalization: A Grand Challenge for Reliable Coding ... ability to validate specification correctness / specification quality
Intent formalization offers a tradeoff spectrum suitable to the reliability needs of different contexts: from lightweight tests that disambiguate likely misinterpretations, through full functional specifications for formal verification, to domain-specific languages from which correct code is synthesized automatically.
Conceptual framework proposed in the paper describing a spectrum of specification formality; presented as an argument rather than an empirical finding, with no sample sizes provided in the excerpt.
high positive Intent Formalization: A Grand Challenge for Reliable Coding ... suitability of specification approaches for reliability requirements
Intent formalization — translating informal user intent into checkable formal specifications — is the key challenge that will determine whether AI makes software more reliable or merely more abundant.
Normative argument presented by the authors as the central thesis of the paper; no empirical study or sample size cited in the provided text.
high positive Intent Formalization: A Grand Challenge for Reliable Coding ... software reliability (correctness relative to user intent)
Agentic AI systems can now generate code with remarkable fluency.
Authoritative assertion in the paper based on contemporary observations of large code-generating models; no empirical sample size or benchmark numbers reported in the text provided.
high positive Intent Formalization: A Grand Challenge for Reliable Coding ... code generation fluency / ability to produce code
This paper employs large language models to conduct semantic analysis on the text of annual reports from Chinese A-share listed companies from 2006 to 2024.
Methodological statement in the abstract describing use of LLM-based semantic analysis on annual report texts spanning 2006–2024.
high positive The Spillover Effects of Peer AI Rinsing on Corporate Green ... methodological approach (use of LLMs for semantic analysis)
The paper recommends that the government design targeted support tools to 'enhance market returns and alleviate financing constraints', adopt a differentiated regulatory strategy, and establish a disclosure mechanism combining 'professional identification and reputational sanctions' to curb peer AI washing behaviour.
Policy prescriptions derived from empirical findings and simulation results reported in the paper; presented as recommendations in the abstract.
high positive The Spillover Effects of Peer AI Rinsing on Corporate Green ... effectiveness of policy interventions in curbing AI washing and supporting green...
Simulation results indicate that a combination of policy tools can effectively improve market equilibrium (mitigating the negative effects of AI washing).
Simulation exercises reported in the paper (model specification not provided in abstract) testing policy tool combinations and their effects on market equilibrium.
high positive The Spillover Effects of Peer AI Rinsing on Corporate Green ... market equilibrium (improvement in market outcomes related to AI washing and gre...
The study draws policy implications for promoting high-quality development from the finding that innovation and the digital economy now play larger roles in growth.
Authors' discussion/conclusion drawing policy implications from empirical findings (declining capital elasticity, rising TFP and digital economy contribution).
high positive Analysis of China's Economic Growth Drivers: An Empirical St... policy implication for promoting high-quality development
Overall, China's growth model shifted over 2010–2022 from being investment-driven to being innovation-driven.
Synthesis of results: declining capital elasticity, rising TFP contribution, substantial share of digital economy in TFP, and regional patterns reported by the study.
high positive Analysis of China's Economic Growth Drivers: An Empirical St... structural shift in the growth model (investment-driven → innovation-driven)
The study's method is novel because it uses both migrant worker monitoring data and digital-economy proxy indicators, giving a more accurate picture of how labor quality and technological progress affect each other.
Author-reported methodological description: extended Cobb–Douglas approach combined with quality-adjusted labor measures derived from migrant worker monitoring data and proxy indicators for the digital economy.
high positive Analysis of China's Economic Growth Drivers: An Empirical St... measurement accuracy of labor quality and technology interaction (methodological...
Regional analysis shows that growth in coastal regions has been innovation-driven, with an estimated innovation coefficient of approximately 0.31.
Regional decomposition/estimation reported in the paper's analysis of coastal vs inland regions using the extended production function and digital/labour-quality measures.
high positive Analysis of China's Economic Growth Drivers: An Empirical St... innovation-related elasticity/coefficient in coastal regions (≈0.31)
The digital economy accounted for 40% of the observed increase in TFP (i.e., 40% of the TFP contribution to growth).
Attribution within the growth decomposition from the extended production function, where digital economy indicators are included and their contribution to TFP is estimated.
high positive Analysis of China's Economic Growth Drivers: An Empirical St... share of TFP contribution attributable to the digital economy
The contribution rate of total factor productivity (TFP) rose from 18% to 26% between the earlier and later periods.
Decomposition of growth using the extended Cobb–Douglas production function for China over 2010–2022, reporting TFP contribution rates for the two periods.
high positive Analysis of China's Economic Growth Drivers: An Empirical St... TFP contribution rate to economic growth
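The underlying growth-accounting arithmetic can be sketched as follows. The elasticities and growth rates are hypothetical numbers chosen to reproduce a 26% contribution rate like the later period's, not the study's estimates.

```python
# Growth accounting with a Cobb-Douglas production function:
#   g_Y = alpha * g_K + beta * g_L + g_TFP
def tfp_contribution_rate(g_Y, g_K, g_L, alpha, beta):
    g_TFP = g_Y - alpha * g_K - beta * g_L  # Solow residual
    return g_TFP / g_Y                      # share of output growth due to TFP

# Hypothetical later-period figures: 5% output growth, 7% capital growth,
# 0.5% quality-adjusted labor growth, elasticities 0.5 (capital) and 0.4 (labor)
rate = tfp_contribution_rate(0.05, 0.07, 0.005, 0.5, 0.4)  # about 0.26
```

A rising contribution rate over time, as the study reports, means a growing share of output growth is left unexplained by factor accumulation and is instead attributed to productivity.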
The paper proposes design principles for effective, accountable, and adaptive sandboxes to contribute to debates on experimentalism in AI governance.
Stated contribution of the paper (descriptive claim about content; abstract does not list the principles or empirical testing).
high positive Experimentalism beyond ex ante regulation: A law and economi... existence and articulation of design principles for RSs
Regulatory sandboxes (RSs) have emerged as a potential solution to AI regulatory challenges.
Descriptive observation and normative framing within the paper; contextual reference to the EU AI Act's treatment of sandboxes (no empirical sample reported in the abstract).
high positive Experimentalism beyond ex ante regulation: A law and economi... adoption/emergence of RSs as a governance mechanism for AI