The Commonplace

Evidence (4333 claims)

Adoption: 5539 claims
Productivity: 4793 claims
Governance: 4333 claims
Human-AI Collaboration: 3326 claims
Labor Markets: 2657 claims
Innovation: 2510 claims
Org Design: 2469 claims
Skills & Training: 2017 claims
Inequality: 1378 claims

Evidence Matrix

Claim counts by outcome category and direction of finding.

Outcome Positive Negative Mixed Null Total
Other 402 112 67 480 1076
Governance & Regulation 402 192 122 62 790
Research Productivity 249 98 34 311 697
Organizational Efficiency 395 95 70 40 603
Technology Adoption Rate 321 126 73 39 564
Firm Productivity 306 39 70 12 432
Output Quality 256 66 25 28 375
AI Safety & Ethics 116 177 44 24 363
Market Structure 107 128 85 14 339
Decision Quality 177 76 38 20 315
Fiscal & Macroeconomic 89 58 33 22 209
Employment Level 77 34 80 9 202
Skill Acquisition 92 33 40 9 174
Innovation Output 120 12 23 12 168
Firm Revenue 98 34 22 154
Consumer Welfare 73 31 37 7 148
Task Allocation 84 16 33 7 140
Inequality Measures 25 77 32 5 139
Regulatory Compliance 54 63 13 3 133
Error Rate 44 51 6 101
Task Completion Time 88 5 4 3 100
Training Effectiveness 58 12 12 16 99
Worker Satisfaction 47 32 11 7 97
Wages & Compensation 53 15 20 5 93
Team Performance 47 12 15 7 82
Automation Exposure 24 22 9 6 62
Job Displacement 6 38 13 57
Hiring & Recruitment 41 4 6 3 54
Developer Productivity 34 4 3 1 42
Social Protection 22 10 6 2 40
Creative Output 16 7 5 1 29
Labor Share of Income 12 5 9 26
Skill Obsolescence 3 20 2 25
Worker Turnover 10 12 3 25
Active filter: Governance
Measurable ROI from GenAI on Azure is mainly driven by improvements in productivity, optimization of operational costs, faster decision making, and increased speed of innovation across business functions.
Reported results from the paper's mixed-methods study, which combines quantitative ROI modelling and cost–benefit analysis with a qualitative synthesis of secondary enterprise case studies.
high positive Measuring Business ROI of Generative AI Adoption on Azure Cl... business Return on Investment (ROI) driven by productivity, cost optimization, d...
Microsoft Azure has become one of the first enterprise-scale platforms facilitating GenAI-driven change.
Statement in the paper's abstract asserting Azure's market position as an early enterprise-scale platform for GenAI.
high positive Measuring Business ROI of Generative AI Adoption on Azure Cl... enterprise-scale platform adoption
This synthesis bridges the gap between values and practice, offering a policy-ready model for secure and sustainable AI governance.
Authors' concluding claim that their integrated governance risk framework and risk-tiering matrix operationalize ethical principles into auditable technical controls and are policy-ready.
high positive AI Governance Risk Tiering for Sustainable Digital Infrastru... policy-readiness and practical applicability of the proposed model
The study aligns its integrated risk-tiering model with Sustainable Development Goal 9 on industry, innovation and infrastructure.
Authors state that the developed integrated risk-tiering model is aligned with SDG 9 as part of the study framing and intended policy relevance.
high positive AI Governance Risk Tiering for Sustainable Digital Infrastru... conceptual alignment of the model with SDG 9
The analysis produced a heat map of governance frameworks, a co-occurrence network of themes, a cluster analysis of framework coverage and an integrated governance risk framework supported by a risk-tiering matrix.
Authors report specific analytical outputs (heat map, co-occurrence network, cluster analysis) and that they developed an integrated governance risk framework with a risk-tiering matrix based on their analysis.
high positive AI Governance Risk Tiering for Sustainable Digital Infrastru... analytical outputs and resultant governance model
Through a comparative analysis of Pax Romana, Pax Britannica, Pax Americana, and the emerging U.S. techno-security architecture, the article demonstrates continuity in the logic of hegemonic control centered on infrastructures.
Comparative historical analysis of four hegemonic/regime examples as described in the paper; methodological approach is comparative and qualitative (no quantitative sample size given).
high positive The Logistics of Hegemony: Semiconductor Chokepoints, Global... continuity of hegemonic logic across historical regimes
Hegemonic orders can be conceptualized as historically specific logistical regimes — the material basis of hegemony evolves but the underlying logic remains constant: control over the infrastructures that organize global circulation.
Conceptual claim grounded in synthesis of structural power theory, global value chain analysis, and infrastructure studies and illustrated through comparative historical examples (Pax Romana, Pax Britannica, Pax Americana, emerging U.S. techno-security architecture).
high positive The Logistics of Hegemony: Semiconductor Chokepoints, Global... persistence of strategic logic (control over infrastructures) across historical ...
The article develops a theoretical framework of logistical hegemony to explain how infrastructures, chokepoints, and global production networks structure the exercise of power in the world economy.
Primary claim of the paper: theoretical development drawing on structural power theory, global value chain analysis, and infrastructure studies; conceptual/theoretical argumentation rather than empirical sample-based evidence.
high positive The Logistics of Hegemony: Semiconductor Chokepoints, Global... control over infrastructures and organization of global circulation
The specification provides mechanisms for interoperability between institutions.
Design claim in the specification describing mechanisms enabling institutional interoperability.
high positive Agent Control Protocol: Admission Control for Agent Actions mechanisms enabling interoperability between institutions
ACP operates as an additional layer on top of RBAC and Zero Trust, without replacing them.
Design statement in the specification describing ACP's relationship to existing RBAC and Zero Trust architectures.
high positive Agent Control Protocol: Admission Control for Agent Actions operational layering/interoperability with RBAC and Zero Trust
ACP defines the mechanisms of cryptographic identity, capability-based authorization, deterministic risk evaluation, verifiable chained delegation, transitive revocation, and immutable auditing that a system must implement for autonomous agents to operate under explicit institutional control.
List of mechanisms and required features presented in the specification text.
high positive Agent Control Protocol: Admission Control for Agent Actions presence and definition of specified security/governance mechanisms (cryptograph...
ACP is the admission control layer between agent intent and system state mutation: before any agent action reaches execution, it must pass a cryptographic admission check that validates identity, capability scope, delegation chain, and policy compliance simultaneously.
Explicit behavioural/design claim in the specification text describing the admission-control role and the checks performed prior to action execution.
high positive Agent Control Protocol: Admission Control for Agent Actions cryptographic admission check validating identity, capability scope, delegation ...
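The admission-check logic the specification describes can be sketched in miniature. This is an illustrative toy, not the ACP specification's actual API: real implementations would use cryptographic signatures rather than bare hashes, and the names `Capability`, `identity_of`, and `admit` are hypothetical.

```python
# Toy sketch of an admission check (hypothetical API, not the ACP spec):
# validate scope and delegation chain before an action reaches execution.
import hashlib
from dataclasses import dataclass

@dataclass(frozen=True)
class Capability:
    agent_id: str                       # stand-in for cryptographic identity
    scope: frozenset                    # actions this capability authorizes
    parent: "Capability | None" = None  # link in the delegation chain

def identity_of(secret: str) -> str:
    """Stand-in for a cryptographic identity (real systems use signatures)."""
    return hashlib.sha256(secret.encode()).hexdigest()

def admit(cap: Capability, action: str, revoked: set) -> bool:
    """Admission check: the action must be in scope at every link of the
    delegation chain, and no ancestor may be revoked (transitive revocation)."""
    node = cap
    while node is not None:
        if node.agent_id in revoked:    # revoking an ancestor cuts the chain
            return False
        if action not in node.scope:    # scope may only narrow, never widen
            return False
        node = node.parent
    return True

root = Capability(identity_of("org-root"), frozenset({"read", "write"}))
delegated = Capability(identity_of("agent-7"), frozenset({"read"}), parent=root)

print(admit(delegated, "read", revoked=set()))    # True: in scope along chain
print(admit(delegated, "write", revoked=set()))   # False: scope was narrowed
print(admit(delegated, "read", {root.agent_id}))  # False: transitive revocation
```

A production check would additionally verify signatures, policy compliance, and an append-only audit log, per the mechanisms the specification lists.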
ACP is a formal technical specification for governance of autonomous agents in B2B institutional environments.
Stated in the v1.13 specification header/abstract and repository description (specification text and repository link provided).
high positive Agent Control Protocol: Admission Control for Agent Actions governance of autonomous agents in B2B institutional environments
In the long term, big data promotes sustained improvements in individuals’ welfare.
Theoretical long-run growth analysis in the model showing that sustained data sharing leads to long-run welfare improvements (analytic/model-based, no empirical/sample data).
high positive Study on the impact of big data sharing on individuals’ welf... long-term growth of individuals' welfare
There exists an optimal level of data (big data) sharing that achieves the best balance between economic development and privacy, thereby maximizing individuals' welfare.
Analytical optimization within the theoretical macro model: model yields an interior optimum for data-sharing intensity that trades off economic gains and privacy costs (derivation/analytical result; no empirical test).
high positive Study on the impact of big data sharing on individuals’ welf... individuals' welfare maximization via optimal data-sharing level
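A stylized sketch of how such an interior optimum can arise (this is an illustrative form, not the paper's actual model): welfare combines a concave economic gain from sharing with a convex privacy cost, and the first-order condition pins down the optimal sharing intensity.

```latex
% Stylized welfare trade-off in sharing intensity s \in [0, 1]:
W(s) = g(s) - c(s), \qquad g' > 0,\ g'' < 0,\ c' > 0,\ c'' > 0
% First-order condition for the interior optimum s^*:
W'(s^*) = 0 \iff g'(s^*) = c'(s^*)
% At s^*, the marginal economic gain from sharing exactly offsets
% the marginal privacy cost.
```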
The Institutional Scaling Law predicts that the next phase transition will be driven not by larger models but by better-orchestrated systems of domain-specific models adapted to specific institutional niches.
Predictive conclusion derived from the Institutional Scaling Law and theoretical analysis in the paper. No empirical validation or sample size reported in the excerpt.
high positive The Institutional Scaling Law: Non-Monotonic Fitness, Capabi... drivers of the next phase transition in AI (orchestration of domain-specific sys...
A Symbiogenetic Scaling correction demonstrates that orchestrated systems of domain-specific models can outperform frontier generalists in their native deployment environments.
Theoretical correction/derivation and comparative analysis within the paper (no empirical sample or quantitative benchmark reported in the excerpt).
high positive The Institutional Scaling Law: Non-Monotonic Fitness, Capabi... performance of orchestrated domain-specific model systems versus frontier genera...
A mixed-methods empirical research agenda is presented, proposing a future PLS-SEM approach to test the mediating role of the cognitive flywheel and the moderating effect of fractal governance on organizational resilience.
Methodological proposal described in the paper (research design and proposed analytic approach); no executed empirical study or sample reported.
high positive Governing Human–AI Co-Evolution: Intelligentization Capabili... organizational_resilience (as mediator/moderator relationships to be tested)
Fractal governance architecture is proposed to mitigate systemic vulnerabilities such as automation bias.
Conceptual proposal of a governance design in the paper; no empirical test or sample provided.
high positive Governing Human–AI Co-Evolution: Intelligentization Capabili... reduction_in_automation_bias / improvement_in_decision_quality
The cognitive flywheel is the central mechanism of this dynamic capability and can be operationalized (the paper operationalizes the cognitive flywheel).
Theoretical operationalization within the paper (concept definition and proposed operational measures); no empirical measurement or sample reported.
high positive Governing Human–AI Co-Evolution: Intelligentization Capabili... mechanism_operationalization (cognitive_flywheel)
The co-evolutionary dynamic is formalized using coupled non-linear differential equations and time decay integrals.
Mathematical formalization reported in the paper (modeling methods described); no empirical parameter estimation or sample provided.
high positive Governing Human–AI Co-Evolution: Intelligentization Capabili... existence_of_mathematical_model/formal_framework
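One hypothetical shape such a formalization could take (the paper's actual equations are not given in the excerpt): coupled non-linear ODEs for human capability H and machine capability M, with a time-decay integral weighting past interaction.

```latex
% Illustrative coupled dynamics (hypothetical, for intuition only):
\frac{dH}{dt} = \alpha H M - \delta_H H, \qquad
\frac{dM}{dt} = \beta M \int_0^t e^{-\lambda (t-\tau)} H(\tau)\, d\tau - \delta_M M
% The kernel e^{-\lambda(t-\tau)} discounts older human input, making the
% coupling historical and recursive rather than instantaneous.
```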
Dynamic cognitive advantage arises from the historical, recursive, structural coupling of human semantic intent and machine syntactic processing (a co-evolutionary dynamic).
Conceptual theory introduced and argued in the paper (mechanism-level proposition); formalization provided but no empirical validation.
high positive Governing Human–AI Co-Evolution: Intelligentization Capabili... competitive_differentiation/innovation_output
Conceptualizing the enterprise as a complex adaptive system operating far from thermodynamic equilibrium provides a more appropriate framing for organizations integrating AI and enables the theory of dynamic cognitive advantage.
Theoretical development and conceptual argumentation within the paper; formal framing rather than empirical test; no sample reported.
high positive Governing Human–AI Co-Evolution: Intelligentization Capabili... competitive_differentiation/innovation_output
Artificial intelligence generates positive spatial spillovers for urban carbon-emission efficiency (UCEE), i.e., positive effects on neighboring regions.
Spatial Durbin model reported in the abstract indicating positive spillover coefficients for artificial intelligence.
high positive How artificial intelligence and environmental regulation inf... UCEE index (spatial spillover effect of AI)
The Global Malmquist–Luenberger (GML) index and its efficiency change (EC) and technological change (TC) components stay above 1, indicating sustained efficiency gains dominated by technological progress.
GML index and decomposition results reported in the abstract based on the panel data and GML computation.
high positive How artificial intelligence and environmental regulation inf... GML index and its EC and TC components (measures of productivity/efficiency chan...
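For reference, the Global Malmquist–Luenberger index decomposes multiplicatively into its efficiency-change and technological-change components, so all three staying above 1 is exactly the pattern the abstract describes:

```latex
% Standard GML decomposition between periods t and t+1:
GML^{t,t+1} = EC^{t,t+1} \times TC^{t,t+1}
% GML > 1: productivity growth; EC > 1: catching up to the frontier;
% TC > 1: the frontier itself shifts outward (technological progress).
```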
Nationally, the average UCEE index rises from about 0.3 to above 0.7 over the sample period.
Computed UCEE index results from the Super-SBM model applied to the panel of 30 provinces (2013–2022) as reported in the abstract.
high positive How artificial intelligence and environmental regulation inf... UCEE index (average, national)
Recent advances in large language models, tool-using agents, and financial machine learning are shifting financial automation from isolated prediction tasks to integrated decision systems that can perceive information, reason over objectives, and generate or execute actions.
Literature synthesis and conceptual statement in the paper's introduction describing recent technological advances and their effects on financial automation; no empirical sample size reported.
high positive AI Agents in Financial Markets: Architecture, Applications, ... shift in type of financial automation (from isolated prediction to integrated de...
Given these findings, policymakers should favor 'strategic forbearance'—apply existing laws rather than create new regulations that could stifle innovation and diffusion of AI.
Authors' normative policy recommendation based on their interpretation of the reviewed empirical literature (risk–benefit assessment); this is a prescriptive conclusion rather than an empirical finding, so no sample size applies.
high positive AI, Productivity, and Labor Markets: A Review of the Empiric... regulatory approach to AI governance (strategy of forbearance vs. new regulation...
Generative AI lowers entry costs for startups, facilitating new firm entry and product development.
Cited empirical and descriptive evidence in the literature review indicating reduced development costs and faster product prototyping enabled by AI tools; the brief does not provide a pooled sample size or a single quantitative estimate.
high positive AI, Productivity, and Labor Markets: A Review of the Empiric... barriers to entry / startup costs and rate of new product development
Generative AI significantly boosts productivity in specific tasks like coding, writing, and customer service—often by 15% to 50%.
Synthesis/review of empirical literature through 2025 (multiple empirical studies of task-level impacts, including field and lab studies and observational analyses); the brief reports aggregate reported effect ranges but does not list a single pooled sample size.
high positive AI, Productivity, and Labor Markets: A Review of the Empiric... task-level productivity in coding, writing, and customer service
The study contributes to theory by empirically integrating technological, human, and institutional dimensions within a single architectural framework, moving beyond isolated analyses of digital credit.
Author-stated contribution based on combining measures of algorithmic credit systems, human capability, and institutional design and testing interactions in the same regression models.
high positive Architecting financial well-being in algorithmic credit syst... theoretical contribution / integrative framework
Moderation analysis reveals that higher levels of human capability and stronger institutional design amplify the positive effects of algorithmic credit systems and mitigate their adverse effects (i.e., they strengthen repayment and resilience effects and reduce financial stress).
Reported moderation analyses using interaction terms in the regression models on the 400-user cross-sectional sample; results described as significant moderation by human capability and institutional design.
high positive Architecting financial well-being in algorithmic credit syst... conditional effects on repayment behavior, financial resilience, and financial s...
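The moderation analysis described here amounts to a regression with interaction terms. A minimal sketch on synthetic data (not the study's data; variable names and the coefficient values are invented for illustration), using the same n = 400 the study reports:

```python
# Moderation via an interaction term, recovered with ordinary least squares
# on synthetic data (hypothetical illustration, not the study's dataset).
import numpy as np

rng = np.random.default_rng(0)
n = 400                                 # sample size the study reports
acs = rng.normal(size=n)                # algorithmic credit system use
cap = rng.normal(size=n)                # human capability (moderator)
# True data-generating process: the effect of acs on resilience grows
# with cap (positive interaction coefficient 0.5).
y = 1.0 + 0.8 * acs + 0.3 * cap + 0.5 * acs * cap \
    + rng.normal(scale=0.5, size=n)

X = np.column_stack([np.ones(n), acs, cap, acs * cap])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta.round(2))   # close to the true coefficients [1.0, 0.8, 0.3, 0.5]
```

A significantly positive interaction coefficient (the last entry of `beta`) is what "human capability amplifies the positive effects" means operationally.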
Algorithmic credit systems are positively associated with financial resilience.
Regression analyses reported show a positive relationship between algorithmic credit system use and measures of financial resilience in the sample of 400 users.
Algorithmic credit systems are positively associated with repayment behavior.
Multiple regression results reported in the study indicate a positive association between use of algorithmic credit systems and repayment behavior based on cross-sectional survey of 400 users.
Measurement reliability and validity were established through Cronbach's alpha and principal component analysis.
Paper states that Cronbach’s alpha and principal component analysis (PCA) were used to establish measurement reliability and validity.
high positive Architecting financial well-being in algorithmic credit syst... measurement reliability/validity
The study used a quantitative, explanatory, cross-sectional design and employed multiple regression and moderation analyses to assess relationships among algorithmic credit systems, human capability, institutional design, and financial-wellbeing outcomes.
Methods described explicitly: quantitative explanatory cross-sectional design; analytical methods named as multiple regression and moderation analyses.
high positive Architecting financial well-being in algorithmic credit syst... research design / analytic methods
Data were collected from 400 users of algorithmic and digitally mediated credit platforms.
Study reports a quantitative, explanatory, cross-sectional survey of users; sample size explicitly stated as 400.
high positive Architecting financial well-being in algorithmic credit syst... sample_size / data source
Institutional design (enforceable rules, auditable logs, human oversight on high-impact actions) is a precondition for safe delegation of real authority to LLM agents; systems should be stress-tested under governance-like constraints before assignment of real authority.
Policy recommendation derived from simulation findings that governance structure strongly influences corruption-related outcomes and that safeguards alone are not consistently sufficient; grounded in experiments and rubric-assessed outcomes across 28,112 transcript segments.
high positive I Can't Believe It's Corrupt: Evaluating Corruption in Multi... safety of delegation to LLM agents (compliance with rules, avoidance of abuse)
Among models operating below saturation, governance structure is a stronger driver of corruption-related outcomes than model identity.
Comparative analysis within the multi-agent governance simulations across different authority structures and model identities; outcomes aggregated and compared across regimes (based on the 28,112 transcript segments scored).
high positive I Can't Believe It's Corrupt: Evaluating Corruption in Multi... corruption-related outcomes / rule-breaking
Integrity in institutional AI should be treated as a pre-deployment requirement rather than a post-deployment assumption.
Argument and recommendation based on results from multi-agent governance simulations evaluating rule-breaking and abuse; conclusions drawn from aggregate outcomes across simulated regimes and interventions (see study of 28,112 transcript segments).
high positive I Can't Believe It's Corrupt: Evaluating Corruption in Multi... institutional integrity / safety of delegation to LLM agents
LLM-generated peer reviews assign scores that, on average, are a full point higher than human reviews.
Analysis of scores in the conference peer review dataset comparing LLM-generated vs human reviews; the excerpt states an average increase of one full point but does not include sample size or scale range.
high positive How LLMs Distort Our Written Language assigned review scores
About 21% of scientific peer reviews at a recent top AI conference were AI-generated (LLM-generated) in the wild.
Analysis of peer reviews from a recent top AI conference reported in the paper; the excerpt reports the 21% figure but does not give total number of reviews in the excerpt.
high positive How LLMs Distort Our Written Language share/proportion of peer reviews that were AI-generated
Even when LLMs are prompted with expert feedback and asked to only make grammar edits, they still change the text in a way that significantly alters its semantic meaning.
Experiment in which LLMs were given expert feedback and explicit instructions to perform only grammar edits; comparisons show significant semantic alteration despite constrained instructions; sample size not provided.
high positive How LLMs Distort Our Written Language semantic alteration of text despite constrained grammar-only prompt
Using a dataset of human-written essays (collected in 2021 before widespread LLM release), asking an LLM to revise essays based on human-written feedback induces large changes in the resulting content and meaning.
Controlled experiments applying LLM revision to a pre-LLM essay dataset and comparing pre- and post-revision content/semantics; dataset described as collected in 2021 but sample size not stated in the excerpt.
high positive How LLMs Distort Our Written Language magnitude of content and semantic changes after LLM revision
In a human user study, extensive LLM use led to a nearly 70% increase in essays that remained neutral in answering the topic question.
Human user study reported in the paper; the excerpt gives the quantified result (nearly 70% increase) but does not report sample size here.
high positive How LLMs Distort Our Written Language proportion of essays judged as neutral in answering the topic question
LLMs consistently alter the intended meaning of human writing.
Experiments in which human-written essays were revised by LLMs (including prompts asking only for grammar edits) and comparison of pre- and post-LLM text semantics; exact sample sizes not stated in the excerpt.
high positive How LLMs Distort Our Written Language degree of semantic change / alteration of intended meaning
LLMs alter the voice and tone of human writing.
Reported results from a human user study and subsequent experiments comparing original human-written text to LLM-assisted/LLM-revised text; sample sizes not provided in the excerpt.
high positive How LLMs Distort Our Written Language change in voice and tone of writing
Large language models (LLMs) are used by over a billion people globally, most often to assist with writing.
Statement in paper (likely based on external usage statistics or surveys cited by authors); no sample size reported in the provided text.
high positive How LLMs Distort Our Written Language LLM adoption and primary use case (writing assistance)
The code and data used in the study are publicly available at the referenced repository.
Paper statement that code and data are publicly available at a repository (link provided in paper).
high positive Unmasking Algorithmic Bias in Predictive Policing: A GAN-Bas... availability of replication materials (code and data)
A sensitivity analysis over patrol radius, officer count, and citizen reporting probability reveals outcomes are most sensitive to officer deployment levels.
Reported sensitivity analysis across patrol radius, officer count, and reporting probability showing officer count as the most influential parameter in the simulation outcomes.
high positive Unmasking Algorithmic Bias in Predictive Policing: A GAN-Bas... sensitivity of bias/detection outcomes to simulation parameters (patrol radius, ...
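A one-at-a-time sensitivity analysis of the kind described can be sketched with a toy model (hypothetical stand-in, not the paper's simulation; `detections` and its coefficients are invented): vary each parameter by ±50% and compare how far the outcome swings.

```python
# One-at-a-time sensitivity sweep over a toy detection model
# (illustrative only; not the paper's agent-based simulation).
def detections(patrol_radius, officers, report_prob):
    """Toy stand-in for the simulation's detection outcome."""
    return officers * (1 + 0.1 * patrol_radius) * (0.5 + 0.5 * report_prob)

baseline = dict(patrol_radius=2.0, officers=10, report_prob=0.5)

sensitivity = {}
for name in baseline:
    lo, hi = dict(baseline), dict(baseline)
    lo[name] *= 0.5
    hi[name] *= 1.5
    # relative swing of the outcome when this parameter varies +/-50%
    sensitivity[name] = (detections(**hi) - detections(**lo)) / detections(**baseline)

most_influential = max(sensitivity, key=sensitivity.get)
print(most_influential)   # prints "officers" in this toy model
```

In this toy the outcome is linear in officer count, so officer deployment dominates the sweep, mirroring the paper's reported finding.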