The Commonplace

Evidence (4049 claims)

Adoption (5126 claims)
Productivity (4409 claims)
Governance (4049 claims)
Human-AI Collaboration (2954 claims)
Labor Markets (2432 claims)
Org Design (2273 claims)
Innovation (2215 claims)
Skills & Training (1902 claims)
Inequality (1286 claims)

Evidence Matrix

Claim counts by outcome category and direction of finding.

Outcome Positive Negative Mixed Null Total
Other 369 105 58 432 972
Governance & Regulation 365 171 113 54 713
Research Productivity 229 95 33 294 655
Organizational Efficiency 354 82 58 34 531
Technology Adoption Rate 277 115 63 27 486
Firm Productivity 273 33 68 10 389
AI Safety & Ethics 112 177 43 24 358
Output Quality 228 61 23 25 337
Market Structure 105 118 81 14 323
Decision Quality 154 68 33 17 275
Employment Level 68 32 74 8 184
Fiscal & Macroeconomic 74 52 32 21 183
Skill Acquisition 85 31 38 9 163
Firm Revenue 96 30 22 148
Innovation Output 100 11 20 11 143
Consumer Welfare 66 29 35 7 137
Regulatory Compliance 51 61 13 3 128
Inequality Measures 24 66 31 4 125
Task Allocation 64 6 28 6 104
Error Rate 42 47 6 95
Training Effectiveness 55 12 10 16 93
Worker Satisfaction 42 32 11 6 91
Task Completion Time 71 5 3 1 80
Wages & Compensation 38 13 19 4 74
Team Performance 41 8 15 7 72
Hiring & Recruitment 39 4 6 3 52
Automation Exposure 17 15 9 5 46
Job Displacement 5 28 12 45
Social Protection 18 8 6 1 33
Developer Productivity 25 1 2 1 29
Worker Turnover 10 12 3 25
Creative Output 15 5 3 1 24
Skill Obsolescence 3 18 2 23
Labor Share of Income 7 4 9 20
Filter: Governance
The EU AI Act faces significant obstacles in confronting governance challenges arising from AI agents, such as the risk of misuse of agents by malicious actors.
Authors' analysis highlighting misuse risks and the Act's limitations in addressing them (policy/legal analysis; no empirical sample size in excerpt).
high negative Regulating AI Agents risk of malicious misuse and regulatory capacity to mitigate it
The EU AI Act faces significant obstacles in confronting governance challenges arising from AI agents, such as performance failures in autonomous task execution.
Authors' analytical argument that the Act's design and provisions do not adequately address autonomous performance failures (policy/legal analysis; no empirical sample size provided in excerpt).
high negative Regulating AI Agents ability of regulation to address performance failures (error rates / autonomous ...
The EU AI Act was promulgated prior to the development and widespread use of AI agents.
Factual/timing claim by the authors referencing the Act's adoption date relative to development and proliferation of AI agents (historical/policy analysis; dates verifiable externally).
high negative Regulating AI Agents temporal alignment between regulation and technology development
AI agents present particularly pressing questions for the European Union's AI Act.
Authors' normative/analytical claim based on the perceived fit between AI agents' characteristics and the EU AI Act's design (policy/legal analysis; no empirical sample size in excerpt).
high negative Regulating AI Agents regulatory adequacy of the EU AI Act for AI agents
Analysis of global datasets on energy dependency, economic concentration, debt levels, demographic trends, digital infrastructure, and AI adoption highlights that interconnected systemic risks can amplify economic instability.
Paper reports drawing upon multiple global datasets (energy dependency, economic concentration, debt, demographics, digital infrastructure, AI adoption) to analyze systemic risk interactions; specific datasets, sample sizes, and statistical methods are not detailed in the excerpt.
high negative Beyond Forecasting: Adaptive Economic Preparedness in a Geop... amplification of economic instability by interconnected systemic risks
Events such as supply chain disruptions, oil price surges linked to geopolitical conflicts, and sudden labour market shifts due to reverse migration have exposed the limitations of prediction-based planning frameworks.
Illustrative examples cited in the paper; the claim is supported by referenced global events and the paper's use of global datasets, but no specific empirical case-study sample sizes or quantification are provided in the excerpt.
high negative Beyond Forecasting: Adaptive Economic Preparedness in a Geop... exposure of limitations in prediction-based planning frameworks
Traditional economic models that rely heavily on historical data and linear forecasting are increasingly inadequate in capturing the complexity and unpredictability of contemporary economic shocks.
Conceptual claim supported by discussion and examples of recent shocks (supply chain disruptions, oil price surges, labor market shifts); no specific empirical evaluation or quantified model comparison reported in the excerpt.
high negative Beyond Forecasting: Adaptive Economic Preparedness in a Geop... predictive adequacy of traditional economic models
The global economic system is undergoing a structural transformation characterized by geopolitical tensions, energy price volatility, trade fragmentation, demographic imbalances, and rapid technological disruption driven by artificial intelligence.
Narrative synthesis in the paper drawing on global trends; the paper references global datasets on energy dependency, trade patterns, demographics, and AI adoption (no specific sample size or empirical study detailed in the excerpt).
high negative Beyond Forecasting: Adaptive Economic Preparedness in a Geop... structural transformation of the global economic system (presence of geopolitica...
The main risk is not merely copying, but the possibility that useful capability can be transferred more cheaply than the governance structure that originally accompanied it.
Conceptual threat model articulated in the paper; argued on normative/theoretical grounds without reported empirical measurement or sample.
high negative A Public Theory of Distillation Resistance via Constraint-Co... relative_cost/ease_of_capability_transfer_vs_governance_transmission
Distillation becomes less valuable as a shortcut when high-level capability is coupled to internal stability constraints that shape state transitions over time.
Theoretical argument presented as the paper's core claim; introduces a conceptual mechanism (capability-stability coupling) and argues why this would reduce the usefulness of distillation. No empirical data, experiments, or sample are reported.
high negative A Public Theory of Distillation Resistance via Constraint-Co... value_of_distillation / usefulness_of_distillation_as_a_shortcut
The competence shadow compounds multiplicatively to produce degradation far exceeding naive additive estimates.
Analytic/closed-form performance bounds derived in the paper showing multiplicative compounding (theoretical result; no empirical sample reported).
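The multiplicative-compounding claim above can be illustrated with a toy calculation (a hedged sketch: the per-stage degradation rate and stage count are invented for illustration and are not taken from the paper):

```python
import math

# Hypothetical per-stage degradation rates (NOT from the paper):
# assume each review stage inflates the chance of a missed hazard by 5%.
rates = [0.05] * 10

# Naive additive estimate: total degradation ~ sum of per-stage rates.
additive = 1 + sum(rates)                          # 1.50x

# Multiplicative compounding: stage effects multiply rather than add.
multiplicative = math.prod(1 + r for r in rates)   # 1.05**10 ≈ 1.63x

print(f"additive estimate:      {additive:.2f}x")
print(f"compounded degradation: {multiplicative:.2f}x")
# The compounded figure exceeds the additive one, and the gap widens
# rapidly with more stages or higher per-stage rates.
```

This is the generic arithmetic behind any "multiplicative vs. additive" compounding claim; the paper's own bounds will use its specific model quantities.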
The competence shadow is a systematic narrowing of human reasoning induced by AI-generated safety analysis; it is defined not by what the AI presents, but by what it prevents from being considered.
Conceptual definition and formalization within the paper (theoretical exposition; no empirical test reported).
Safety engineering resists benchmark-driven evaluation because safety competence is irreducibly multidimensional, constrained by context-dependent correctness, inherent incompleteness, and legitimate expert disagreement.
Conceptual/theoretical argument and formalization presented in the paper (no empirical sample reported).
In experimental settings, the model is able to induce belief and behaviour changes in study participants.
Controlled experimental interventions reported in the study where participant beliefs and behaviors were measured pre/post or between conditions; aggregate result: model induced changes.
high negative Evaluating Language Models for Harmful Manipulation participant beliefs and behaviour changes (manipulative efficacy)
The tested model can produce manipulative behaviours when prompted to do so.
Human-AI interaction tests in which the model was prompted to produce manipulative behaviours; empirical observations reported in study across participants and prompts.
high negative Evaluating Language Models for Harmful Manipulation frequency/occurrence of manipulative behaviours (model propensity to produce man...
Refining the state (as above) raises state-action blind mass from 0.0165 at τ = 50 to 0.1253 at τ = 1000.
Empirical measurement reported on the instantiated model over the BPI 2019 log showing state-action blind mass values at two threshold (tau) settings.
high negative The Stochastic Gap: A Markovian Framework for Pre-Deployment... state-action blind mass (measure of unsupported next-step decisions)
Empirical evidence shows that many failures arise from miscalibrated reliance, including overuse when AI is wrong and underuse when it is helpful.
Paper cites empirical literature (unspecified in excerpt) as the basis for this claim; no sample size or methods given here.
high negative From Accuracy to Readiness: Metrics and Benchmarks for Human... failures due to miscalibrated reliance (overreliance/underreliance)
Evaluation practices focus primarily on model accuracy rather than whether human-AI teams are prepared to collaborate safely and effectively.
Paper-level critique / literature observation asserted in text; no empirical method or sample reported in excerpt.
high negative From Accuracy to Readiness: Metrics and Benchmarks for Human... evaluation focus (accuracy vs. team readiness)
The reduction in engagement from AI labeling (AI-generated or AI-enhanced) was particularly pronounced for emotional content compared to rational content.
Interaction of content type (emotional vs. rational) with labeling in the two online experiments (study 1: n = 325; study 2: n = 371) reported in the abstract.
high negative AI content labeling and user engagement on social media: The... affective and behavioral engagement for emotional content
Labeling content as AI-enhanced reduced both affective and behavioral engagement compared to human-created content.
Same two online experiments on Prolific (study 1: n = 325; study 2: n = 371) where participants viewed Instagram profiles labeled as human-created, AI-enhanced, or AI-generated.
high negative AI content labeling and user engagement on social media: The... affective and behavioral engagement
Labeling content as AI-generated reduced both affective and behavioral engagement compared to human-created content.
Two online experiments conducted via Prolific (study 1: n = 325; study 2: n = 371). Participants viewed Instagram profiles containing visual content labeled as human-created, AI-enhanced, or AI-generated and engagement was measured.
high negative AI content labeling and user engagement on social media: The... affective and behavioral engagement
Currently, the region remains reactive as a 'recipient' rather than a 'creator' or an effective partner in the AI ecosystem.
Characterization reported by the authors based on their regional research and field study (qualitative findings from leaders across public/private sectors).
high negative Charting AI Governance Future in the Arab Region: A Policy R... degree of domestic AI creation/innovation versus reception/adoption
This gap hinders many governments in the region from moving their countries into the ranks of those benefiting from the AI revolution, both in developing the public sector and in supporting economic growth and social development.
Authors' analysis and interpretation based on the regional research/field study described in the report.
high negative Charting AI Governance Future in the Arab Region: A Policy R... governments' ability to benefit from AI (public sector development; economic and...
The Arab region’s capacity for Artificial Intelligence (AI) governance remains limited relative to the accelerating pace of global AI developments and associated challenges.
Stated conclusion in the executive report based on a regional field study (authors' analysis of interviews/surveys and research across the region).
These harms increasingly translate into financial loss through litigation, enforcement penalties, brand erosion, and failed deployments.
Paper argues this linkage using conceptual reasoning and illustrative examples/case vignettes; cites regulatory and market incidents but does not provide systematic empirical estimates or a sample size.
AI systems can create material harms: discriminatory outcomes, privacy and security failures, opacity in decision logic, and regulatory noncompliance.
Paper lists these harms as core risks based on prior literature, regulatory developments, and conceptual risk analysis. Presented as well-documented categories rather than as new empirical findings; no sample size reported.
As artificial intelligence assumes cognitive labor, no existing quantitative framework predicts when human capability loss becomes catastrophic.
Introductory/background claim asserted by authors motivating the study (literature gap claim).
high negative The enrichment paradox: critical capability thresholds and i... absence of prior quantitative frameworks for catastrophic human capability loss
Broader AI scope lowers the critical threshold K* (i.e., more general AI reduces the K* value at which capability collapse occurs).
Model sensitivity analysis / simulations showing K* varies with assumed scope of AI (reported in model calibration discussion).
high negative The enrichment paradox: critical capability thresholds and i... change in critical threshold K* with AI scope
The model identifies a critical threshold K* ≈ 0.85 (scope-dependent; broader AI scope lowers K*) beyond which capability collapses abruptly: the 'enrichment paradox.'
Model analysis and simulations calibrated across domains (paper reports computed threshold K* ≈ 0.85 and notes dependence on AI scope).
high negative The enrichment paradox: critical capability thresholds and i... critical delegation/capability threshold (K*) at which human capability collapse...
Fabrication risk is not an anomalous glitch but a foreseeable consequence of the technology's design, with direct implications for the evolving duty of technological competence.
Conclusion drawn from the paper's theoretical/physics-based analysis and the simulated scenario; stated in the abstract as the authors' interpretation and policy/legal implication.
high negative When AI output tips to bad but nobody notices: Legal implica... foreseeability of fabrication risk and implications for professional duty/compet...
The paper presents the physics-based analysis in a legal-industry setting by walking through a simulated brief-drafting scenario.
Methodological claim explicitly stated in the abstract: use of a simulated brief-drafting scenario to demonstrate the analysis.
high negative When AI output tips to bad but nobody notices: Legal implica... demonstration of fabrication risk in a simulated legal drafting task (output qua...
Although commonly dismissed as random 'hallucination', recent physics-based analysis of the Transformer's core mechanism reveals a deterministic component: the AI's internal state can cross a calculable threshold, causing its output to flip from reliable legal reasoning to authoritative-sounding fabrication.
Paper cites/relies on 'recent physics-based analysis' of Transformer mechanisms and states that it demonstrates a calculable threshold; the paper also purports to present this science in a legal setting (via simulation). No numeric experimental sample provided in the excerpt.
high negative When AI output tips to bad but nobody notices: Legal implica... transition from reliable reasoning to fabricated outputs (failure mode / interna...
Courts confront a novel threat to the integrity of the adversarial process due to fabricated authorities produced by generative AI.
Asserted in the abstract as a consequence of fabricated outputs; supported by the paper's conceptual argument and simulation reference rather than empirical court-case analysis.
high negative When AI output tips to bad but nobody notices: Legal implica... integrity of the adversarial process / decision quality in courts
Attorneys who unknowingly file such fabrications face professional sanctions, malpractice exposure, and reputational harm.
Stated as a legal/consequential claim in the abstract; no empirical evidence, case counts, or legal-statistics provided in the excerpt.
high negative When AI output tips to bad but nobody notices: Legal implica... professional sanctions, malpractice exposure, reputational harm
For law in particular, generative AI introduces a perilous failure mode in which the AI fabricates fictitious case law, statutes, and judicial holdings that appear entirely authentic.
Claimed in the paper; supported by the paper's analytic argument and a simulated brief-drafting scenario referenced in the abstract (no numeric sample provided).
high negative When AI output tips to bad but nobody notices: Legal implica... fabrication of legal authorities (authentic-appearing fake citations/holdings)
Improvements in AI ('better' AI) also amplify the excess automation.
Model comparative statics: increased AI capabilities raise private incentives to automate, leading to more displacement than is socially optimal; theoretical analysis only.
high negative The AI Layoff Trap level of automation / worker displacement as a function of AI capability
More competition amplifies the excess automation (the automation arms race).
Comparative-statics result in the competitive task-based theoretical model showing increased competition raises firms' incentives to automate; no empirical sample.
high negative The AI Layoff Trap level of automation / worker displacement as a function of competition intensity
The resulting loss from excess automation harms both workers and firm owners.
Welfare comparisons from the model showing negative payoff changes for workers (lower wages/less employment) and reduced owner returns when automation is excessive; theoretical analysis, no empirical data.
high negative The AI Layoff Trap welfare/profits of workers and firm owners (losses caused by excess automation)
In a competitive task-based model, demand externalities trap rational firms in an automation arms race, displacing workers well beyond what is collectively optimal.
Formal equilibrium analysis in the paper's theoretical competitive task-based model; comparative statics and welfare analysis (no empirical sample).
high negative The AI Layoff Trap extent of worker displacement relative to social optimum
Knowing that AI-driven displacement can erode demand is not enough for firms to stop automating.
Analytical result from the paper's competitive task-based model showing firms' incentives do not internalize demand externalities; no empirical sample.
high negative The AI Layoff Trap firm automation decisions (propensity to automate) despite awareness of aggregat...
If AI displaces human workers faster than the economy can reabsorb them, it risks eroding the very consumer demand firms depend on.
Theoretical statement in the paper's motivating premise; no empirical sample reported (conceptual argument about aggregate demand effects when displacement outpaces reabsorption).
high negative The AI Layoff Trap consumer demand (aggregate demand) as affected by worker displacement
Fukui is Japan's least-visited prefecture.
Descriptive claim in the paper specifying the study site (Fukui) as the country's least-visited prefecture; no supporting national rankings provided in the excerpt.
We quantify an annual opportunity gap of 865,917 unrealized visits, equivalent to approximately 11.96 billion yen (USD 76.2 million) in lost revenue.
Model-based estimate produced by the DSS using the analyzed datasets and the DHDE-informed optimization; figure reported directly in the paper.
high negative Engineering Distributed Governance for Regional Prosperity: ... unrealized visits and lost revenue
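As a quick consistency check on the figures quoted in this claim (a sketch; the only inputs are the numbers reported above), the yen and dollar totals imply an exchange rate of roughly 157 JPY/USD and about 13,800 yen of lost revenue per unrealized visit:

```python
# Figures quoted in the claim above.
unrealized_visits = 865_917
lost_revenue_jpy = 11.96e9    # ≈ 11.96 billion yen
lost_revenue_usd = 76.2e6     # ≈ USD 76.2 million

# Derived quantities (not stated in the paper excerpt).
implied_fx = lost_revenue_jpy / lost_revenue_usd       # yen per dollar
revenue_per_visit = lost_revenue_jpy / unrealized_visits

print(f"implied exchange rate:  {implied_fx:.0f} JPY/USD")
print(f"lost revenue per visit: {revenue_per_visit:,.0f} JPY")
```

The two reported totals are internally consistent at a plausible exchange rate, which is all this check establishes.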
For regions experiencing demographic decline and structural stagnation, the primary risk is 'under-vibrancy', a condition where low visitor density suppresses economic activity and diminishes satisfaction.
Conceptual claim and problem framing provided by the authors (theoretical/qualitative argument in the paper).
high negative Engineering Distributed Governance for Regional Prosperity: ... economic activity and satisfaction (conceptual)
Most research in urban informatics and tourism focuses on mitigating overtourism in dense global cities.
Author statement in introduction positioning the paper relative to existing literature; no quantitative literature review or citation counts reported in the excerpt.
Strict data sovereignty laws fragment regional collaboration between African Union member states and hinder AI development.
Stated in the paper as a policy barrier; supported by the authors' policy review of data sovereignty rules and their implications for cross-border data sharing.
high negative Take the Train: Africa at the Crossroad of Modern AI regional collaboration for AI development
Restricted cloud access due to payment system mismatches and volatile exchange rates is a barrier to AI adoption in Africa.
Claim made in the paper as part of the list of barriers; based on the authors' qualitative and quantitative review and reference to policy/financial constraints across African countries.
high negative Take the Train: Africa at the Crossroad of Modern AI cloud access for AI developers
Important barriers include limited access to high-performance computing (HPC).
Paper identifies limited HPC access as a key barrier; supported by the authors' collection and consolidation of HPC availability data via the Africa AI Compute Tracker (ACT).
high negative Take the Train: Africa at the Crossroad of Modern AI access to high-performance computing (HPC)
Africa's participation in modern AI development is constrained by severe infrastructural and policy gaps.
Stated as a central argument in the paper; supported by the paper's synthesis of qualitative and quantitative evidence and reference to official declarations on AI adoption across the continent.
high negative Take the Train: Africa at the Crossroad of Modern AI Africa's participation in modern AI development
AI can initially exacerbate distributional injustice.
Dimension-level analysis indicating negative (or initially negative) effects of AI on the distributional component of the energy justice index.
high negative Artificial intelligence adoption for advancing energy justic... distributional justice component of energy justice index