The Commonplace

Evidence (5877 claims)

Adoption: 7395 claims
Productivity: 6507 claims
Governance: 5877 claims
Human-AI Collaboration: 5157 claims
Innovation: 3492 claims
Org Design: 3470 claims
Labor Markets: 3224 claims
Skills & Training: 2608 claims
Inequality: 1835 claims

Evidence Matrix

Claim counts by outcome category and direction of finding.

Outcome                        Positive  Negative  Mixed  Null  Total
Other                               609       159     77   736   1615
Governance & Regulation             664       329    160    99   1273
Organizational Efficiency           624       143    105    70    949
Technology Adoption Rate            502       176     98    78    861
Research Productivity               348       109     48   322    836
Output Quality                      391       120     44    40    595
Firm Productivity                   385        46     85    17    539
Decision Quality                    275       143     62    34    521
AI Safety & Ethics                  183       241     59    30    517
Market Structure                    152       154    109    20    440
Task Allocation                     158        50     56    26    295
Innovation Output                   178        23     38    17    257
Skill Acquisition                   137        52     50    13    252
Fiscal & Macroeconomic              120        64     38    23    252
Employment Level                     93        46     96    12    249
Firm Revenue                        130        43     26     3    202
Consumer Welfare                     99        51     40    11    201
Inequality Measures                  36       105     40     6    187
Task Completion Time                134        18      6     5    163
Worker Satisfaction                  79        54     16    11    160
Error Rate                           64        78      8     1    151
Regulatory Compliance                69        64     14     3    150
Training Effectiveness               81        15     13    18    129
Wages & Compensation                 70        25     22     6    123
Team Performance                     74        16     21     9    121
Automation Exposure                  41        48     19     9    120
Job Displacement                     11        71     16     1     99
Developer Productivity               71        14      9     3     98
Hiring & Recruitment                 49         7      8     3     67
Social Protection                    26        14      8     2     50
Creative Output                      26        14      6     2     49
Skill Obsolescence                    5        37      5     1     48
Labor Share of Income                12        13     12     -     37
Worker Turnover                      11        12      3     -     26
Industry                              1         -      -     -      1
Active filter: Governance
Artificial intelligence (AI) has a positive and statistically significant effect on growth at lower conditional quantiles (τ = 0.10–0.25) but is insignificant at higher quantiles.
MMQR estimation results reported in the paper showing significant positive AI coefficients at τ = 0.10–0.25 and insignificant coefficients at higher quantiles.
high mixed Towards Smart, Economic Performance and Sustainable Monetary... GDP growth (conditional quantiles of growth)
Institutional factors (education systems, active labor market policies, mobility, industrial policy, social protection) shape net employment outcomes from AI.
Theoretical and policy-focused synthesis; cross-country comparisons in literature highlight institutional mediation though no single new cross-country empirical estimate is provided.
high mixed Artificial Intelligence, Automation, and Employment Dynamics... variation in employment outcomes and distributional impacts across countries wit...
Net employment effects depend on the balance of substitution and complementarity, sectoral exposure, and institutional responses.
Conceptual labor-economics framework (task-based, skill-biased change) and comparative review of cross-country/sectoral evidence emphasizing institutional mediation.
high mixed Artificial Intelligence, Automation, and Employment Dynamics... net employment change (by sector/country) and distributional outcomes
AI will substantially restructure labor markets.
Task-based theoretical approach and cross-sectoral synthesis of empirical studies showing task substitution and complementarity effects across occupations and sectors.
high mixed Artificial Intelligence, Automation, and Employment Dynamics... occupational composition, sectoral employment shares, task mix
Scholarly production, institutional incentives, funding, and the Cold War geopolitical context shaped which economic theories became prominent.
Historical institutional case study drawing on archives, correspondence, publication records, and contemporaneous debates to link institutional and funding environments to intellectual trajectories.
high mixed Ideological competition during the era of the 20th century c... prominence of economic theories (qualitative assessment tied to institutional/fu...
Long-run integration (degree of long-run association) between core AI and AI-enhanced robotics differs systematically across national innovation systems.
Country-level decomposition of patent filing series and time-series econometric tests for long-run relationships / cointegration between core AI and AI-enhanced robotics patent series for each country/region (China, U.S., Europe, Japan, South Korea).
high mixed The "Gold Rush" in AI and Robotics Patenting Activity. Do in... measures of long-run association/cointegration between core AI and AI-enhanced r...
Core AI, traditional robotics, and AI-enhanced robotics follow distinct historical trajectories over 1980–2019 and do not move together uniformly.
Time-series analysis using annual patent filing counts (1980–2019) for each domain; tests for common long-run relationships / co-movement across the three patent series (as reported in the paper). Country-aggregated and domain-specific patent time series were analyzed; exact sample size (total patents) not specified in the summary.
high mixed The "Gold Rush" in AI and Robotics Patenting Activity. Do in... annual patent filing counts/time-series trajectories for each of the three domai...
Kondratieff, Schumpeter, and Mandel each highlight different drivers of capitalist long waves: Kondratieff emphasizes regular technological-driven renewal, Schumpeter emphasizes entrepreneurship and innovation-led creative destruction, and Mandel emphasizes class relations and production structures.
Comparative theoretical analysis and literature synthesis across the three schools; conceptual summary of canonical positions (no original dataset; qualitative interpretation).
high mixed Economic Waves, Crises and Profitability Dynamics of Enterpr... theoretical drivers of capitalist cycles
XChronos reframes transhumanist technology evaluation in experiential terms, creating both market opportunities and measurement/regulatory challenges for AI economics.
Synthesis and concluding argument in the paper summarizing proposed implications; conceptual reasoning without empirical tests.
high mixed XChronos and Conscious Transhumanism: A Philosophical Framew... shift in evaluation criteria toward experiential measures and resultant market/r...
Across 182 reviewed studies, LLM-generated synthetic participants have modest and inconsistent fidelity to human participants.
Systematic review and synthesis of 182 empirical and methodological studies comparing LLM-generated participants to human samples; studies were coded and analyzed for fidelity outcomes.
high mixed Synthetic Participants Generated by Large Language Models: A... fidelity of synthetic participants to human participants (behavioral/response si...
Human factors (training, trust calibration, workflows) determine whether clinicians accept, override, or ignore GenAI suggestions.
Qualitative and quantitative human-AI interaction studies and pilot deployments discussed in the paper; specific sample sizes and effect sizes are not reported in the paper.
high mixed GenAI and clinical decision making in general practice override/acceptance rates; clinician-reported trust and cognitive load; adherenc...
Safety and net benefit of GenAI CDS hinge on deployment details: user interface, real-time feedback, uncertainty quantification, calibration, and how recommendations are presented (strong vs. suggestive).
Human factors and implementation studies referenced; early A/B tests and human-AI interaction research suggest interface and presentation affect acceptance and error rates; no large-scale standardized implementation trial data cited.
high mixed GenAI and clinical decision making in general practice acceptance/override rates; error rates; calibration metrics; clinician trust
Reimbursement models (fee-for-service vs. capitation) will influence whether cost savings from GenAI are realized or offset by increased service volume.
Economic incentive framework and prior health-economics literature cited; the paper does not provide direct empirical tests but references plausible incentive channels.
high mixed GenAI and clinical decision making in general practice total spending; per-patient cost; service volume under different payment models
RL and adaptive methods are well suited to real-time adaptation, but they can be myopic, require large amounts of interaction data, and struggle to incorporate long-term preference structure and ethical constraints.
Surveyed properties of reinforcement learning and adaptive methods in HRI/RS literature; no new empirical evaluation in this paper.
high mixed Reimagining Social Robots as Recommender Systems: Foundation... real-time adaptation effectiveness, sample efficiency (amount of interaction dat...
Key tradeoffs in contemporary financing models include speed/flexibility versus regulatory coverage and long‑term cost, and data reliance versus privacy/fairness.
Multi‑criteria comparative evaluation and conceptual analysis across financing models; synthesis draws on regulatory context and observed product features rather than primary quantitative tradeoff estimation.
high mixed Traditional vs. contemporary financing models for MSMEs and ... tradeoff between speed/flexibility and regulatory protection/cost; tradeoff betw...
Performance of structure prediction models scales with data, model size, and compute; there are tradeoffs between accuracy and inference speed/simplicity.
Paper explicitly states scaling behavior and tradeoffs in 'Compute and training' and 'Representative models' sections; no precise scaling curves or thresholds are provided in the text.
high mixed Protein structure prediction powered by artificial intellige... model predictive performance as a function of training data volume, model size, ...
The United States' decentralized education system produces tensions between local innovation and federal accountability, with active debates over data and privacy laws shaping responses to AI in assessment.
Case study of U.S. policy and secondary literature documenting federal-state-local governance dynamics and ongoing legal/policy debates; descriptive evidence from public documents.
high mixed The Future of Assessment: Rethinking Evaluation in an AI-Ass... policy tension between innovation and accountability; data/privacy regulation ac...
China's centralized control enables rapid piloting of AI-supported assessment but raises concerns over surveillance and data governance.
Country case study using Chinese policy texts and secondary analyses describing centralized education governance and data-governance practices; illustrative rather than empirical.
high mixed The Future of Assessment: Rethinking Evaluation in an AI-Ass... speed of piloting AI assessment and surveillance/data-governance risk
India faces pressure to maintain high-stakes exams amid uneven digital access and is experimenting with blended formative tools.
Country-specific case study based on policy documents and secondary literature describing India's exam system and early technology initiatives; no primary survey/sample size.
high mixed The Future of Assessment: Rethinking Evaluation in an AI-Ass... policy stance on high-stakes exams and digital access disparities
Four national case studies (India, China, the United States, Canada) illustrate diverse national responses to AI in assessment shaped by governance structures, resource constraints, cultural attitudes, and political pressures.
Cross-national comparative analysis using publicly available policy texts, recent reforms, and secondary literature for each country; descriptive, illustrative cases rather than exhaustive or representative samples.
high mixed The Future of Assessment: Rethinking Evaluation in an AI-Ass... national policy responses and governance approaches
Important tradeoffs exist (privacy vs. utility; centralized vs. federated data architectures; automated moderation vs. freedom of expression; cost/complexity of secure hardware) that must be balanced in VR security design.
Comparative evaluation across the reviewed corpus (31 studies) identifying recurring ethical and technical tradeoffs; authors discuss these qualitatively.
high mixed Securing Virtual Reality: Threat Models, Vulnerabilities, an... direction and magnitude of tradeoffs between privacy, utility, governance, and c...
Across the EU, Algeria, and Pakistan there is convergent recognition of dual‑use risks, increasing use of export controls, and interest in developing domestic AI capacity.
Cross‑jurisdictional synthesis of national/supranational legal texts, export‑control policies, and policy documents showing discussion of dual‑use issues and capacity building.
high mixed Regulating AI in National Security: A Comparative S... presence of policy recognition and instruments addressing dual‑use risks, export...
The community knowledge functions both as practical how-to guidance and as collective experimentation with platform rules and revenue mechanisms.
Observed dual nature in the 377-video corpus: instructional workflows alongside demonstrations/testing of platform-tailored monetization tactics and workarounds.
high mixed Monetizing Generative AI: YouTubers' Collective Knowledge on... co-occurrence of instructional content and platform-experimentation practices
Typical practices emphasized by creators include rapid mass production of content, productizing prompt engineering, repurposing existing material via synthesis/localization, and packaging AI outputs as sellable creative services or assets.
Recurring practices surfaced through qualitative coding of workflows, tools, and pipelines described in the 377 videos.
high mixed Monetizing Generative AI: YouTubers' Collective Knowledge on... presence and frequency of recommended production and productization practices
Across the 377 videos, creators converge on a set of repeatable use cases and platform‑tailored monetization tactics.
Thematic coding of 377 videos produced a catalog of recurring use cases and tactics; the paper reports convergence across that sample.
high mixed Monetizing Generative AI: YouTubers' Collective Knowledge on... frequency and recurrence of specific use cases and monetization tactics in the s...
YouTube creators have collectively constructed and circulated a practical knowledge repository about how to monetize GenAI-driven creative work.
Systematic qualitative content analysis (thematic coding) of 377 publicly available YouTube videos in which creators promote GenAI workflows and monetization strategies.
high mixed Monetizing Generative AI: YouTubers' Collective Knowledge on... presence and characteristics of a community knowledge repository (practical guid...
Citation counts across repeated samples follow a power-law (heavy-tailed) distribution: a few domains are cited often while many domains are cited rarely.
Empirical distributional analysis of citation counts from repeated samples collected across the three platforms and three topics (multi-day and high-frequency regimes); observed heavy-tailed / power-law fit to citation-count distribution.
high mixed Quantifying Uncertainty in AI Visibility: A Statistical Fram... distribution of citation counts per domain (frequency of domain citations)
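The heavy-tailed pattern this entry describes can be illustrated with a simple rank-frequency check. The sketch below uses synthetic Zipf-weighted data as a stand-in (the paper's actual citation samples are not reproduced, and the domain names are invented):

```python
import collections
import random

# Synthetic stand-in for a citation log: each entry is the domain an AI
# answer cited on one sampled query. Domain names are hypothetical.
random.seed(0)
domains = [f"site{i}.example" for i in range(50)]
# Zipf-like weights mimic the observed heavy tail: a few domains dominate.
weights = [1.0 / (rank + 1) for rank in range(len(domains))]
citations = random.choices(domains, weights=weights, k=5000)

counts = collections.Counter(citations)
ranked = sorted(counts.values(), reverse=True)

# Under a power law, log(count) falls roughly linearly in log(rank),
# so the top-ranked domain dwarfs the median-ranked one.
top, median = ranked[0], ranked[len(ranked) // 2]
ratio = top / median
```

A least-squares fit of log(count) against log(rank), or a maximum-likelihood tail estimator, would recover an exponent; this snippet only shows the qualitative shape the claim refers to.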
Emotional redirection is common: 33% of fear-tagged posts receive joy-tagged responses.
Post–response emotion transition analysis using the emotion-labeled dataset; calculation of conditional probability that responses to fear-tagged posts are labeled joy (observed rate ≈33%) in Moltbook threads.
high mixed What Do AI Agents Talk About? Emergent Communication Structu... proportion of responses to fear-tagged posts that are joy-tagged (emotion transi...
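The ≈33% figure is a conditional transition rate. A minimal sketch of that calculation on a toy set of labeled (post, response) emotion pairs — the pairs below are invented, not Moltbook data:

```python
# Invented (post_emotion, response_emotion) pairs standing in for
# labeled thread data.
pairs = [
    ("fear", "joy"), ("fear", "fear"), ("fear", "joy"), ("fear", "sadness"),
    ("joy", "joy"), ("anger", "joy"), ("joy", "sadness"),
]

def transition_rate(pairs, src, dst):
    """P(response emotion == dst | post emotion == src)."""
    src_total = sum(1 for post, _ in pairs if post == src)
    hits = sum(1 for post, resp in pairs if post == src and resp == dst)
    return hits / src_total if src_total else 0.0

fear_to_joy = transition_rate(pairs, "fear", "joy")  # 2 of 4 fear posts -> 0.5
```

On the real dataset the same conditional-probability computation over fear-tagged posts and their joy-tagged responses yields the reported ≈0.33.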
Self-reflective discussion was concentrated in Science & Technology and Arts & Entertainment topical categories, while Economy & Finance threads showed no self-referential content.
Topic modeling and manual/automatic tagging of self-referential themes across identified topical categories within the Moltbook dataset; category-level counts showing presence/absence of self-referential tags (dataset: 361,605 posts).
high mixed What Do AI Agents Talk About? Emergent Communication Structu... presence and concentration (%) of self-referential content by topical category
The topology of service-dependency graphs (modelled as DAGs of compute stages) is a first-order determinant of whether decentralised, price-based resource allocation will be stable and scalable.
Systematic ablation study using simulation: 1,620 runs total across six experiment types, sweeping graph topology (hierarchical vs cross-cutting), load, hybrid integrator presence, and governance constraints; metrics included price convergence/volatility and allocation throughput/quality. Effect sizes reported in the paper show topology had the largest impact on price stability and scalability.
high mixed Real-Time AI Service Economy: A Framework for Agentic Comput... price convergence / price volatility and system scalability (throughput and allo...
Absence of irreducibility, positive recurrence, or aperiodicity in the state dynamics can produce non-ergodic reward behavior.
Theoretical argument and examples in the paper illustrating how breakdowns of these chain conditions lead to multiple invariant measures or absorbing regimes; analysis-based evidence.
high mixed Ergodicity in reinforcement learning presence of non-ergodic long-run reward behavior (e.g., multiple invariant measu...
Standard Markov chain ergodicity conditions (irreducibility, positive recurrence, aperiodicity) imply ergodic reward processes when rewards depend only on the chain state.
Formal mapping in the paper between Markov-chain ergodicity properties and reward-process ergodicity; theoretical derivation (no empirical sample).
high mixed Ergodicity in reinforcement learning ergodicity of reward process (equivalence to chain ergodicity when rewards are s...
Non-ergodic processes admit path-dependent long-run behavior (e.g., absorbing sets, multiple invariant measures, path-dependent reinforcement), so different runs with the same policy can have different long-run averages.
Analytic discussion of Markov-chain examples and theory plus the paper's illustrative constructed example showing path-dependent locking into regimes; theoretical and example-driven evidence.
high mixed Ergodicity in reinforcement learning variance across realized long-run average rewards across trajectories under the ...
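The path dependence these entries describe can be seen in a classic toy process; the Pólya urn below is an illustrative substitute for (not a reproduction of) the paper's constructed example. Under identical dynamics, different realizations lock into different long-run averages:

```python
import random

def polya_fraction(steps: int, seed: int) -> float:
    """Pólya urn: draw a ball, return it plus one more of the same color.
    The long-run red fraction converges, but to a run-specific limit."""
    rng = random.Random(seed)
    red, blue = 1, 1
    for _ in range(steps):
        if rng.random() < red / (red + blue):
            red += 1
        else:
            blue += 1
    return red / (red + blue)

# Same dynamics, different realizations: the long-run averages disagree,
# which is exactly the non-ergodic behavior described above.
limits = [round(polya_fraction(20_000, seed), 3) for seed in range(5)]
```

An ergodic chain run the same way would give essentially the same long-run average on every seed; the spread across `limits` is the signature of path dependence.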
Ergodic reward processes are those where time averages along almost every long trajectory converge to the same value as the ensemble average.
Formal definition and discussion in the paper mapping ergodicity concepts from stochastic processes to reward processes; theoretical exposition.
high mixed Ergodicity in reinforcement learning convergence of time-average reward to ensemble average
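A minimal formalization of the ergodicity property stated in this entry, assuming a stationary reward process $(r_t)$ with invariant distribution $\pi$ (notation mine, not necessarily the paper's):

```latex
% Ergodicity: time averages along almost every trajectory
% coincide with the ensemble average.
\lim_{T \to \infty} \frac{1}{T} \sum_{t=1}^{T} r_t
  = \mathbb{E}_{\pi}[r_1] \quad \text{almost surely.}
```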
The model explicitly separates competition into two stages: discovery (first-passage to resource patches) and monopolization (local takeover and stabilization).
Model specification in the paper: stochastic, spatially-structured population model with distinct discovery and monopolization dynamics; this is a modeling assumption/structure rather than empirical measurement.
high mixed Macroscopic Dominance from Microscopic Extremes: Symmetry Br... conceptual/structural decomposition of competitive dynamics into 'discovery' and...
Two qualitatively distinct mechanisms underlie observed dominance: (1) extreme-event-mediated lucky discovery (transient), and (2) mechanistic asymmetries (non-reciprocal biases) that convert lucky discovery into permanent dominance.
Conceptual separation in the model structure (discovery vs monopolization phases), analytic results on first-passage extreme events, and absorbing-state analysis showing necessity of asymmetry for permanence; supported by simulations demonstrating the two-stage behavior. The claim is theoretical.
high mixed Macroscopic Dominance from Microscopic Extremes: Symmetry Br... mechanism producing dominance (transient early advantage vs permanence via asymm...
RAD requires estimating cost distributions and choosing a reference policy and quantile-weighting function; these choices determine the method's conservatism and sample efficiency.
Methodological and practical considerations discussed in the paper; noted dependency on estimation and design choices (no quantitative sample-efficiency results provided in the summary).
high mixed Safe RLHF Beyond Expectation: Stochastic Dominance for Unive... method conservatism (relative safety level) and sample efficiency (amount of dat...
Explanations change workflows, shift responsibilities between humans and machines, and can reshape power dynamics—creating both opportunities (better oversight) and risks (over-reliance, gaming).
Qualitative and conceptual studies synthesized in the review, including socio-technical analyses and case studies reporting observed or theorized workflow and responsibility shifts; no meta-analytic causal estimate.
high mixed Explainable AI in High-Stakes Domains: Improving Trust, Tran... workflows, responsibility allocation, power dynamics, oversight quality
Explanations increase user trust principally when they are understandable, actionable, and aligned with users’ domain knowledge; opaque or overly technical explanations can fail to build trust or even decrease it.
Thematic synthesis of empirical and conceptual studies in the reviewed literature reporting conditional effects of explanation form and comprehensibility on trust; review notes heterogeneity in study designs and contexts.
high mixed Explainable AI in High-Stakes Domains: Improving Trust, Tran... user trust / changes in trust toward AI outputs
Explainability improves perceived legitimacy, user trust, and organizational accountability only when technical transparency is paired with human-centered explanation design and governance mechanisms.
Synthesis of studies from the reviewed literature showing conditional effects of algorithmic interpretability combined with explanation design and governance; derived via thematic coding across technical and social-science sources (no new primary experimental data reported).
high mixed Explainable AI in High-Stakes Domains: Improving Trust, Tran... perceived legitimacy, user trust, organizational accountability
Explainability is a necessary but not sufficient condition for trustworthy AI in high-stakes domains.
Systematic literature review (thematic coding and synthesis) of interdisciplinary scholarship (peer-reviewed research, technical reports, policy documents); the paper synthesizes conceptual and empirical studies rather than presenting new primary data. Emphasis on high-stakes domains (healthcare, finance, public sector).
high mixed Explainable AI in High-Stakes Domains: Improving Trust, Tran... overall trustworthiness of AI systems in high-stakes domains (multidimensional c...
Some patients value human contact for sensitive cases; automated interactions can feel impersonal.
Semi-structured interviews with patients/staff and open-ended survey responses documenting preferences for human interaction in sensitive/complex complaints.
high mixed The Role of Artificial Intelligence in Healthcare Complaint ... patient-reported preference for human contact and perceived interpersonal qualit...
Data‑driven policies can either amplify or mitigate inequalities depending on data representativeness, model design, and deployment governance.
Multiple empirical examples and theoretical analyses in the review highlighting cases of both harm (bias amplification) and mitigation, identified across the 103 items.
high mixed Models, applications, and limitations of the responsible ado... distributional equity outcomes (inequality amplification or mitigation)
Citizen acceptance, transparency, and perceived fairness strongly shape adoption trajectories and the political feasibility of AI tools in government.
Repeated empirical findings in the reviewed literature linking public trust, transparency measures, and fairness perceptions to successful or failed deployments (drawn from multiple case studies in the 103 items).
high mixed Models, applications, and limitations of the responsible ado... adoption trajectory/political feasibility of government AI tools (measured via d...
Adoption of AI and data-driven governance is highly uneven across jurisdictions and sectors, driven by institutional capacity, governance frameworks, and public trust.
Cross‑regional and cross‑sector comparisons in the review corpus (103 items) showing varying maturity levels and repeated identification of institutional capacity, governance arrangements, and trust factors as determinants.
high mixed Models, applications, and limitations of the responsible ado... adoption level/maturity of AI-driven governance systems
Governance approaches are emerging at global, regional and national levels; they vary widely across sectors and jurisdictions, creating opportunities for regulatory experimentation but also risks of fragmentation and regulatory arbitrage.
Cross-jurisdictional comparison of existing/global/regional/national governance instruments and sectoral guidance; gap analysis highlighting heterogeneity.
high mixed AI Governance and Data Privacy: Comparative Analysis of U.S.... degree of regulatory heterogeneity, instances of fragmentation/regulatory arbitr...
Weak formal institutions often coexist with strong informal institutions in African contexts, shaping governance, trust, and enforcement mechanisms in supply chains.
Cross-disciplinary literature review presented in the paper; conceptual argumentation rather than primary empirical analysis.
high mixed Continental shift: operations and supply chain management re... relative strength of formal vs informal institutions and their effects on govern...
Technology effectiveness depends on institutional support (extension, property rights), finance, and local knowledge; technologies are not silver bullets on their own.
Conceptual frameworks and comparative analysis in the review; supporting case studies and program evaluations linking adoption and impact to institutional factors (extension reach, tenure security, access to credit).
high mixed MODERN APPROACHES TO SUSTAINABLE AGRICULTURAL TRANSFORMATION technology adoption rates, realized productivity gains, distribution of benefits...
Productivity gains from generative AI depend on task mix, integration design, and the availability of complementary human skills.
Theoretical evaluation and synthesis of heterogeneous empirical findings; authors highlight variation across firms, sectors, and tasks.
high mixed The Use of ChatGPT in Business Productivity and Workflow Opt... productivity change conditional on task mix/integration/human skills (productivi...
Existing evidence is time-sensitive and heterogeneous: rapidly evolving models, varied study designs, and many short-term lab/microtask studies limit direct comparability and long-run inference.
Meta-observation from the review: documented methodological limitations across the literature (variation in models, tasks, metrics; prevalence of short-term studies).
high mixed ChatGPT as a Tool for Programming Assistance and Code Develo... generalizability and comparability of empirical findings (study heterogeneity)