The Commonplace

Evidence (7156 claims)

Adoption: 5126 claims
Productivity: 4409 claims
Governance: 4049 claims
Human-AI Collaboration: 2954 claims
Labor Markets: 2432 claims
Org Design: 2273 claims
Innovation: 2215 claims
Skills & Training: 1902 claims
Inequality: 1286 claims

Evidence Matrix

Claim counts by outcome category and direction of finding.

Outcome Positive Negative Mixed Null Total
Other 369 105 58 432 972
Governance & Regulation 365 171 113 54 713
Research Productivity 229 95 33 294 655
Organizational Efficiency 354 82 58 34 531
Technology Adoption Rate 277 115 63 27 486
Firm Productivity 273 33 68 10 389
AI Safety & Ethics 112 177 43 24 358
Output Quality 228 61 23 25 337
Market Structure 105 118 81 14 323
Decision Quality 154 68 33 17 275
Employment Level 68 32 74 8 184
Fiscal & Macroeconomic 74 52 32 21 183
Skill Acquisition 85 31 38 9 163
Firm Revenue 96 30 22 148
Innovation Output 100 11 20 11 143
Consumer Welfare 66 29 35 7 137
Regulatory Compliance 51 61 13 3 128
Inequality Measures 24 66 31 4 125
Task Allocation 64 6 28 6 104
Error Rate 42 47 6 95
Training Effectiveness 55 12 10 16 93
Worker Satisfaction 42 32 11 6 91
Task Completion Time 71 5 3 1 80
Wages & Compensation 38 13 19 4 74
Team Performance 41 8 15 7 72
Hiring & Recruitment 39 4 6 3 52
Automation Exposure 17 15 9 5 46
Job Displacement 5 28 12 45
Social Protection 18 8 6 1 33
Developer Productivity 25 1 2 1 29
Worker Turnover 10 12 3 25
Creative Output 15 5 3 1 24
Skill Obsolescence 3 18 2 23
Labor Share of Income 7 4 9 20
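The matrix above is a cross-tabulation of claims by outcome category and direction. A minimal sketch of how such counts can be tallied from claim records (the records below are invented for illustration, not drawn from the database):

```python
from collections import Counter

# Invented claim records as (outcome_category, direction) pairs
claims = [
    ("Firm Productivity", "positive"),
    ("Firm Productivity", "positive"),
    ("Firm Productivity", "mixed"),
    ("Error Rate", "negative"),
]

matrix = Counter(claims)                            # one cell per (row, column)
totals = Counter(outcome for outcome, _ in claims)  # row totals

print(matrix[("Firm Productivity", "positive")], totals["Firm Productivity"])
```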
AI will substantially restructure labor markets.
Task-based theoretical approach and cross-sectoral synthesis of empirical studies showing task substitution and complementarity effects across occupations and sectors.
high mixed Artificial Intelligence, Automation, and Employment Dynamics... occupational composition, sectoral employment shares, task mix
The pandemic produced a 1.5% increase in people identifying as potential entrepreneurs but a 2.3% contraction in emerging entrepreneurs, indicating a breakdown in converting aspiration into formal entrepreneurial activity (pipeline disruption).
Reported percentage changes in pipeline stages (potential entrepreneurs and emerging entrepreneurs) measured in the survey before/after (or during) the pandemic within the >27,000 respondent sample; comparison of identification and transition rates along the entrepreneurial pipeline.
high mixed Peer Influence and Individual Motivations in Global Small Bu... transitions along the entrepreneurial pipeline (identification as potential entr...
Scholarly production, institutional incentives, funding, and the Cold War geopolitical context shaped which economic theories became prominent.
Historical institutional case study drawing on archives, correspondence, publication records, and contemporaneous debates to link institutional and funding environments to intellectual trajectories.
high mixed Ideological competition during the era of the 20th century c... prominence of economic theories (qualitative assessment tied to institutional/fu...
Whether AI increases or decreases overall inequality depends on AI’s technology structure (proprietary vs. commodity) and on labor-market institutions (rent‑sharing elasticity ξ and asset concentration).
Comparative statics and regime analysis within the calibrated model that varies the technological-form parameter (η1 vs. η0) and the rent‑sharing elasticity ξ, as well as measures of asset concentration.
high mixed When AI Levels the Playing Field: Skill Homogenization, Asse... aggregate inequality (ΔGini) as a function of technology form and institutional ...
AI can equalize individual task performance while increasing aggregate inequality because rents accrue to owners of complementary assets rather than to workers.
Analytical model and calibrated simulations demonstrating that within-task compression (reduced worker dispersion) can coexist with rising aggregate inequality (ΔGini) owing to rent concentration at the firm/asset-owner level.
high mixed When AI Levels the Playing Field: Skill Homogenization, Asse... within-task performance dispersion (decrease) and aggregate inequality (ΔGini, i...
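The compression-with-rising-inequality mechanism can be illustrated with a toy income distribution. All numbers below are invented, not the paper's calibration:

```python
def gini(xs):
    """Gini coefficient via the sorted-weights formula."""
    xs = sorted(xs)
    n, total = len(xs), sum(xs)
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return 2 * weighted / (n * total) - (n + 1) / n

# Invented incomes: five workers plus one complementary-asset owner
before = [40, 60, 80, 100, 120, 200]   # dispersed worker earnings, modest rent
after = [70, 72, 74, 76, 78, 330]      # compressed workers, larger owner rent

spread_before = max(before[:5]) - min(before[:5])
spread_after = max(after[:5]) - min(after[:5])

print(spread_after < spread_before)  # within-task compression
print(gini(after) > gini(before))    # aggregate inequality still rises
```

Worker dispersion falls (80 down to 8) while the overall Gini rises, because the gains accrue to the asset owner rather than the workers.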
Long-run integration (degree of long-run association) between core AI and AI-enhanced robotics differs systematically across national innovation systems.
Country-level decomposition of patent filing series and time-series econometric tests for long-run relationships / cointegration between core AI and AI-enhanced robotics patent series for each country/region (China, U.S., Europe, Japan, South Korea).
high mixed The "Gold Rush" in AI and Robotics Patenting Activity. Do in... measures of long-run association/cointegration between core AI and AI-enhanced r...
Core AI, traditional robotics, and AI-enhanced robotics follow distinct historical trajectories over 1980–2019 and do not move together uniformly.
Time-series analysis using annual patent filing counts (1980–2019) for each domain; tests for common long-run relationships / co-movement across the three patent series (as reported in the paper). Country-aggregated and domain-specific patent time series were analyzed; exact sample size (total patents) not specified in the summary.
high mixed The "Gold Rush" in AI and Robotics Patenting Activity. Do in... annual patent filing counts/time-series trajectories for each of the three domai...
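A stripped-down version of a long-run-association test: regress one patent series on the other and check that the residuals mean-revert. This Engle-Granger-style sketch uses synthetic series and a crude autocorrelation check, not the paper's econometrics:

```python
import random

random.seed(0)

def ols(x, y):
    """Simple one-regressor OLS; returns (slope, intercept)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return slope, my - slope * mx

# Synthetic annual filing series: core AI as a random walk, AI-enhanced
# robotics tracking it plus stationary noise (a cointegrated pair by design)
core_ai = [0.0]
for _ in range(499):
    core_ai.append(core_ai[-1] + random.gauss(0, 1))
robotics = [0.8 * c + random.gauss(0, 1) for c in core_ai]

b, a = ols(core_ai, robotics)
resid = [r - (a + b * c) for r, c in zip(robotics, core_ai)]

# Crude stationarity check on the residuals: lag-1 autocorrelation well
# below 1 signals mean reversion, consistent with a long-run relationship
rho = (sum(resid[t] * resid[t - 1] for t in range(1, len(resid)))
       / sum(e * e for e in resid))
print(round(b, 2), rho < 0.9)
```

A country whose two series are not cointegrated would instead leave residuals that wander like a random walk (autocorrelation near 1).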
Kondratieff, Schumpeter, and Mandel each highlight different drivers of capitalist long waves: Kondratieff emphasizes regular technological-driven renewal, Schumpeter emphasizes entrepreneurship and innovation-led creative destruction, and Mandel emphasizes class relations and production structures.
Comparative theoretical analysis and literature synthesis across the three schools; conceptual summary of canonical positions (no original dataset; qualitative interpretation).
high mixed Economic Waves, Crises and Profitability Dynamics of Enterpr... theoretical drivers of capitalist cycles
The study's qualitative and exploratory design limits generalizability; the proposed framework requires quantitative testing and broader samples (practicing architects, firms, cross-cultural contexts).
Explicit limitations stated by authors; study is based on semi-structured interviews with architecture students (N unspecified) and inductive thematic analysis.
high mixed Human–AI Collaboration in Architectural Design Education: To... generalizability / external validity of findings and framework
XChronos reframes transhumanist technology evaluation in experiential terms, creating both market opportunities and measurement/regulatory challenges for AI economics.
Synthesis and concluding argument in the paper summarizing proposed implications; conceptual reasoning without empirical tests.
high mixed XChronos and Conscious Transhumanism: A Philosophical Framew... shift in evaluation criteria toward experiential measures and resultant market/r...
Across 182 reviewed studies, LLM-generated synthetic participants have modest and inconsistent fidelity to human participants.
Systematic review and synthesis of 182 empirical and methodological studies comparing LLM-generated participants to human samples; studies were coded and analyzed for fidelity outcomes.
high mixed Synthetic Participants Generated by Large Language Models: A... fidelity of synthetic participants to human participants (behavioral/response si...
Participant targeting: 44% of programs targeted doctors and 44% targeted medical students (with possible overlap), and 56% targeted entry‑to‑practice career stages.
Participant audience and career-stage data extracted from the 27 included programs; proportions reported in the review.
high mixed Assessing the effectiveness of artificial intelligence educa... target audience (doctors, medical students) and career stage distribution (entry...
Most programs were delivered in academic settings: 56% of evaluated programs reported an academic setting.
Setting information extracted from the 27 included programs, with 56% reported as delivered in academic settings.
high mixed Assessing the effectiveness of artificial intelligence educa... program delivery setting (academic vs non-academic)
A plurality of programs were short in duration: 44% of programs were categorized as short courses.
Extraction of program length from the 27 included studies; 44% were classified as short courses per the review's categorization.
high mixed Assessing the effectiveness of artificial intelligence educa... program duration (short vs longer formats)
Most programs were introductory in content: 67% of included programs taught introductory AI concepts rather than advanced/technical AI skills.
Program content extraction across the 27 included studies yielded that 67% were classified as teaching introductory AI.
high mixed Assessing the effectiveness of artificial intelligence educa... program content focus (introductory vs advanced/technical AI skills)
The evidence base is methodologically heterogeneous, comprising cross-sectional surveys, case studies, quasi-experimental designs, and a limited number of longitudinal analyses.
Study design information was extracted from the 145 included studies revealing a mix of designs and relatively few longitudinal or experimental studies.
high mixed Digital transformation and its relationship with work produc... study design types (cross-sectional, case study, quasi-experimental, longitudina...
Human factors (training, trust calibration, workflows) determine whether clinicians accept, override, or ignore GenAI suggestions.
Qualitative and quantitative human-AI interaction studies and pilot deployments discussed in the paper; specific sample sizes and effect sizes are not reported in the paper.
high mixed GenAI and clinical decision making in general practice override/acceptance rates; clinician-reported trust and cognitive load; adherenc...
Safety and net benefit of GenAI CDS hinge on deployment details: user interface, real-time feedback, uncertainty quantification, calibration, and how recommendations are presented (strong vs. suggestive).
Human factors and implementation studies referenced; early A/B tests and human-AI interaction research suggest interface and presentation affect acceptance and error rates; no large-scale standardized implementation trial data cited.
high mixed GenAI and clinical decision making in general practice acceptance/override rates; error rates; calibration metrics; clinician trust
Reimbursement models (fee-for-service vs. capitation) will influence whether cost savings from GenAI are realized or offset by increased service volume.
Economic incentive framework and prior health-economics literature cited; the paper does not provide direct empirical tests but references plausible incentive channels.
high mixed GenAI and clinical decision making in general practice total spending; per-patient cost; service volume under different payment models
RL and adaptive methods are good for real-time adaptation but can be myopic, require large amounts of interaction data, and struggle to incorporate long-term preference structure and ethical constraints.
Surveyed properties of reinforcement learning and adaptive methods in HRI/RS literature; no new empirical evaluation in this paper.
high mixed Reimagining Social Robots as Recommender Systems: Foundation... real-time adaptation effectiveness, sample efficiency (amount of interaction dat...
Key tradeoffs in contemporary financing models include speed/flexibility versus regulatory coverage and long‑term cost, and data reliance versus privacy/fairness.
Multi‑criteria comparative evaluation and conceptual analysis across financing models; synthesis draws on regulatory context and observed product features rather than primary quantitative tradeoff estimation.
high mixed Traditional vs. contemporary financing models for MSMEs and ... tradeoff between speed/flexibility and regulatory protection/cost; tradeoff betw...
Performance of structure prediction models scales with data, model size, and compute; there are tradeoffs between accuracy and inference speed/simplicity.
Paper explicitly states scaling behavior and tradeoffs in 'Compute and training' and 'Representative models' sections; no precise scaling curves or thresholds are provided in the text.
high mixed Protein structure prediction powered by artificial intellige... model predictive performance as a function of training data volume, model size, ...
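A scaling claim of this kind is usually checked on a log-log plot: if error falls as a power law in training-set size, log error is linear in log N. A sketch with invented numbers (the paper reports no scaling curves):

```python
import math

# Invented (N, error) pairs: error falling by a roughly constant factor per
# decade of data, i.e. a power law error ~ N**slope
data = [(1e3, 0.30), (1e4, 0.19), (1e5, 0.12), (1e6, 0.075)]

xs = [math.log(n) for n, _ in data]
ys = [math.log(e) for _, e in data]
k = len(xs)
mx, my = sum(xs) / k, sum(ys) / k
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))

print(round(slope, 2))  # a slope near -0.2: error halves per ~30x more data
```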
The United States' decentralized education system produces tensions between local innovation and federal accountability, with active debates over data and privacy laws shaping responses to AI in assessment.
Case study of U.S. policy and secondary literature documenting federal-state-local governance dynamics and ongoing legal/policy debates; descriptive evidence from public documents.
high mixed The Future of Assessment: Rethinking Evaluation in an AI-Ass... policy tension between innovation and accountability; data/privacy regulation ac...
China's centralized control enables rapid piloting of AI-supported assessment but raises concerns over surveillance and data governance.
Country case study using Chinese policy texts and secondary analyses describing centralized education governance and data-governance practices; illustrative rather than empirical.
high mixed The Future of Assessment: Rethinking Evaluation in an AI-Ass... speed of piloting AI assessment and surveillance/data-governance risk
India faces pressure to maintain high-stakes exams amid uneven digital access and is experimenting with blended formative tools.
Country-specific case study based on policy documents and secondary literature describing India's exam system and early technology initiatives; no primary survey/sample size.
high mixed The Future of Assessment: Rethinking Evaluation in an AI-Ass... policy stance on high-stakes exams and digital access disparities
Four national case studies (India, China, the United States, Canada) illustrate diverse national responses to AI in assessment shaped by governance structures, resource constraints, cultural attitudes, and political pressures.
Cross-national comparative analysis using publicly available policy texts, recent reforms, and secondary literature for each country; descriptive, illustrative cases rather than exhaustive or representative samples.
high mixed The Future of Assessment: Rethinking Evaluation in an AI-Ass... national policy responses and governance approaches
Important tradeoffs exist (privacy vs. utility; centralized vs. federated data architectures; automated moderation vs. freedom of expression; cost/complexity of secure hardware) that must be balanced in VR security design.
Comparative evaluation across the reviewed corpus (31 studies) identifying recurring ethical and technical tradeoffs; authors discuss these qualitatively.
high mixed Securing Virtual Reality: Threat Models, Vulnerabilities, an... direction and magnitude of tradeoffs between privacy, utility, governance, and c...
Across the EU, Algeria, and Pakistan there is convergent recognition of dual‑use risks, increasing use of export controls, and interest in developing domestic AI capacity.
Cross‑jurisdictional synthesis of national/supranational legal texts, export‑control policies, and policy documents showing discussion of dual‑use issues and capacity building.
high mixed Regulating AI in National Security: A Comparative S... presence of policy recognition and instruments addressing dual‑use risks, export...
The community knowledge functions both as practical how-to guidance and as collective experimentation with platform rules and revenue mechanisms.
Observed dual nature in the 377-video corpus: instructional workflows alongside demonstrations/testing of platform-tailored monetization tactics and workarounds.
high mixed Monetizing Generative AI: YouTubers' Collective Knowledge on... co-occurrence of instructional content and platform-experimentation practices
Typical practices emphasized by creators include rapid mass production of content, productizing prompt engineering, repurposing existing material via synthesis/localization, and packaging AI outputs as sellable creative services or assets.
Recurring practices surfaced through qualitative coding of workflows, tools, and pipelines described in the 377 videos.
high mixed Monetizing Generative AI: YouTubers' Collective Knowledge on... presence and frequency of recommended production and productization practices
Across the 377 videos, creators converge on a set of repeatable use cases and platform‑tailored monetization tactics.
Thematic coding of 377 videos produced a catalog of recurring use cases and tactics; the paper reports convergence across that sample.
high mixed Monetizing Generative AI: YouTubers' Collective Knowledge on... frequency and recurrence of specific use cases and monetization tactics in the s...
YouTube creators have collectively constructed and circulated a practical knowledge repository about how to monetize GenAI-driven creative work.
Systematic qualitative content analysis (thematic coding) of 377 publicly available YouTube videos in which creators promote GenAI workflows and monetization strategies.
high mixed Monetizing Generative AI: YouTubers' Collective Knowledge on... presence and characteristics of a community knowledge repository (practical guid...
Citation counts across repeated samples follow a power-law (heavy-tailed) distribution: a few domains are cited often while many domains are cited rarely.
Empirical distributional analysis of citation counts from repeated samples collected across the three platforms and three topics (multi-day and high-frequency regimes); observed heavy-tailed / power-law fit to citation-count distribution.
high mixed Quantifying Uncertainty in AI Visibility: A Statistical Fram... distribution of citation counts per domain (frequency of domain citations)
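A heavy-tailed citation-count distribution can be summarized by a tail-exponent estimate; a minimal continuous-MLE (Hill) sketch over hypothetical per-domain counts:

```python
import math

def hill_alpha(xs, xmin):
    """Continuous-MLE (Hill) estimate of a power-law tail exponent.

    Applying the continuous estimator to discrete counts is a rough
    approximation, used here only for illustration.
    """
    tail = [x for x in xs if x >= xmin]
    return 1 + len(tail) / sum(math.log(x / xmin) for x in tail)

# Hypothetical per-domain citation counts: many domains cited once or twice,
# a few cited very often (heavy tail)
citations = [1] * 60 + [2] * 20 + [4] * 10 + [8] * 5 + [16, 16, 32, 64, 128]
alpha = hill_alpha(citations, xmin=1)
print(round(alpha, 2))
```

Exponents between roughly 2 and 3 are typical of heavy-tailed citation-style data, where a handful of domains dominates the counts.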
Emotional redirection is common: 33% of fear-tagged posts receive joy-tagged responses.
Post–response emotion transition analysis using the emotion-labeled dataset; calculation of conditional probability that responses to fear-tagged posts are labeled joy (observed rate ≈33%) in Moltbook threads.
high mixed What Do AI Agents Talk About? Emergent Communication Structu... proportion of responses to fear-tagged posts that are joy-tagged (emotion transi...
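The transition rate is a conditional probability over (post, response) emotion pairs. A toy computation with invented pairs:

```python
# Hypothetical (post_emotion, response_emotion) pairs from threaded replies
pairs = [
    ("fear", "joy"), ("fear", "joy"), ("fear", "fear"),
    ("fear", "neutral"), ("fear", "anger"), ("fear", "sadness"),
    ("joy", "joy"), ("anger", "fear"),
]

# Conditional probability that a reply to a fear-tagged post is joy-tagged
fear_replies = [resp for post, resp in pairs if post == "fear"]
p_joy_given_fear = fear_replies.count("joy") / len(fear_replies)
print(round(p_joy_given_fear, 2))  # 2 of 6 -> 0.33, echoing the ~33% rate
```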
Self-reflective discussion was concentrated in Science & Technology and Arts & Entertainment topical categories, while Economy & Finance threads showed no self-referential content.
Topic modeling and manual/automatic tagging of self-referential themes across identified topical categories within the Moltbook dataset; category-level counts showing presence/absence of self-referential tags (dataset: 361,605 posts).
high mixed What Do AI Agents Talk About? Emergent Communication Structu... presence and concentration (%) of self-referential content by topical category
The topology of service-dependency graphs (modelled as DAGs of compute stages) is a first-order determinant of whether decentralised, price-based resource allocation will be stable and scalable.
Systematic ablation study using simulation: 1,620 runs total across six experiment types, sweeping graph topology (hierarchical vs cross-cutting), load, hybrid integrator presence, and governance constraints; metrics included price convergence/volatility and allocation throughput/quality. Effect sizes reported in the paper show topology had the largest impact on price stability and scalability.
high mixed Real-Time AI Service Economy: A Framework for Agentic Comput... price convergence / price volatility and system scalability (throughput and allo...
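The price-stability question can be made concrete with a minimal tatonnement loop: each compute stage nudges its posted price toward balancing demand against capacity. Stage names, the demand curve, and the step size below are invented, and the sketch omits the cross-stage DAG coupling that the paper finds decisive:

```python
# Per-stage capacities and a toy downward-sloping demand curve
capacity = {"ingest": 10.0, "embed": 8.0, "serve": 6.0}
base_demand = {"ingest": 14.0, "embed": 12.0, "serve": 9.0}
price = {s: 1.0 for s in capacity}

step = 0.05
for _ in range(200):
    for s in capacity:
        # Raise the price when the stage is oversubscribed, lower it otherwise
        excess = base_demand[s] / price[s] - capacity[s]
        price[s] = max(0.01, price[s] + step * excess)

# With this demand curve the equilibrium price is base_demand / capacity
print({s: round(p, 2) for s, p in price.items()})
```

In a hierarchical topology each stage's update converges like this; cross-cutting dependencies couple the updates and can make the same dynamics oscillate, which is the topology effect the ablation measures.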
Choice of scaffold materially affects outcomes: an open-source scaffold outperformed vendor-provided scaffolds by up to approximately 5 percentage points.
Comparative experiments across three scaffolding approaches (vendor scaffolds and at least one open-source scaffold) showing up to ~5 percentage point differences in measured outcomes.
high mixed Re-Evaluating EVMBench: Are AI Agents Ready for Smart Contra... performance_difference_across_scaffolds (detection/exploitation_rates_difference...
Adoption of NFD approaches in regulated domains will depend on standards for validation, auditability, and update procedures.
Implications and governance discussion emphasizing regulatory constraints (finance, healthcare) and the need for validation/audit standards; a logical/normative claim rather than an empirical finding.
high mixed Nurture-First Agent Development: Building Domain-Expert AI A... adoption rate in regulated domains conditional on available validation/audit sta...
Limitations include generalizability beyond Chatbot Arena data, calibration of priors on novel tasks, audit costs/latency, user comprehension/cognitive load, and strategic manipulation.
Authors' stated limitations and open questions; these are candid acknowledgements rather than empirical findings.
high mixed Task-Aware Delegation Cues for LLM Agents generalizability, calibration, audit cost/latency, user comprehension, susceptib...
Absence of irreducibility, positive recurrence, or aperiodicity in the state dynamics can produce non-ergodic reward behavior.
Theoretical argument and examples in the paper illustrating how breakdowns of these chain conditions lead to multiple invariant measures or absorbing regimes; analysis-based evidence.
high mixed Ergodicity in reinforcement learning presence of non-ergodic long-run reward behavior (e.g., multiple invariant measu...
Standard Markov chain ergodicity conditions (irreducibility, positive recurrence, aperiodicity) imply ergodic reward processes when rewards depend only on the chain state.
Formal mapping in the paper between Markov-chain ergodicity properties and reward-process ergodicity; theoretical derivation (no empirical sample).
high mixed Ergodicity in reinforcement learning ergodicity of reward process (equivalence to chain ergodicity when rewards are s...
Non-ergodic processes admit path-dependent long-run behavior (e.g., absorbing sets, multiple invariant measures, path-dependent reinforcement), so different runs with the same policy can have different long-run averages.
Analytic discussion of Markov-chain examples and theory plus the paper's illustrative constructed example showing path-dependent locking into regimes; theoretical and example-driven evidence.
high mixed Ergodicity in reinforcement learning variance across realized long-run average rewards across trajectories under the ...
Ergodic reward processes are those where time averages along almost every long trajectory converge to the same value as the ensemble average.
Formal definition and discussion in the paper mapping ergodicity concepts from stochastic processes to reward processes; theoretical exposition.
high mixed Ergodicity in reinforcement learning convergence of time-average reward to ensemble average
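The ergodic/non-ergodic contrast can be demonstrated with two toy chains (constructed here for illustration; these are not the paper's examples):

```python
import random

random.seed(1)

def run_chain(transition, reward, start, steps):
    """Simulate a finite Markov chain and return the time-average reward."""
    state, total = start, 0.0
    for _ in range(steps):
        total += reward[state]
        state = random.choices(list(transition[state]),
                               weights=list(transition[state].values()))[0]
    return total / steps

# Ergodic chain: irreducible and aperiodic, stationary distribution (1/2, 1/2),
# so the ensemble-average reward is 0.5
ergodic = {"a": {"a": 0.5, "b": 0.5}, "b": {"a": 0.5, "b": 0.5}}
reward = {"a": 0.0, "b": 1.0}
avgs = [run_chain(ergodic, reward, "a", 20000) for _ in range(5)]

# Non-ergodic chain: two absorbing states with different rewards; identical
# runs of the same chain lock into different long-run regimes
absorbing = {"start": {"low": 0.5, "high": 0.5},
             "low": {"low": 1.0}, "high": {"high": 1.0}}
reward2 = {"start": 0.5, "low": 0.0, "high": 1.0}
avgs2 = [run_chain(absorbing, reward2, "start", 20000) for _ in range(12)]

print(max(avgs) - min(avgs) < 0.05)   # time averages agree across runs
print(max(avgs2) - min(avgs2) > 0.5)  # time averages diverge across runs
```

The first chain satisfies irreducibility, positive recurrence, and aperiodicity, so every trajectory's time average approaches the ensemble average 0.5; the second violates irreducibility, and the realized long-run average depends on which absorbing set a run happens to enter.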
The model explicitly separates competition into two stages: discovery (first-passage to resource patches) and monopolization (local takeover and stabilization).
Model specification in the paper: stochastic, spatially-structured population model with distinct discovery and monopolization dynamics; this is a modeling assumption/structure rather than empirical measurement.
high mixed Macroscopic Dominance from Microscopic Extremes: Symmetry Br... conceptual/structural decomposition of competitive dynamics into 'discovery' and...
Two qualitatively distinct mechanisms underlie observed dominance: (1) extreme-event-mediated lucky discovery (transient), and (2) mechanistic asymmetries (non-reciprocal biases) that convert lucky discovery into permanent dominance.
Conceptual separation in the model structure (discovery vs monopolization phases), analytic results on first-passage extreme events, and absorbing-state analysis showing necessity of asymmetry for permanence; supported by simulations demonstrating the two-stage behavior. The claim is theoretical.
high mixed Macroscopic Dominance from Microscopic Extremes: Symmetry Br... mechanism producing dominance (transient early advantage vs permanence via asymm...
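A toy version of the two-stage story (not the paper's model): a lucky first discovery is represented by a small initial head start, and a non-reciprocal bias `asym` (an invented parameter) decides whether that head start merely persists or becomes permanent dominance:

```python
def monopolization(initial_share, asym, steps=500):
    """Deterministic replicator update for lineage A's population share."""
    share = initial_share
    for _ in range(steps):
        # A's per-capita growth exceeds B's by the non-reciprocal bias `asym`
        grown = share * (1 + asym)
        share = grown / (grown + (1 - share))
    return share

head_start = 0.55  # stands in for a lucky first discovery of the patch

symmetric = monopolization(head_start, asym=0.0)  # no mechanistic asymmetry
biased = monopolization(head_start, asym=0.05)    # small non-reciprocal bias

print(round(symmetric, 2))  # the head start persists but never grows
print(round(biased, 2))     # the head start is converted into dominance
```

Without asymmetry the early advantage is frozen at its initial value (and in a stochastic version would drift away); with even a small asymmetry the share converges to 1, mirroring the claim that asymmetry is what makes dominance permanent.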
RAD requires estimating cost distributions and choosing a reference policy and quantile-weighting function; these choices determine the method's conservatism and sample efficiency.
Methodological and practical considerations discussed in the paper; noted dependency on estimation and design choices (no quantitative sample-efficiency results provided in the summary).
high mixed Safe RLHF Beyond Expectation: Stochastic Dominance for Unive... method conservatism (relative safety level) and sample efficiency (amount of dat...
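The role of the quantile-weighting choice can be sketched with a rank-dependent cost score: a distortion that overweights high-cost quantiles yields a more conservative criterion than the plain mean. The weighting scheme below is illustrative, not RAD's actual formulation:

```python
def weighted_cost(costs, gamma):
    """Rank-dependent cost score; gamma > 1 overweights high-cost quantiles."""
    xs = sorted(costs)
    n = len(xs)
    # Weight on the i-th order statistic from a power distortion of rank
    weights = [((i + 1) / n) ** gamma - (i / n) ** gamma for i in range(n)]
    return sum(w * x for w, x in zip(weights, xs))

costs = [1, 1, 1, 2, 2, 3, 5, 8, 13, 40]  # invented costs, heavy right tail

mean_cost = sum(costs) / len(costs)
neutral = weighted_cost(costs, gamma=1.0)       # recovers the plain mean
conservative = weighted_cost(costs, gamma=3.0)  # leans on the bad tail

print(abs(neutral - mean_cost) < 1e-9, conservative > mean_cost)
```

Here `gamma` plays the role of the conservatism knob: higher values demand more evidence about the cost tail, which is the sample-efficiency tradeoff the claim describes.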
Explanations change workflows, shift responsibilities between humans and machines, and can reshape power dynamics, creating both opportunities (better oversight) and risks (over-reliance, gaming).
Qualitative and conceptual studies synthesized in the review, including socio-technical analyses and case studies reporting observed or theorized workflow and responsibility shifts; no meta-analytic causal estimate.
high mixed Explainable AI in High-Stakes Domains: Improving Trust, Tran... workflows, responsibility allocation, power dynamics, oversight quality
Explanations increase user trust principally when they are understandable, actionable, and aligned with users’ domain knowledge; opaque or overly technical explanations can fail to build trust or even decrease it.
Thematic synthesis of empirical and conceptual studies in the reviewed literature reporting conditional effects of explanation form and comprehensibility on trust; review notes heterogeneity in study designs and contexts.
high mixed Explainable AI in High-Stakes Domains: Improving Trust, Tran... user trust / changes in trust toward AI outputs
Explainability improves perceived legitimacy, user trust, and organizational accountability only when technical transparency is paired with human-centered explanation design and governance mechanisms.
Synthesis of studies from the reviewed literature showing conditional effects of algorithmic interpretability combined with explanation design and governance; derived via thematic coding across technical and social-science sources (no new primary experimental data reported).
high mixed Explainable AI in High-Stakes Domains: Improving Trust, Tran... perceived legitimacy, user trust, organizational accountability
Explainability is a necessary but not sufficient condition for trustworthy AI in high-stakes domains.
Systematic literature review (thematic coding and synthesis) of interdisciplinary scholarship (peer-reviewed research, technical reports, policy documents); the paper synthesizes conceptual and empirical studies rather than presenting new primary data. Emphasis on high-stakes domains (healthcare, finance, public sector).
high mixed Explainable AI in High-Stakes Domains: Improving Trust, Tran... overall trustworthiness of AI systems in high-stakes domains (multidimensional c...