The Commonplace

Evidence (5126 claims)

Adoption: 5126 claims
Productivity: 4409 claims
Governance: 4049 claims
Human-AI Collaboration: 2954 claims
Labor Markets: 2432 claims
Org Design: 2273 claims
Innovation: 2215 claims
Skills & Training: 1902 claims
Inequality: 1286 claims

Evidence Matrix

Claim counts by outcome category and direction of finding.

Outcome Positive Negative Mixed Null Total
Other 369 105 58 432 972
Governance & Regulation 365 171 113 54 713
Research Productivity 229 95 33 294 655
Organizational Efficiency 354 82 58 34 531
Technology Adoption Rate 277 115 63 27 486
Firm Productivity 273 33 68 10 389
AI Safety & Ethics 112 177 43 24 358
Output Quality 228 61 23 25 337
Market Structure 105 118 81 14 323
Decision Quality 154 68 33 17 275
Employment Level 68 32 74 8 184
Fiscal & Macroeconomic 74 52 32 21 183
Skill Acquisition 85 31 38 9 163
Firm Revenue 96 30 22 148
Innovation Output 100 11 20 11 143
Consumer Welfare 66 29 35 7 137
Regulatory Compliance 51 61 13 3 128
Inequality Measures 24 66 31 4 125
Task Allocation 64 6 28 6 104
Error Rate 42 47 6 95
Training Effectiveness 55 12 10 16 93
Worker Satisfaction 42 32 11 6 91
Task Completion Time 71 5 3 1 80
Wages & Compensation 38 13 19 4 74
Team Performance 41 8 15 7 72
Hiring & Recruitment 39 4 6 3 52
Automation Exposure 17 15 9 5 46
Job Displacement 5 28 12 45
Social Protection 18 8 6 1 33
Developer Productivity 25 1 2 1 29
Worker Turnover 10 12 3 25
Creative Output 15 5 3 1 24
Skill Obsolescence 3 18 2 23
Labor Share of Income 7 4 9 20
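The matrix above is a simple cross-tabulation of claims by outcome category and direction of finding. A minimal sketch of how such counts could be tallied from claim records (the records below are illustrative placeholders, not the site's actual data):

```python
from collections import Counter

# Hypothetical claim records as (outcome_category, direction) pairs.
claims = [
    ("Firm Productivity", "positive"),
    ("Firm Productivity", "negative"),
    ("Firm Productivity", "positive"),
    ("AI Safety & Ethics", "negative"),
]

# Cell counts: one entry per (outcome, direction) combination.
matrix = Counter(claims)

# Row totals per outcome category (the "Total" column).
totals = Counter(outcome for outcome, _ in claims)

print(matrix[("Firm Productivity", "positive")])  # 2
print(totals["Firm Productivity"])                # 3
```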
Filter: Adoption
Widespread adoption of predictive HR tools raises distributional and fairness concerns (algorithmic bias, disparate impacts) and privacy risks that may prompt regulatory responses affecting adoption costs and equilibrium outcomes.
Discussion/implications section raises these risks conceptually; the paper does not empirically measure downstream policy or distributional effects.
speculative negative Adoption of AI-Based HR Analytics and Its Impact on Firm Pro... Potential fairness, privacy, and regulatory impacts (theoretical, not measured)
Unclear liability frameworks increase perceived and real costs and can slow adoption by hospitals and insurers.
Policy analyses and procurement narratives noting liability uncertainty cited as a barrier to procurement and deployment.
medium-high negative Human-AI interaction and collaboration in radiology: from co... time-to-adoption, procurement decisions citing liability concerns, insurance/cov...
Up-front implementation costs commonly include procurement, integration with PACS/EMR, UI/UX development, regulatory compliance, and staff training; recurring costs include monitoring, data labeling, software updates, and cybersecurity.
Implementation reports, vendor and hospital accounts, and qualitative studies documenting cost categories (specific dollar amounts vary across settings and are rarely published in detail).
medium-high negative Human-AI interaction and collaboration in radiology: from co... implementation capital expenditures, annual operating expenditures
Uneven organizational supports can concentrate returns to AI in firms and workers that successfully actualize affordances, potentially widening wage and employment disparities; targeted policy and training investments can mitigate these effects.
Theoretical implication from the framework with policy recommendations; no empirical testing or sample reported in the paper.
speculative negative Revolutionizing Human Resource Development: A Theoretical Fr... wage inequality, employment disparities, concentration of AI returns across firm...
At the national level, AI-related innovations have yet to translate into measurable economic gains.
Interpretation based on the observed negative association between AI patent counts and GDP growth from the panel regressions (OLS, FE, Difference and System GMM) and theoretical reasoning about adoption/diffusion lags and complementary requirements; empirical support derives from the same models (sample details not provided).
speculative negative The Role of Artificial Intelligence in Economic Growth: Syst... GDP growth (national GDP growth rate)
Research literature synthesis shows 70-75% automation potential.
Quantitative estimate offered by the authors (70-75%) as part of function-by-function analysis; no described empirical evaluation or sample supporting the figure.
speculative negative Are Universities Becoming Obsolete in the Age of Artificial ... percent automation potential for research literature synthesis
Knowledge transmission (teaching/lecturing) shows 75-80% AI substitutability.
Authors' quantitative estimate presented in the analysis (75-80%); the paper does not detail empirical methods or validation samples for this percentage.
speculative negative Are Universities Becoming Obsolete in the Age of Artificial ... percent substitutability/automation potential of knowledge transmission
Administrative tasks face 75-80% disruption risk from AI.
Paper provides a quantitative estimate (75-80%) as part of its functional disruption assessment; no empirical methodology, dataset, or sample size is described to support the numeric range.
speculative negative Are Universities Becoming Obsolete in the Age of Artificial ... percent disruption/substitutability of administrative tasks
Demand-dependent pricing in the modeled energy load management setting creates a social dilemma: everyone would benefit from coordination, but in equilibrium agents often choose to incur congestion costs that cooperative turn-taking would avoid.
Theoretical/modeling analysis of consumer agents scheduling appliance use under demand-dependent pricing as described in the paper (analytical argument and/or model simulations). Specific sample sizes or simulation parameters are not given in the abstract.
medium-high negative Hybrid Human-Agent Social Dilemmas in Energy Markets presence of congestion costs vs coordinated turn-taking (system efficiency/total...
Policy-relevant implication (extrapolated): identity heterogeneity implies family- and purpose-driven entrepreneurs may be less likely to pursue AI-enabled innovation after income shocks, suggesting targeted outreach and low-risk entry paths to avoid widening digital divides.
Extrapolation from documented identity-heterogeneous declines in innovation after income shocks (empirical result) to probable patterns in AI adoption; AI adoption is not directly measured in the paper's dataset.
speculative negative Peer Influence and Individual Motivations in Global Small Bu... likelihood of AI-enabled innovation/adoption (extrapolated)
The United States shows a more market-driven (firm-dominated) patenting profile and comparatively weaker integration between AI and robotics patent trajectories.
Country-level and actor-type decomposition for U.S. patent filings (1980–2019), showing higher firm share of patents and weaker long-run association/cointegration between core AI and AI-enhanced robotics series compared with China (as reported in the paper).
medium-high negative The "Gold Rush" in AI and Robotics Patenting Activity. Do in... share of patents by firms in U.S.; strength of long-run integration between U.S....
There is a risk of a two‑tier market where high‑quality temporal‑preserving enhancements are costly, increasing inequality in experiential welfare and cognitive capital.
Speculative socioeconomic implication based on cost/access arguments and distributional concerns; no inequality modeling or empirical pricing data provided.
speculative negative XChronos and Conscious Transhumanism: A Philosophical Framew... distributional inequality in access to temporal‑quality enhancements and resulti...
Technical expansion without an accompanying theory of lived temporality risks increasing capabilities while degrading the qualitative depth of human experience (presence, attentional flow, felt meaning).
Argumentative claim supported by philosophical analysis and literature synthesis (neurophenomenology, attention economics); no empirical test reported (N/A).
speculative negative XChronos and Conscious Transhumanism: A Philosophical Framew... qualitative depth of human experience (presence, attentional flow, felt meaning)
High-quality, equitable climate information displays public-good characteristics (nonrival, nonexcludable at scale), so private incentives alone will underprovide geographically representative data and shared infrastructure.
Economic reasoning supported by observed concentration of compute and model development (mapping) and standard public-goods theory; no formal empirical market model estimated in the paper.
medium-high negative The Rise of AI in Weather and Climate Information and its Im... Level of provision of geographically representative data/shared infrastructure u...
Improving photorealism with objective color-fidelity metrics and refinement reduces the need for manual color correction and retouching in downstream workflows.
The paper and its summary argue this as an implication: higher-fidelity outputs from CFR/CFM reduce demand for manual editing. This is an economic/market implication rather than a directly evidenced experimental result (no labor-market causal study reported).
speculative negative Too Vivid to Be Real? Benchmarking and Calibrating Generativ... demand for manual color correction / retouching services
The paradigm implies potential market risks including vendor lock-in and concentration if only a few providers control scalable linear-optical samplers.
Conceptual risk analysis in the paper's discussion of economic implications; this is a qualitative argument built on the technical premise that trained models require access to specialized quantum sampling hardware for deployment.
speculative negative Universality of Classically Trainable, Quantum-Deployed Boso... market concentration and vendor lock-in risk
Heterogeneous trust levels across firms and schools may produce uneven productivity gains and widen performance gaps.
Logical implication and policy discussion in the paper; the cross-sectional study documents relationships between trust and outcomes but does not provide aggregate diffusion or cross-firm longitudinal evidence to confirm unequal sectoral diffusion.
speculative negative Algorithmic Trust and Managerial Effectiveness: The Role of ... distribution of productivity gains / performance gaps across organizations
Overreliance on unvetted AI can propagate biases; economic gains from AI therefore require governance, auditing, and accountability mechanisms.
Framed as a risk and policy recommendation in the discussion; not an empirical finding from the cross-sectional survey reported in the summary.
speculative negative Algorithmic Trust and Managerial Effectiveness: The Role of ... propagation of biases and need for governance/auditing (risk outcomes)
If FDI brings capital‑intensive, AI‑enabled production without complementary upskilling, it may exacerbate wage inequality and deepen labor market dualism in SSA.
Theoretical inference and analogy from documented patterns of skill‑biased technological change and FDI-driven inequality in the reviewed literature; empirical evidence specific to AI in SSA is lacking in the review.
speculative negative Foreign Direct Investment, Labor Markets, and Income Distrib... wage inequality, labor market dualism, employment composition
Full replacement of physicians would require breakthroughs in robust generalization, embodied capabilities, and legal/regulatory change—currently lacking.
Conceptual inference based on documented limitations (OOD generalization, lack of embodied/sensorimotor capability, unsettled legal/regulatory environment) summarized in the review.
speculative negative Will AI Replace Physicians in the Near Future? AI Adoption B... feasibility/timeline for physician replacement
Shrinking acquisition workforce capacity functions as a critical scarce input in defense AI economics; reduced human capital lowers the Department's ability to extract value from AI investments and to internalize externalities, decreasing effective returns to AI procurement.
Institutional trend evidence of workforce reductions combined with economic analysis treating institutional capacity as an input factor. No empirical quantification of returns or elasticity provided—this is analytical inference.
speculative negative FEATURE COMMENT: Governance as a "Blocker": How the Pentagon... effective returns to AI procurement given acquisition workforce capacity (theore...
Ambiguous standards increase uncertainty for contracting officers, raising the risk that they will either over-rely on vendor claims or inconsistently enforce requirements, both of which harm procurement integrity.
Policy-text analysis identifying vague criteria combined with qualitative analysis of procurement decision workflows; argument based on measurement and enforcement friction literature. No empirical study of contracting officer behavior provided.
speculative negative FEATURE COMMENT: Governance as a "Blocker": How the Pentagon... consistency and reliability of contracting officer enforcement and reliance on v...
Lower governance barriers and ambiguous procurement criteria (e.g., undefined 'model objectivity') can skew market competition toward suppliers that prioritize rapid iteration and opaque practices over rigorous assurance, harming traceability and quality.
Market-effects reasoning grounded in policy changes (document analysis) and qualitative institutional analysis of measurement/enforcement frictions. No market-share or supplier-behavior data provided.
speculative negative FEATURE COMMENT: Governance as a "Blocker": How the Pentagon... market composition and supplier incentives (favoring speed/opacity vs. assurance...
Mandating permissive contract terms and enabling waivers reduces private incentives for contractors to invest in safety and compliance, creating classical moral-hazard problems in defense AI procurement.
Economic reasoning and principal–agent analysis applied to the documented contractual changes (primary-source policy text). No empirical measurement of contractor investment behavior provided; claim is theoretical/inferential.
speculative negative FEATURE COMMENT: Governance as a "Blocker": How the Pentagon... contractor incentives to invest in safety and compliance (theoretical inference)
A mismatch between expanded waiver authority (Barrier Removal Board) and declining acquisition oversight capacity creates procurement-integrity and systemic risks: faster acquisition concurrent with weakened institutional checks increases likelihood of improper procurement decisions and unchecked deployment of unsafe or unvetted AI models.
Synthesis of primary-source policy analysis, institutional staffing trend evidence, and qualitative risk/scenario assessment using principal–agent and moral-hazard frameworks. This is a conceptual risk projection rather than an empirically derived probability estimate.
speculative negative FEATURE COMMENT: Governance as a "Blocker": How the Pentagon... probability and nature of procurement-integrity failures and deployments of unsa...
Emerging agentic/AGI capabilities introduce new failure modes and governance challenges that standard ML oversight may not cover.
Emerging literature, theoretical analyses, and expert opinion summarized in the synthesis; authors note limited empirical long-term data and characterize this as an emergent risk.
speculative negative Framework for Government Policy on Agentic and Generative AI... governance risk / novel failure modes
Centralized provision of high-quality coding models by a few vendors could produce vendor lock-in and increase platform power in software development inputs.
Market-structure analysis and industry observations synthesized in the paper; the claim is forward-looking and not established by longitudinal market data within the review.
speculative negative ChatGPT as a Tool for Programming Assistance and Code Develo... market concentration measures (e.g., HHI), indicators of vendor lock-in (switchi...
If many firms adopt AI generation without matching verification, aggregate fragility in software-dependent infrastructure could rise, increasing downtime costs and systemic economic risk.
Macro-level risk projection and system fragility argument in the paper; no macroeconomic modeling or empirical scenario analysis provided.
speculative negative Overton Framework v1.0: Cognitive Interlocks for Integrity i... aggregate system fragility metrics (downtime, outage frequency/severity), econom...
DAR dynamics (authority states, hysteresis, safe-exit times) introduce path-dependence and switching costs that should be treated as state variables in production and decision models of human–AI joint work.
Theoretical implications section arguing these elements add path-dependence and switching costs to economic/production models; analytic reasoning, not empirical measurement.
medium-high negative Human–AI Handovers: A Dynamic Authority Reversal Framework f... switching_costs; path_dependence_indicators; effect_on_throughput
Concentration risks exist because high fixed costs for safe integration and model adaptation may favor larger incumbents or platform providers.
Conceptual economic reasoning and practitioner commentary synthesized in the review; no empirical market-structure analysis or sample-based evidence included here.
speculative negative The Effectiveness of ChatGPT in Customer Service and Communi... market concentration indicators and barriers to entry related to AI integration ...
Imported AI systems may impose foreign values and norms, risking erosion of indigenous knowledge and social cohesion.
Normative and conceptual argument supported by cited case studies and policy analyses; no original anthropological or sociological fieldwork in the paper.
low-medium negative Towards Responsible Artificial Intelligence Adoption: Emergi... indicators of indigenous knowledge retention, measures of cultural alignment of ...
Deployed AI systems can produce algorithmic bias that harms marginalized groups when models are trained on skewed or non‑representative data.
Synthesis of prior empirical findings and case studies on algorithmic bias and fairness in ML systems; paper does not present new empirical tests.
medium-high negative Towards Responsible Artificial Intelligence Adoption: Emergi... fairness metrics, disparate error rates, incidence of discriminatory outcomes fo...
Human reviewers may over-trust machine-generated language and explanations (automation bias), reducing the likelihood of detecting fraudulent outputs.
Reference to automation-bias literature and conceptual examples; threat modeling and illustrative vignettes in the article.
medium-high negative Prompt Engineering or Prompt Fraud? Governance Challenges fo... detection rate of fraudulent outputs by human reviewers when outputs are machine...
Existing internal audit and compliance frameworks focus on access, transaction, and system controls, not on content-generation integrity.
Literature and standards review combined with threat-control mapping demonstrating gaps in content/provenance coverage.
medium-high negative Prompt Engineering or Prompt Fraud? Governance Challenges fo... coverage of content-generation integrity within existing audit/compliance framew...
AI systems and economic models are biased toward European languages because of a lack of vernacular corpora; investing in high-quality corpora for African vernaculars (e.g., Cameroon Pidgin) is necessary to avoid misallocation of resources.
Policy implication extrapolated from the study's finding that vernacular mediation materially affects outcomes, combined with general knowledge about data-driven AI bias; no empirical AI-modeling tests in the paper.
speculative negative (current state) / positive (recommended investment) From Linguistic Hybridity to Development Sovereignty: Pidgin... AI model performance and allocation bias (inferred, not measured)
The introduction of cognitive technologies into business processes sets new requirements for market opportunity analytics, and digital analytics makes it possible to measure their impact on business models and innovative solutions accurately.
Conceptual statement in the paper's introduction; no empirical test or numerical evidence provided in the excerpt.
speculative null result Innovative Cognitive Tools for Studying Market Opportunities... accuracy/capability of market opportunity analytics to measure impact of cogniti...
There are research opportunities to measure returns to 'teaching' (causal impact of configuring agents on human skill accumulation and earnings) and to model agent-platform ecosystems with network effects, spillovers, and endogenous quality hierarchies.
Author-stated research agenda and proposed empirical questions derived from the observed phenomena; not empirical results but recommended directions.
speculative null result When Openclaw Agents Learn from Each Other: Insights from Em... need for future causal estimates of returns to teaching and formal models of eco...
Future research should quantify calibration and skill of LLMs over longer horizons, develop ensembles that pair LLMs with domain specialists, and expand temporally grounded benchmarks across different conflict types.
Authors' stated research agenda and limitations: calls for longer-horizon calibration studies and broader benchmarking derived from observed domain heterogeneity and the scope of the present snapshot.
speculative null result When AI Navigates the Fog of War future research outputs (calibration metrics, ensemble methods, expanded benchma...
Recommended research priorities include hierarchical/temporal-decomposition methods, continual learning, robust adaptation to non-stationarity, and causal/structured reasoning to handle multi-factor interactions.
Paper discussion linking observed failure modes to methodological gaps and proposing research directions to address limitations; these are recommendations rather than experimentally validated claims.
speculative null result RetailBench: Evaluating Long-Horizon Autonomous Decision-Mak... suggested research directions to improve robustness (proposed, not empirically v...
Regulators and payers will require clinical validation, safety guarantees, and clear liability frameworks for human–AI shared decision-making before widescale deployment.
Policy implication stated in the paper's discussion section based on general regulatory considerations; not an empirical result from the study.
speculative null result Hierarchical Reinforcement Learning Based Human-AI Online Di... regulatory requirements / safety validation (anticipated, not measured)
Empirical economics research should use firm-level and pipeline microdata and quasi-experimental designs to estimate causal effects of AI adoption on outcomes like time-to-hit, preclinical attrition, IND filings, and NME approvals per R&D dollar.
Research recommendation offered in the paper based on identified gaps; not an evidence claim but an explicit methodological suggestion.
speculative null result Learning from the successes and failures of early artificial... recommended empirical outcomes to be measured: time-to-hit, preclinical attritio...
Policy does not predict individuals' intent to increase usage but functions as a marker of maturity—formalizing successful diffusion by Enthusiasts while acting as a gateway the Cautious have yet to reach.
Analysis of a policy variable within the survey dataset (N=147) showing no predictive relationship with individual intent to increase AI use, but an association between presence of policy and indicators of organizational adoption/maturity and differential reach into archetype groups.
medium-low null result Developers in the Age of AI: Adoption, Policy, and Diffusion... Individual intent to increase usage; organizational policy presence; organizatio...
Prospective studies are needed to evaluate AI's real-world clinical impact in acute GIB.
Authors' recommendation in the discussion and conclusion based on the predominance of retrospective evidence and few prospective/RCTs.
speculative null result How Do AI-Assisted Diagnostic Tools Impact Clinical Decision... need for prospective evaluation of clinical impact (recommendation)
The study recommends iterative prompt refinement, integration with adaptive learning models, and further exploration of autonomous self-prompting mechanisms.
Concluding recommendations derived from the study's results and interpretation; presented as future directions rather than empirically tested interventions within this study.
speculative null result Prompt Engineering for Autonomous AI Agents: Enhancing Decis... recommendations for methods and research directions (not an empirical outcome me...
Future research should explore sector-specific AI adoption challenges and long-term workforce adaptation strategies.
Author recommendation presented in the paper's discussion/future work section of the summary.
speculative null result Artificial intelligence and organisational transformation: t... N/A (recommended future research topics)
Recommended future research includes scalable interoperability solutions, longitudinal lifecycle value validation, human‑centred adoption strategies, and sustainability assessment methods.
Authors' explicit recommendations at the end of the review based on identified gaps in the literature.
speculative null result Digital Twins Across the Asset Lifecycle: Technical, Organis... priority research areas to address current evidence gaps
Researchers should combine qualitative studies with administrative/matched employer–employee data and experimental/quasi-experimental designs (pilot rollouts, staggered adoption) to identify causal effects of AI on tasks, productivity, and wages.
Methodological recommendation by authors based on limitations of their qualitative study (15 UX designers) and the need to quantify observed phenomena; not an empirical claim tested in the paper.
speculative null result The Values of Value in AI Adoption: Rethinking Efficiency in... recommended measurement approaches for causal identification (task allocation, p...
Recommended research directions: combine neural summary networks with explicit uncertainty modules (e.g., conditional normalizing flows), benchmark against classical econometric estimators, explore transfer learning for pre-trained estimators, and study interpretability and sensitivity to misspecification.
Authors' recommendations based on limitations and implications discussed in the paper; these are forward-looking propositions rather than empirically supported claims.
speculative null result ForwardFlow: Simulation only statistical inference using dee... research agenda items (qualitative recommendations)
Future research priorities include obtaining causal estimates (e.g., field experiments) of productivity gains from trust-mediated AI adoption and conducting cost–benefit analyses of trust-building interventions.
Study’s stated research agenda/recommendations; not an empirical claim but a recommended direction for follow-up research.
speculative null result Algorithmic Trust and Managerial Effectiveness: The Role of ... causal productivity estimates and cost–benefit outcomes (research recommendation...
Key research priorities include improving measurement of AI usage across countries, causal identification of long-run effects, and sectoral reskilling strategy evaluation.
Identified gaps and methodological limitations in the reviewed empirical literature (measurement heterogeneity, limited long-run panels, sectoral variation) motivating suggested future research agenda.
speculative null result S-TCO: A Sustainable Teacher Context Ontology for Educationa... quality and scope of future empirical evidence on AI economic effects