The Commonplace

Evidence (11633 claims)

Adoption: 7395 claims
Productivity: 6507 claims
Governance: 5877 claims
Human-AI Collaboration: 5157 claims
Innovation: 3492 claims
Org Design: 3470 claims
Labor Markets: 3224 claims
Skills & Training: 2608 claims
Inequality: 1835 claims

Evidence Matrix

Claim counts by outcome category and direction of finding.

Outcome Positive Negative Mixed Null Total
Other 609 159 77 736 1615
Governance & Regulation 664 329 160 99 1273
Organizational Efficiency 624 143 105 70 949
Technology Adoption Rate 502 176 98 78 861
Research Productivity 348 109 48 322 836
Output Quality 391 120 44 40 595
Firm Productivity 385 46 85 17 539
Decision Quality 275 143 62 34 521
AI Safety & Ethics 183 241 59 30 517
Market Structure 152 154 109 20 440
Task Allocation 158 50 56 26 295
Innovation Output 178 23 38 17 257
Skill Acquisition 137 52 50 13 252
Fiscal & Macroeconomic 120 64 38 23 252
Employment Level 93 46 96 12 249
Firm Revenue 130 43 26 3 202
Consumer Welfare 99 51 40 11 201
Inequality Measures 36 105 40 6 187
Task Completion Time 134 18 6 5 163
Worker Satisfaction 79 54 16 11 160
Error Rate 64 78 8 1 151
Regulatory Compliance 69 64 14 3 150
Training Effectiveness 81 15 13 18 129
Wages & Compensation 70 25 22 6 123
Team Performance 74 16 21 9 121
Automation Exposure 41 48 19 9 120
Job Displacement 11 71 16 1 99
Developer Productivity 71 14 9 3 98
Hiring & Recruitment 49 7 8 3 67
Social Protection 26 14 8 2 50
Creative Output 26 14 6 2 49
Skill Obsolescence 5 37 5 1 48
Labor Share of Income 12 13 12 37
Worker Turnover 11 12 3 26
Industry 1 1
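The matrix lends itself to simple summaries. A minimal Python sketch, with counts transcribed from four rows above; shares are computed over directional claims only (Positive/Negative/Mixed, excluding Null):

```python
# Counts (Positive, Negative, Mixed, Null) transcribed from four
# rows of the Evidence Matrix above.
rows = {
    "Firm Productivity":    (385, 46, 85, 17),
    "Error Rate":           (64, 78, 8, 1),
    "Job Displacement":     (11, 71, 16, 1),
    "Task Completion Time": (134, 18, 6, 5),
}

def positive_share(counts):
    """Positive claims as a fraction of directional claims (Null excluded)."""
    pos, neg, mixed, null = counts
    directional = pos + neg + mixed
    return pos / directional if directional else 0.0

# Print outcomes from most to least favorable.
for outcome, counts in sorted(rows.items(), key=lambda kv: -positive_share(kv[1])):
    print(f"{outcome:22s} {positive_share(counts):.0%}")
```

On these four rows the ordering runs from Task Completion Time (mostly positive findings) down to Job Displacement (mostly negative), matching the qualitative pattern visible in the table.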
Lower governance barriers and ambiguous procurement criteria (e.g., undefined 'model objectivity') can skew market competition toward suppliers that prioritize rapid iteration and opaque practices over rigorous assurance, harming traceability and quality.
Market-effects reasoning grounded in policy changes (document analysis) and qualitative institutional analysis of measurement/enforcement frictions. No market-share or supplier-behavior data provided.
speculative negative FEATURE COMMENT: Governance as a "Blocker": How the Pentagon... market composition and supplier incentives (favoring speed/opacity vs. assurance...
Mandating permissive contract terms and enabling waivers reduces private incentives for contractors to invest in safety and compliance, creating classical moral-hazard problems in defense AI procurement.
Economic reasoning and principal–agent analysis applied to the documented contractual changes (primary-source policy text). No empirical measurement of contractor investment behavior provided; claim is theoretical/inferential.
speculative negative FEATURE COMMENT: Governance as a "Blocker": How the Pentagon... contractor incentives to invest in safety and compliance (theoretical inference)
A mismatch between expanded waiver authority (Barrier Removal Board) and declining acquisition oversight capacity creates procurement-integrity and systemic risks: faster acquisition concurrent with weakened institutional checks increases the likelihood of improper procurement decisions and unchecked deployment of unsafe or unvetted AI models.
Synthesis of primary-source policy analysis, institutional staffing trend evidence, and qualitative risk/scenario assessment using principal–agent and moral-hazard frameworks. This is a conceptual risk projection rather than an empirically derived probability estimate.
speculative negative FEATURE COMMENT: Governance as a "Blocker": How the Pentagon... probability and nature of procurement-integrity failures and deployments of unsa...
Emerging agentic/AGI capabilities introduce new failure modes and governance challenges that standard ML oversight may not cover.
Emerging literature, theoretical analyses, and expert opinion summarized in the synthesis; authors note limited empirical long-term data and characterize this as an emergent risk.
speculative negative Framework for Government Policy on Agentic and Generative AI... governance risk / novel failure modes
Centralized provision of high-quality coding models by a few vendors could produce vendor lock-in and increase platform power in software development inputs.
Market-structure analysis and industry observations synthesized in the paper; the claim is forward-looking and not established by longitudinal market data within the review.
speculative negative ChatGPT as a Tool for Programming Assistance and Code Develo... market concentration measures (e.g., HHI), indicators of vendor lock-in (switchi...
If many firms adopt AI generation without matching verification, aggregate fragility in software-dependent infrastructure could rise, increasing downtime costs and systemic economic risk.
Macro-level risk projection and system fragility argument in the paper; no macroeconomic modeling or empirical scenario analysis provided.
speculative negative Overton Framework v1.0: Cognitive Interlocks for Integrity i... aggregate system fragility metrics (downtime, outage frequency/severity), econom...
This reversal of the burden of proof creates moral-hazard-like behavior: incentives for speed reduce verification effort.
Theoretical argument built on the micro-coercion mechanism and economic reasoning; no empirical validation provided.
speculative negative Overton Framework v1.0: Cognitive Interlocks for Integrity i... verification effort per artifact (e.g., reviewer time), proportion of unchecked ...
Under time pressure, developers adopt an implicit default of accepting plausible machine outputs unless they can disprove them (the 'micro-coercion of speed'), effectively reversing the burden of proof.
Behavioral mechanism posited from descriptive reasoning and thought experiments; no behavioral experiments, surveys, or observational data reported.
speculative negative Overton Framework v1.0: Cognitive Interlocks for Integrity i... developer acceptance rate of machine-generated outputs under time pressure; rate...
DAR dynamics (authority states, hysteresis, safe-exit times) introduce path-dependence and switching costs that should be treated as state variables in production and decision models of human–AI joint work.
Theoretical implications section arguing these elements add path-dependence and switching costs to economic/production models; analytic reasoning, not empirical measurement.
medium-high negative Human–AI Handovers: A Dynamic Authority Reversal Framework f... switching_costs; path_dependence_indicators; effect_on_throughput
Concentration risks exist because high fixed costs for safe integration and model adaptation may favor larger incumbents or platform providers.
Conceptual economic reasoning and practitioner commentary synthesized in the review; no empirical market-structure analysis or sample-based evidence included here.
speculative negative The Effectiveness of ChatGPT in Customer Service and Communi... market concentration indicators and barriers to entry related to AI integration ...
Rich contextual memories and continuous home interaction create valuable data streams that could enable firms to capture substantial value, raising concerns about data governance, consent, and monetization.
Authors' policy and economic implications discussion noting that MMCM-like memories generate valuable data; this is a conceptual/policy claim rather than empirically tested within the study.
speculative negative Context-Rich Adaptive Embodied Agents: Enhancing LLM-Powered... Data generation and value-capture potential (qualitative implication)
Imported AI systems may impose foreign values and norms, risking erosion of indigenous knowledge and social cohesion.
Normative and conceptual argument supported by cited case studies and policy analyses; no original anthropological or sociological fieldwork in the paper.
low-medium negative Towards Responsible Artificial Intelligence Adoption: Emergi... indicators of indigenous knowledge retention, measures of cultural alignment of ...
Deployed AI systems can produce algorithmic bias that harms marginalized groups when models are trained on skewed or non‑representative data.
Synthesis of prior empirical findings and case studies on algorithmic bias and fairness in ML systems; paper does not present new empirical tests.
medium-high negative Towards Responsible Artificial Intelligence Adoption: Emergi... fairness metrics, disparate error rates, incidence of discriminatory outcomes fo...
Human reviewers may over-trust machine-generated language and explanations (automation bias), reducing the likelihood of detecting fraudulent outputs.
Reference to automation-bias literature and conceptual examples; threat modeling and illustrative vignettes in the article.
medium-high negative Prompt Engineering or Prompt Fraud? Governance Challenges fo... detection rate of fraudulent outputs by human reviewers when outputs are machine...
Existing internal audit and compliance frameworks focus on access, transaction, and system controls, not on content-generation integrity.
Literature and standards review combined with threat-control mapping demonstrating gaps in content/provenance coverage.
medium-high negative Prompt Engineering or Prompt Fraud? Governance Challenges fo... coverage of content-generation integrity within existing audit/compliance framew...
AI systems and economic models are biased toward European languages because of lack of vernacular corpora; investing in high-quality corpora for African vernaculars (e.g., Cameroon Pidgin) is necessary to avoid misallocation of resources.
Policy implication extrapolated from the study's finding that vernacular mediation materially affects outcomes, combined with general knowledge about data-driven AI bias; no empirical AI-modeling tests in the paper.
speculative negative (current state) / positive (recommended investment) From Linguistic Hybridity to Development Sovereignty: Pidgin... AI model performance and allocation bias (inferred, not measured)
The introduction of cognitive technologies into business processes creates new requirements for market-opportunity analytics, and digital analytics makes it possible to measure their impact on business models and innovative solutions accurately.
Conceptual statement in the paper's introduction; no empirical test or numerical evidence provided in the excerpt.
speculative null result Innovative Cognitive Tools for Studying Market Opportunities... accuracy/capability of market opportunity analytics to measure impact of cogniti...
Using calibrated, employee-level predictions enables marginal-cost analyses and prioritization (micro-targeting) that improve retention efficiency relative to uniform, across-the-board policies.
Methodological argument: calibrated individual probabilities plus counterfactual impact estimates enable ranking employees by expected gain from interventions and thus marginal-cost prioritization (no empirical cost–benefit calculations provided).
speculative null result Explainable AI for Employee Retention in Green Human Resourc... potential efficiency gains in retention resource allocation (theoretical outcome...
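The marginal-cost prioritization argument can be made concrete: given per-employee estimates of an intervention's effect on attrition (from a calibrated model plus counterfactual impact estimates), rank employees by expected retained value per dollar and fund interventions greedily within a budget. A minimal sketch; the employee IDs, uplifts, values, and costs below are illustrative, not from the study:

```python
from dataclasses import dataclass

@dataclass
class Employee:
    id: str
    uplift: float  # estimated reduction in attrition probability if treated
    value: float   # value of retaining this employee
    cost: float    # cost of the retention intervention

def expected_gain_per_dollar(e: Employee) -> float:
    """Expected retained value per dollar spent: uplift * value / cost."""
    return e.uplift * e.value / e.cost

def prioritize(employees, budget):
    """Greedy micro-targeting: fund interventions in order of marginal return."""
    chosen = []
    for e in sorted(employees, key=expected_gain_per_dollar, reverse=True):
        if e.cost <= budget:
            chosen.append(e.id)
            budget -= e.cost
    return chosen

team = [
    Employee("A", uplift=0.20, value=100_000, cost=1_000),
    Employee("B", uplift=0.05, value=200_000, cost=2_000),
    Employee("C", uplift=0.10, value=50_000, cost=500),
]
print(prioritize(team, budget=1_500))  # ['A', 'C']
```

The contrast with a uniform policy is the point of the entry above: spending the same budget equally across all employees ignores the heterogeneity in `uplift * value / cost` that calibrated individual predictions expose.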
There are research opportunities to measure returns to 'teaching' (causal impact of configuring agents on human skill accumulation and earnings) and to model agent-platform ecosystems with network effects, spillovers, and endogenous quality hierarchies.
Author-stated research agenda and proposed empirical questions derived from the observed phenomena; not empirical results but recommended directions.
speculative null result When Openclaw Agents Learn from Each Other: Insights from Em... need for future causal estimates of returns to teaching and formal models of eco...
Future research should quantify calibration and skill of LLMs over longer horizons, develop ensembles that pair LLMs with domain specialists, and expand temporally grounded benchmarks across different conflict types.
Authors' stated research agenda and limitations: calls for longer-horizon calibration studies and broader benchmarking derived from observed domain heterogeneity and the scope of the present snapshot.
speculative null result When AI Navigates the Fog of War future research outputs (calibration metrics, ensemble methods, expanded benchma...
Recommended research priorities include hierarchical/temporal-decomposition methods, continual learning, robust adaptation to non-stationarity, and causal/structured reasoning to handle multi-factor interactions.
Paper discussion linking observed failure modes to methodological gaps and proposing research directions to address limitations; these are recommendations rather than experimentally validated claims.
speculative null result RetailBench: Evaluating Long-Horizon Autonomous Decision-Mak... suggested research directions to improve robustness (proposed, not empirically v...
Regulators and payers will require clinical validation, safety guarantees, and clear liability frameworks for human–AI shared decision-making before widescale deployment.
Policy implication stated in the paper's discussion section based on general regulatory considerations; not an empirical result from the study.
speculative null result Hierarchical Reinforcement Learning Based Human-AI Online Di... regulatory requirements / safety validation (anticipated, not measured)
Broader implication for AI economics: firm-level attention allocation, nonlinearities, thresholds, and governance/incentive design should be incorporated into economic models of AI adoption because AI's effects on workers and CSR are not monotonic and depend on industry and governance.
Synthesis of empirical findings (inverted U and moderator effects) and theoretical argument; recommended direction for future modeling and empirical work stated in the paper.
speculative null result Attention to Whom? AI Adoption and Corporate Social Responsi... N/A (theoretical/modeling implication)
Empirical economics research should use firm-level and pipeline microdata and quasi-experimental designs to estimate causal effects of AI adoption on outcomes like time-to-hit, preclinical attrition, IND filings, and NME approvals per R&D dollar.
Research recommendation offered in the paper based on identified gaps; not an evidence claim but an explicit methodological suggestion.
speculative null result Learning from the successes and failures of early artificial... recommended empirical outcomes to be measured: time-to-hit, preclinical attritio...
Policy does not predict individuals' intent to increase usage but functions as a marker of maturity—formalizing successful diffusion by Enthusiasts while acting as a gateway the Cautious have yet to reach.
Analysis of a policy variable within the survey dataset (N=147) showing no predictive relationship with individual intent to increase AI use, but an association between presence of policy and indicators of organizational adoption/maturity and differential reach into archetype groups.
medium-low null result Developers in the Age of AI: Adoption, Policy, and Diffusion... Individual intent to increase usage; organizational policy presence; organizatio...
Prospective studies are needed to evaluate AI's real-world clinical impact in acute GIB.
Authors' recommendation in the discussion and conclusion based on the predominance of retrospective evidence and few prospective/RCTs.
speculative null result How Do AI-Assisted Diagnostic Tools Impact Clinical Decision... need for prospective evaluation of clinical impact (recommendation)
The study recommends iterative prompt refinement, integration with adaptive learning models, and further exploration of autonomous self-prompting mechanisms.
Concluding recommendations derived from the study's results and interpretation; presented as future directions rather than empirically tested interventions within this study.
speculative null result Prompt Engineering for Autonomous AI Agents: Enhancing Decis... recommendations for methods and research directions (not an empirical outcome me...
Future research should explore sector-specific AI adoption challenges and long-term workforce adaptation strategies.
Author recommendation presented in the paper's discussion/future work section of the summary.
speculative null result Artificial intelligence and organisational transformation: t... N/A (recommended future research topics)
Recommended future research includes scalable interoperability solutions, longitudinal lifecycle value validation, human‑centred adoption strategies, and sustainability assessment methods.
Authors' explicit recommendations at the end of the review based on identified gaps in the literature.
speculative null result Digital Twins Across the Asset Lifecycle: Technical, Organis... priority research areas to address current evidence gaps
Researchers should combine qualitative studies with administrative/matched employer–employee data and experimental/quasi-experimental designs (pilot rollouts, staggered adoption) to identify causal effects of AI on tasks, productivity, and wages.
Methodological recommendation by authors based on limitations of their qualitative study (15 UX designers) and the need to quantify observed phenomena; not an empirical claim tested in the paper.
speculative null result The Values of Value in AI Adoption: Rethinking Efficiency in... recommended measurement approaches for causal identification (task allocation, p...
Recommended research directions: combine neural summary networks with explicit uncertainty modules (e.g., conditional normalizing flows), benchmark against classical econometric estimators, explore transfer learning for pre-trained estimators, and study interpretability and sensitivity to misspecification.
Authors' recommendations based on limitations and implications discussed in the paper; these are forward-looking propositions rather than empirically supported claims.
speculative null result ForwardFlow: Simulation only statistical inference using dee... research agenda items (qualitative recommendations)
Future research priorities include obtaining causal estimates (e.g., field experiments) of productivity gains from trust-mediated AI adoption and conducting cost–benefit analyses of trust-building interventions.
Study’s stated research agenda/recommendations; not an empirical claim but a recommended direction for follow-up research.
speculative null result Algorithmic Trust and Managerial Effectiveness: The Role of ... causal productivity estimates and cost–benefit outcomes (research recommendation...
AI economics should prioritize causal identification of who benefits and who loses when AI is introduced into credit and other financial services, and model endogenous platform behavior including competition and regulatory responses.
Research agenda proposed by the authors based on identified gaps in the literature; prescriptive guidance rather than empirically tested claims.
speculative null result Financial Inclusion in the Age of FinTech Platforms: Opportu... research priorities (causal identification, endogenous platform behavior) rather...
Regulatory tools to consider include algorithmic impact assessments, data portability/interoperability mandates, fairness enforcement, sandboxing with post-deployment audits, and macroprudential tools for platform risk.
Policy recommendation derived from literature review and gap analysis; framed as suggested instruments rather than tested interventions.
speculative null result Financial Inclusion in the Age of FinTech Platforms: Opportu... effectiveness of regulatory tools on consumer protection, competition, and syste...
Key research priorities include improving measurement of AI usage across countries, causal identification of long-run effects, and sectoral reskilling strategy evaluation.
Identified gaps and methodological limitations in the reviewed empirical literature (measurement heterogeneity, limited long-run panels, sectoral variation) motivating suggested future research agenda.
speculative null result S-TCO: A Sustainable Teacher Context Ontology for Educationa... quality and scope of future empirical evidence on AI economic effects
To measure and monitor these effects, researchers should track firm-level adoption of AI features, fulfillment automation intensity, platform-mediated market entry, and task-level labor shifts.
Author recommendations based on gaps identified in the case-based and multi-modal empirical work and the sensitivity of results to adoption measures; not an empirical finding but a methodological claim.
speculative null result Artificial Intelligence–Enabled E-Commerce Systems and Autom... measurement coverage metrics (availability/quality of adoption and task-shift da...
Policy priorities should differ by national Skill Imbalance: countries with strong demand for new skills should prioritize education and reskilling, while countries with strong supply should prioritize firm absorption (innovation, financing, technology adoption).
Interpretation of cross-country Skill Imbalance Index and its implications; prescriptive recommendation based on the observed demand–supply patterns rather than causal testing of policies.
speculative null result Bridging Skill Gaps for the Future Policy emphasis (education/reskilling versus firm absorption) inferred from Skil...
The threshold for taxing AI may be crossed once AI becomes sufficiently capable in substituting humans across cognitive tasks.
Model-based comparative-static/threshold analysis showing that higher AI substitutability for cognitive tasks increases the likelihood that cognitive workers will consider switching to manual jobs, thereby meeting the model's tax-initiation condition.
speculative positive Workers' Incentives and the Optimal Taxation of AI whether/when the model's tax-initiation threshold is crossed as a function of AI...
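The threshold logic can be written in stylized notation (ours, for illustration; not the paper's model): let \(\sigma\) index AI's substitutability for cognitive tasks, \(w_c(\sigma)\) the cognitive wage with \(w_c'(\sigma) < 0\), and \(w_m\) the manual outside-option wage. The tax-initiation condition binds at

\[
\sigma^{*} = \inf\{\sigma : w_c(\sigma) \le w_m\},
\]

so rising substitutability pushes the cognitive wage toward the manual outside option, and the model's AI tax initiates once \(\sigma \ge \sigma^{*}\).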
The results indicate the need to build digital infrastructure, develop human capital, and support open data.
Policy recommendation provided in the paper based on the empirical findings linking cognitive tools to market opportunities (specific cost–benefit or implementation analyses not provided in the excerpt).
speculative positive Innovative Cognitive Tools for Studying Market Opportunities... policy actions (digital infrastructure, human capital development, open data sup...
Developing domain-specific vernacular NLP and speech models (health, agriculture, education) would help replicate pragmatic features (proverbs, registers) that enable epistemic appropriation.
Policy/research recommendation based on qualitative findings that proverbs and registers confer legitimacy and facilitate knowledge transfer; no experimental NLP work reported in study.
speculative positive From Linguistic Hybridity to Development Sovereignty: Pidgin... potential improvement in vernacular AI-assisted advisory effectiveness (proposed...
Local-language (vernacular) inclusion improves economic returns to development interventions by increasing comprehension and adoption, thereby improving program cost-effectiveness.
Logical extrapolation from observed higher comprehension and adoption rates in the field sample (N = 45); no direct economic cost–benefit analysis reported in the study—claim framed as implication for AI economics.
speculative positive From Linguistic Hybridity to Development Sovereignty: Pidgin... implied economic return / cost-effectiveness (inferred from uptake/comprehension...
Economic and organizational benefits (e.g., cost-effective retention, preserved human capital for environmental innovation) are plausible outcomes of applying the approach, but require further causal and cost analyses.
Paper discusses implications and hypothesizes ROI from reduced turnover (less recruiting/onboarding/productivity loss) and preservation of green capabilities; no empirical cost or productivity data provided in the presented summary.
speculative positive Explainable AI for Employee Retention in Green Human Resourc... organizational outcomes: turnover costs avoided, retained human capital, product...
Findings support regulatory focus on transparency, auditability, and consumer protections because low trust would slow adoption and reduce welfare gains from AI marketing.
Policy implication derived from empirical association between trust and adoption/loyalty in the study; regulatory effects were not empirically tested in the paper.
speculative positive Trust in AI-Driven Marketing and its Impact on Brand Loyalty... Policy relevance (inferred impact on adoption and welfare)
Investments in trustworthy AI systems (privacy, transparency, fairness) can increase retention and customer lifetime value because trust raises loyalty directly and via adoption.
Managerial implication inferred from observed positive direct and indirect effects of Trust on Brand Loyalty in the SEM results; CLV and retention were not directly measured.
speculative positive Trust in AI-Driven Marketing and its Impact on Brand Loyalty... Customer retention / Customer Lifetime Value (inferred, not directly measured)
Firms investing in human–AI co‑creation infrastructure may gain a resilience premium; policymakers and standards bodies should consider governance frameworks for adaptive algorithmic systems balancing responsiveness with oversight.
Policy and investment implication inferred from empirical results on resilience and detection performance; direct evidence of market valuation or policy outcomes is not reported.
speculative positive The Algorithmic Canvas: On the Autopoietic Redefinition of S... investment returns/resilience premium and policy/governance needs (inferred)
Greater reliance on algorithmic co‑creation shifts labor demand toward roles skilled in model oversight, interpretive judgment, and human‑machine interaction rather than purely manual segmentation tasks.
Inference from the operationalization of human–AI co‑creation via the Canvas and observed changes in practitioner workflows during 6‑month ethnography (n = 23); workforce composition effects are not empirically measured at scale in the study.
speculative positive The Algorithmic Canvas: On the Autopoietic Redefinition of S... labor and skill composition (shift toward oversight and human–AI interaction ski...
A ~90% reduction in strategic planning cycle time indicates lower managerial coordination costs and faster reallocation of marketing and R&D budgets.
Inference from measured reduction in planning cycle length (~90%) observed in the study (see ethnography/system logs); direct measures of coordination costs and budget reallocation outcomes are not reported in the summary.
speculative positive The Algorithmic Canvas: On the Autopoietic Redefinition of S... managerial coordination costs and speed of resource reallocation (inferred)
Algorithmic Canvas–enabled autopoietic STP increases firms' ability to adapt endogenously to shocks, implying higher realized productivity in volatile markets and lower deadweight losses from mis‑targeting.
Inference drawn from empirical findings on resilience and detection performance (44% greater resilience, improved signal detection) and theoretical reasoning about dynamic capabilities; productivity and deadweight loss are not directly measured in the reported empirical results.
speculative positive The Algorithmic Canvas: On the Autopoietic Redefinition of S... firm productivity and welfare effects (inferred)
Economic evaluations of AI adoption should include psychological and human-capital externalities (effects on self-efficacy, skill depreciation, job satisfaction) to fully account for welfare and productivity dynamics.
Argument grounded in experimental and survey findings showing psychological impacts of AI-use mode; general recommendation for research and evaluation rather than an empirical finding.
speculative positive Relying on AI at work reduces self-efficacy, ownership, and ... recommended evaluation scope (inclusion of psychological/human-capital measures)
Building and maintaining an open-access disclosure repository would enable comparability, aggregation, and public appraisal of environmental pressures.
Policy recommendation derived from conceptual analysis; no implemented repository or empirical evaluation reported.
speculative positive A golden opportunity: Corporate sustainability reporting as ... data accessibility, comparability, and ability to aggregate environmental disclo...