The Commonplace

Evidence (4793 claims)

Adoption: 5539 claims
Productivity: 4793 claims
Governance: 4333 claims
Human-AI Collaboration: 3326 claims
Labor Markets: 2657 claims
Innovation: 2510 claims
Org Design: 2469 claims
Skills & Training: 2017 claims
Inequality: 1378 claims

Evidence Matrix

Claim counts by outcome category and direction of finding.

Outcome Positive Negative Mixed Null Total
Other 402 112 67 480 1076
Governance & Regulation 402 192 122 62 790
Research Productivity 249 98 34 311 697
Organizational Efficiency 395 95 70 40 603
Technology Adoption Rate 321 126 73 39 564
Firm Productivity 306 39 70 12 432
Output Quality 256 66 25 28 375
AI Safety & Ethics 116 177 44 24 363
Market Structure 107 128 85 14 339
Decision Quality 177 76 38 20 315
Fiscal & Macroeconomic 89 58 33 22 209
Employment Level 77 34 80 9 202
Skill Acquisition 92 33 40 9 174
Innovation Output 120 12 23 12 168
Firm Revenue 98 34 22 154
Consumer Welfare 73 31 37 7 148
Task Allocation 84 16 33 7 140
Inequality Measures 25 77 32 5 139
Regulatory Compliance 54 63 13 3 133
Error Rate 44 51 6 101
Task Completion Time 88 5 4 3 100
Training Effectiveness 58 12 12 16 99
Worker Satisfaction 47 32 11 7 97
Wages & Compensation 53 15 20 5 93
Team Performance 47 12 15 7 82
Automation Exposure 24 22 9 6 62
Job Displacement 6 38 13 57
Hiring & Recruitment 41 4 6 3 54
Developer Productivity 34 4 3 1 42
Social Protection 22 10 6 2 40
Creative Output 16 7 5 1 29
Labor Share of Income 12 5 9 26
Skill Obsolescence 3 20 2 25
Worker Turnover 10 12 3 25
Active filter: Productivity
Technical expansion without an accompanying theory of lived temporality risks increasing capabilities while degrading the qualitative depth of human experience (presence, attentional flow, felt meaning).
Argumentative claim supported by philosophical analysis and literature synthesis (neurophenomenology, attention economics); no empirical test reported (N/A).
speculative negative XChronos and Conscious Transhumanism: A Philosophical Framew... qualitative depth of human experience (presence, attentional flow, felt meaning)
Differential access to higher-quality (paid) versus free GenAI tools and differing ability to engage with the tool could widen inequality among students and institutions.
Authors' implication based on student-reported concerns about limitations of free ChatGPT versions and on heterogeneous gains across disciplines; this is a policy/implication claim not directly measured in the experiment.
speculative negative Expanding the lens: multi-institutional evidence on student ... equity/inequality in access and learning outcomes (not directly measured)
High-quality, equitable climate information displays public-good characteristics (nonrival, nonexcludable at scale), so private incentives alone will underprovide geographically representative data and shared infrastructure.
Economic reasoning supported by observed concentration of compute and model development (mapping) and standard public-goods theory; no formal empirical market model estimated in the paper.
medium-high negative The Rise of AI in Weather and Climate Information and its Im... Level of provision of geographically representative data/shared infrastructure u...
Heterogeneous trust levels across firms and schools may produce uneven productivity gains and widen performance gaps.
Logical implication and policy discussion in the paper; the cross-sectional study documents relationships between trust and outcomes but does not provide aggregate diffusion or cross-firm longitudinal evidence to confirm unequal sectoral diffusion.
speculative negative Algorithmic Trust and Managerial Effectiveness: The Role of ... distribution of productivity gains / performance gaps across organizations
Overreliance on unvetted AI can propagate biases; economic gains from AI therefore require governance, auditing, and accountability mechanisms.
Framed as a risk and policy recommendation in the discussion; not an empirical finding from the cross-sectional survey reported in the summary.
speculative negative Algorithmic Trust and Managerial Effectiveness: The Role of ... propagation of biases and need for governance/auditing (risk outcomes)
Full replacement of physicians would require breakthroughs in robust generalization, embodied capabilities, and legal/regulatory change—currently lacking.
Conceptual inference based on documented limitations (OOD generalization, lack of embodied/sensorimotor capability, unsettled legal/regulatory environment) summarized in the review.
speculative negative Will AI Replace Physicians in the Near Future? AI Adoption B... feasibility/timeline for physician replacement
Emerging agentic/AGI capabilities introduce new failure modes and governance challenges that standard ML oversight may not cover.
Emerging literature, theoretical analyses, and expert opinion summarized in the synthesis; authors note limited empirical long-term data and characterize this as an emergent risk.
speculative negative Framework for Government Policy on Agentic and Generative AI... governance risk / novel failure modes
Centralized provision of high-quality coding models by a few vendors could produce vendor lock-in and increase platform power in software development inputs.
Market-structure analysis and industry observations synthesized in the paper; the claim is forward-looking and not established by longitudinal market data within the review.
speculative negative ChatGPT as a Tool for Programming Assistance and Code Develo... market concentration measures (e.g., HHI), indicators of vendor lock-in (switchi...
If many firms adopt AI generation without matching verification, aggregate fragility in software-dependent infrastructure could rise, increasing downtime costs and systemic economic risk.
Macro-level risk projection and system fragility argument in the paper; no macroeconomic modeling or empirical scenario analysis provided.
speculative negative Overton Framework v1.0: Cognitive Interlocks for Integrity i... aggregate system fragility metrics (downtime, outage frequency/severity), econom...
This reversal of the burden of proof creates moral-hazard-like behavior: incentives for speed reduce verification effort.
Theoretical argument built on the micro-coercion mechanism and economic reasoning; no empirical validation provided.
speculative negative Overton Framework v1.0: Cognitive Interlocks for Integrity i... verification effort per artifact (e.g., reviewer time), proportion of unchecked ...
Under time pressure, developers adopt an implicit default of accepting plausible machine outputs unless they can disprove them (the 'micro-coercion of speed'), effectively reversing the burden of proof.
Behavioral mechanism posited from descriptive reasoning and thought experiments; no behavioral experiments, surveys, or observational data reported.
speculative negative Overton Framework v1.0: Cognitive Interlocks for Integrity i... developer acceptance rate of machine-generated outputs under time pressure; rate...
DAR dynamics (authority states, hysteresis, safe-exit times) introduce path-dependence and switching costs that should be treated as state variables in production and decision models of human–AI joint work.
Theoretical implications section arguing these elements add path-dependence and switching costs to economic/production models; analytic reasoning, not empirical measurement.
medium-high negative Human–AI Handovers: A Dynamic Authority Reversal Framework f... switching_costs; path_dependence_indicators; effect_on_throughput
Concentration risks exist because high fixed costs for safe integration and model adaptation may favor larger incumbents or platform providers.
Conceptual economic reasoning and practitioner commentary synthesized in the review; no empirical market-structure analysis or sample-based evidence included here.
speculative negative The Effectiveness of ChatGPT in Customer Service and Communi... market concentration indicators and barriers to entry related to AI integration ...
Rich contextual memories and continuous home interaction create valuable data streams that could enable firms to capture substantial value, raising concerns about data governance, consent, and monetization.
Authors' policy and economic implications discussion noting that MMCM-like memories generate valuable data; this is a conceptual/policy claim rather than empirically tested within the study.
speculative negative Context-Rich Adaptive Embodied Agents: Enhancing LLM-Powered... Data generation and value-capture potential (qualitative implication)
Imported AI systems may impose foreign values and norms, risking erosion of indigenous knowledge and social cohesion.
Normative and conceptual argument supported by cited case studies and policy analyses; no original anthropological or sociological fieldwork in the paper.
low-medium negative Towards Responsible Artificial Intelligence Adoption: Emergi... indicators of indigenous knowledge retention, measures of cultural alignment of ...
Deployed AI systems can produce algorithmic bias that harms marginalized groups when models are trained on skewed or non‑representative data.
Synthesis of prior empirical findings and case studies on algorithmic bias and fairness in ML systems; paper does not present new empirical tests.
medium-high negative Towards Responsible Artificial Intelligence Adoption: Emergi... fairness metrics, disparate error rates, incidence of discriminatory outcomes fo...
Human reviewers may over-trust machine-generated language and explanations (automation bias), reducing the likelihood of detecting fraudulent outputs.
Reference to automation-bias literature and conceptual examples; threat modeling and illustrative vignettes in the article.
medium-high negative Prompt Engineering or Prompt Fraud? Governance Challenges fo... detection rate of fraudulent outputs by human reviewers when outputs are machine...
Existing internal audit and compliance frameworks focus on access, transaction, and system controls, not on content-generation integrity.
Literature and standards review combined with threat-control mapping demonstrating gaps in content/provenance coverage.
medium-high negative Prompt Engineering or Prompt Fraud? Governance Challenges fo... coverage of content-generation integrity within existing audit/compliance framew...
Using calibrated, employee-level predictions enables marginal-cost analyses and prioritization (micro-targeting) to improve retention efficiency relative to uniform, across-the-board policies.
Methodological argument: calibrated individual probabilities plus counterfactual impact estimates enable ranking employees by expected gain from interventions and thus marginal-cost prioritization (no empirical cost–benefit calculations provided).
speculative null result Explainable AI for Employee Retention in Green Human Resourc... potential efficiency gains in retention resource allocation (theoretical outcome...
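The prioritization logic this claim describes (calibrated attrition probabilities plus counterfactual impact estimates, ranked by expected gain per intervention dollar) can be sketched as a simple ranking rule. This is an illustrative sketch, not the paper's method; the employee records, uplift estimates, and costs below are all hypothetical.

```python
# Illustrative sketch of micro-targeted retention prioritization:
# rank employees by expected cost avoided per intervention dollar,
# using calibrated attrition probabilities and estimated treatment effects.
# All numbers are hypothetical placeholders, not data from the paper.

employees = [
    # (id, P(attrition), estimated reduction in P(attrition) if treated, replacement cost)
    ("E1", 0.70, 0.15, 50_000),
    ("E2", 0.20, 0.10, 80_000),
    ("E3", 0.55, 0.25, 60_000),
]

INTERVENTION_COST = 2_000  # assumed uniform cost per retention intervention

def expected_gain(p_attrit, uplift, repl_cost):
    """Expected cost avoided: achievable reduction in attrition risk times replacement cost."""
    return min(uplift, p_attrit) * repl_cost

# Marginal-cost prioritization: highest expected gain per dollar first.
ranked = sorted(
    employees,
    key=lambda e: expected_gain(e[1], e[2], e[3]) / INTERVENTION_COST,
    reverse=True,
)
for emp_id, p, uplift, cost in ranked:
    print(emp_id, round(expected_gain(p, uplift, cost) / INTERVENTION_COST, 2))
```

Under these made-up inputs the ranking puts E3 first: a mid-risk employee with a large estimated uplift beats a high-risk employee whom the intervention barely moves, which is exactly the contrast with uniform policies the claim draws.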
Recommended research priorities include hierarchical/temporal-decomposition methods, continual learning, robust adaptation to non-stationarity, and causal/structured reasoning to handle multi-factor interactions.
Paper discussion linking observed failure modes to methodological gaps and proposing research directions to address limitations; these are recommendations rather than experimentally validated claims.
speculative null result RetailBench: Evaluating Long-Horizon Autonomous Decision-Mak... suggested research directions to improve robustness (proposed, not empirically v...
Regulators and payers will require clinical validation, safety guarantees, and clear liability frameworks for human–AI shared decision-making before widescale deployment.
Policy implication stated in the paper's discussion section based on general regulatory considerations; not an empirical result from the study.
speculative null result Hierarchical Reinforcement Learning Based Human-AI Online Di... regulatory requirements / safety validation (anticipated, not measured)
Empirical economics research should use firm-level and pipeline microdata and quasi-experimental designs to estimate causal effects of AI adoption on outcomes like time-to-hit, preclinical attrition, IND filings, and NME approvals per R&D dollar.
Research recommendation offered in the paper based on identified gaps; not an evidence claim but an explicit methodological suggestion.
speculative null result Learning from the successes and failures of early artificial... recommended empirical outcomes to be measured: time-to-hit, preclinical attritio...
Policy does not predict individuals' intent to increase usage but functions as a marker of maturity—formalizing successful diffusion by Enthusiasts while acting as a gateway the Cautious have yet to reach.
Analysis of a policy variable within the survey dataset (N=147) showing no predictive relationship with individual intent to increase AI use, but an association between presence of policy and indicators of organizational adoption/maturity and differential reach into archetype groups.
medium-low null result Developers in the Age of AI: Adoption, Policy, and Diffusion... Individual intent to increase usage; organizational policy presence; organizatio...
Prospective studies are needed to evaluate AI's real-world clinical impact in acute GIB.
Authors' recommendation in the discussion and conclusion based on the predominance of retrospective evidence and few prospective/RCTs.
speculative null result How Do AI-Assisted Diagnostic Tools Impact Clinical Decision... need for prospective evaluation of clinical impact (recommendation)
The study recommends iterative prompt refinement, integration with adaptive learning models, and further exploration of autonomous self-prompting mechanisms.
Concluding recommendations derived from the study's results and interpretation; presented as future directions rather than empirically tested interventions within this study.
speculative null result Prompt Engineering for Autonomous AI Agents: Enhancing Decis... recommendations for methods and research directions (not an empirical outcome me...
Recommended future research includes scalable interoperability solutions, longitudinal lifecycle value validation, human‑centred adoption strategies, and sustainability assessment methods.
Authors' explicit recommendations at the end of the review based on identified gaps in the literature.
speculative null result Digital Twins Across the Asset Lifecycle: Technical, Organis... priority research areas to address current evidence gaps
Future research priorities include obtaining causal estimates (e.g., field experiments) of productivity gains from trust-mediated AI adoption and conducting cost–benefit analyses of trust-building interventions.
Study’s stated research agenda/recommendations; not an empirical claim but a recommended direction for follow-up research.
speculative null result Algorithmic Trust and Managerial Effectiveness: The Role of ... causal productivity estimates and cost–benefit outcomes (research recommendation...
Key research priorities include improving measurement of AI usage across countries, causal identification of long-run effects, and sectoral reskilling strategy evaluation.
Identified gaps and methodological limitations in the reviewed empirical literature (measurement heterogeneity, limited long-run panels, sectoral variation) motivating suggested future research agenda.
speculative null result S-TCO: A Sustainable Teacher Context Ontology for Educationa... quality and scope of future empirical evidence on AI economic effects
To measure and monitor these effects, researchers should track firm-level adoption of AI features, fulfillment automation intensity, platform-mediated market entry, and task-level labor shifts.
Author recommendations based on gaps identified in the case-based and multi-modal empirical work and the sensitivity of results to adoption measures; not an empirical finding but a methodological claim.
speculative null result Artificial Intelligence–Enabled E-Commerce Systems and Autom... measurement coverage metrics (availability/quality of adoption and task-shift da...
The threshold for taxing AI may be crossed once AI becomes sufficiently capable in substituting humans across cognitive tasks.
Model-based comparative-static/threshold analysis showing that higher AI substitutability for cognitive tasks increases the likelihood that cognitive workers will consider switching to manual jobs, thereby meeting the model's tax-initiation condition.
speculative positive Workers' Incentives and the Optimal Taxation of AI whether/when the model's tax-initiation threshold is crossed as a function of AI...
Economic and organizational benefits (e.g., cost-effective retention, preserved human capital for environmental innovation) are plausible outcomes of applying the approach, but require further causal and cost analyses.
Paper discusses implications and hypothesizes ROI from reduced turnover (less recruiting/onboarding/productivity loss) and preservation of green capabilities; no empirical cost or productivity data provided in the presented summary.
speculative positive Explainable AI for Employee Retention in Green Human Resourc... organizational outcomes: turnover costs avoided, retained human capital, product...
Firms investing in human–AI co‑creation infrastructure may gain a resilience premium; policymakers and standards bodies should consider governance frameworks for adaptive algorithmic systems balancing responsiveness with oversight.
Policy and investment implication inferred from empirical results on resilience and detection performance; direct evidence of market valuation or policy outcomes is not reported.
speculative positive The Algorithmic Canvas: On the Autopoietic Redefinition of S... investment returns/resilience premium and policy/governance needs (inferred)
Greater reliance on algorithmic co‑creation shifts labor demand toward roles skilled in model oversight, interpretive judgment, and human‑machine interaction rather than purely manual segmentation tasks.
Inference from the operationalization of human–AI co‑creation via the Canvas and observed changes in practitioner workflows during 6‑month ethnography (n = 23); workforce composition effects are not empirically measured at scale in the study.
speculative positive The Algorithmic Canvas: On the Autopoietic Redefinition of S... labor and skill composition (shift toward oversight and human–AI interaction ski...
A ~90% reduction in strategic planning cycle time indicates lower managerial coordination costs and faster reallocation of marketing and R&D budgets.
Inference from measured reduction in planning cycle length (~90%) observed in the study (see ethnography/system logs); direct measures of coordination costs and budget reallocation outcomes are not reported in the summary.
speculative positive The Algorithmic Canvas: On the Autopoietic Redefinition of S... managerial coordination costs and speed of resource reallocation (inferred)
Algorithmic Canvas–enabled autopoietic STP increases firms' ability to adapt endogenously to shocks, implying higher realized productivity in volatile markets and lower deadweight losses from mis‑targeting.
Inference drawn from empirical findings on resilience and detection performance (44% greater resilience, improved signal detection) and theoretical reasoning about dynamic capabilities; productivity and deadweight loss are not directly measured in the reported empirical results.
speculative positive The Algorithmic Canvas: On the Autopoietic Redefinition of S... firm productivity and welfare effects (inferred)
Economic evaluations of AI adoption should include psychological and human-capital externalities (effects on self-efficacy, skill depreciation, job satisfaction) to fully account for welfare and productivity dynamics.
Argument grounded in experimental and survey findings showing psychological impacts of AI-use mode; general recommendation for research and evaluation rather than an empirical finding.
speculative positive Relying on AI at work reduces self-efficacy, ownership, and ... recommended evaluation scope (inclusion of psychological/human-capital measures)
Realizing net societal gains from AI requires human-centered design, regulatory and control measures, and integration of sustainability indicators into technological development.
Normative conclusion drawn from the narrative review of interdisciplinary evidence and policy recommendations; not an empirically validated claim within this paper.
speculative positive The Evolution and Societal Impact of Artificial Intelligence... net societal welfare/benefits conditional on governance, design, and sustainabil...
If banks operationalize NLP for personalization and acquisition at scale, this could increase differentiation, raise switching costs, and potentially affect market concentration—warranting antitrust monitoring.
Theoretical implication extrapolated from identified capability gaps and economic reasoning about differentiation, switching costs, and scaling advantages; not empirically tested in the reviewed papers.
speculative positive Natural language processing in bank marketing: a systematic ... market structure indicators (differentiation, switching costs, market concentrat...
Limited applied research on NLP for acquisition and personalization implies unrealized value in banking: NLP could enable more efficient, targeted customer acquisition and cross‑sell, potentially lowering customer‑acquisition cost (CAC) and increasing lifetime value (LTV).
Inference drawn from observed topical gaps (low article counts on acquisition/personalization) and standard marketing economics linking targeting/personalization to CAC and LTV; no direct causal evidence provided in the reviewed literature.
speculative positive Natural language processing in bank marketing: a systematic ... customer‑acquisition cost (CAC), customer lifetime value (LTV), acquisition effi...
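The standard marketing-economics link the claim invokes can be made concrete with a minimal CAC/LTV calculation. Every figure below is a hypothetical assumption (none comes from the reviewed literature); the sketch only shows the mechanism by which better targeting moves both quantities.

```python
# Hypothetical illustration of the CAC/LTV mechanism cited above.
# Assumption: NLP-driven targeting raises conversion (lowering CAC) and
# retention (lifting LTV). All inputs are invented for illustration.

def cac(spend_per_prospect, conversion_rate):
    """Customer-acquisition cost: marketing spend divided by conversion rate."""
    return spend_per_prospect / conversion_rate

def ltv(annual_margin, retention, discount=0.10):
    """Simple perpetuity-style customer lifetime value with annual retention."""
    return annual_margin * retention / (1 + discount - retention)

# Baseline (untargeted) vs. NLP-personalized campaign, hypothetical inputs.
baseline = ltv(200, 0.80) / cac(10, 0.02)      # LTV/CAC ratio
personalized = ltv(200, 0.85) / cac(10, 0.04)  # higher conversion and retention

print(round(baseline, 2), round(personalized, 2))
```

Even modest assumed improvements in conversion and retention multiply the LTV/CAC ratio, which is the "unrealized value" the claim points to.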
Multilateral coordination is needed to set baseline principles (data flows, privacy, AI safety, competition rules) to reduce regulatory fragmentation.
Scenario-based reasoning and policy prescription grounded in theoretical analysis of fragmentation costs; normative recommendation rather than empirical proof.
speculative positive Path Analysis of Digital Economy and Reconstruction of Inter... regulatory coherence / reduction in cross-border regulatory barriers
Research and funding priorities should reweight toward symbolic/structured knowledge, verification, curricula design, and orchestration algorithms rather than exclusive emphasis on model scale.
Prescriptive recommendation based on the conceptual advantages claimed for DSS; not supported by empirical policy or funding analysis within the paper.
speculative positive An Alternative Trajectory for Generative AI research funding allocations, publication trends, and development of tooling for...
Smaller, verifiable DSS agents are easier to audit and align per domain, potentially reducing systemic risks associated with large opaque generalist models.
Argumentative claim about auditability and verifiability of compact, domain-specific systems versus large generalists; no empirical auditability studies are provided.
speculative positive An Alternative Trajectory for Generative AI auditability metrics (time/cost to audit, interpretability scores), alignment fa...
DSS reduces environmental externalities (e.g., emissions, water use) relative to continued monolithic scaling and may reduce regulatory pressure tied to those externalities.
Theoretical claim tying reduced inference energy and decentralized deployment to lower environmental impacts; the paper suggests measuring emissions and water use but supplies no empirical measurements.
speculative positive An Alternative Trajectory for Generative AI emissions (CO2e), water consumption for cooling, regulatory compliance incidents...
Specialization enables many niche DSS providers rather than a small number of dominant monolithic providers, thereby lowering entry barriers for vertical experts.
Market-structure argument based on modularization and domain-focused offerings; no empirical market analysis or simulation is provided.
speculative positive An Alternative Trajectory for Generative AI market concentration (e.g., Herfindahl index), number of active providers per do...
Shifting to DSS changes the cost structure of AI: it lowers recurring OPEX per user by reducing inference energy and enabling local/device processing instead of centralized, inference-heavy cloud services.
Economic reasoning and proposed modeling approaches (capex/opex comparisons) described conceptually; no empirical economic model outputs or market data are included.
speculative positive An Alternative Trajectory for Generative AI OPEX per user, total cost of ownership, cost-per-task under DSS versus monolithi...
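The capex/opex comparison the paper proposes conceptually could be modeled along these lines. Every parameter below is an assumed placeholder (the paper reports no figures); the sketch only encodes the claimed structure: per-task energy times price, with a markup factor for centralized cloud serving.

```python
# Sketch of the OPEX-per-user comparison described above:
# centralized, inference-heavy cloud serving vs. smaller DSS models running
# partly on-device. All parameters are assumptions for illustration only.

def monthly_opex_per_user(tasks_per_month, energy_kwh_per_task,
                          electricity_price, cloud_overhead=1.0):
    """Energy-driven serving cost per user; cloud_overhead > 1 models provider markup."""
    return tasks_per_month * energy_kwh_per_task * electricity_price * cloud_overhead

TASKS = 300   # assumed tasks per user per month
PRICE = 0.15  # assumed $/kWh

monolithic = monthly_opex_per_user(TASKS, 0.004, PRICE, cloud_overhead=3.0)
dss_edge = monthly_opex_per_user(TASKS, 0.0005, PRICE, cloud_overhead=1.0)

print(round(monolithic, 3), round(dss_edge, 3))
```

The model's conclusion is baked into its assumptions (lower per-task energy, no cloud markup), which is precisely why the paper calls for empirical capex/opex measurements rather than treating the claim as established.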
DSS societies can achieve much lower inference energy per task and enable easier on-device/edge deployment compared to monolithic LLM deployments.
Argument that smaller, domain-focused models require fewer compute resources and thus lower energy and are better suited to edge hardware; empirical measurements to support this claim are proposed but not supplied.
speculative positive An Alternative Trajectory for Generative AI energy per inference, feasibility of on-device deployment (latency, memory footp...
Architecturally, replacing single giant generalists with 'societies' of small, specialized DSS models routed by orchestration agents yields operational benefits (routing to experts, modular upgrades, specialization).
Conceptual architectural proposal describing specialized back-ends and orchestration/routing agents; the paper outlines recommended experiments but reports no empirical orchestration benchmarks.
speculative positive An Alternative Trajectory for Generative AI end-to-end task success rate, routing efficiency, orchestration overhead, modula...
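The 'society of models' architecture described in this claim amounts to a router in front of specialized back-ends. A minimal sketch follows; the domain keywords, model names, and keyword-matching dispatch are all invented for illustration (a real orchestration agent would presumably use a learned classifier).

```python
# Minimal sketch of an orchestration/routing layer over specialized DSS models.
# Back-ends are stubbed as functions; names and routing rules are hypothetical.
from typing import Callable, Dict

def legal_model(q: str) -> str:
    return f"[legal-dss] {q}"

def medical_model(q: str) -> str:
    return f"[medical-dss] {q}"

def general_model(q: str) -> str:
    return f"[generalist-fallback] {q}"

# Routing table: each specialist can be upgraded or replaced independently,
# which is the modular-upgrade benefit the claim describes.
ROUTES: Dict[str, Callable[[str], str]] = {
    "contract": legal_model,
    "statute": legal_model,
    "diagnosis": medical_model,
    "dosage": medical_model,
}

def route(query: str) -> str:
    """Dispatch a query to the first matching specialist, else the generalist fallback."""
    q = query.lower()
    for keyword, model in ROUTES.items():
        if keyword in q:
            return model(query)
    return general_model(query)

print(route("Review this contract clause"))
print(route("What is the recommended dosage?"))
print(route("Summarize today's news"))
```

The fallback path matters for the claim's hedged framing: orchestration overhead and routing errors are exactly the quantities the paper proposes to benchmark rather than assume away.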
A more sustainable and effective trajectory is to build domain-specific superintelligences (DSS) grounded in explicit symbolic abstractions (knowledge graphs, ontologies, formal logic) and trained via synthetic curricula so compact models can learn robust, domain-level reasoning.
Prescriptive proposal based on theoretical arguments about the benefits of symbolic abstractions, compact model training, and synthetic curricula; no experimental validation or empirical comparison is provided in the paper.
speculative positive An Alternative Trajectory for Generative AI domain-level reasoning robustness of compact DSS models (task accuracy, generali...
Improved alignment can reduce harms from misinterpretation (incorrect decisions, misinformation), lowering downstream liability and reputational risk for vendors and customers.
Paper's safety and externalities discussion argues this as a likely consequence; the claim is theoretical and not supported by empirical incident data in the paper.
speculative positive A Context Alignment Pre-processor for Enhancing the Coherenc... error/externality rates, number of downstream incidents, liability/claims metric...
Providers may charge a premium for alignment-enabled API tiers or incorporate C.A.P. into enterprise plans because of additional compute per interaction, affecting pricing and unit economics.
Paper's pricing and costs discussion predicts potential monetization strategies and pricing experiments (A/B pricing, willingness-to-pay studies) but does not report market data.
speculative positive A Context Alignment Pre-processor for Enhancing the Coherenc... price differentials for alignment features, willingness-to-pay, revenue per user