The Commonplace

Evidence (3224 claims)

Adoption: 7395 claims
Productivity: 6507 claims
Governance: 5877 claims
Human-AI Collaboration: 5157 claims
Innovation: 3492 claims
Org Design: 3470 claims
Labor Markets: 3224 claims
Skills & Training: 2608 claims
Inequality: 1835 claims

Evidence Matrix

Claim counts by outcome category and direction of finding.

Outcome Positive Negative Mixed Null Total
Other 609 159 77 736 1615
Governance & Regulation 664 329 160 99 1273
Organizational Efficiency 624 143 105 70 949
Technology Adoption Rate 502 176 98 78 861
Research Productivity 348 109 48 322 836
Output Quality 391 120 44 40 595
Firm Productivity 385 46 85 17 539
Decision Quality 275 143 62 34 521
AI Safety & Ethics 183 241 59 30 517
Market Structure 152 154 109 20 440
Task Allocation 158 50 56 26 295
Innovation Output 178 23 38 17 257
Skill Acquisition 137 52 50 13 252
Fiscal & Macroeconomic 120 64 38 23 252
Employment Level 93 46 96 12 249
Firm Revenue 130 43 26 3 202
Consumer Welfare 99 51 40 11 201
Inequality Measures 36 105 40 6 187
Task Completion Time 134 18 6 5 163
Worker Satisfaction 79 54 16 11 160
Error Rate 64 78 8 1 151
Regulatory Compliance 69 64 14 3 150
Training Effectiveness 81 15 13 18 129
Wages & Compensation 70 25 22 6 123
Team Performance 74 16 21 9 121
Automation Exposure 41 48 19 9 120
Job Displacement 11 71 16 1 99
Developer Productivity 71 14 9 3 98
Hiring & Recruitment 49 7 8 3 67
Social Protection 26 14 8 2 50
Creative Output 26 14 6 2 49
Skill Obsolescence 5 37 5 1 48
Labor Share of Income 12 13 12 37
Worker Turnover 11 12 3 26
Industry 1 1
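As a quick illustration of how to read the matrix, a short script can compute the share of positive findings per outcome. The values below are copied from three of the rows above; the script itself is an illustrative sketch, not part of the dashboard.

```python
# Illustrative sketch: share of positive findings per outcome,
# using values copied from the Evidence Matrix rows above.
matrix = {
    # outcome: (positive, negative, mixed, null, total)
    "Firm Productivity": (385, 46, 85, 17, 539),
    "Job Displacement": (11, 71, 16, 1, 99),
    "Inequality Measures": (36, 105, 40, 6, 187),
}

def positive_share(row):
    positive, *_middle, total = row
    return positive / total

for outcome, row in matrix.items():
    print(f"{outcome}: {positive_share(row):.0%} positive")
```

For example, Job Displacement is among the most negatively skewed outcomes here: only about 11% of its claims report a positive finding, versus roughly 71% for Firm Productivity.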
Active filter: Labor Markets
Administrative tasks face 75-80% disruption risk from AI.
Paper provides a quantitative estimate (75-80%) as part of its functional disruption assessment; no empirical methodology, dataset, or sample size is described to support the numeric range.
speculative negative Are Universities Becoming Obsolete in the Age of Artificial ... percent disruption/substitutability of administrative tasks
Over 400,000 [individuals] are projected to die before obtaining permanent residency.
Mortality projection applied to the estimated backlog and projected wait times (authors' projection); exact demographic assumptions (age distribution, mortality rates) and method are not provided in the excerpt.
speculative negative The United States' Employment-Based Immigration System: An... Number of backlog applicants projected to die before receiving green cards
The remaining difference (roughly 70%) is not explained by the factors observed in the data, indicating additional influences not captured in the survey.
Residual (unexplained) component from decomposition analyses on ESJS data.
medium-high negative Squandered skills? Bridging the digital gender skills gap fo... Unexplained share (%) of the gender gap in advanced digital task use
The United States shows a more market-driven (firm-dominated) patenting profile and comparatively weaker integration between AI and robotics patent trajectories.
Country-level and actor-type decomposition for U.S. patent filings (1980–2019), showing higher firm share of patents and weaker long-run association/cointegration between core AI and AI-enhanced robotics series compared with China (as reported in the paper).
medium-high negative The "Gold Rush" in AI and Robotics Patenting Activity. Do in... share of patents by firms in U.S.; strength of long-run integration between U.S....
Improving photorealism with objective color-fidelity metrics and refinement reduces the need for manual color correction and retouching in downstream workflows.
Paper and summary argue this as an implication: higher-fidelity outputs from CFR/CFM reduce manual editing demand. This is an economic/market implication rather than a directly evidenced experimental result in the paper (no labor-market causal study reported).
speculative negative Too Vivid to Be Real? Benchmarking and Calibrating Generativ... demand for manual color correction / retouching services
If FDI brings capital-intensive, AI-enabled production to sub-Saharan Africa (SSA) without complementary upskilling, it may exacerbate wage inequality and deepen labor market dualism.
Theoretical inference and analogy from documented patterns of skill‑biased technological change and FDI-driven inequality in the reviewed literature; empirical evidence specific to AI in SSA is lacking in the review.
speculative negative Foreign Direct Investment, Labor Markets, and Income Distrib... wage inequality, labor market dualism, employment composition
Full replacement of physicians would require breakthroughs in robust generalization and embodied capabilities, along with legal and regulatory change; these conditions are currently lacking.
Conceptual inference based on documented limitations (OOD generalization, lack of embodied/sensorimotor capability, unsettled legal/regulatory environment) summarized in the review.
speculative negative Will AI Replace Physicians in the Near Future? AI Adoption B... feasibility/timeline for physician replacement
Centralized provision of high-quality coding models by a few vendors could produce vendor lock-in and increase platform power in software development inputs.
Market-structure analysis and industry observations synthesized in the paper; the claim is forward-looking and not established by longitudinal market data within the review.
speculative negative ChatGPT as a Tool for Programming Assistance and Code Develo... market concentration measures (e.g., HHI), indicators of vendor lock-in (switchi...
This reversal of the burden of proof creates moral-hazard-like behavior: incentives for speed reduce verification effort.
Theoretical argument built on the micro-coercion mechanism and economic reasoning; no empirical validation provided.
speculative negative Overton Framework v1.0: Cognitive Interlocks for Integrity i... verification effort per artifact (e.g., reviewer time), proportion of unchecked ...
Under time pressure, developers adopt an implicit default of accepting plausible machine outputs unless they can disprove them (the 'micro-coercion of speed'), effectively reversing the burden of proof.
Behavioral mechanism posited from descriptive reasoning and thought experiments; no behavioral experiments, surveys, or observational data reported.
speculative negative Overton Framework v1.0: Cognitive Interlocks for Integrity i... developer acceptance rate of machine-generated outputs under time pressure; rate...
Concentration risks exist because high fixed costs for safe integration and model adaptation may favor larger incumbents or platform providers.
Conceptual economic reasoning and practitioner commentary synthesized in the review; no empirical market-structure analysis or sample-based evidence included here.
speculative negative The Effectiveness of ChatGPT in Customer Service and Communi... market concentration indicators and barriers to entry related to AI integration ...
Imported AI systems may impose foreign values and norms, risking erosion of indigenous knowledge and social cohesion.
Normative and conceptual argument supported by cited case studies and policy analyses; no original anthropological or sociological fieldwork in the paper.
low-medium negative Towards Responsible Artificial Intelligence Adoption: Emergi... indicators of indigenous knowledge retention, measures of cultural alignment of ...
Deployed AI systems can produce algorithmic bias that harms marginalized groups when models are trained on skewed or non‑representative data.
Synthesis of prior empirical findings and case studies on algorithmic bias and fairness in ML systems; paper does not present new empirical tests.
medium-high negative Towards Responsible Artificial Intelligence Adoption: Emergi... fairness metrics, disparate error rates, incidence of discriminatory outcomes fo...
Human reviewers may over-trust machine-generated language and explanations (automation bias), reducing the likelihood of detecting fraudulent outputs.
Reference to automation-bias literature and conceptual examples; threat modeling and illustrative vignettes in the article.
medium-high negative Prompt Engineering or Prompt Fraud? Governance Challenges fo... detection rate of fraudulent outputs by human reviewers when outputs are machine...
Existing internal audit and compliance frameworks focus on access, transaction, and system controls, not on content-generation integrity.
Literature and standards review combined with threat-control mapping demonstrating gaps in content/provenance coverage.
medium-high negative Prompt Engineering or Prompt Fraud? Governance Challenges fo... coverage of content-generation integrity within existing audit/compliance framew...
Using calibrated, employee-level predictions enables marginal-cost analyses and prioritization (micro-targeting) to improve retention efficiency relative to uniform, across-the-board policies.
Methodological argument: calibrated individual probabilities plus counterfactual impact estimates enable ranking employees by expected gain from interventions and thus marginal-cost prioritization (no empirical cost–benefit calculations provided).
speculative null result Explainable AI for Employee Retention in Green Human Resourc... potential efficiency gains in retention resource allocation (theoretical outcome...
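The prioritization logic described in this claim can be sketched as follows: rank employees by expected retention gain per dollar spent rather than applying a uniform policy. The numbers, field layout, and scoring rule below are hypothetical illustrations, not data or code from the paper.

```python
# Hedged sketch of micro-targeting for retention: rank employees by expected
# gain per dollar instead of a uniform policy. All values are hypothetical.
employees = [
    # (id, calibrated attrition probability,
    #  estimated uplift from intervention, intervention cost)
    ("E1", 0.60, 0.25, 1000.0),
    ("E2", 0.10, 0.05, 1000.0),
    ("E3", 0.45, 0.30, 1500.0),
]

def gain_per_dollar(prob, uplift, cost):
    # Expected reduction in attrition probability per unit of spend.
    return (prob * uplift) / cost

ranked = sorted(
    employees,
    key=lambda e: gain_per_dollar(e[1], e[2], e[3]),
    reverse=True,
)
print([e[0] for e in ranked])  # highest expected gain per dollar first
```

Under these toy numbers, E1 and E3 would be prioritized over E2, which is exactly the marginal-cost reasoning the claim attributes to calibrated individual predictions.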
There are research opportunities to measure returns to 'teaching' (causal impact of configuring agents on human skill accumulation and earnings) and to model agent-platform ecosystems with network effects, spillovers, and endogenous quality hierarchies.
Author-stated research agenda and proposed empirical questions derived from the observed phenomena; not empirical results but recommended directions.
speculative null result When Openclaw Agents Learn from Each Other: Insights from Em... need for future causal estimates of returns to teaching and formal models of eco...
Regulators and payers will require clinical validation, safety guarantees, and clear liability frameworks for human–AI shared decision-making before widescale deployment.
Policy implication stated in the paper's discussion section based on general regulatory considerations; not an empirical result from the study.
speculative null result Hierarchical Reinforcement Learning Based Human-AI Online Di... regulatory requirements / safety validation (anticipated, not measured)
Future research should explore sector-specific AI adoption challenges and long-term workforce adaptation strategies.
Author recommendation presented in the paper's discussion/future work section of the summary.
speculative null result Artificial intelligence and organisational transformation: t... N/A (recommended future research topics)
Key research priorities include improving measurement of AI usage across countries, causal identification of long-run effects, and sectoral reskilling strategy evaluation.
Identified gaps and methodological limitations in the reviewed empirical literature (measurement heterogeneity, limited long-run panels, sectoral variation) motivating suggested future research agenda.
speculative null result S-TCO: A Sustainable Teacher Context Ontology for Educationa... quality and scope of future empirical evidence on AI economic effects
To measure and monitor these effects, researchers should track firm-level adoption of AI features, fulfillment automation intensity, platform-mediated market entry, and task-level labor shifts.
Author recommendations based on gaps identified in the case-based and multi-modal empirical work and the sensitivity of results to adoption measures; not an empirical finding but a methodological claim.
speculative null result Artificial Intelligence–Enabled E-Commerce Systems and Autom... measurement coverage metrics (availability/quality of adoption and task-shift da...
Policy priorities should differ with a country's skill imbalance: countries with strong demand for new skills should prioritize education and reskilling, while countries with a strong skill supply should prioritize firm-side absorption (innovation, financing, technology adoption).
Interpretation of cross-country Skill Imbalance Index and its implications; prescriptive recommendation based on the observed demand–supply patterns rather than causal testing of policies.
speculative null result Bridging Skill Gaps for the Future Policy emphasis (education/reskilling versus firm absorption) inferred from Skil...
The threshold for taxing AI may be crossed once AI becomes sufficiently capable of substituting for humans across cognitive tasks.
Model-based comparative-static/threshold analysis showing that higher AI substitutability for cognitive tasks increases the likelihood that cognitive workers will consider switching to manual jobs, thereby meeting the model's tax-initiation condition.
speculative positive Workers' Incentives and the Optimal Taxation of AI whether/when the model's tax-initiation threshold is crossed as a function of AI...
Economic and organizational benefits (e.g., cost-effective retention, preserved human capital for environmental innovation) are plausible outcomes of applying the approach, but require further causal and cost analyses.
Paper discusses implications and hypothesizes ROI from reduced turnover (less recruiting/onboarding/productivity loss) and preservation of green capabilities; no empirical cost or productivity data provided in the presented summary.
speculative positive Explainable AI for Employee Retention in Green Human Resourc... organizational outcomes: turnover costs avoided, retained human capital, product...
Realizing net societal gains from AI requires human-centered design, regulatory and control measures, and integration of sustainability indicators into technological development.
Normative conclusion drawn from the narrative review of interdisciplinary evidence and policy recommendations; not an empirically validated claim within this paper.
speculative positive The Evolution and Societal Impact of Artificial Intelligence... net societal welfare/benefits conditional on governance, design, and sustainabil...
If banks operationalize NLP for personalization and acquisition at scale, this could increase differentiation, raise switching costs, and potentially affect market concentration—warranting antitrust monitoring.
Theoretical implication extrapolated from identified capability gaps and economic reasoning about differentiation, switching costs, and scaling advantages; not empirically tested in the reviewed papers.
speculative positive Natural language processing in bank marketing: a systematic ... market structure indicators (differentiation, switching costs, market concentrat...
Limited applied research on NLP for acquisition and personalization implies unrealized value in banking: NLP could enable more efficient, targeted customer acquisition and cross‑sell, potentially lowering customer‑acquisition cost (CAC) and increasing lifetime value (LTV).
Inference drawn from observed topical gaps (low article counts on acquisition/personalization) and standard marketing economics linking targeting/personalization to CAC and LTV; no direct causal evidence provided in the reviewed literature.
speculative positive Natural language processing in bank marketing: a systematic ... customer‑acquisition cost (CAC), customer lifetime value (LTV), acquisition effi...
A concrete empirical test recommended by the paper is to run controlled comparisons of distribution-shift generalization between negative-only, preference-only, and hybrid-trained models across safety and usefulness metrics.
Methodological recommendation given in the paper; it is not an empirical result but an explicitly proposed verifiable experiment for future work.
speculative positive Via Negativa for AI Alignment: Why Negative Constraints Are ... relative generalization performance (safety and usefulness) under distribution s...
Regulators could feasibly focus on certifying constraint datasets and testing model adherence to explicit prohibitions, since constraint compliance is empirically testable and verifiable.
Policy recommendation derived from the paper's epistemic argument about constraints being verifiable; presented as a plausible regulatory strategy rather than one already validated by policy experiments.
speculative positive Via Negativa for AI Alignment: Why Negative Constraints Are ... feasibility and effectiveness of regulatory certification schemes for constraint...
There is a commercial opportunity for startups and vendors to specialize in 'constraint datasets' and constitutional-rule libraries as tradable assets.
Market/economic inference made from the technical claim that constraints are verifiable and reusable; no empirical industry survey data provided—this is a forward-looking implication.
speculative positive Via Negativa for AI Alignment: Why Negative Constraints Are ... emergence and market size of firms/products supplying constraint datasets and ru...
If negative/safety-focused signals are more sample- and compute-efficient for certain alignment goals, firms may reallocate labeling budgets away from costly preference elicitation toward collecting high-quality negative examples and rule sets.
Economic implication extrapolated from the paper's sample-efficiency claim; the paper reasons from technical sample-efficiency arguments and cited empirical parity but does not present market-level empirical data.
speculative positive Via Negativa for AI Alignment: Why Negative Constraints Are ... organizational allocation of labeling budget and labor-hours (shift in proportio...
Improved alignment can reduce harms from misinterpretation (incorrect decisions, misinformation), lowering downstream liability and reputational risk for vendors and customers.
Paper's safety and externalities discussion argues this as a likely consequence; the claim is theoretical and not supported by empirical incident data in the paper.
speculative positive A Context Alignment Pre-processor for Enhancing the Coherenc... error/externality rates, number of downstream incidents, liability/claims metric...
Providers may charge a premium for alignment-enabled API tiers or incorporate C.A.P. into enterprise plans because of additional compute per interaction, affecting pricing and unit economics.
Paper's pricing and costs discussion predicts potential monetization strategies and pricing experiments (A/B pricing, willingness-to-pay studies) but does not report market data.
speculative positive A Context Alignment Pre-processor for Enhancing the Coherenc... price differentials for alignment features, willingness-to-pay, revenue per user
C.A.P. has potential economic effects: it can reduce time lost to misinterpretation, thereby increasing effective throughput and productivity, though net gains depend on trade-offs with pre-processing overhead.
Economic implications section provides conceptual cost–benefit arguments and recommends pilot measurements (time saved, reduced human review cost) but provides no empirical economic measurement.
speculative positive A Context Alignment Pre-processor for Enhancing the Coherenc... time saved per session, throughput, reduction in correction cycles, net producti...
C.A.P. shifts interactions from one-way command-execution to two-way, partnership-style collaboration, increasing perceived partnerliness.
Theoretical argument drawing on cognitive science and Common Ground theory and proposed human-evaluation measures (satisfaction, perceived collaboration); no empirical human-subject results reported.
speculative positive A Context Alignment Pre-processor for Enhancing the Coherenc... perceived collaboration / user satisfaction / partnerliness ratings
C.A.P. improves long-term and dynamic dialogue alignment and reduces off-topic or mechanically incorrect responses.
Main argument of the paper based on the combined functions (expansion, weighted retrieval, alignment verification, clarification); the paper provides conceptual/theoretical justification but does not report large-scale empirical results.
speculative positive A Context Alignment Pre-processor for Enhancing the Coherenc... dialogue alignment metrics, off-topic response rate, correctness of responses
The benchmark provides a testbed useful for studying strategic behavior, coordination failures, and market-like interactions among agents, which can inform economic research and policy.
Paper claims the benchmark's multi-agent, strategic tasks can be used as experimental environments for economic and policy research; this is a normative claim supported by the benchmark's design rather than by empirical studies in the paper.
speculative positive The PokeAgent Challenge: Competitive and Long-Context Learni... utility of benchmark as a research/testbed for studying strategic/multi-agent ph...
Open-source orchestration lowers entry barriers, broadening participation and potentially compressing rents that would otherwise accrue to well-resourced incumbents.
Paper's discussion section argues that releasing orchestration and evaluation tools publicly reduces the technical overhead for entrants; this is a theoretical/observational claim rather than empirically measured in the paper.
speculative positive The PokeAgent Challenge: Competitive and Long-Context Learni... predicted change in barrier-to-entry and market rents (qualitative)
The clear performance gaps indicate high returns to specialized efforts (RL, domain-specific engineering) relative to generalist LLM-only approaches, shaping where teams invest labor and compute.
Paper links benchmarking results (performance gaps between baselines and humans) to economic implications, arguing specialization yields higher returns; this is an interpretive claim based on reported performance differentials.
speculative positive The PokeAgent Challenge: Competitive and Long-Context Learni... economic return on investment inference based on performance differences between...
Benchmarks like PokeAgent will reallocate researcher and industry attention toward multi-agent, partial-observability, and long-horizon planning problems—likely increasing funding and compute investment in RL and hybrid LLM+RL methods.
Paper offers an economic/implication analysis arguing that introducing such a benchmark changes incentives and investment patterns; this is a reasoned projection rather than an empirical observation.
speculative positive The PokeAgent Challenge: Competitive and Long-Context Learni... predicted shifts in researcher/industry attention and investment (qualitative fo...
Public investment in open environments, robotics testbeds, and safety research can reduce concentration risks and externalities and democratize access to embodied AI research.
Policy recommendation based on anticipated strategic importance of shared infrastructure; not empirically validated here.
speculative positive Why AI systems don't learn and what to do about it: Lessons ... accessibility of research infrastructure; distribution of research capabilities ...
Value in the AI ecosystem may shift from passive text/image corpora toward rich interaction datasets and simulated/real environments; ownership and control of simulation platforms and testbeds could become strategically important assets.
Economic and strategic inference from the proposed technical emphasis on embodied/interaction learning; no supporting market data in the paper.
speculative positive Why AI systems don't learn and what to do about it: Lessons ... asset valuations for simulation/testbed providers; transaction volumes for inter...
Increased sample efficiency and transfer will reduce compute and data costs, lowering barriers to entry for firms and broadening feasible AI applications.
Economic argument connecting technical metrics to cost and market effects; not empirically demonstrated in the paper.
speculative positive Why AI systems don't learn and what to do about it: Lessons ... compute/data cost per task; market entry rates for firms
More autonomous learners that can self-experiment and learn from observation will lower deployment costs for adaptable agents and accelerate automation across more occupations, especially embodied and social tasks.
Economic reasoning and projection based on expected technical improvements; speculative without empirical economic analysis in the paper.
speculative positive Why AI systems don't learn and what to do about it: Lessons ... cost of deploying adaptable agents; rate of automation adoption across occupatio...
Cross-cutting elements (hierarchical organization, curriculum/bootstrapping, intrinsic motivation, uncertainty estimation, memory consolidation, neuromodulatory analogs) are important for improving learning in the proposed architecture.
Conceptual recommendation based on known mechanisms from neuroscience and machine learning literature; not validated in the paper.
speculative positive Why AI systems don't learn and what to do about it: Lessons ... improvements in sample efficiency, robustness, transfer when these elements are ...
System M (meta-control) should generate internal signals that decide when to prioritize A vs B, allocate attention, consolidate memory, and trade off uncertainty, novelty, expected information value, and effort costs.
Design proposal motivated by biological meta-control and decision theories; no empirical tests presented.
speculative positive Why AI systems don't learn and what to do about it: Lessons ... accuracy/effectiveness of switching decisions; overall learning efficiency when ...
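A toy sketch of the trade-off System M is asked to make: score the case for active intervention (System B) against passive observation (System A). The scoring rule and weights below are my own illustrative assumptions, not a specification from the paper.

```python
# Toy sketch of the System M meta-control idea: choose between
# observation-driven (System A) and action-driven (System B) learning by
# weighing uncertainty, novelty, and expected information value against
# effort cost. Inputs are assumed normalized to [0, 1]; the additive
# scoring rule is a hypothetical simplification.
def meta_control(uncertainty, novelty, info_value, effort_cost):
    # High uncertainty/novelty/information value favors intervening (B);
    # high effort cost favors passive observation (A).
    action_score = uncertainty + novelty + info_value - effort_cost
    return "System B" if action_score > 0 else "System A"

# A novel, uncertain, informative situation warrants intervention:
print(meta_control(uncertainty=0.4, novelty=0.3, info_value=0.5, effort_cost=0.6))
# A familiar, costly-to-probe situation warrants observation:
print(meta_control(uncertainty=0.1, novelty=0.0, info_value=0.2, effort_cost=0.9))
```

A real meta-controller would learn these signals and weights rather than hard-code them; the sketch only makes the direction of each trade-off concrete.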
System B (action-driven learning) should learn through intervention, consequences, and trial-and-error, using active exploration, reinforcement learning, and hierarchical/skill learning.
Architectural proposal aligning with RL and hierarchical learning literature; theoretical description without experimental evidence.
speculative positive Why AI systems don't learn and what to do about it: Lessons ... efficacy of skills learned through action (task success rates; learning speed fr...
System A (observation-driven learning) should build models of others, social contingencies, and passive affordances through imitation, self-supervised representation learning, and inverse RL.
Architectural specification and mapping to existing algorithms (imitation, SSL, inverse RL); no empirical validation provided.
speculative positive Why AI systems don't learn and what to do about it: Lessons ... quality of models learned from observation; accuracy of inferred social continge...
Integrating observation-driven and action-driven learning with meta-control and evolutionary/developmental priors should improve sample efficiency, robustness, transfer, and lifelong adaptation.
Conceptual argument and proposed integration of methods; suggested but untested experimentally in the paper.
speculative positive Why AI systems don't learn and what to do about it: Lessons ... sample efficiency; robustness to distribution shift; cross-domain transfer; life...
A biologically inspired three-part architecture (System A: observation-driven learning; System B: action-driven learning; System M: internally generated meta-control) can address these limitations.
Theoretical proposal and analogy to biological systems; no empirical validation reported in the paper.
speculative positive Why AI systems don't learn and what to do about it: Lessons ... sample efficiency; robustness; transfer; lifelong adaptation