The Commonplace

Evidence (2954 claims)

Adoption: 5126 claims
Productivity: 4409 claims
Governance: 4049 claims
Human-AI Collaboration: 2954 claims
Labor Markets: 2432 claims
Org Design: 2273 claims
Innovation: 2215 claims
Skills & Training: 1902 claims
Inequality: 1286 claims

Evidence Matrix

Claim counts by outcome category and direction of finding.

| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---:|---:|---:|---:|---:|
| Other | 369 | 105 | 58 | 432 | 972 |
| Governance & Regulation | 365 | 171 | 113 | 54 | 713 |
| Research Productivity | 229 | 95 | 33 | 294 | 655 |
| Organizational Efficiency | 354 | 82 | 58 | 34 | 531 |
| Technology Adoption Rate | 277 | 115 | 63 | 27 | 486 |
| Firm Productivity | 273 | 33 | 68 | 10 | 389 |
| AI Safety & Ethics | 112 | 177 | 43 | 24 | 358 |
| Output Quality | 228 | 61 | 23 | 25 | 337 |
| Market Structure | 105 | 118 | 81 | 14 | 323 |
| Decision Quality | 154 | 68 | 33 | 17 | 275 |
| Employment Level | 68 | 32 | 74 | 8 | 184 |
| Fiscal & Macroeconomic | 74 | 52 | 32 | 21 | 183 |
| Skill Acquisition | 85 | 31 | 38 | 9 | 163 |
| Firm Revenue | 96 | 30 | 22 | | 148 |
| Innovation Output | 100 | 11 | 20 | 11 | 143 |
| Consumer Welfare | 66 | 29 | 35 | 7 | 137 |
| Regulatory Compliance | 51 | 61 | 13 | 3 | 128 |
| Inequality Measures | 24 | 66 | 31 | 4 | 125 |
| Task Allocation | 64 | 6 | 28 | 6 | 104 |
| Error Rate | 42 | 47 | 6 | | 95 |
| Training Effectiveness | 55 | 12 | 10 | 16 | 93 |
| Worker Satisfaction | 42 | 32 | 11 | 6 | 91 |
| Task Completion Time | 71 | 5 | 3 | 1 | 80 |
| Wages & Compensation | 38 | 13 | 19 | 4 | 74 |
| Team Performance | 41 | 8 | 15 | 7 | 72 |
| Hiring & Recruitment | 39 | 4 | 6 | 3 | 52 |
| Automation Exposure | 17 | 15 | 9 | 5 | 46 |
| Job Displacement | 5 | 28 | 12 | | 45 |
| Social Protection | 18 | 8 | 6 | 1 | 33 |
| Developer Productivity | 25 | 1 | 2 | 1 | 29 |
| Worker Turnover | 10 | 12 | 3 | | 25 |
| Creative Output | 15 | 5 | 3 | 1 | 24 |
| Skill Obsolescence | 3 | 18 | 2 | | 23 |
| Labor Share of Income | 7 | 4 | 9 | | 20 |
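The matrix above is a cross-tabulation of claim records by outcome category and direction of finding. A minimal sketch of how such a tally could be computed, assuming a hypothetical record format of `(outcome_category, direction)` pairs (the site's actual schema is not shown here):

```python
from collections import defaultdict

# Hypothetical claim records: (outcome_category, direction).
claims = [
    ("Developer Productivity", "positive"),
    ("Developer Productivity", "positive"),
    ("Developer Productivity", "negative"),
    ("Job Displacement", "negative"),
]

def evidence_matrix(records):
    """Cross-tabulate claim counts by outcome category and direction."""
    matrix = defaultdict(lambda: defaultdict(int))
    for outcome, direction in records:
        matrix[outcome][direction] += 1
    # Append a per-outcome total across all directions.
    for counts in matrix.values():
        counts["total"] = sum(counts.values())
    # Freeze into plain dicts for display.
    return {outcome: dict(counts) for outcome, counts in matrix.items()}

print(evidence_matrix(claims))
```

Note that each row's `total` here is simply the sum of its direction counts; in the published matrix some totals exceed the four displayed columns, suggesting additional direction labels not broken out in the table.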
Filter: Human–AI Collaboration
Lower barriers to producing design concepts with GenAI could enable more freelancing and entry by non-traditional providers, altering market structure and intensifying competition at the lower end of the value chain.
Speculative implication extrapolated from interview findings and economic reasoning in the paper; not empirically tested within the study.
speculative mixed Human–AI Collaboration in Architectural Design Education: To... market structure / entry and competition dynamics
Demand for designers will likely shift toward individuals combining domain expertise with algorithmic/AI fluency (prompting strategies, tool orchestration), potentially increasing returns to these hybrid skills.
Inference and implication drawn from interview themes about algorithmic thinking and authors' policy/economics discussion; not empirically tested in study.
speculative mixed Human–AI Collaboration in Architectural Design Education: To... labor demand / skill premium for hybrid AI-domain skills
Standard productivity metrics (e.g., output per hour) may misprice value if temporal quality matters; firms will face trade‑offs between maximizing throughput and preserving richer subjective temporality that affects long‑run creativity, morale, and retention.
Conceptual economic reasoning and literature synthesis on attention and productivity; no empirical studies or longitudinal workplace data presented.
speculative mixed XChronos and Conscious Transhumanism: A Philosophical Framew... accuracy of productivity metrics and long‑run organizational outcomes (creativit...
Investors and firms may need to include metrics of experiential quality (subjective well‑being, sustained attention quality) alongside productivity metrics when valuing neurotech and human–AI platforms.
Normative/economic implication argued from the framework; no empirical valuation studies or survey of investor behavior included.
speculative mixed XChronos and Conscious Transhumanism: A Philosophical Framew... incorporation of experiential-quality metrics into firm/investor valuation proce...
Adoption of advanced simulation and AI could affect productivity, returns to capital versus labor, trade and outsourcing patterns, and distributional outcomes, with benefits potentially concentrated among large firms.
Theoretical implications and discussion in the paper's AI economics section; framed as suggested areas for future study rather than empirically established effects.
speculative mixed A Review of Manufacturing Operations Research Integration in... productivity, returns to capital/labor, trade/outsourcing patterns, firm‑ and wo...
Adoption heterogeneity may widen productivity dispersion across firms and contribute to market concentration, since organizations with better data, processes, and training budgets will capture more benefit.
Economic interpretation of literature and survey findings; speculative projection rather than empirical measurement within the study.
speculative mixed Artificial Intelligence as a Catalyst for Innovation in Soft... firm-level productivity dispersion and market concentration (projected, not meas...
Demand for mid-level, routine-focused developer roles could compress while demand rises for verification, security, and AI–human orchestration skills.
Theoretical task-replacement argument based on observed capabilities of LLMs and synthesized user study evidence; limited direct labor-market empirical evidence in the reviewed literature.
speculative mixed ChatGPT as a Tool for Programming Assistance and Code Develo... employment demand by role/skill category; hiring trends and vacancy composition
Routine coding tasks may be partially automated, shifting human labor toward verification, integration, architecture, and domain-specific tasks.
Task-composition studies, user studies showing LLMs handle boilerplate/routine work, and economic inference synthesized across studies.
speculative mixed ChatGPT as a Tool for Programming Assistance and Code Develo... time allocation across task types (routine coding vs. verification/architecture)...
If cognitive interlocks are widely adopted, many negative externalities can be internalized and AI-driven productivity gains can be realized more sustainably; absent such controls, equilibrium may drift toward higher error rates and systemic incidents.
Long-run equilibrium argument based on theoretical reasoning and conditional claims; no longitudinal or cross-firm empirical evidence presented.
speculative mixed Overton Framework v1.0: Cognitive Interlocks for Integrity i... long-run system outcomes (error rates, incident frequency, net productivity) con...
If AI raises the quality and pace of research, social returns to public research funding could increase, but distributional concerns and negative externalities must be managed to realize aggregate welfare gains.
Welfare implication discussed in the paper. Framed as conditional and theoretical; not empirically quantified in the abstract.
speculative mixed Artificial Intelligence for Improving Research Productivity ... social returns to public research (social benefit per funding dollar), distribut...
Policy interventions (data governance, transparency, reproducibility standards, ethical guidelines) will shape adoption and externalities (misinformation, misuse, reproducibility crises).
Policy recommendation/implication stated in the paper. This is a normative and predictive claim grounded in governance literature; the abstract does not present empirical evaluation of specific policies.
speculative mixed Artificial Intelligence for Improving Research Productivity ... policy adoption indicators, measurable externalities (incidence of misuse, repro...
Labor demand effects are ambiguous: junior/entry-level demand may be reduced for some tasks while demand for verification and higher-skill roles may rise.
Economic reasoning, early observational signals, and theoretical task-reallocation frameworks; empirical longitudinal evidence is limited or absent.
speculative mixed ChatGPT as a Tool for Programming Assistance and Code Develo... labor demand by skill level and occupation (employment levels, hiring rates)
The effectiveness of generative AI depends critically on human-AI workflows: prompt design, iterative refinement, and human vetting materially affect outcomes.
Qualitative analyses of interaction patterns and experiments manipulating prompting/iteration showing variation in outcomes; many studies report improved outputs after iterative prompting and human-in-the-loop refinement.
medium-high mixed ChatGPT as an Innovative Tool for Idea Generation and Proble... variation in output quality based on prompt design; changes in output after iter...
CRAEA-style systems could increase household productivity and substitute for some routine in-home human labor, altering demand for certain service roles and increasing demand for higher-skill roles (e.g., maintenance, AI oversight).
Paper's implications/economic analysis and qualitative extrapolation based on observed performance improvements in simulation; no empirical labor-market or deployment data provided to substantiate real-world labor substitution claims.
speculative mixed Context-Rich Adaptive Embodied Agents: Enhancing LLM-Powered... Labor demand shifts (theoretical implication, not empirically measured in the st...
Integrated ERP vendors embedding AI could strengthen vendor lock-in, while interoperable AI layers may foster ecosystems and specialized entrants; empirical work is needed to determine market outcomes.
Conceptual discussion and observed vendor behavior in practitioner literature; explicit statement in the paper that empirical analysis is required.
speculative mixed Integrating Artificial Intelligence and Enterprise Resource ... market-structure outcomes (e.g., vendor concentration, switching costs, entry of...
Market demand is likely to bifurcate: high-value clinical markets will require rigorous explainability and neuroscientific grounding (higher willingness-to-pay), while research and consumer segments may tolerate black-box models (lower margins).
Market segmentation argument built from differing end-user requirements and tolerance for opaque models; presented as a projected implication rather than an empirically tested market study.
speculative mixed Explainable Artificial Intelligence (XAI) for EEG Analysis: ... market segmentation / willingness-to-pay across segments
Persistent declines in self-efficacy after passive AI exposure suggest potential for skill atrophy and slower reversion when tasks must be performed without AI.
Inference from observed persistent reductions in self-efficacy post-return in the experiment; skill atrophy and reversion costs not directly measured—this is an implied consequence.
speculative negative Relying on AI at work reduces self-efficacy, ownership, and ... inferred human-capital outcomes (skill atrophy, reversion costs; not directly me...
Firms that adopt passive, copy-based AI workflows risk psychological costs that could offset short-run productivity gains from AI.
Inference drawn from experimental findings of reduced efficacy/ownership/meaningfulness under passive use and short-term enjoyment gains; not directly tested for firm-level productivity or turnover—extrapolation from individual-level psychological measures.
speculative negative Relying on AI at work reduces self-efficacy, ownership, and ... inferred organizational outcomes (productivity offsets, not directly measured)
Teams often produce evaluation outputs (tests, metrics, user feedback) but lack mechanisms, processes, or technical levers to convert those outputs into actionable engineering or product changes—a novel “results-actionability gap.”
Recurring theme from the 19 practitioner interviews and coding; authors explicitly articulate and label this gap based on participants' reports.
medium-high negative Results-Actionability Gap: Understanding How Practitioners E... ability to translate evaluation outputs into concrete product/engineering change...
The study confirms several previously documented evaluation challenges with LLMs: model unpredictability, metric mismatch, high human-evaluation costs, and difficulty reproducing failures.
Interview data from 19 practitioners; thematic analysis flagged these recurring problems as reported by participants and aligned with prior literature.
medium-high negative Results-Actionability Gap: Understanding How Practitioners E... presence and prevalence of known evaluation challenges
Emergent quality hierarchies among agents imply winner-take-most dynamics in informational value and potential market concentration in agent quality.
Observed formation of quality hierarchies in agent interactions and documented economic interpretation; this is a hypothesis/implication drawn from qualitative patterns rather than measured market outcomes.
speculative negative When Openclaw Agents Learn from Each Other: Insights from Em... distribution of informational value / concentration of agent quality
Rapid deployment of autonomous learners could accelerate displacement in affected sectors and widen inequality if gains concentrate among capital owners or platform providers.
Socioeconomic risk assessment and projection; conceptual and not empirically quantified in the paper.
speculative negative Why AI systems don't learn and what to do about it: Lessons ... displacement rates; inequality measures (e.g., Gini); concentration of gains
Faster, more generalist embodied AI could substitute for routine physical and social tasks, shifting human labor toward oversight, high-level planning, creativity, and flexible social cognition roles.
Labor-market impact hypothesis derived from automation literature; conceptual projection only.
speculative negative Why AI systems don't learn and what to do about it: Lessons ... occupational substitution rates; changes in labor demand composition
If models frequently leak or misuse preferences in third‑party contexts, users and organizations will discount the value of personalization or demand stronger controls, increasing costs for deploying memory features and reducing consumer surplus.
Economic reasoning and implication drawn from the observed misapplication behavior; no empirical user adoption or market data provided in the study to directly support this claim.
speculative negative BenchPreS: A Benchmark for Context-Aware Personalized Prefer... Projected changes in trust, adoption costs, and consumer surplus (not empiricall...
The failure mode (misapplication of preferences to third parties) creates negative externalities (privacy violations, normative harms, misinformation, contractual breaches) that markets and platforms may not internalize without regulation or design changes.
Economic interpretation and argumentation building on the empirical failure mode; these harms are hypothesized implications rather than measured outcomes in the paper.
speculative negative BenchPreS: A Benchmark for Context-Aware Personalized Prefer... Projected negative externalities on third parties (not directly measured in stud...
Widespread adoption of predictive HR tools raises distributional and fairness concerns (algorithmic bias, disparate impacts) and privacy risks that may prompt regulatory responses affecting adoption costs and equilibrium outcomes.
Discussion/implications section raises these risks conceptually; the paper does not empirically measure downstream policy or distributional effects.
speculative negative Adoption of AI-Based HR Analytics and Its Impact on Firm Pro... Potential fairness, privacy, and regulatory impacts (theoretical, not measured)
Unclear liability frameworks increase perceived and real costs and can slow adoption by hospitals and insurers.
Policy analyses and procurement narratives noting liability uncertainty cited as a barrier to procurement and deployment.
medium-high negative Human-AI interaction and collaboration in radiology: from co... time-to-adoption, procurement decisions citing liability concerns, insurance/cov...
Up-front implementation costs commonly include procurement, integration with PACS/EMR, UI/UX development, regulatory compliance, and staff training; recurring costs include monitoring, data labeling, software updates, and cybersecurity.
Implementation reports, vendor and hospital accounts, and qualitative studies documenting cost categories (specific dollar amounts vary across settings and are rarely published in detail).
medium-high negative Human-AI interaction and collaboration in radiology: from co... implementation capital expenditures, annual operating expenditures
Uneven organizational supports can concentrate returns to AI in firms and workers that successfully actualize affordances, potentially widening wage and employment disparities; targeted policy and training investments can mitigate these effects.
Theoretical implication from the framework with policy recommendations; no empirical testing or sample reported in the paper.
speculative negative Revolutionizing Human Resource Development: A Theoretical Fr... wage inequality, employment disparities, concentration of AI returns across firm...
Without continuous support for upskilling/reskilling and inclusive policies, AI risks becoming a source of exclusion rather than an enabler of human advancement.
Normative conclusion derived from reviewed literature and thematic interpretation in the qualitative study (literature-based; evidence is secondary and not quantified).
speculative negative THE IMPACT OF ARTIFICIAL INTELLIGENCE IN THE WORKPLACE: OPPO... social inclusion versus exclusion related to AI adoption
Research literature synthesis demonstrates 70-75% automation potential.
Quantitative estimate offered by the authors (70-75%) as part of function-by-function analysis; no described empirical evaluation or sample supporting the figure.
speculative negative Are Universities Becoming Obsolete in the Age of Artificial ... percent automation potential for research literature synthesis
Knowledge transmission (teaching/lecturing) shows 75-80% AI substitutability.
Authors' quantitative estimate presented in the analysis (75-80%); the paper does not detail empirical methods or validation samples for this percentage.
speculative negative Are Universities Becoming Obsolete in the Age of Artificial ... percent substitutability/automation potential of knowledge transmission
Administrative tasks face 75-80% disruption risk from AI.
Paper provides a quantitative estimate (75-80%) as part of its functional disruption assessment; no empirical methodology, dataset, or sample size is described to support the numeric range.
speculative negative Are Universities Becoming Obsolete in the Age of Artificial ... percent disruption/substitutability of administrative tasks
Demand-dependent pricing in the modeled energy load management setting creates a social dilemma: everyone would benefit from coordination, but in equilibrium agents often choose to incur congestion costs that cooperative turn-taking would avoid.
Theoretical/modeling analysis of consumer agents scheduling appliance use under demand-dependent pricing as described in the paper (analytical argument and/or model simulations). Specific sample sizes or simulation parameters are not given in the abstract.
medium-high negative Hybrid Human-Agent Social Dilemmas in Energy Markets presence of congestion costs vs coordinated turn-taking (system efficiency/total...
There is a risk of a two‑tier market where high‑quality temporal‑preserving enhancements are costly, increasing inequality in experiential welfare and cognitive capital.
Speculative socioeconomic implication based on cost/access arguments and distributional concerns; no inequality modeling or empirical pricing data provided.
speculative negative XChronos and Conscious Transhumanism: A Philosophical Framew... distributional inequality in access to temporal‑quality enhancements and resulti...
Technical expansion without an accompanying theory of lived temporality risks increasing capabilities while degrading the qualitative depth of human experience (presence, attentional flow, felt meaning).
Argumentative claim supported by philosophical analysis and literature synthesis (neurophenomenology, attention economics); no empirical test reported (N/A).
speculative negative XChronos and Conscious Transhumanism: A Philosophical Framew... qualitative depth of human experience (presence, attentional flow, felt meaning)
Differential access to higher-quality (paid) versus free GenAI tools and differing ability to engage with the tool could widen inequality among students and institutions.
Authors' implication based on student-reported concerns about limitations of free ChatGPT versions and on heterogeneous gains across disciplines; this is a policy/implication claim not directly measured in the experiment.
speculative negative Expanding the lens: multi-institutional evidence on student ... equity/inequality in access and learning outcomes (not directly measured)
High-quality, equitable climate information displays public-good characteristics (nonrival, nonexcludable at scale), so private incentives alone will underprovide geographically representative data and shared infrastructure.
Economic reasoning supported by observed concentration of compute and model development (mapping) and standard public-goods theory; no formal empirical market model estimated in the paper.
medium-high negative The Rise of AI in Weather and Climate Information and its Im... Level of provision of geographically representative data/shared infrastructure u...
Heterogeneous trust levels across firms and schools may produce uneven productivity gains and widen performance gaps.
Logical implication and policy discussion in the paper; the cross-sectional study documents relationships between trust and outcomes but does not provide aggregate diffusion or cross-firm longitudinal evidence to confirm unequal sectoral diffusion.
speculative negative Algorithmic Trust and Managerial Effectiveness: The Role of ... distribution of productivity gains / performance gaps across organizations
Overreliance on unvetted AI can propagate biases; economic gains from AI therefore require governance, auditing, and accountability mechanisms.
Framed as a risk and policy recommendation in the discussion; not an empirical finding from the cross-sectional survey reported in the summary.
speculative negative Algorithmic Trust and Managerial Effectiveness: The Role of ... propagation of biases and need for governance/auditing (risk outcomes)
Full replacement of physicians would require breakthroughs in robust generalization, embodied capabilities, and legal/regulatory change—currently lacking.
Conceptual inference based on documented limitations (OOD generalization, lack of embodied/sensorimotor capability, unsettled legal/regulatory environment) summarized in the review.
speculative negative Will AI Replace Physicians in the Near Future? AI Adoption B... feasibility/timeline for physician replacement
Centralized provision of high-quality coding models by a few vendors could produce vendor lock-in and increase platform power in software development inputs.
Market-structure analysis and industry observations synthesized in the paper; the claim is forward-looking and not established by longitudinal market data within the review.
speculative negative ChatGPT as a Tool for Programming Assistance and Code Develo... market concentration measures (e.g., HHI), indicators of vendor lock-in (switchi...
If many firms adopt AI generation without matching verification, aggregate fragility in software-dependent infrastructure could rise, increasing downtime costs and systemic economic risk.
Macro-level risk projection and system fragility argument in the paper; no macroeconomic modeling or empirical scenario analysis provided.
speculative negative Overton Framework v1.0: Cognitive Interlocks for Integrity i... aggregate system fragility metrics (downtime, outage frequency/severity), econom...
This reversal of the burden of proof creates moral-hazard-like behavior: incentives for speed reduce verification effort.
Theoretical argument built on the micro-coercion mechanism and economic reasoning; no empirical validation provided.
speculative negative Overton Framework v1.0: Cognitive Interlocks for Integrity i... verification effort per artifact (e.g., reviewer time), proportion of unchecked ...
Under time pressure, developers adopt an implicit default of accepting plausible machine outputs unless they can disprove them (the 'micro-coercion of speed'), effectively reversing the burden of proof.
Behavioral mechanism posited from descriptive reasoning and thought experiments; no behavioral experiments, surveys, or observational data reported.
speculative negative Overton Framework v1.0: Cognitive Interlocks for Integrity i... developer acceptance rate of machine-generated outputs under time pressure; rate...
DAR dynamics (authority states, hysteresis, safe-exit times) introduce path-dependence and switching costs that should be treated as state variables in production and decision models of human–AI joint work.
Theoretical implications section arguing these elements add path-dependence and switching costs to economic/production models; analytic reasoning, not empirical measurement.
medium-high negative Human–AI Handovers: A Dynamic Authority Reversal Framework f... switching_costs; path_dependence_indicators; effect_on_throughput
Concentration risks exist because high fixed costs for safe integration and model adaptation may favor larger incumbents or platform providers.
Conceptual economic reasoning and practitioner commentary synthesized in the review; no empirical market-structure analysis or sample-based evidence included here.
speculative negative The Effectiveness of ChatGPT in Customer Service and Communi... market concentration indicators and barriers to entry related to AI integration ...
Rich contextual memories and continuous home interaction create valuable data streams that could enable firms to capture substantial value, raising concerns about data governance, consent, and monetization.
Authors' policy and economic implications discussion noting that MMCM-like memories generate valuable data; this is a conceptual/policy claim rather than empirically tested within the study.
speculative negative Context-Rich Adaptive Embodied Agents: Enhancing LLM-Powered... Data generation and value-capture potential (qualitative implication)
Imported AI systems may impose foreign values and norms, risking erosion of indigenous knowledge and social cohesion.
Normative and conceptual argument supported by cited case studies and policy analyses; no original anthropological or sociological fieldwork in the paper.
low-medium negative Towards Responsible Artificial Intelligence Adoption: Emergi... indicators of indigenous knowledge retention, measures of cultural alignment of ...
Deployed AI systems can produce algorithmic bias that harms marginalized groups when models are trained on skewed or non‑representative data.
Synthesis of prior empirical findings and case studies on algorithmic bias and fairness in ML systems; paper does not present new empirical tests.
medium-high negative Towards Responsible Artificial Intelligence Adoption: Emergi... fairness metrics, disparate error rates, incidence of discriminatory outcomes fo...