The Commonplace

Evidence (4560 claims)

Adoption: 5267 claims
Productivity: 4560 claims
Governance: 4137 claims
Human-AI Collaboration: 3103 claims
Labor Markets: 2506 claims
Innovation: 2354 claims
Org Design: 2340 claims
Skills & Training: 1945 claims
Inequality: 1322 claims

Evidence Matrix

Claim counts by outcome category and direction of finding.

Outcome                    Positive  Negative  Mixed  Null  Total
Other                           378       106     59   455   1007
Governance & Regulation         379       176    116    58    739
Research Productivity           240        96     34   294    668
Organizational Efficiency       370        82     63    35    553
Technology Adoption Rate        296       118     66    29    513
Firm Productivity               277        34     68    10    394
AI Safety & Ethics              117       177     44    24    364
Output Quality                  244        61     23    26    354
Market Structure                107       123     85    14    334
Decision Quality                168        74     37    19    301
Fiscal & Macroeconomic           75        52     32    21    187
Employment Level                 70        32     74     8    186
Skill Acquisition                89        32     39     9    169
Firm Revenue                     96        34     22     –    152
Innovation Output               106        12     21    11    151
Consumer Welfare                 70        30     37     7    144
Regulatory Compliance            52        61     13     3    129
Inequality Measures              24        68     31     4    127
Task Allocation                  75        11     29     6    121
Training Effectiveness           55        12     12    16     96
Error Rate                       42        48      6     –     96
Worker Satisfaction              45        32     11     6     94
Task Completion Time             78         5      4     2     89
Wages & Compensation             46        13     19     5     83
Team Performance                 44         9     15     7     76
Hiring & Recruitment             39         4      6     3     52
Automation Exposure              18        17      9     5     50
Job Displacement                  5        31     12     –     48
Social Protection                21        10      6     2     39
Developer Productivity           29         3      3     1     36
Worker Turnover                  10        12      3     –     25
Skill Obsolescence                3        19      2     –     24
Creative Output                  15         5      3     1     24
Labor Share of Income            10         4      9     –     23
Active filter: Productivity
Insurers may revise underwriting, raise premiums, or exclude certain AI-related exposures until risk assessments improve; new insurance products may emerge for AI governance failures.
Policy and market impact speculation based on perceived risk; no empirical insurer responses or underwriting data provided.
Confidence: low · Direction: mixed · Paper: Prompt Engineering or Prompt Fraud? Governance Challenges fo... · Outcome: insurer behavior (premiums, coverage terms) and emergence of AI-specific insuran...
Firms will reallocate resources toward AI governance, monitoring tools, and skilled auditors (increasing compliance and labor costs), and demand for products/services (prompt-provenance tools, watermarking, AI forensic services, certified-safe LLMs) will rise.
Market/economic projection based on the identified threat and presumed demand for mitigations; speculative without market-data support in the paper.
Confidence: low · Direction: mixed · Paper: Prompt Engineering or Prompt Fraud? Governance Challenges fo... · Outcome: firm resource allocation (spend on governance/monitoring) and market demand for ...
Policy implication: policymakers seeking to balance openness and security should consider layered, adaptive instruments that can be tuned by sector or actor; economic analysis can help identify where centralized coordination yields scale economies versus where decentralized rights‑based approaches preserve competition and trust.
Normative policy recommendation extrapolated from the paper's comparative findings and theoretical framing; not tested empirically in the paper.
Confidence: low · Direction: mixed · Paper: Balancing openness and security in scientific data governanc... · Outcome: policy design effectiveness (layered/adaptive instruments), trade‑offs between s...
Demand for labor may shift from routine instrument operation and image processing toward higher-level tasks (experiment design, oversight, interpretation), and LLMs may amplify productivity of skilled scientists, potentially increasing wage premia for those who supervise AI-guided workflows.
Labor-economics reasoning and analogy to prior automation effects; no empirical labor-market or wage data presented specific to microscopy.
Confidence: low · Direction: mixed · Paper: ChatMicroscopy: A Perspective Review of Large Language Model... · Outcome: labor demand composition, distribution of wages, skill premium
Principal stratification analysis suggests the training’s effect on scores operated primarily by expanding the set of LLM users (an adoption channel) rather than substantially improving per-user productivity among those who would already use the LLM.
Mechanism decomposition using principal stratification applied to the randomized trial data (n = 164); analysis indicates a larger contribution from the adoption margin than from within-user productivity gains, though estimates have wide confidence intervals.
Confidence: low · Direction: mixed · Paper: Training for Technology: Adoption and Productive Use of Gene... · Outcome: Mechanism components: adoption rate and per-user effectiveness (score conditiona...
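The principal-stratification mechanism this entry describes reduces to share-weighted arithmetic over strata. The sketch below illustrates the decomposition only; the stratum shares and per-stratum effects are invented for illustration and are not taken from the paper.

```python
# Hypothetical principal-stratification decomposition of a training effect.
# Strata: always-users, induced adopters (use the LLM only if trained),
# and never-users. All numbers below are invented for illustration.
p_always, p_induced, p_never = 0.40, 0.35, 0.25   # stratum shares (sum to 1)
effect_always = 0.05    # per-user productivity change among always-users
effect_induced = 0.60   # score gain operating through the adoption margin
effect_never = 0.00     # no LLM use either way, so no effect

# The overall (intention-to-treat) effect is the share-weighted sum of
# stratum-specific effects; here the adoption channel dominates.
itt = (p_always * effect_always
       + p_induced * effect_induced
       + p_never * effect_never)
print(round(itt, 2))  # adoption margin contributes 0.21 of the 0.23 total
```

Under these invented numbers, the adoption margin (induced adopters) accounts for most of the total effect, which is the qualitative pattern the entry reports, albeit with wide confidence intervals.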
Widespread adoption of formal governance could lower systemic risk from enterprise AI failures, whereas heterogeneous adoption may create winners and losers based on governance quality.
Conceptual systems-level argument and comparative-case reasoning; no quantitative systemic-risk modeling or empirical evidence provided.
Confidence: low · Direction: mixed · Paper: Governed Hyperautomation for CRM and ERP: A Reference Patter... · Outcome: systemic risk of enterprise AI failures and competitive market outcomes
Greater automation of routine ERP/CRM tasks will displace some operational roles while increasing demand for governance, oversight, and AI-engineering skills, shifting labor toward higher-skill, higher-wage tasks.
Theoretical labor-market implication derived from the pattern's effects on task automation and governance needs; based on qualitative synthesis, not empirical labor-market analysis.
Confidence: low · Direction: mixed · Paper: Governed Hyperautomation for CRM and ERP: A Reference Patter... · Outcome: changes in labor demand by skill level, displacement of routine roles, increased...
Risk-adjusted total cost of ownership (TCO) may fall if governance prevents costly incidents (e.g., compliance fines, data breaches), despite higher upfront costs.
Conceptual economic argument supported by qualitative examples and best-practice reasoning; no empirical ROI or incident-rate data presented.
Confidence: low · Direction: mixed · Paper: Governed Hyperautomation for CRM and ERP: A Reference Patter... · Outcome: risk-adjusted TCO and incident-related cost savings
Voyage routing remains dominated by heuristic methods.
Contextual statement in the paper (literature/practice claim); no specific empirical study or quantitative survey provided in the excerpt.
Confidence: low · Direction: negative · Paper: Physics-informed offline reinforcement learning eliminates c... · Outcome: prevalence of heuristic methods in operational voyage routing (qualitative claim...
Systemic risks from misaligned optimisation (narrow objectives, externalities) warrant oversight mechanisms (AI steering committees, escalation paths) and potentially sectoral regulation of decision-critical algorithms.
Policy-prescriptive claim based on conceptual identification of optimisation externalities and accountability gaps; no sectoral case studies or empirical risk quantification in the paper.
Confidence: low · Direction: negative · Paper: Comparative analysis of strategic vs. computational thinking... · Outcome: systemic risk exposure and effectiveness of oversight/regulatory mechanisms
The two tail risks (cyber-triggered escalation and loss-of-control) create fat-tailed risk distributions that complicate risk pricing and capital allocation, potentially causing precautionary market behavior (deleveraging, higher liquidity buffers).
Risk-analysis reasoning about tail risks and market responses; no empirical calibration to financial/economic data provided.
Confidence: low · Direction: negative · Paper: Highly Autonomous Cyber-Capable Agents: Anticipating Capabil... · Outcome: changes in financial risk-pricing metrics, capital allocation behavior, and prec...
Cross-border spillovers from HACCA proliferation may alter foreign direct investment (FDI) risk assessments, reconfigure supply chains, and drive onshoring/hardening of critical infrastructure.
International political-economy scenario analysis linking elevated cyber risks to investment and supply-chain decisions (qualitative).
Confidence: low · Direction: negative · Paper: Highly Autonomous Cyber-Capable Agents: Anticipating Capabil... · Outcome: changes in FDI flows, supply-chain configuration, and infrastructure hardening m...
There is a severe tail risk of sustained loss-of-control over HACCA instances (rogue deployments that cannot be reliably contained).
Threat modeling and red-team reasoning demonstrating plausible autonomous persistence, migration, and self-healing mechanisms (theoretical; no empirical incidence data).
Confidence: low · Direction: negative · Paper: Highly Autonomous Cyber-Capable Agents: Anticipating Capabil... · Outcome: probability or extent of uncontrolled, persistent HACCA deployments
There is a severe tail risk that autonomous cyber operations could accidentally escalate into cyber-triggered crises involving nuclear-armed states (misattribution or inadvertent effects on critical systems).
Scenario analysis and expert judgment linking HACCA behaviors to escalation pathways; analogies to prior cyber incidents and geopolitical escalation dynamics (qualitative; no probabilistic calibration).
Confidence: low · Direction: negative · Paper: Highly Autonomous Cyber-Capable Agents: Anticipating Capabil... · Outcome: probability or risk of inadvertent cyber-triggered escalation involving nuclear-...
Measurement friction from the results-actionability gap creates a hidden cost: teams can detect problems but cannot cheaply translate findings into improvements, reducing the speed and ROI of LLM investments.
Authors' implication drawn from interview evidence about the effort required for remediation and lack of direct translation from evaluations to fixes; presented as an economic implication rather than directly measured quantity.
Confidence: low · Direction: negative · Paper: Results-Actionability Gap: Understanding How Practitioners E... · Outcome: inferred effect on ROI and speed of product improvement
If verified, explainable GLAI is priced higher due to compliance costs, access-to-justice gaps may widen: lower-cost but riskier offerings may persist, or services may simply become more expensive.
Distributional reasoning linking higher compliance costs to price increases and access effects; supported by illustrative examples, no empirical price or access data.
Confidence: low · Direction: negative · Paper: Why Avoid Generative Legal AI Systems? Hallucination, Overre... · Outcome: access-to-justice metrics correlated with pricing of verified vs. unverified GLA...
Routine, unrestrained adoption of GLAI without enforceable mechanisms for effective human review threatens judicial independence and rights protections.
Normative and legal argumentation supported by conceptual analysis and illustrative scenarios. No empirical causal evidence; projection based on theoretical risk pathways.
Confidence: low · Direction: negative · Paper: Why Avoid Generative Legal AI Systems? Hallucination, Overre... · Outcome: level of threat to judicial independence and protection of rights (institutional...
There is a risk of deskilling, especially for trainees receiving reduced diagnostic practice when AI automates routine tasks.
Conceptual arguments supported by qualitative reports and limited observational findings; empirical longitudinal evidence quantifying deskilling is sparse.
Confidence: low · Direction: negative · Paper: Human-AI interaction and collaboration in radiology: from co... · Outcome: trainee diagnostic performance over time, case exposure counts, measures of reta...
Erosion of informal communication and tacit coordination driven by AI integration can create negative externalities on team efficiency that are not captured by short-run metrics.
Derived from interview narratives describing loss of ad hoc communications and tacit knowledge exchange after AI adoption; interpreted as producing costs not reflected in immediate measurable outputs.
Confidence: low · Direction: negative · Paper: AI in project teams: how trust calibration reconfigures team... · Outcome: team efficiency and unmeasured coordination/tacit work
Uneven adoption of symbiarchic HR practices across firms could concentrate productivity gains and rents in firms or occupations that successfully integrate AI while preserving human judgement, potentially widening within‑ and between‑firm inequality.
Projected distributional implication based on economic theory and the paper’s framework; presented as a hypothesis for empirical testing rather than as an observed result.
Confidence: low · Direction: negative · Paper: Symbiarchic leadership: leading integrated human and AI cybe... · Outcome: within‑ and between‑firm inequality; distribution of productivity rents
Demanding oversight of multiple AI agents drives increased task-switching for workers.
Asserted in the paper as part of the mechanism linking AI use to cognitive overload, based on organizational observations and theory; no empirical task-switching frequency or time-use data provided in the excerpt.
Confidence: low · Direction: negative · Paper: When AI Assistance Becomes Cognitive Overload: Understanding... · Outcome: task-switching frequency / oversight burden
Such disjointed strategies cannot manage the systemic socio-economic disruption ahead.
Asserted in abstract as a conclusion/argument; no empirical evaluation described in the abstract.
Confidence: low · Direction: negative · Paper: The DARE framework: a global model for responsible artificia... · Outcome: capacity of current strategies to manage systemic socio-economic disruption
AI threatens to fracture the 20th-century social contract.
Asserted in abstract as a normative/predictive claim; no empirical support described in the abstract.
Confidence: low · Direction: negative · Paper: The DARE framework: a global model for responsible artificia... · Outcome: stability/continuity of the social contract (social cohesion, welfare expectatio...
Mergers are a barrier to economic growth (negative association between mergers and GDP growth).
Model results reported a negative relationship between mergers and GDP growth in the regressions described in the summary; however, the summary does not define how 'mergers' is measured, how widely it was observed across countries, or the statistical significance levels.
Confidence: low · Direction: negative · Paper: The Role of Artificial Intelligence in Economic Growth: Syst... · Outcome: GDP growth (national GDP growth rate)
Unequal GenAI adoption has implications for productivity, skill formation, and economic inequality in an AI-enabled economy.
Interpretation/implication drawn from observed gendered adoption patterns in the 2023–2024 UK survey and literature on technology diffusion and labor-market impacts (no direct empirical measurement of downstream economic effects in the paper).
Confidence: low · Direction: negative · Paper: Women Worry, Men Adopt: How Gendered Perceptions Shape the U... · Outcome: Implied downstream outcomes: productivity, skill formation, economic inequality ...
Preliminary evidence that inappropriate reliance on AI outputs is worse for complex information needs (complex answers).
Post-hoc/stratified analysis in the user study examining the effect of the complexity of the information need on reliance/error-detection; described as preliminary in the paper.
Confidence: low · Direction: negative · Paper: To Believe or Not To Believe: Comparing Supporting Informati... · Outcome: error-detection rate and reliance stratified by complexity of question/answer
AI-driven productivity gains may not translate into broad-based demand if income is concentrated among capital owners, which could dampen aggregate profitability over time.
Theoretical argument grounded in Mandel-like distributional mechanics and demand-driven growth literature; speculative without empirical aggregation tests in the paper.
Confidence: low · Direction: negative · Paper: Economic Waves, Crises and Profitability Dynamics of Enterpr... · Outcome: aggregate demand and aggregate profitability
Concentration of curated datasets and restrictive IP can create monopolistic rents and underprovision of public‑good datasets, implying policy interventions (data sharing incentives/standards) may be required.
Economic reasoning about market formation and data as a scarce asset; no empirical market analysis provided in summary (theoretical implication).
Confidence: low · Direction: negative · Paper: Editorial: Integrating machine learning and AI in biological... · Outcome: Market concentration / data access (conceptual)
More granular and auditable credentials may shift signaling dynamics and risk credential inflation; regulators should monitor credential proliferation and market value.
Conceptual warning in paper (theoretical); no empirical credential-market study included.
Confidence: low · Direction: negative · Paper: Curriculum engineering: organisation, orientation, and manag... · Outcome: number and granularity of credentials issued, employer valuation of credentials,...
Overreliance on GenAI CDS may lead to deskilling of clinicians, eroding judgment over time and increasing systemic vulnerability.
The paper cites theoretical risk and references limited longitudinal concerns; empirical longitudinal studies demonstrating deskilling are scarce per the paper’s stated evidence gaps.
Confidence: low · Direction: negative · Paper: GenAI and clinical decision making in general practice · Outcome: clinician diagnostic skill over time; reliance/override rates; error rates when ...
Commercial structural biology services for routine solved folds may be commoditized, pushing firms toward complex validation, novel targets, or high‑value contract research.
Paper suggests this in 'Disruption of service markets' as a projected industry response; it is a strategic implication rather than an empirically demonstrated trend in the text.
Confidence: low · Direction: negative · Paper: Protein structure prediction powered by artificial intellige... · Outcome: change in demand/pricing for routine structural biology services and shift towar...
Legacy systems and siloed incentives create switching frictions that slow diffusion of AI-enabled ISP; early adopters may achieve sustained cost and service advantages, and vendors that bundle technology with change management could capture large rents.
Authors' argument informed by case observations of switching costs and vendor roles; no causal market-level evidence provided.
Confidence: low · Direction: negative · Paper: Optimizing integrated supply planning in logistics: Bridging... · Outcome: adoption rate, market concentration, vendor rents
Returns to AI investments may exhibit increasing returns to scale, reinforcing winner‑take‑most dynamics unless offset by platformization or open‑source diffusion.
Economic scenario reasoning on capital intensity and platform effects; no empirical calibration or econometric evidence provided.
Confidence: low · Direction: negative · Paper: How AI Will Transform the Daily Life of a Techie within 5 Ye... · Outcome: return on AI investment by firm size (evidence of increasing returns to scale) a...
Because feedbacks from capital and labor onto AI are weak, AI can grow rapidly and may lead to lock-in, concentration, and distributional risks that warrant monitoring and possible redistributive or competition policies.
Empirical finding of weak negative feedbacks to AI in estimated interaction coefficients combined with theoretical interpretation about growth and lock-in risks.
Confidence: low · Direction: negative · Paper: Governance of Technological Transition: A Predator-Prey Anal... · Outcome: AI capital growth dynamics and potential long-run concentration/lock-in risks (q...
Inadequate protections reduce public trust in mobile-AI services, which can slow diffusion and undercut the growth trajectories that policy narratives anticipate.
Inferred from stakeholder commentary and policy discourse combined with communication-rights theory; the paper does not present survey or adoption-rate data.
Confidence: low · Direction: negative · Paper: Promising Protection, Producing Exposure: AI Ethics and Mobi... · Outcome: public trust in mobile‑AI; adoption/diffusion rates
Low-wage and platform workers are particularly exposed to algorithmic management and surveillance, with potential downward pressure on wages, bargaining power, and job quality.
The paper's qualitative analysis of stakeholder comments and policy omissions, combined with literature-based inference about platform labor dynamics; no primary labor-market survey or quantitative wage data provided.
Confidence: low · Direction: negative · Paper: Promising Protection, Producing Exposure: AI Ethics and Mobi... · Outcome: worker exposure to algorithmic management; wages; bargaining power; job quality
Soft‑law governance and growth-first narratives risk concentrating benefits (investment, productivity gains) while externalizing costs (privacy harms, biased decisioning) onto vulnerable populations, exacerbating inequality and reducing inclusive economic development.
Analytic inference from qualitative review of governance instruments and policy narratives combined with communications-ecology and political-economy reasoning; not based on quantitative economic measurement in the paper.
Confidence: low · Direction: negative · Paper: Promising Protection, Producing Exposure: AI Ethics and Mobi... · Outcome: distribution of benefits and costs; inequality; inclusiveness of economic develo...
Legal liability and cyber-insurance markets will need to adapt as machine-generated code becomes pervasive, with pricing internalizing risk from inadequate verification processes.
Speculative legal/economic implication discussed in the paper; no actuarial or legal-case data provided.
Confidence: low · Direction: negative · Paper: Overton Framework v1.0: Cognitive Interlocks for Integrity i... · Outcome: insurance pricing changes; liability claims tied to machine-generated code
Individual developers or firms may underinvest in verification because defect accumulation imposes external costs on downstream actors, creating market failures that can justify standards, certifications, or regulation mandating interlocks or minimum verification practices.
Policy and market-failure argument based on externalities presented conceptually; no modeling or empirical evidence of such externalities provided.
Confidence: low · Direction: negative · Paper: Overton Framework v1.0: Cognitive Interlocks for Integrity i... · Outcome: degree of underinvestment in verification; incidence of downstream costs/externa...
Short-run productivity gains from generative AI may be offset by longer-run increases in maintenance, security breaches, and reliability costs if verification lags.
Economic reasoning and forward-looking implications discussed in the paper; no empirical cost-benefit or longitudinal data presented.
Confidence: low · Direction: negative · Paper: Overton Framework v1.0: Cognitive Interlocks for Integrity i... · Outcome: net productivity over time; maintenance/security costs versus short-term product...
Small, unverified errors, insecure patterns, and brittle interactions accumulate over time (latent accumulation), increasing operational fragility and long-run maintenance costs.
Theoretical argument and illustrative examples in the paper; no longitudinal defect accumulation studies or empirical cost analysis provided.
Confidence: low · Direction: negative · Paper: Overton Framework v1.0: Cognitive Interlocks for Integrity i... · Outcome: rate of latent defect accumulation; long-run maintenance and reliability costs
Time pressure and productivity incentives lead developers to accept plausible AI outputs without full validation, a behavioral/institutional failure mode called the 'micro-coercion of speed' that effectively reverses the burden of proof.
Behavioral diagnosis and incentive analysis presented conceptually in the paper; no behavioral experiments, surveys, or observational data reported.
Confidence: low · Direction: negative · Paper: Overton Framework v1.0: Cognitive Interlocks for Integrity i... · Outcome: developer acceptance rate of AI outputs without full validation / shift in burde...
Hallucination and error risk introduce potential liabilities in client engagements and may change contracting, insurance, and pricing practices in consulting services.
Derived from practitioner concerns reported in interviews and authors' normative discussion; no contractual or insurance-market data presented.
Confidence: low · Direction: negative · Paper: Where Automation Meets Augmentation: Balancing the Double-Ed... · Outcome: liability exposure; contracting/insurance practices; pricing adjustments
Effective deployment requires governance, verification processes, and liability management to manage hallucination risk, creating adoption costs that may advantage larger firms and affect market concentration and pricing power.
Argument based on interviews about necessary organizational safeguards and the resource requirements to implement them; speculative market-structure implications are not empirically tested in the paper.
Confidence: low · Direction: negative · Paper: Where Automation Meets Augmentation: Balancing the Double-Ed... · Outcome: adoption costs; firm-level resource burden; changes in market concentration/pric...
Widespread GenAI use may accelerate skill obsolescence for routine competencies and increase the premium on monitoring, critical evaluation, and AI‑integration skills, shifting investment toward retraining and upskilling.
Projection based on qualitative interviews and the authors' economic interpretation of TGAIF; no longitudinal or wage/skill data provided.
Confidence: low · Direction: negative · Paper: Where Automation Meets Augmentation: Balancing the Double-Ed... · Outcome: skill obsolescence rates; demand for monitoring/evaluation/AI-integration skills...
Uncertainty about long-run agentic behavior increases option value and downside risk of investing in agentic systems, which may raise discount rates and required returns.
Economic argument applying risk/return logic to agentic uncertainty; no quantitative empirical evidence provided.
Confidence: low · Direction: negative · Paper: Visioning Human-Agentic AI Teaming: Continuity, Tension, and... · Outcome: investment valuation metrics (discount rates, required returns) for agentic syst...
Economic rents and advantages may accrue to agents who control large datasets, computing resources, and organizational processes that effectively integrate AI as a co-pilot, potentially increasing market concentration among AI providers.
Economic theory on scale economies and platform effects combined with observed industry patterns; reviewed literature provides conceptual arguments and case examples rather than broad empirical market-structure measurement.
Confidence: low · Direction: negative · Paper: ChatGPT as an Innovative Tool for Idea Generation and Proble... · Outcome: market concentration measures; returns to data/compute ownership (not fully meas...
Generative AI poses substitution risk for entry-level or routine cognitive work focused on generation or drafting without evaluative responsibility.
Task-based analyses and case studies indicating automation potential for routine generation tasks; empirical demonstrations of AI-produced drafts/outputs that could replace such work, but longer-run displacement evidence is limited.
Confidence: low · Direction: negative · Paper: ChatGPT as an Innovative Tool for Idea Generation and Proble... · Outcome: task automatability; employment/demand for routine-generation roles (largely unm...
Upfront integration and recurring governance costs mean smaller firms may face higher relative costs — potentially increasing scale advantages for larger incumbents.
Deployment case studies and cost reports indicating significant fixed integration and governance costs; inference to market structure is speculative.
Confidence: low · Direction: negative · Paper: The Effectiveness of ChatGPT in Customer Service and Communi... · Outcome: relative upfront and ongoing costs; indicators of scale advantages or market con...
There is a risk of deskilling through excessive reliance on AI, implying a need for continuous training and certification to preserve human judgment.
Qualitative interview evidence and observed concerns about overreliance; authors recommend training/governance based on identified risks; no direct longitudinal measurement of deskilling provided in summary.
Confidence: low · Direction: negative · Paper: Human-AI Synergy in Financial Decision-Making: Exploring Tru... · Outcome: human skill levels (deskilling risk); need for training/certification