
Generative AI can erode its own value: individually rational use drives model collapse that lowers social welfare, and habit formation spreads seemingly harmless use into high-value tasks, magnifying the damage.

Generative artificial intelligence reduces social welfare through model collapse
Fabian Baumann, Erol Akçay, Joshua B. Plotkin · April 23, 2026 · arXiv (Cornell University)
Source: OpenAlex · Paper type: theoretical · Evidence strength: n/a · Relevance: 7/10 · Source PDF
A theoretical model shows that individually rational reliance on generative AI can cause model collapse that reduces collective welfare—especially for high-value tasks—and habit formation can cause adoption in low-stakes areas to spill over and amplify those welfare losses.

Generative artificial intelligence (genAI) is rapidly reshaping how knowledge and culture are produced and consumed. Yet generative models are vulnerable to model collapse: when trained on data generated by earlier versions of themselves, their outputs can lose diversity and accuracy. This creates a social dilemma, because delegating tasks to genAI can be individually beneficial in the short term even as widespread adoption degrades future model performance. Here we develop a parsimonious model of behavior in collaborative interactions in which individuals can either exert human effort, rely on genAI, or refrain from work altogether. The welfare consequences of genAI are organized by a simple two-dimensional taxonomy: the strength of the incentive to perform the task without AI, and the severity of model collapse. Within this framework, the introduction of genAI -- while initially beneficial at the individual level -- will reduce social welfare for the most important types of tasks. In addition, habit formation around genAI use can couple otherwise separate domains, so that adoption in low-stakes tasks spills over into high-value tasks and amplifies welfare losses. Together, these results identify a general pathway by which, in the absence of intervention, individually rational adoption of genAI will assuredly and profoundly reduce collective welfare.

Summary

Main Finding

Individually rational adoption of generative AI (genAI) can systematically reduce social welfare through a feedback process called model collapse: when models are retrained on AI-generated outputs, output quality and diversity deteriorate as AI usage grows. Using a parsimonious game-theoretic model of pairwise collaboration and imitation dynamics, the paper derives a simple two-dimensional taxonomy (baseline incentive to work × collapse strength) that determines long-run outcomes. Crucially, for high-incentive tasks subject to strong model collapse, the introduction of genAI always reduces long-run social welfare, even though AI adoption is individually rational in the short run. Habit formation (using the same strategy across tasks) produces spillovers, so that AI uptake in low-stakes tasks can force harmful AI use in high-value domains.

Key Points

  • Collaboration game: individuals choose H (human work), N (no work), or AI (delegate to genAI) in pairwise collaborations; payoffs are shared equally.
  • Payoff structure: human work contributes b (benefit) at cost ch; AI contributes a at cost ca with ch > ca and b ≥ a.
  • Model collapse: AI performance a depends on population AI frequency xAI; modeled as a linear decline a(xAI) = b − m xAI (m > 0 measures collapse strength). Results are robust beyond linearity.
  • Two key dimensions determine outcomes:
    • Incentive regime: high-incentive tasks (b > 2 ch) where humans would work absent AI; low-incentive tasks (b < 2 ch) where they would not.
    • Collapse strength: weak (m < mc) vs strong (m > mc), where mc = min(b, 2 ch) − 2 ca.
  • Long-run equilibria (three possible): all-AI; AI mixed with H; AI mixed with N. Which equilibrium obtains depends on the two dimensions above.
  • Welfare effects:
    • Low-incentive tasks (Ia/Ib): AI generally increases social welfare (it enables work that otherwise wouldn’t be done).
    • High-incentive tasks with weak collapse (IIa): welfare change depends on cost savings vs collapse (ΔS = ch − ca − m); could be positive or negative.
    • High-incentive tasks with strong collapse (IIb): theorem — introducing genAI always decreases social welfare (ΔS < 0). Intuition: some human effort persists, but AI adoption degrades model quality enough that the mixed equilibrium is worse than the all-human baseline.
  • Spillover / habit formation: if agents habitually use the same strategy across tasks, beneficial AI adoption in low-stakes domains can force AI use in high-stakes domains and amplify welfare losses (negative spillover region).
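The taxonomy and welfare comparison above can be sketched in a few lines of code. This is an illustrative reconstruction from the summary, not the paper's own code: the threshold m_c = min(b, 2·ch) − 2·ca and the welfare change ΔS = ch − ca − m are taken from the bullets above, and ΔS here compares an all-AI state against the all-human baseline (a simplification, since the actual long-run equilibria can be mixed).

```python
# Sketch of the two-dimensional taxonomy from the Key Points above.
# Parameter names follow the summary: b = human benefit, ch = human cost,
# ca = AI cost, m = collapse strength.

def collapse_threshold(b, ch, ca):
    """Critical collapse strength m_c = min(b, 2*ch) - 2*ca."""
    return min(b, 2 * ch) - 2 * ca

def classify_task(b, ch, ca, m):
    """Classify a task by incentive regime and collapse strength."""
    incentive = "high" if b > 2 * ch else "low"  # would humans work absent AI?
    strength = "strong" if m > collapse_threshold(b, ch, ca) else "weak"
    return incentive, strength

def welfare_change_all_ai(b, ch, ca, m):
    """Per-capita welfare change Delta_S = ch - ca - m when moving from
    all-human (payoff b - ch) to all-AI (payoff a(1) - ca = b - m - ca)."""
    return ch - ca - m

# Example: a high-incentive task with strong collapse (regime IIb).
b, ch, ca, m = 3.0, 1.0, 0.2, 2.0
print(classify_task(b, ch, ca, m))          # ('high', 'strong')
print(welfare_change_all_ai(b, ch, ca, m))  # -1.2: welfare falls
```

Note how cost savings alone do not decide the sign: ca < ch, yet ΔS turns negative as soon as the collapse term m exceeds the cost saving ch − ca.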

Data & Methods

  • Type of study: theoretical, analytic modeling (no empirical estimation).
  • Core model:
    • Finite population (N → ∞ limit) of agents matched in pairs; repeated interactions.
    • Payoff matrix ΠAI with entries determined by contributions and costs (given in paper).
    • Strategy set: H, N, AI. Benefits and costs symmetric across partners; payoff shared equally.
  • Dynamics:
    • Evolutionary imitation dynamics approximated by replicator-type ODEs: ẋs = xs (πs(x) − ϕ(x)), with πs average payoff and ϕ population mean payoff.
    • Stability and equilibrium analysis of these ODEs to identify attractors and basins of attraction.
  • Modeling of collapse: benefit from AI declines with population prevalence a(xAI) = b − m xAI (linear), with m parameterizing collapse severity; authors state robustness to other functional forms.
  • Extensions:
    • Two-task model where agents allocate time between two task-types; strong habit modeled as constraint that agents use the same strategy in both tasks. Analysis shows spillovers under habit constraints.
  • Proofs and analytical derivations provided in supplementary/methods (referenced as (40) in paper).
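A minimal simulation of these replicator dynamics can illustrate the mechanics. The payoffs below are an assumption-laden reconstruction from the summary (each agent receives half the joint contribution minus its own cost, with the linear collapse a(x_AI) = b − m·x_AI); the paper's exact payoff matrix ΠAI may differ in detail.

```python
# Euler integration of the replicator ODEs x_s' = x_s * (pi_s - phi)
# for the three strategies H (human work), N (no work), AI (delegate),
# under linear model collapse a(x_AI) = b - m * x_AI.

import numpy as np

b, ch, ca, m = 3.0, 1.0, 0.2, 2.0      # illustrative parameters (regime IIb)

def payoffs(x):
    """Average payoffs of H, N, AI against population state x = (xH, xN, xAI)."""
    xH, xN, xAI = x
    a = b - m * xAI                    # collapse: AI benefit falls with AI use
    contrib = xH * b + xAI * a         # expected partner contribution (N adds 0)
    pi_H = 0.5 * (b + contrib) - ch    # shared joint benefit minus own cost
    pi_N = 0.5 * contrib
    pi_AI = 0.5 * (a + contrib) - ca
    return np.array([pi_H, pi_N, pi_AI])

def replicator_step(x, dt=0.01):
    """One Euler step of x_s' = x_s * (pi_s - phi)."""
    pi = payoffs(x)
    phi = x @ pi                       # population mean payoff
    x = x + dt * x * (pi - phi)
    return np.clip(x, 0.0, None) / x.sum()  # keep state on the simplex

x = np.array([0.8, 0.1, 0.1])          # start mostly human
for _ in range(20000):
    x = replicator_step(x)
print(x.round(3))                      # ≈ [0.2, 0.0, 0.8]: AI mixed with H
```

With these parameters the population settles into the AI-mixed-with-H attractor: N dies out (pi_H exceeds pi_N whenever b > 2·ch), and x_AI converges to the interior point where pi_H = pi_AI, consistent with the equilibrium taxonomy in the Key Points.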

Implications for AI Economics

  • Welfare trade-offs are dynamic: static productivity gains from genAI can mask long-run collective losses due to degraded training data and model collapse. Cost-savings alone (ca < ch) are not sufficient to guarantee social welfare gains.
  • Policy and institutional responses should prioritize preventing harmful feedback loops that put high-value cultural and informational goods at risk:
    • Data provenance and labeling: prevent widespread retraining on unlabelled AI-generated content (e.g., require provenance metadata, watermarks, or mandatory labeling).
    • Training restrictions: limit use of publicly available AI-generated content in large-scale model training, or maintain curated human-only corpora for sensitive/high-incentive domains.
    • Economic incentives: subsidize human contributions in domains prone to collapse or tax AI-produced content that is fed back into public training corpora.
    • Platform design: encourage hybrid workflows that preserve human-produced signals (human-in-the-loop, provenance-preserving interfaces), and discourage automatic publication pipelines that recycle AI content into training data.
    • Regulatory focus: prioritize protections for high-incentive, culturally salient domains (research literature, journalism, scientific archives, essential decision-making records).
  • Measurement agenda for economists and policymakers:
    • Empirically estimate collapse parameters (m) across domains by tracking model performance as AI prevalence in training corpora increases.
    • Monitor fraction of AI-generated content in public repositories (Wikipedia, news, codebases, scholarly preprints) and its incorporation into training sets.
    • Field and lab experiments on agent behavior: how short-term productivity gains translate into adoption and habit formation across tasks.
    • Cost–benefit analyses that internalize dynamic externalities from retraining-on-AI outputs.
  • Modeling and research extensions:
    • Heterogeneous agents, multi-task allocation with partial habit formation, endogenous provision of training data, market structure (monopoly vs competitive model providers), platform incentives, strategic labeling/watermarking by model providers.
    • Nonlinear collapse functions, networked/clustered interactions, and stochastic training cycles.
  • Bottom line for AI economics: interventions that change incentives around data provenance and the reuse of AI outputs in training are essential. Left unchecked, individually rational use of genAI can create negative externalities that reduce long-run social welfare—especially in high-value domains where preserving human-produced diversity matters most.
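On the measurement agenda, the first item (estimating the collapse parameter m) reduces to a simple regression under the paper's linear form a(x_AI) = b − m·x_AI. The sketch below uses synthetic data purely for illustration; real estimation would require domain-level measurements of model performance against the AI share of the training corpus.

```python
# Hypothetical sketch: estimate the collapse parameter m by ordinary least
# squares, assuming the linear collapse form a(x_AI) = b - m * x_AI.
# The data are synthetic (true_b = 1.0, true_m = 0.4) for illustration only.

import numpy as np

rng = np.random.default_rng(0)
x_ai = rng.uniform(0.0, 1.0, 50)                  # AI share of training data
true_b, true_m = 1.0, 0.4
perf = true_b - true_m * x_ai + rng.normal(0, 0.02, 50)  # noisy performance

# OLS on perf = b - m * x_ai, i.e. design matrix [1, -x_ai].
A = np.column_stack([np.ones_like(x_ai), -x_ai])
(b_hat, m_hat), *_ = np.linalg.lstsq(A, perf, rcond=None)
print(b_hat, m_hat)                               # ≈ 1.0, 0.4
```

Domain-by-domain estimates of m, combined with measured incentives and costs, would locate each domain in the paper's taxonomy and flag where the strong-collapse welfare theorem is likely to bind.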

Assessment

  • Paper Type: theoretical
  • Evidence Strength: n/a — The paper presents a formal theoretical model and does not provide empirical or experimental evidence; claims are derived analytically from model assumptions rather than from observed data.
  • Methods Rigor: medium — The work offers a clear, parsimonious formal model that isolates plausible mechanisms (individual incentives, model collapse, habit spillovers) and a two-dimensional taxonomy, which supports internally consistent analytical results; however, it relies on stylized assumptions, lacks empirical calibration or robustness checks against alternative behavioral specifications, and does not model platform- or market-level responses.
  • Sample: No empirical sample; a stylized population of agents facing multiple tasks who can choose between exerting human effort, using generative AI, or shirking, with model performance degrading when training data contains AI-generated content and with habit formation linking adoption across task domains.
  • Themes: productivity, adoption, human_ai_collab
  • Generalizability:
    • Abstract, stylized model without empirical calibration to real-world data.
    • Specific assumptions about the mechanism and severity of model collapse may not match how actual generative models degrade in practice.
    • Ignores heterogeneity across agents, tasks, firms, and industries (e.g., differences in substitutability, monitoring, or complementarities).
    • Does not account for platform incentives, market structure, or regulatory interventions that could mitigate collapse.
    • Simplified treatment of learning, adaptation, and possible corrective actions (data curation, human-in-the-loop, counterfactual training).
    • Static or simplified dynamic setup may miss long-run equilibria or technological progress that offsets collapse.

Claims (8)

Each claim is listed with its outcome, direction, confidence, details, and score.

  • "Generative artificial intelligence (genAI) is rapidly reshaping how knowledge and culture are produced and consumed."
    Outcome: Adoption Rate · Direction: positive · Confidence: high · Details: production and consumption of knowledge and culture · 0.12
  • "Generative models are vulnerable to model collapse: when trained on data generated by earlier versions of themselves, their outputs can lose diversity and accuracy."
    Outcome: Output Quality · Direction: negative · Confidence: high · Details: output diversity and accuracy · 0.12
  • "Delegating tasks to genAI can be individually beneficial in the short term even as widespread adoption degrades future model performance (creating a social dilemma)."
    Outcome: Developer Productivity · Direction: mixed · Confidence: high · Details: individual short-term benefit vs future model performance (collective welfare) · 0.12
  • "We develop a parsimonious model of behavior in collaborative interactions in which individuals can either exert human effort, rely on genAI, or refrain from work altogether."
    Outcome: Task Allocation · Direction: null_result · Confidence: high · Details: choice among effort modalities (human effort, genAI reliance, abstention) · 0.12
  • "The welfare consequences of genAI can be organized by a two-dimensional taxonomy: the strength of the incentive to perform the task without AI, and the severity of model collapse."
    Outcome: Organizational Efficiency · Direction: null_result · Confidence: high · Details: social welfare outcomes as a function of incentive strength and model collapse severity · 0.12
  • "The introduction of genAI—while initially beneficial at the individual level—will reduce social welfare for the most important types of tasks."
    Outcome: Consumer Welfare · Direction: negative · Confidence: high · Details: social welfare for high-value tasks · 0.12
  • "Habit formation around genAI use can couple otherwise separate domains, so that adoption in low-stakes tasks spills over into high-value tasks and amplifies welfare losses."
    Outcome: Task Allocation · Direction: negative · Confidence: high · Details: spillover adoption and amplified welfare losses · 0.12
  • "In the absence of intervention, individually rational adoption of genAI will assuredly and profoundly reduce collective welfare."
    Outcome: Consumer Welfare · Direction: negative · Confidence: high · Details: collective (social) welfare · 0.02

Notes