The Commonplace

Fear of AI is reshaping welfare politics: across OECD publics, anxiety about automation fuels support for measures that protect jobs from machines—such as robot taxes—more than for expanded unemployment benefits or retraining programs.

AI, the Future of Work, and the Politics of the Welfare State
Juliana Chueri · April 28, 2026 · Perspectives on Politics
Source: OpenAlex · Type: correlational · Evidence strength: low · Relevance: 7/10 · Links: DOI, Source, PDF
Using the 2024 OECD 'Risks that Matter' survey, the paper finds widespread fear of AI automation across education levels that is associated with stronger public demand for policies that preserve the social role of work (e.g., robot taxes) rather than for traditional unemployment benefits or retraining.

ABSTRACT Advancements in artificial intelligence (AI) pose a profound challenge to the world of work. While the precise consequences remain uncertain, there is growing consensus that we are entering an era marked by widespread labor market insecurities. Existing welfare states are ill-equipped to manage such disruptions: most social benefits remain grounded in work-based eligibility and emphasize rapid reintegration into the labor market. Meanwhile, training systems are still predicated on the idea that technology demands higher skill levels, an assumption increasingly challenged by the rise of AI, which now threatens even high-skill occupations. This paper examines how AI’s labor market impact will transform welfare state politics, arguing that AI-driven automation marks the beginning of a new political era—one in which the role of work in society becomes a central axis of welfare conflict. Drawing on emerging public opinion data from the 2024 OECD Risks that Matter survey, the paper finds that fear of AI automation is widespread and cuts across educational groups. However, rather than increasing support for traditional interventions such as unemployment benefits and training programs, these fears primarily drive demand for measures that preserve the social role of work and protect it from automation, such as robot taxes, and, to a lesser extent, for schemes that guarantee income regardless of employment status. These results suggest the need for a new research agenda that treats AI not only as an economic disruptor but as a trigger for a fundamental shift in welfare politics. Future research should examine how political actors, interest groups, and welfare institutions respond to the emerging conflict over the future of work, and whether the welfare state can be reimagined in a world where work is no longer guaranteed.

Summary

Main Finding

AI-driven automation is producing broad, cross-educational fear of job loss that is already reshaping public welfare preferences. Rather than simply boosting demand for conventional compensatory or retraining policies, these fears drive support for two alternative policy directions: (1) measures that preserve the social role of work (e.g., robot or adoption taxes, regulation of AI use, job guarantees) and (2) measures that reduce dependence on paid employment (e.g., UBI, a negative income tax, decoupling benefits from work). Survey evidence (2024 OECD Risks that Matter) shows particularly strong and robust support for robot taxes among those who feel threatened by AI, while support for retraining declines. In countries with more generous welfare states, the link between automation fear and support for expanding most policies is weaker (a negative feedback), although robot-tax support remains high.

Key Points

  • Scope and novelty: AI (especially generative models) threatens nonroutine and high-skill tasks, expanding perceived automation risk beyond traditionally vulnerable occupational groups.
  • Welfare-state mismatch: Contemporary welfare systems are organized around work (work-based eligibility, activation, and skill-formation premised on skill-biased technological change) and may be ill-suited to AI-driven disruptions that can substitute for high-skill labor.
  • Public preferences:
    • Fear of AI-driven job loss cuts across educational groups (not only low-skilled workers).
    • Those who fear automation are more likely to support job-preserving interventions (robot/adoption taxes, tighter regulation of AI deployment, employment guarantees).
    • Fear of automation is positively but weakly associated with support for traditional unemployment benefits and negatively associated with support for retraining programs.
    • Support for decommodifying schemes (UBI, negative income tax, loosening conditionalities) is rising among vulnerable groups but remains mediated by cultural views about work and deservingness.
  • Cross-country dynamics: In countries with more generous welfare provisions, individuals who fear AI are less likely to demand expansions of many policies (negative feedback), but support for robot taxes among the threatened remains high regardless of existing generosity.
  • Political consequence: The paper theorizes a new axis of distributive conflict centered on the legitimacy and role of paid work as the primary mechanism of distributing income, rights, and social meaning.

Data & Methods

  • Primary data source: 2024 OECD “Risks that Matter” multicountry survey — the first broad cross-country public-opinion dataset explicitly measuring attitudes about AI automation and policy responses.
  • Empirical approach: Analysis of survey responses linking self-reported fear of AI-driven job loss to preferences over a set of policies (unemployment benefits, retraining programs, robot/adoption taxes, UBI and related decommodifying reforms). The paper uses cross-country comparisons to examine heterogeneity by welfare-state generosity.
  • Findings are presented as observed associations (public-opinion correlations) rather than causal estimates; results are interpreted in light of historical welfare-state transformations and contemporary political-economy theory.
  • Limitations (noted or implied): early-stage opinion formation after rapid AI developments; cross-sectional survey evidence limits causal inference; attitudes may evolve as AI’s labor impacts materialize and political debates crystallize.
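The association-based design described above can be sketched as a regression of stated policy support on automation fear, interacted with a welfare-generosity indicator to capture the negative-feedback pattern. The data below are simulated and all variable names and effect sizes are illustrative assumptions; the paper's actual specification is not reported in this summary.

```python
import numpy as np

# Hypothetical sketch of the paper's design: stated policy support regressed
# on fear of AI-driven job loss, interacted with welfare-state generosity.
# All data are simulated; names are illustrative, not the OECD survey's codes.
rng = np.random.default_rng(0)
n = 5000
fear = rng.binomial(1, 0.5, n).astype(float)       # fears AI job loss
generous = rng.binomial(1, 0.5, n).astype(float)   # generous welfare state

# Build in the negative-feedback pattern: fear raises support for expansion,
# but less where provision is already generous (illustrative effect sizes).
p = 0.35 + 0.25 * fear - 0.15 * fear * generous
support = rng.binomial(1, p).astype(float)

# Linear probability model: support ~ fear + generous + fear:generous
X = np.column_stack([np.ones(n), fear, generous, fear * generous])
beta, *_ = np.linalg.lstsq(X, support, rcond=None)
print(f"fear effect:        {beta[1]:+.3f}")   # roughly +0.25 by construction
print(f"fear x generosity:  {beta[3]:+.3f}")   # roughly -0.15 by construction
```

A negative interaction term is the stylized signature of welfare-state feedback; with real cross-sectional survey data it would remain an association, not a causal estimate, for exactly the reasons listed above.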

Implications for AI Economics

  • New political-economy axis for modeling: AI economics should explicitly model the emerging political conflict over the role of work — not just labor demand/supply — because policy responses (robot taxes, UBI, job guarantees, regulation of AI adoption) will be shaped by public preferences and political coalitions.
  • Policy design & incidence:
    • Robot/adoption taxes: Economics research must analyze optimal bases (capital vs. use), incidence (who bears the cost: firms, consumers, labor), and dynamic effects on automation incentives, productivity, and firm strategy.
    • UBI and negative income tax: Model interactions with labor supply, skill acquisition, human capital accumulation, and macro demand, as well as heterogeneous distributional impacts across skill groups.
    • Employment guarantees and regulation: Evaluate macro and micro effects on wages, crowding out, sectoral composition, and the incentives for firms to adopt or circumvent regulation.
  • Rethink human-capital policies: Given the negative association between automation fear and support for retraining, economists should reassess the presumed efficacy and political viability of retraining-heavy strategies. Research should estimate returns to retraining in AI-augmented workplaces and the conditions under which retraining is effective.
  • Welfare-state feedbacks: Incorporate how existing welfare generosity conditions public support for additional reforms (the negative-feedback pattern), which can alter policy trajectories and timing.
  • Research agenda suggestions:
    • Empirical: longitudinal and panel surveys to track attitude dynamics; field experiments testing framing effects (preserve-work vs. decommodification frames); firm-level studies of adoption responses to tax/regulatory changes.
    • Theory/modeling: DSGE or general-equilibrium models embedding political constraints and endogenous policy choice; models of automation adoption with Pigouvian/usage taxes and redistributive transfers.
    • Evaluation: cost–benefit and distributional accounting of combined packages (e.g., robot tax funding UBI or retraining), including general-equilibrium labor-market adjustments.
  • Strategic implication for policy: Policymakers should expect competing public demands — some favor preserving employment and restricting automation, others favor income decoupling from work — and design packages that address both political coalitions and economic trade-offs (e.g., targeted transition assistance, conditional/use-limited automation levies, or hybrid schemes combining employment subsidies with guaranteed minimum incomes).
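The incentive question raised under robot/adoption taxes can be made concrete with a toy adoption-margin calculation: a per-task levy shifts the cost threshold at which automation beats labor. Everything below (the wage, the cost distribution, the tax rates) is an illustrative assumption, not a parameter from the paper.

```python
import numpy as np

# Stylized adoption-margin arithmetic (illustrative assumptions, not
# estimates from the paper): a firm automates a task when the machine's
# per-task cost plus a robot tax undercuts the human wage for that task.
rng = np.random.default_rng(1)
wage = 1.0
machine_cost = rng.uniform(0.5, 1.5, 100_000)  # heterogeneous task costs

def automated_share(tax: float) -> float:
    """Fraction of tasks where automation beats labor at a given tax."""
    return float(np.mean(machine_cost + tax < wage))

for tax in (0.0, 0.1, 0.25):
    print(f"robot tax {tax:.2f}: {automated_share(tax):.1%} of tasks automated")
```

Richer versions would endogenize wages and let firms respond on other margins (prices, offshoring, task redesign), which is where the incidence question in the bullet above starts to bite.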

Summary takeaway: AI is not only an economic disruptor of tasks and jobs but is already reshaping welfare politics. AI economists need to integrate political preference dynamics and new policy instruments (robot taxes, employment guarantees, decommodifying income schemes) into models and evaluations to inform feasible, effective responses.

Assessment

Paper Type: correlational

Evidence Strength: low — Findings are based on cross-sectional public-opinion survey associations (2024 OECD "Risks that Matter") without a research design that isolates causal effects; the correlations could reflect reverse causation, confounding, or measurement effects rather than a causal impact of AI on policy preferences.

Methods Rigor: medium — Uses a reputable, large-scale OECD survey that likely provides nationally representative samples and standard questionnaire design, but the abstract reports observational analysis only (no natural experiment, instrument, or panel), and details on controls, robustness checks, or model specifications are not provided.

Sample: Emerging public-opinion data from the 2024 OECD "Risks that Matter" survey covering respondents in OECD countries (national samples, 2024), measuring fear of AI automation, sociodemographic characteristics (including education), and stated policy preferences (e.g., robot taxes, income guarantees, unemployment benefits, training). Exact country coverage, sample sizes, and weighting procedures are not reported in the abstract.

Themes: governance, labor_markets

Generalizability:
  • Limited to populations covered by the 2024 OECD survey (primarily OECD member countries); may not generalize to low-income or non-OECD contexts.
  • Cross-sectional attitudes measured at one point in time are susceptible to short-term salience or media effects.
  • Self-reported fears and stated policy preferences may diverge from revealed preferences or voting and other behavioral outcomes.
  • Cultural and institutional heterogeneity across countries may limit pooled inferences if not fully accounted for.

Claims (7)

  • Fear of AI automation is widespread and cuts across educational groups.
    Worker Satisfaction · direction: negative · confidence: high · details: 0.3
    Outcome: public fear of AI automation
  • Rather than increasing support for traditional interventions such as unemployment benefits and training programs, these fears primarily drive demand for measures that preserve the social role of work and protect it from automation, such as robot taxes.
    Governance And Regulation · direction: positive · confidence: high · details: 0.3
    Outcome: public support for policies that protect the social role of work (e.g., robot taxes)
  • To a lesser extent, fears of AI automation drive demand for schemes that guarantee income regardless of employment status.
    Governance And Regulation · direction: positive · confidence: high · details: 0.3
    Outcome: public support for income-guarantee schemes (e.g., universal basic income)
  • Fears of AI automation do not primarily increase support for traditional interventions such as unemployment benefits and training programs.
    Governance And Regulation · direction: null_result · confidence: high · details: 0.3
    Outcome: public support for unemployment benefits and training programs
  • Existing welfare states are ill-equipped to manage AI-driven disruptions: most social benefits remain grounded in work-based eligibility and emphasize rapid reintegration into the labor market.
    Governance And Regulation · direction: negative · confidence: high · details: 0.3
    Outcome: design of social benefits (work-based eligibility and reintegration emphasis)
  • Training systems are still predicated on the idea that technology demands higher skill levels, an assumption increasingly challenged by the rise of AI, which now threatens even high-skill occupations.
    Skill Obsolescence · direction: negative · confidence: high · details: 0.3
    Outcome: skill demand assumptions of training systems and exposure of high-skill occupations to AI-driven automation
  • AI-driven automation marks the beginning of a new political era—one in which the role of work in society becomes a central axis of welfare conflict.
    Governance And Regulation · direction: mixed · confidence: high · details: 0.05
    Outcome: political salience of 'the role of work' in welfare politics / emergence of new welfare conflict axis
