The Commonplace

Information, not experience, shifts public support for government AI: learning about AI markedly changes citizens' attitudes—even overriding prior beliefs and personal experience working under an AI manager—whereas being managed by an AI alters task performance but leaves views on public-sector AI largely unchanged.

The Politics of Using AI in Policy Implementation: Evidence from a Field Experiment
Yotam Margalit, Shir Raviv · Fetched March 15, 2026 · British Journal of Political Science
Source: Semantic Scholar · Paper type: RCT · Evidence: high · Relevance: 9/10
Randomized evidence shows informational exposure substantially shifts public support for using AI in government, while direct experience being managed by an AI affects workers' task performance but does not change their attitudes toward public-sector AI.

Abstract

The use of AI by government agencies in guiding important decisions (for example, on policing, welfare, education) has triggered backlash and demands for greater public input in AI regulation. Yet it remains unclear what such input would reflect: general attitudes towards new technologies, personal experience with AI, or learning about its implications. We study this question experimentally by tracking the attitudes of over 1,500 workers whose task assignments were randomly determined by either a human or an AI ‘boss’, with task content and valence also randomized. Across a three-wave panel, we find that personal experience with AI-as-boss affected workers’ job performance but not their attitudes on using AI in public decision making. In contrast, exposure to information about the technology produced significant attitudinal change, even when it conflicted with participants’ prior disposition or direct experience. The results highlight the promise of incorporating public input into AI governance.

Summary

Main Finding

Exposure to information about AI substantially shifts public attitudes toward using AI in government decision-making, even when that information contradicts people’s prior dispositions or their direct experience working under an AI “boss.” By contrast, personal experience of having an AI rather than a human supervisor changed workers’ task performance but did not change their attitudes about AI in public decision contexts.

Key Points

  • Sample: a panel of over 1,500 workers followed across three waves.
  • Experimental manipulation:
    • Supervisor type randomly assigned: each worker reported to either an AI or a human “boss.”
    • Randomization of task content and task valence.
    • An information exposure treatment about AI (details in paper).
  • Behavioral result: being managed by an AI affected measurable job performance.
  • Attitudinal result: direct experience with an AI manager did not change participants’ views about the appropriateness of AI for public-sector decision-making.
  • Information effect: providing information about AI produced significant changes in attitudes, overriding prior beliefs and direct experience in many cases.
  • Policy-relevant insight: public opinion on AI governance appears more responsive to informational interventions than to individual-level experience.

Data & Methods

  • Design: randomized controlled experiment embedded in a three-wave panel survey/behavioral study.
  • Participants: >1,500 workers (paper contains recruitment details and demographics).
  • Treatments:
    • Supervisor type (AI vs. human) — randomized assignment.
    • Task content and valence — randomized to control for task-specific effects.
    • Information exposure about AI — randomized to measure causal effect of learning.
  • Outcomes measured:
    • Behavioral: job/task performance metrics under the assigned supervisor.
    • Attitudinal: opinions on using AI in public decision-making (e.g., policing, welfare, education).
  • Temporal structure: attitudes and performance tracked across three waves to observe dynamics and persistence.
  • Analysis: causal inference via random assignment; comparisons across treatment arms to isolate effects of experience versus information (specific estimation methods and robustness checks reported in paper).
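The between-arm comparison that random assignment licenses can be illustrated with a minimal difference-in-means sketch. This uses simulated data only; the paper's actual variables, estimators, and effect sizes are not reproduced here, and the arm labels and effect size below are hypothetical.

```python
import random
import statistics

random.seed(0)

# Hypothetical illustration: simulate a binary treatment (AI vs. human boss)
# and a continuous performance outcome. Under random assignment, the average
# treatment effect is identified by a simple difference in means across arms.
n = 1500
treated = [random.random() < 0.5 for _ in range(n)]            # AI-boss arm indicator
outcome = [random.gauss(0.4 if t else 0.5, 1.0) for t in treated]  # assumed small negative effect

arm_ai = [y for y, t in zip(outcome, treated) if t]
arm_human = [y for y, t in zip(outcome, treated) if not t]

# Difference-in-means estimate of the average treatment effect
ate = statistics.mean(arm_ai) - statistics.mean(arm_human)

# Neyman (conservative) standard error for a two-arm comparison
se = (statistics.variance(arm_ai) / len(arm_ai)
      + statistics.variance(arm_human) / len(arm_human)) ** 0.5

print(f"estimated effect = {ate:.3f}, SE = {se:.3f}")
```

In the paper's design the same logic extends to the information-exposure arm and to attitudinal outcomes; the published analysis also reports robustness checks not sketched here.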

Implications for AI Economics

  • Preference formation: Attitudes toward public-sector AI are malleable via information, suggesting that models of public preferences should incorporate informational channels rather than treating experience as the primary driver.
  • Governance design: Public input mechanisms and informational campaigns can meaningfully shape support or opposition for AI policies; policymakers should prioritize transparent, targeted information when seeking legitimacy for AI deployment in government.
  • Political economy of adoption: Organizational uptake of AI may not translate into broader public acceptance; adoption driven by efficiency gains could encounter political resistance unless accompanied by effective outreach and education.
  • Cost–benefit and welfare analysis: Evaluations of public AI deployment should account for informational interventions as a policy lever to alter perceived benefits and distributional concerns.
  • Research priorities: Need for further work on what kinds of information (content, framing, source credibility) are most effective, heterogeneity of responses across demographics, and the persistence of information-induced attitude changes.

Assessment

  • Paper Type: RCT
  • Evidence Strength: high. Large (>1,500 participants) randomized experiment with multiple independently randomized treatments and repeated measurements across three waves, allowing clear causal attribution of the effects of experience versus information on both behavior and attitudes.
  • Methods Rigor: high. Randomization of key treatments (supervisor type and information), randomized controls for task content and valence, behavioral and attitudinal outcomes measured over time, and robustness checks reported, all consistent with rigorous experimental practice; remaining concerns are typical (external validity, potential attrition) rather than internal identification.
  • Sample: A panel of over 1,500 workers followed across three survey/behavioral waves (recruitment procedures and full demographics reported in the paper); participants performed randomized tasks under experimentally assigned supervisors (AI vs. human) and were randomly exposed to informational treatments about AI; outcomes include task performance metrics and attitudes about AI in public-sector decision-making.
  • Themes: governance, human_ai_collab
  • Identification: Randomized controlled design. Participants were randomly assigned to an AI versus human supervisor, task content and valence were randomized, and an information-exposure treatment about AI was randomized; causal effects are identified by between-arm comparisons across the three-wave panel, with robustness checks.
  • Generalizability:
    • Sample may not be nationally representative (depends on the recruitment frame) and may not reflect all worker populations.
    • Experimental tasks and simulated AI supervision may not capture the complexity and stakes of real-world workplace AI deployments.
    • Short- to medium-term panel (three waves) limits inference about long-run persistence of attitude or performance effects.
    • Information treatment content, framing, and source in the experiment may differ from real-world media, political, or organizational messaging.
    • Possible cross-cultural or single-country limitations if the sample is drawn from a limited geographic context.

Claims (6)

| Claim | Topic | Direction | Confidence | Outcome | Details |
|---|---|---|---|---|---|
| Personal experience with an AI “boss” affected workers' job performance. | Team Performance | mixed | medium | Workers' job performance (task performance across panel waves) | n = 1,500; score 0.6 |
| Personal experience with an AI “boss” did not affect workers' attitudes on using AI in public decision making. | Governance And Regulation | null_result | medium | Attitudes toward using AI in public decision making | n = 1,500; no measurable effect on attitudes; score 0.6 |
| Exposure to information about the technology produced significant attitudinal change, even when it conflicted with participants' prior disposition or direct experience. | Governance And Regulation | mixed | medium | Change in attitudes toward AI in public decision making after information exposure | n = 1,500; statistically significant attitudinal change; score 0.6 |
| Task content and valence were randomized in the experiment. | Other | null_result | high | Experimental manipulation variables: task content and task valence | n = 1,500; score 1.0 |
| The study tracked participants in a three-wave panel totaling over 1,500 workers. | Other | null_result | high | Longitudinal measurements of job performance and attitudes across three waves | n = 1,500; score 1.0 |
| The results highlight the promise of incorporating public input into AI governance. | Governance And Regulation | positive | speculative | Implication for AI governance: receptiveness to public input after informational interventions | n = 1,500; score 0.1 |

Notes