Information, not experience, shifts public support for government AI: learning about AI markedly changes citizens' attitudes—even overriding prior beliefs and personal experience working under an AI manager—whereas being managed by an AI alters task performance but leaves views on public-sector AI largely unchanged.
Abstract The use of AI by government agencies in guiding important decisions (for example, on policing, welfare, education) has triggered backlash and demands for greater public input in AI regulation. Yet it remains unclear what such input would reflect: general attitudes towards new technologies, personal experience with AI, or learning about its implications. We study this question experimentally by tracking the attitudes of over 1,500 workers whose task assignments were randomly determined by either a human or an AI ‘boss’, with task content and valence also randomized. Across a three-wave panel, we find that personal experience with AI-as-boss affected workers’ job performance but not their attitudes on using AI in public decision making. In contrast, exposure to information about the technology produced significant attitudinal change, even when it conflicted with participants’ prior disposition or direct experience. The results highlight the promise of incorporating public input into AI governance.
Summary
Main Finding
Exposure to information about AI substantially shifts public attitudes toward using AI in government decision-making, even when that information contradicts people’s prior dispositions or their direct experience working under an AI “boss.” By contrast, personal experience of having an AI rather than a human supervisor alters workers’ task performance but leaves their attitudes about AI in public decision contexts unchanged.
Key Points
- Sample: a panel of over 1,500 workers followed across three waves.
- Experimental manipulation:
  - Random assignment of each worker’s supervisor to be either an AI or a human “boss.”
  - Randomization of task content and task valence.
  - An information exposure treatment about AI (details in paper).
- Behavioral result: being managed by an AI affected measurable job performance.
- Attitudinal result: direct experience with an AI manager did not change participants’ views about the appropriateness of AI for public-sector decision-making.
- Information effect: providing information about AI produced significant changes in attitudes, overriding prior beliefs and direct experience in many cases.
- Policy-relevant insight: public opinion on AI governance appears more responsive to informational interventions than to individual-level experience.
Data & Methods
- Design: randomized controlled experiment embedded in a three-wave panel survey/behavioral study.
- Participants: >1,500 workers (paper contains recruitment details and demographics).
- Treatments:
  - Supervisor type (AI vs. human): randomized assignment.
  - Task content and valence: randomized to control for task-specific effects.
  - Information exposure about AI: randomized to measure the causal effect of learning.
- Outcomes measured:
  - Behavioral: job/task performance metrics under the assigned supervisor.
  - Attitudinal: opinions on using AI in public decision-making (e.g., policing, welfare, education).
- Temporal structure: attitudes and performance tracked across three waves to observe dynamics and persistence.
- Analysis: causal inference via random assignment; comparisons across treatment arms to isolate effects of experience versus information (specific estimation methods and robustness checks reported in paper).
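Because assignment to arms is random, the comparison described above reduces, in its simplest form, to a difference in means between treatment and control. The sketch below illustrates that logic on simulated data; the effect size, outcome scale, and all numbers are invented for illustration and are not taken from the paper.

```python
import random
import statistics

random.seed(0)

# Hypothetical illustration: simulate 1,500 workers randomly assigned an
# AI or human "boss" and compare mean task performance across arms.
n = 1500
data = []
for _ in range(n):
    ai_boss = random.random() < 0.5           # random assignment to arm
    base = random.gauss(50, 10)               # baseline task performance
    score = base + (3.0 if ai_boss else 0.0)  # assumed experience effect
    data.append((ai_boss, score))

treated = [s for ai, s in data if ai]
control = [s for ai, s in data if not ai]

# Difference-in-means estimate of the average treatment effect
ate = statistics.mean(treated) - statistics.mean(control)

# Standard error for a difference of two independent sample means
se = (statistics.variance(treated) / len(treated)
      + statistics.variance(control) / len(control)) ** 0.5

print(f"ATE estimate: {ate:.2f} (SE: {se:.2f})")
```

With random assignment, this unadjusted contrast is an unbiased estimate of the causal effect; the paper’s actual estimation methods and robustness checks may differ.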
Implications for AI Economics
- Preference formation: Attitudes toward public-sector AI are malleable via information, suggesting that models of public preferences should incorporate informational channels rather than treating experience as the primary driver.
- Governance design: Public input mechanisms and informational campaigns can meaningfully shape support or opposition for AI policies; policymakers should prioritize transparent, targeted information when seeking legitimacy for AI deployment in government.
- Political economy of adoption: Organizational uptake of AI may not translate into broader public acceptance; adoption driven by efficiency gains could encounter political resistance unless accompanied by effective outreach and education.
- Cost–benefit and welfare analysis: Evaluations of public AI deployment should account for informational interventions as a policy lever to alter perceived benefits and distributional concerns.
- Research priorities: Need for further work on what kinds of information (content, framing, source credibility) are most effective, heterogeneity of responses across demographics, and the persistence of information-induced attitude changes.
Assessment
Claims (6)
| Claim | Direction | Confidence | Outcome | Details |
|---|---|---|---|---|
| Personal experience with an AI 'boss' affected workers' job performance. (Team Performance) | mixed | medium | workers' job performance (task performance across panel waves) | n=1500 · 0.6 |
| Personal experience with an AI 'boss' did not affect workers' attitudes on using AI in public decision making. (Governance And Regulation) | null_result | medium | attitudes toward using AI in public decision making | n=1500 · no measurable effect on attitudes about AI in public decision making · 0.6 |
| Exposure to information about the technology produced significant attitudinal change, even when it conflicted with participants' prior disposition or direct experience. (Governance And Regulation) | mixed | medium | change in attitudes toward AI in public decision making after information exposure | n=1500 · information exposure produced statistically significant attitudinal change · 0.6 |
| Task content and valence were randomized in the experiment. (Other) | null_result | high | experimental manipulation variables: task content and task valence | n=1500 · task content and valence randomized · 1.0 |
| The study tracked participants in a three-wave panel totaling over 1,500 workers. (Other) | null_result | high | longitudinal measurements of job performance and attitudes across three waves | n=1500 · three-wave panel, >1,500 workers · 1.0 |
| The results highlight the promise of incorporating public input into AI governance. (Governance And Regulation) | positive | speculative | implication for AI governance: receptiveness to public input after informational interventions | n=1500 · 0.1 |