Information, not experience, shifts public support for government AI: learning about AI markedly changes citizens' attitudes—even overriding prior beliefs and personal experience working under an AI manager—whereas being managed by an AI alters task performance but leaves views on public-sector AI largely unchanged.
Abstract
The use of AI by government agencies in guiding important decisions (for example, on policing, welfare, education) has triggered backlash and demands for greater public input in AI regulation. Yet it remains unclear what such input would reflect: general attitudes towards new technologies, personal experience with AI, or learning about its implications. We study this question experimentally by tracking the attitudes of over 1,500 workers whose task assignments were randomly determined by either a human or an AI ‘boss’, with task content and valence also randomized. Across a three-wave panel, we find that personal experience with AI-as-boss affected workers’ job performance but not their attitudes on using AI in public decision making. In contrast, exposure to information about the technology produced significant attitudinal change, even when it conflicted with participants’ prior disposition or direct experience. The results highlight the promise of incorporating public input into AI governance.
Summary
Main Finding
Personal experience working for an AI (vs a human) changed worker behavior on the job (performance, time investment, willingness to work) but did not change support for using AI in public policy. In contrast, brief exposure to informational content about AI’s societal implications produced significant and persistent changes in attitudes toward AI in government decision making — even when that information contradicted prior dispositions or direct experience.
Key Points
- Design: A randomized field experiment on Amazon Mechanical Turk (MTurk) with a three-wave panel tracked workers’ attitudes toward algorithmic decision making in public policy.
- Treatment factors (factorial):
- Decision-maker identity: algorithmic “boss” vs human HR decision-maker.
- Experience valence: assigned to a preferred (positive) vs less-preferred (negative) task.
- Task content (information exposure): tasks embedded either positive information about AI, negative information about AI, or placebo content (fashion).
- Sample: 2,375 MTurk workers invited; the paper reports tracking attitudes of over 1,500 workers across the three-wave panel (pre-treatment baseline + two post-treatment waves).
- Outcomes:
- Political attitudes: support for using predictive algorithms instead of humans in various public policy domains.
- Behavioral: job performance quality, time spent, willingness to continue work.
- Main empirical results:
- No detectable effect of being hired/managed by an algorithm on support for AI in public policy.
- Being managed by an algorithm did affect on-the-job behavior (performance quality, time investment, and willingness to continue working, with the direction of the effect depending on the valence of the assigned experience).
- Exposure to information about AI’s positive or negative societal implications caused directional attitude change consistent with the information, and these changes persisted for days after exposure.
- Information-induced updating occurred even when it conflicted with prior attitudes or personal experience (i.e., updating was not limited to partisan or motivated confirmation).
- Interpretation: Public attitudes toward algorithmic governance are not fixed or solely driven by idiosyncratic personal encounters; people update attitudes in response to substantive information.
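The core information-exposure result rests on comparing mean post-treatment attitudes across randomized information conditions. A minimal sketch of that difference-in-means logic, using toy numbers (the data, scale, and variable names here are illustrative, not the paper's):

```python
# Illustrative difference-in-means estimator with a conventional
# (unequal-variance) standard error; all values below are made up.
from math import sqrt
from statistics import mean, stdev

def diff_in_means(treated, control):
    """Estimate the average treatment effect and its standard error."""
    ate = mean(treated) - mean(control)
    se = sqrt(stdev(treated) ** 2 / len(treated) + stdev(control) ** 2 / len(control))
    return ate, se

# Toy post-treatment attitude scores (support for AI in policy, 1-7 scale).
positive_info = [5.1, 4.8, 5.6, 5.0, 4.9, 5.3]  # saw pro-AI information
placebo = [4.2, 4.5, 4.0, 4.4, 4.1, 4.6]        # saw placebo (fashion) content

ate, se = diff_in_means(positive_info, placebo)
print(f"ATE = {ate:.2f}, SE = {se:.2f}")  # ATE = 0.82, SE = 0.15
```

Because assignment to information conditions was randomized, this simple comparison identifies the causal effect of the information; the paper's actual estimation may additionally condition on baseline attitudes from the pre-treatment wave.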
Data & Methods
- Setting: Amazon Mechanical Turk (MTurk) online labor market; chosen because algorithmic decision making (ADM) is commonly used in hiring and task allocation, and the platform provides a realistic, high-frequency interaction environment.
- Sample & timeline:
- Pre-treatment baseline survey collected attitudes and task preferences (February 2023).
- Randomized assignment to experimental conditions (algorithm vs human decision maker; preferred vs non-preferred task; task content valence).
- Task: classifying expert comments; content manipulated to deliver positive, negative, or placebo information about AI.
- Two follow-up surveys administered after task completion (panel of three waves total) by an ostensibly different requester to measure post-treatment attitudes.
- Identification:
- Randomized factorial design isolates causal effects of (a) algorithmic management, (b) nature of experience, and (c) information exposure.
- Outcomes measured both attitudinally (support for AI in policy domains) and behaviorally (task performance metrics).
- Robustness:
- Authors report persistence of information effects across days.
- They emphasize randomization solves selection bias that plagues observational studies of AI exposure.
- Limitations noted by the authors (discussed or implied):
- MTurk sample may limit generalizability to broader populations and to high-stakes public-sector contexts.
- Short-term follow-up (days); longer-term attitude durability beyond the study window is not established.
- The experimental experience was in a labor-task setting; extrapolation to other kinds of ADM deployments (e.g., criminal justice, welfare) should be cautious.
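The factorial design described above crosses three randomized factors: decision-maker identity (2 levels) x experience valence (2) x task content (3), yielding 12 cells. A minimal sketch of such an assignment procedure; the factor and level names are assumptions for illustration, not the authors' actual code:

```python
import random

# Hypothetical 2 x 2 x 3 factorial randomization, loosely mirroring the
# design described in Data & Methods. Names are illustrative only.
FACTORS = {
    "decision_maker": ["algorithm", "human"],
    "experience_valence": ["preferred_task", "non_preferred_task"],
    "task_content": ["positive_ai_info", "negative_ai_info", "placebo_fashion"],
}

def assign_conditions(worker_ids, seed=0):
    """Independently randomize every worker across all three factors."""
    rng = random.Random(seed)
    return {
        wid: {factor: rng.choice(levels) for factor, levels in FACTORS.items()}
        for wid in worker_ids
    }

assignments = assign_conditions(range(1500))
cells = {tuple(a.values()) for a in assignments.values()}
print(len(cells))  # 12: every cell of the 2 x 2 x 3 design is populated
```

Because each factor is randomized independently, the design supports separate causal estimates for algorithmic management, experience valence, and information exposure, as well as their interactions.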
Implications for AI Economics
- Adoption and diffusion models:
- Familiarity via experience with ADM does not automatically translate to public acceptance of AI in policymaking. Economic models that assume experience-driven adoption (learning-by-doing leading to acceptance) may overstate behavioral spillovers from workplace exposure to political support.
- Information campaigns (positive or negative) can materially shift demand for public-sector AI adoption; planners should account for information externalities when forecasting uptake or political feasibility.
- Political economy of regulation:
- Public opinion is malleable to substantive information, suggesting education/communication strategies can shape regulatory outcomes. This implies elected officials and regulators have an actionable lever (information provision) that can affect consent to algorithmic governance.
- Because informational exposure can override prior dispositions and personal experience, interest groups and journalists who shape narratives about AI can produce meaningful political externalities; regulatory equilibrium may be endogenous to media/information dynamics.
- Welfare and distributional considerations:
- Worker-level behavioral changes under algorithmic management (productivity, effort, willingness to work) imply firm-level and labor-market effects that can feed back into political economy (e.g., support for automation policies may depend on observable workplace outcomes, not just abstract beliefs).
- Economists modeling labor-market impacts of AI should incorporate both behavioral responses to algorithmic management and the separable effects of public information on policy support (which can affect redistribution, social insurance, or retraining program adoption).
- Policy design and signaling:
- Transparency and public education about an AI system’s benefits/risks may be more effective politically than relying on gradual exposure to the technology. Investments in clear, evidence-based communication can change acceptance and legitimation trajectories.
- Conversely, negative exposés (e.g., bias reports) can rapidly erode support, producing political costs that affect procurement and the scale of deployments.
- Research directions:
- Need for integrating informational dynamics into models of AI-driven public-good provision and regulatory choice.
- Importance of heterogeneous-population studies (worker types, socioeconomic status, politically salient groups) to predict distributional political responses to AI deployment.
- Value of longer-run field experiments and higher-stakes domain replications (healthcare, criminal justice, welfare) to quantify durable political and economic effects.
Limitations and caveats to apply in economic work: experimental context (MTurk) and short follow-up window constrain external validity; effects may differ when elites take clear partisan stances or when deployments are high-stakes.
Assessment
Claims (6)
| Claim | Category | Direction | Confidence | Outcome | Details |
|---|---|---|---|---|---|
| Personal experience with an AI 'boss' affected workers' job performance. | Team Performance | mixed | medium | Workers' job performance (task performance across panel waves) | n = 1,500; 0.6 |
| Personal experience with an AI 'boss' did not affect workers' attitudes on using AI in public decision making. | Governance and Regulation | null_result | medium | Attitudes toward using AI in public decision making | n = 1,500; no measurable effect on attitudes about AI in public decision making; 0.6 |
| Exposure to information about the technology produced significant attitudinal change, even when it conflicted with participants' prior disposition or direct experience. | Governance and Regulation | mixed | medium | Change in attitudes toward AI in public decision making after information exposure | n = 1,500; information exposure produced statistically significant attitudinal change; 0.6 |
| Task content and valence were randomized in the experiment. | Other | null_result | high | Experimental manipulation variables: task content and task valence | n = 1,500; task content and valence randomized; 1.0 |
| The study tracked participants in a three-wave panel totaling over 1,500 workers. | Other | null_result | high | Longitudinal measurements of job performance and attitudes across three waves | n = 1,500; three-wave panel, >1,500 workers; 1.0 |
| The results highlight the promise of incorporating public input into AI governance. | Governance and Regulation | positive | speculative | Implication for AI governance: receptiveness to public input after informational interventions | n = 1,500; 0.1 |