
AI is set to transform politics and political science: a few corporations will wield outsized power, opaque and biased models will strain democratic institutions, and scholars must urgently develop standards for evaluation, governance, and teaching.

Introduction: Artificial Intelligence, Politics, and Political Science
Nathaniel Persily, Joshua A. Tucker · May 05, 2026
OpenAlex · review_meta · Evidence: n/a · Relevance: 7/10 · DOI · Source · PDF
The APSA introductory chapter argues that AI is likely to reshape politics and political science across many domains—concentrating corporate power, producing opaque and potentially biased systems, and raising urgent research, governance, and pedagogical questions.

Introductory chapter of the APSA Presidential Task Force report on "Artificial Intelligence (AI), Politics, and Political Science". It begins by addressing the motivation for the task force: AI’s potential to reshape politics and political science, just as it is transforming other social phenomena and their associated academic fields. Next, it introduces the different sections of the report, including how AI will affect democracy, public administration, national security, international relations, the labor market, public opinion, and the information ecosystem. It also investigates how AI will affect political science research and teaching. It then notes several themes that cut across multiple chapters: the unprecedented power of a small number of AI corporations; the opacity and non-replicability of model outputs; bias in AI systems; and the absence of agreed-upon benchmarks for evaluation. An epilogue confronts the rapid emergence of agentic AI tools and poses questions the discipline must address as a result.

Summary

Main Finding

The task-force introduction by Persily & Tucker argues that AI is already reshaping both politics and the practice of political science. Rather than predicting the future, the Report maps current research, highlights pressing questions, and sets an agenda for rigorous empirical study. Key challenges for the field are definitional ambiguity, rapid technological change, geographic and linguistic concentration of model development, and restricted, corporate-controlled access to the training and interaction data needed for robust social-science inference. Lessons from the social‑media era are useful but insufficient: AI is a broader, faster-moving, and potentially more consequential general‑purpose technology.

Key Points

  • Definition and scope

    • AI is a family of machine-based systems (including generative models, agentic systems, robotics, autonomous vehicles, drones) that operate with varying autonomy to produce predictions, content, recommendations, or decisions.
    • The Report avoids a single definitive definition but stresses inclusiveness: politically relevant AI extends well beyond chatbots and social media content generation.
  • Why this volume

    • Purpose is to map existing research, identify priorities, and provide conceptual/empirical foundations for studying AI’s political effects and how AI changes political science as a profession.
  • Political relevance

    • AI's political impacts span information environments (misinformation, deepfakes), governance (administrative AI in public services), electoral politics (campaign tools), national security (autonomous weapons, procurement), and major societal risks (labor displacement, climate, CBRN risks).
    • Effects could be globally unequal: impacts may be stronger or different in lower‑resource settings, where AI models perform worse or where cheaper AI lowers the cost of political action.
  • Temporal and technological challenges

    • Models and capabilities change on timescales of weeks–months, creating severe temporal-validity problems for empirical work.
    • Agentic AI and rapidly evolving frontier models magnify uncertainties relative to social media research.
  • Geographic and linguistic concentration

    • Frontier AI development is concentrated in a few countries (notably the U.S. and China), and models are trained predominantly on English and Chinese content, risking worse performance and cultural mismatch elsewhere.
  • Data access and corporate control

    • Training data, model internals, and user interaction logs are largely proprietary—limiting reproducibility and causal inference.
    • Compared to social media platforms, AI firms’ business models may make some model outputs easier to observe, but interaction logs and internal data remain closed.
  • Lessons from social media research

    • Parallels: AI will plug into many existing political science topics (public opinion, campaigning, polarization).
    • Differences: AI is itself methodological infrastructure (not merely new data), and its adoption could transform the research process and academic institutions.
  • Professional impacts

    • AI tools will change how political scientists research, teach, write, and publish; peer-review and editorial systems may be stressed by higher researcher productivity and new forms of output.

Data & Methods

  • Nature of the Report: Edited volume / task‑force synthesis rather than new empirical research. Chapters reflect committee‑based literature review, conceptual framing, and synthesis of extant findings.
  • Methodological takeaways:
    • Use robust conceptual frameworks that are less dependent on specific model instances to retain relevance as models evolve.
    • Beware temporal validity: design research with replication across model generations and include model-versioning as a variable.
    • Combine observational output‑sampling (accessible outputs) with creative field and experimental designs to probe causal effects when internal logs are unavailable.
    • Emphasize cross‑linguistic and cross‑national data collection to mitigate geographic bias of existing models.
  • Core empirical limitations:
    • Restricted access to proprietary training data, model architecture, and user logs constrains internal validity and reproducibility.
    • Rapid model churn complicates inference about persistent causal effects.
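The methodological takeaway about treating model versioning as a variable can be made concrete. Below is a minimal sketch, not from the Report: it replicates the same prompt battery across model generations and tags every output with its version, so that version can enter later analysis as an explicit variable. `query_model`, the version names, and the canned responses are all invented stand-ins for a real model API.

```python
# Hypothetical sketch: log model version with every sampled output and
# compare answers across generations. All names and data are invented.

PROMPTS = ["Is policy X popular?", "Summarize bill Y", "Who won debate Z?"]

# Canned responses simulating two model generations (illustrative only).
CANNED = {
    "model-v1": {"Is policy X popular?": "yes",
                 "Summarize bill Y": "short summary",
                 "Who won debate Z?": "candidate A"},
    "model-v2": {"Is policy X popular?": "yes",
                 "Summarize bill Y": "long summary",
                 "Who won debate Z?": "candidate A"},
}

def query_model(version, prompt):
    """Stand-in for calling a specific model version with a prompt."""
    return CANNED[version][prompt]

def collect(versions, prompts):
    """Run the prompt battery, recording the model version with each output."""
    return [{"version": v, "prompt": p, "output": query_model(v, p)}
            for v in versions for p in prompts]

def agreement_rate(records, v1, v2):
    """Share of prompts on which two model generations give the same output."""
    outputs = {v1: {}, v2: {}}
    for r in records:
        if r["version"] in outputs:
            outputs[r["version"]][r["prompt"]] = r["output"]
    same = sum(outputs[v1][p] == outputs[v2][p] for p in outputs[v1])
    return same / len(outputs[v1])

records = collect(["model-v1", "model-v2"], PROMPTS)
print(round(agreement_rate(records, "model-v1", "model-v2"), 2))  # prints 0.67
```

The same harness, re-run whenever a new model generation ships, yields a version-indexed panel of outputs rather than a snapshot tied to one model instance.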

Implications for AI Economics

  • AI as a general‑purpose technology

    • Model AI’s macroeconomic role: treat AI like other general‑purpose technologies (steam, electricity) with economy‑wide productivity spillovers, while explicitly modeling differential sectoral adoption (administration, defense, media, manufacturing).
    • Expect heterogeneous effects: productivity gains in some sectors, displacement and new complementarities in others.
  • Labor market and distributional concerns

    • Potential for large-scale labor displacement in routine tasks; upskilling and reallocation dynamics need modeling.
    • Global inequality risk: countries and languages underrepresented in model training may face worse AI performance and weaker productivity gains, exacerbating cross‑country disparities.
  • Market structure and concentration

    • Concentration of frontier model development implies market power, potential rent extraction, and data‑driven barriers to entry. Economists should study market structure, antitrust implications, and how platform/data ownership affects welfare.
    • Proprietary control over key data and models creates information asymmetries that distort markets for political information and influence.
  • Political economy externalities

    • AI's political effects—information disorder, targeted persuasion, lowered cost of political activities—create nonmarket externalities with economic consequences (e.g., policy volatility, investment uncertainty).
    • National-security AI (autonomous weapons, cyber risks) can shift defense spending, insurance markets, and geopolitical risk premia.
  • Measurement, identification, and policy evaluation

    • Economic research must adapt identification strategies to limited access to internal AI firm data: leverage natural experiments (policy changes, platform outages), randomized encouragement, and synthetic controls using observable outputs.
    • Incorporate model versioning, release dates, and observable capability milestones as treatment variables in causal designs.
  • Regulatory and public‑policy implications

    • Need for data‑sharing regimes, model audits, and public‑interest evaluation datasets to enable independent economic research and informed regulation.
    • Consider market remedies (antitrust, data portability), redistributive policies (universal basic income, retraining programs), and governance for high‑risk AI use in public procurement and defense.
    • International coordination: because model development is concentrated, global governance and technical cooperation matter for cross‑border economic impacts.
  • Research agenda for AI economics (priorities suggested by the Report)

    • Quantify sectoral productivity gains vs. job displacement and the net welfare effects.
    • Study the role of AI in political campaigning and regulatory outcomes that affect market competition.
    • Model the distributional impacts across countries and languages; prioritize data collection in low‑resource settings.
    • Analyze how proprietary data and models affect innovation, entry, and the incentives for safe‑by‑design development.
    • Evaluate public adoption of AI in government services: cost savings, equity, and accountability tradeoffs.
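The earlier identification point about using model release dates as treatment variables can be sketched as a minimal 2x2 difference-in-differences around a release event. Everything below is invented for illustration, not drawn from the Report: the release month, the units, and the outcomes are hypothetical.

```python
# Illustrative sketch: a 2x2 difference-in-differences that treats a
# hypothetical frontier-model release month as the treatment event.
# All units, dates, and outcomes are invented toy data.

RELEASE = "2024-06"  # hypothetical release month; ISO strings sort correctly

# (unit, month, exposed_to_model, outcome) -- toy observations
DATA = [
    ("district_A", "2024-05", True,  10.0),  # treated, pre-release
    ("district_A", "2024-07", True,  14.0),  # treated, post-release
    ("district_B", "2024-05", False,  9.0),  # control, pre-release
    ("district_B", "2024-07", False, 10.0),  # control, post-release
]

def did_estimate(rows, release):
    """Classic 2x2 DiD: (treated post - pre) minus (control post - pre)."""
    def cell(exposed, post):
        vals = [y for (_, month, e, y) in rows
                if e == exposed and (month > release) == post]
        return sum(vals) / len(vals)
    treated_change = cell(True, True) - cell(True, False)
    control_change = cell(False, True) - cell(False, False)
    return treated_change - control_change

print(did_estimate(DATA, RELEASE))  # (14 - 10) - (10 - 9) = 3.0
```

With real data, the same comparison extends naturally to event-study designs over multiple release dates and capability milestones, as the bullets above suggest.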

Concluding note: Persily & Tucker urge political scientists — and by extension economists studying political economy — to adopt frameworks and empirical strategies robust to rapid model change, to push for improved data access and public auditing, and to broaden geographic and linguistic coverage so that AI’s economic and political impacts are properly measured and governed.

Assessment

Paper Type: review_meta
Evidence Strength: n/a — Introductory chapter of a task-force report synthesizing literature and expert judgment rather than presenting original causal empirical analysis; it does not apply an identification strategy or produce causal estimates.
Methods Rigor: n/a — The chapter is a conceptual and disciplinary synthesis drawing on task-force deliberations and existing work; it does not report a systematic empirical design or reproducible methods for causal inference.
Sample: A policy/disciplinary synthesis produced by the APSA Presidential Task Force summarizing existing research, expert deliberations, and thematic assessments across multiple subfields of political science (democracy, public administration, national security, international relations, labor markets, public opinion, and the information ecosystem); no original quantitative sample or dataset is used.
Themes: governance, labor_markets, adoption, human_ai_collab, skills_training
Generalizability:

  • Not empirical — conclusions reflect expert synthesis and judgment rather than statistical generalization.
  • Likely US- and political-science–centric given APSA framing; applicability to other political systems or disciplines may be limited.
  • Broad and high-level; themes are conceptual and may not translate to specific policy contexts or sectoral details.
  • Rapid technological change may quickly outdate some claims (e.g., agentic AI developments or the corporate landscape).
  • Lacks systematic benchmarking or standardized metrics, so cross-context comparison is limited.

Claims (13)

  • AI has the potential to reshape politics and political science, similar to how it is transforming other social phenomena and academic fields.
    • Outcome: Research Productivity · Direction: mixed · Confidence: high · 0.04
    • Details: scope and practice of politics and political science as fields (institutional roles, research methods, scholarship topics)
  • AI will affect democracy (i.e., democratic processes and institutions).
    • Outcome: Governance And Regulation · Direction: mixed · Confidence: high · 0.04
    • Details: democratic processes and institutions (electoral integrity, civic participation, governance)
  • AI will affect public administration.
    • Outcome: Organizational Efficiency · Direction: mixed · Confidence: high · 0.04
    • Details: public administration processes and organizational efficiency (service delivery, decision-making in government agencies)
  • AI will affect national security.
    • Outcome: Governance And Regulation · Direction: mixed · Confidence: high · 0.04
    • Details: national security capabilities and decision-making (defense, intelligence operations)
  • AI will affect international relations.
    • Outcome: Governance And Regulation · Direction: mixed · Confidence: high · 0.04
    • Details: international relations dynamics (state behavior, diplomacy, conflict/cooperation)
  • AI will affect the labor market.
    • Outcome: Employment · Direction: mixed · Confidence: high · 0.04
    • Details: labor market outcomes (employment, occupational change, job tasks)
  • AI will affect public opinion and the information ecosystem.
    • Outcome: Governance And Regulation · Direction: mixed · Confidence: high · 0.04
    • Details: public opinion formation and information ecosystem integrity (misinformation, persuasion, information flows)
  • AI will affect political science research and teaching.
    • Outcome: Research Productivity · Direction: mixed · Confidence: high · 0.24
    • Details: research methods, replicability, teaching practices, and curriculum in political science
  • A small number of AI corporations have unprecedented power.
    • Outcome: Market Structure · Direction: negative · Confidence: high · 0.24
    • Details: concentration of corporate power in the AI industry (market control, platform influence)
  • AI model outputs are often opaque and non-replicable.
    • Outcome: AI Safety And Ethics · Direction: negative · Confidence: high · 0.24
    • Details: transparency and replicability of AI model outputs
  • AI systems exhibit bias.
    • Outcome: AI Safety And Ethics · Direction: negative · Confidence: high · 0.4
    • Details: bias and fairness issues in AI system outputs and decisions
  • There is an absence of agreed-upon benchmarks for evaluating AI systems.
    • Outcome: AI Safety And Ethics · Direction: negative · Confidence: high · 0.24
    • Details: existence of standardized evaluation benchmarks for AI
  • The rapid emergence of agentic AI tools raises new questions that the political science discipline must address.
    • Outcome: AI Safety And Ethics · Direction: mixed · Confidence: high · 0.04
    • Details: policy and research questions arising from agentic AI capabilities (norms, accountability, institutional response)
