The Commonplace

Automated state systems boost administrative efficiency but erode democratic legitimacy: AI improves routine tasks and fiscal forecasting yet opaque algorithms (notably Australia’s Robo‑debt) produce procedural injustice. Without value-based transparency and role-sensitive explainability, the rise of algorithmic governance deepens an AI Core–Global South divide and risks long-term digital dependency.

Artificial Intelligence, Public Policy and Governance - implications for Economic Management and Political Systems
Glory Mmerechi Triumph Okereke, Philip Williams Appiah-Agyei · Fetched April 25, 2026 · Journal of Scientific Research and Reports
Semantic Scholar · review/meta · medium evidence · 7/10 relevance · DOI · Source
A PRISMA-guided systematic review (2018–2026) finds that AI-driven governance delivers measurable efficiency and forecasting gains in routine administration but generates an 'efficiency–legitimacy paradox'—opaque algorithmic decision-making undermines procedural justice and risks creating digital dependency for the Global South.

As a General-Purpose Technology (GPT), Artificial Intelligence (AI) is fundamentally reconfiguring state capacity and the mechanics of global economic management. This systematic review examines research published between 2018 and 2026 to assess the socio-political consequences of AI-driven governance along three key dimensions: policy integration, economic consequences, and democratic legitimacy. Following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, the review identifies a structural shift from "street-level" bureaucracies to "system-level" architectures, defined as the institutional devolution of "Artificial Discretion" to algorithmic infrastructures. Empirical evidence shows substantial efficiency gains in routinised administrative tasks and fiscal forecasting, offset by a growing "Efficiency–Legitimacy Paradox". The findings show how the "black box" nature of automated systems, epitomised by the Australian 'Robo-debt' scandal, undermines the democratic social contract and principles of procedural justice. The synthesis also reveals a stark geopolitical divide between the "AI Core" nations and the Global South; the latter faces acute risks of "Digital Dependency" and eroded digital sovereignty. To alleviate these tensions, the review examines the effectiveness of visibility mechanisms, such as public algorithm registers and role-sensitive explainability, in rebuilding citizen trust. The study concludes that the sustainability of the algorithmic state rests on a shift from technocratic secrecy to value-based transparency, ensuring that AI–human collaboration is founded on institutional accountability and algorithmic justice.

Summary

Main Finding

AI as a General-Purpose Technology is driving a structural reconfiguration of state capacity and of how governments manage economies. The reviewed literature (2018–2026), synthesized under PRISMA principles, documents a shift from "street-level" bureaucratic discretion to "system-level" algorithmic architectures — an institutional transfer the review labels "Artificial Discretion." This transfer yields measurable efficiency gains (especially for routinised administration and fiscal forecasting) but generates an "Efficiency–Legitimacy Paradox": automating adjudication and enforcement can improve throughput while simultaneously eroding procedural justice and democratic legitimacy unless transparency and accountability are rebuilt on value-based terms.

Key Points

  • Artificial Discretion: governments are reallocating decision authority from frontline public servants to algorithmic systems, changing who (and what) exercises discretion in public policy.
  • Efficiency gains: robust evidence of improved processing speed, standardization, and forecasting accuracy in administrative tasks and fiscal management when AI/automated systems are deployed.
  • Efficiency–Legitimacy Paradox: bureaucratic efficiency improvements are frequently offset by legitimacy costs — opaque decision logic, errors with systemic effects, and reduced avenues for citizen contestation.
  • Black-box harms: opaque algorithmic systems undermine procedural justice; high-profile cases (e.g., Australia’s Robo-debt) exemplify how automated enforcement can violate rights, destroy trust, and provoke costly political backlash.
  • Geopolitical divergence: a clear split between "AI Core" states (that develop and control advanced algorithmic governance capabilities) and many countries in the Global South, which face risks of digital dependency, reduced digital sovereignty, and asymmetric leverage in economic governance.
  • Visibility mechanisms: public algorithm registers, role-sensitive explainability, mandated audits, and institutionalized human oversight are recurring policy prescriptions shown to mitigate legitimacy harms and restore citizen trust.
  • Governance transition: sustainability of an "algorithmic state" depends less on pure technical fixes and more on institutional reforms — shifting from technocratic secrecy toward value-based transparency, accountability, and algorithmic justice.
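The visibility mechanisms listed above (public algorithm registers, role-sensitive explainability) can be made concrete with a minimal sketch of what a single register entry might record. All field names and values here are hypothetical illustrations, not taken from the review or any actual register:

```python
from dataclasses import dataclass, field

@dataclass
class AlgorithmRegisterEntry:
    """One record in a hypothetical public algorithm register."""
    system_name: str
    agency: str
    purpose: str            # plain-language description of the task
    decision_role: str      # "advisory" or "fully automated"
    human_oversight: bool   # is a human reviewer in the loop?
    # Role-sensitive explainability: each audience gets its own channel.
    explanation_channels: dict = field(default_factory=dict)

# Illustrative entry only; names and agencies are invented.
entry = AlgorithmRegisterEntry(
    system_name="BenefitEligibilityScreener",
    agency="Department of Social Services",
    purpose="Flags benefit claims for manual review",
    decision_role="advisory",
    human_oversight=True,
    explanation_channels={
        "citizen": "plain-language notice with an appeal route",
        "auditor": "model card and validation report",
    },
)
```

The point of the sketch is the role-sensitive split: the same system owes a citizen a contestable, plain-language explanation and an auditor a technical one.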

Data & Methods

  • Review scope: systematic review of peer-reviewed studies, policy reports, and empirical cases published 2018–2026 addressing AI-driven governance, policy integration, economic consequences, and democratic legitimacy.
  • PRISMA-based workflow: the synthesis followed structured search, screening, eligibility, and inclusion stages to identify relevant empirical and theoretical work across disciplines (public administration, political science, economics, law, and STS).
  • Evidence types: mixed evidence base including qualitative case studies (e.g., Robo-debt), comparative cross-country analyses, quantitative evaluations of administrative efficiency and fiscal forecasting, survey-based trust measures, and normative/legal analyses.
  • Synthesis methods: thematic synthesis for qualitative findings; where comparable quantitative outcomes existed (e.g., processing time, forecasting error), aggregated comparisons and meta-analytic summaries were used to assess direction and robustness of effects.
  • Limitations noted in source literature: heterogeneity in outcome measures, under-reporting of negative results, selection bias toward high-profile cases, and uneven geographic coverage (concentration on AI Core countries).
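Where quantitative outcomes were not directly comparable, the aggregated-comparison step described above reduces to assessing the direction and convergence of effects across studies. A toy vote count over study-level effect directions might look like this (all study entries below are invented placeholders, not data from the review):

```python
from collections import Counter

# Hypothetical study-level findings: (outcome, direction of effect).
findings = [
    ("processing_time", "positive"),    # faster processing after automation
    ("processing_time", "positive"),
    ("forecasting_error", "positive"),  # lower fiscal forecasting error
    ("procedural_justice", "negative"),
    ("procedural_justice", "negative"),
    ("citizen_trust", "mixed"),
]

def direction_tally(findings):
    """Count effect directions per outcome to gauge convergence."""
    tally = {}
    for outcome, direction in findings:
        tally.setdefault(outcome, Counter())[direction] += 1
    return tally

for outcome, counts in direction_tally(findings).items():
    dominant, n = counts.most_common(1)[0]
    print(f"{outcome}: {dominant} ({n}/{sum(counts.values())} studies)")
```

Vote counting of this kind establishes direction and robustness but not effect size, which is consistent with the review's caveat that few source studies permit strong causal identification.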

Implications for AI Economics

  • Public finance and fiscal capacity: AI can improve tax administration, benefits delivery, and fiscal forecasting, potentially increasing revenue mobilization and reducing leakage — but legitimacy losses and subsequent protests/litigation can negate fiscal gains or raise enforcement costs.
  • Labour and skills in the public sector: automation will reallocate tasks away from routine public-facing roles toward oversight, audit, and policy design roles — creating transitional displacement risks and requiring reskilling and institutional redesign.
  • Distributional outcomes: automated governance can entrench biases and create asymmetric harms (e.g., false positives in enforcement) that disproportionately affect vulnerable populations, increasing inequality and welfare losses unless countervailing measures are implemented.
  • Transaction and political costs: while algorithms can lower administrative transaction costs, legitimacy deficits can increase political transaction costs (appeals, litigation, policy reversals), affecting the net economic benefit of automation.
  • Sovereignty and geopolitical economy: dependence on external AI platforms and proprietary tools concentrates technological rents in AI Core economies, risks digital dependency for the Global South, and can exacerbate unequal bargaining power in trade, aid, and infrastructure finance.
  • Market structure and rents: concentration of algorithmic governance platforms could create new sources of monopoly or oligopoly rents with implications for public procurement, competition policy, and the fiscal burden of purchasing/adopting governance AI.
  • Policy prescriptions with economic relevance:
    • Invest in domestic AI capacity and intergovernmental cooperation to mitigate digital dependency and preserve policy sovereignty.
    • Treat algorithmic transparency (registers, standardized explainability) as a public good that reduces legitimacy risks and potentially lowers long-run enforcement costs.
    • Build institutional accountability (audits, impact assessments, role-sensitive explainability) into procurement and deployment to internalize social costs and protect fiscal returns from automation.
    • Fund reskilling programs and social safety nets to manage labor transitions within the public sector and maintain service quality.
    • Incorporate political and legitimacy risk assessments into cost–benefit analyses for AI adoption in government, not just technical efficiency metrics.
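The last prescription, pricing legitimacy risk into adoption decisions, can be sketched as a toy expected-value calculation. The numbers below are purely illustrative assumptions, not estimates from the review:

```python
def expected_net_benefit(efficiency_gain, legitimacy_risk, backlash_cost):
    """Net benefit of automation once political/legitimacy risk is priced in.

    efficiency_gain : annual administrative savings
    legitimacy_risk : probability of litigation/backlash (0..1)
    backlash_cost   : expected cost if backlash occurs (appeals, reversal)
    """
    return efficiency_gain - legitimacy_risk * backlash_cost

# Illustrative only: a system saving 10m/year, with a 30% chance of a
# 50m Robo-debt-style backlash, is net-negative despite its efficiency.
print(expected_net_benefit(10_000_000, 0.3, 50_000_000))  # -5000000.0
```

The single line of arithmetic is the point: a purely technical cost-benefit analysis that drops the second term will systematically overstate the fiscal case for automation.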

Overall, the economic promise of AI for governments is substantial, but realization depends on governance choices: preserving legitimacy, distributing gains fairly, and building sovereign capacity will determine whether AI strengthens state capacity sustainably or produces short-term efficiency that yields long-term political and economic liabilities.

Assessment

  • Paper Type: review/meta
  • Evidence Strength: medium. The review synthesizes mixed evidence (case studies, qualitative analyses, policy reports, and a smaller number of empirical evaluations) that consistently documents efficiency gains and legitimacy risks, but few reviewed studies provide strong causal identification or large-scale quantitative estimates; conclusions therefore rest on convergent but partly circumscribed evidence.
  • Methods Rigor: high. The study follows PRISMA guidelines, uses a systematic search with explicit inclusion/exclusion criteria across the 2018–2026 literature, and conducts a structured synthesis of policy cases and empirical work; limitations remain due to heterogeneity of source studies, potential publication/English-language bias, and the inability of the review itself to generate new causal estimates.
  • Sample: A systematic review of literature published 2018–2026, including peer-reviewed articles, policy reports, government inquiries, and gray literature on AI-driven governance and public administration. It covers case studies (e.g., Australia's Robo-debt), comparative policy analyses, qualitative fieldwork, and some quantitative studies on administrative efficiency and fiscal forecasting, with geographic coverage skewed toward high-income "AI Core" countries and selective evidence from the Global South.
  • Themes: governance, adoption, inequality, productivity
  • Generalizability:
    • Geographic bias toward high-income "AI Core" countries limits applicability to low- and middle-income contexts.
    • Heterogeneous study designs (case studies, qualitative work, few causal quantitative studies) impede broad causal generalization.
    • Findings are time-bound to 2018–2026 and may not capture fast-evolving AI deployments or regulatory changes.
    • Variation in institutional, legal, and bureaucratic settings reduces transferability of specific lessons.
    • Potential publication and English-language selection bias in the reviewed literature.

Claims (10)

| Claim | Category | Direction | Confidence | Outcome | Score |
|---|---|---|---|---|---|
| As a General-Purpose Technology (GPT), Artificial Intelligence (AI) is fundamentally reconfiguring state capacity, as well as the mechanics of global economic management. | Organizational Efficiency | mixed | high | state capacity and the mechanics of global economic management | 0.24 |
| There is a structural shift from "street-level" bureaucracies to "system-level" architectures, defined as the institutional devolution of "Artificial Discretion" to algorithmic infrastructures. | Organizational Efficiency | mixed | high | institutional/administrative architecture (shift from street-level to system-level) | 0.24 |
| Empirical evidence shows substantial efficiency gains in routinised administrative tasks. | Organizational Efficiency | positive | high | efficiency in routinised administrative tasks | 0.24 |
| Empirical evidence shows substantial efficiency gains in fiscal forecasting. | Fiscal And Macroeconomic | positive | high | accuracy/efficiency of fiscal forecasting | 0.24 |
| These efficiency gains are offset by a growing "Efficiency–Legitimacy Paradox": improvements in efficiency come with worsening legitimacy concerns. | Governance And Regulation | mixed | high | trade-off between administrative efficiency and democratic legitimacy/procedural justice | 0.24 |
| The "black box" nature of automated systems undermines the democratic social contract and principles of procedural justice, epitomised by the Australian 'Robo-debt' scandal. | Governance And Regulation | negative | high | democratic legitimacy and procedural justice | 0.24 |
| There is a stark geopolitical divide between "AI Core" nations and the Global South; the Global South faces acute risks of "Digital Dependency" and eroded digital sovereignty. | Governance And Regulation | negative | high | digital dependency and digital sovereignty | 0.24 |
| Visibility mechanisms, such as public algorithm registers or role-sensitive explainability, can be effective tools in rebuilding citizen trust. | Governance And Regulation | positive | medium | citizen trust in algorithmic governance | 0.07 |
| The sustainability of the algorithmic state rests on a movement from technocratic secrecy to value-based transparency, ensuring AI–human collaboration is founded on institutional accountability and algorithmic justice. | Governance And Regulation | positive | high | sustainability of algorithmic/state governance (accountability and algorithmic justice) | 0.04 |
| This review was conducted following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. | Other | null_result | high | methodological adherence to PRISMA reporting standards | 0.24 |

Notes