
Many AI projects are abandoned for practical reasons, not just ethics: a review and practitioner data show that funding shortfalls, development hurdles, and internal dynamics often halt systems long before deployment. Responsible AI efforts should therefore engage with the upstream decisions that determine which systems ever get built, not only with how deployed systems behave.

To Build or Not to Build? Factors that Lead to Non-Development or Abandonment of AI Systems
Shreya Chappidi, Jatinder Singh · April 30, 2026 · arXiv.org
openalex · descriptive · medium evidence · relevance 7/10 · Source · PDF
Through a scoping review, thematic analysis, incident-case evidence, and a practitioner survey, the paper identifies six categories of factors — ethical concerns, stakeholder feedback, development lifecycle challenges, organizational dynamics, resource constraints, and legal/regulatory concerns — that drive AI project abandonment and shows that many abandonment decisions are motivated by practical, non-ethical levers.

Responsible AI research typically focuses on examining the use and impacts of deployed AI systems. Yet, there is currently limited visibility into the pre-deployment decisions to pursue building such systems in the first place. Decisions taken in the earlier stages of development shape which systems are ultimately released, and therefore represent potential, but underexplored, points for intervention. As such, this paper investigates factors influencing AI non-development and abandonment throughout the development lifecycle. Specifically, we first perform a scoping review of academic literature, civil society resources, and grey literature including journalism and industry reports. Through thematic analysis of these sources, we develop a taxonomy of six categories of factors contributing to AI abandonment: ethical concerns, stakeholder feedback, development lifecycle challenges, organizational dynamics, resource constraints, and legal/regulatory concerns. Then, we collect data on real-world cases of AI system abandonment via an AI incident database and a practitioner survey to evidence and compare factors that drive abandonment both prior to and following system deployment. While academic responsible AI communities often emphasize ethical risks as reasons to not develop AI, our empirical analysis of these cases demonstrates the diverse, and often non-ethics-related, levers that motivate organizations to abandon AI development. Synthesizing evidence from our taxonomy and related case study analyses, we identify gaps and opportunities in current responsible AI research to (1) engage with the diverse range of levers that influence organizations to abandon AI development, and (2) better support appropriate (dis)engagement with AI system development.

Summary

Main Finding

The paper maps why organizations decide not to build or to stop building AI systems and shows that abandonment is driven by a broad set of levers beyond ethical concerns. Using a multivocal scoping review and empirical case collection, the authors develop a six-category taxonomy (ethical concerns; stakeholder feedback; development lifecycle challenges; organizational dynamics; resource constraints; legal/regulatory concerns) and demonstrate that many real-world abandonment decisions are motivated by practical, economic, technical, and governance factors as well as ethics. Early-stage (pre-deployment) decisions to not develop are an underexplored but important governance lever.

Key Points

  • Taxonomy: six categories of factors contributing to AI non-development or abandonment (a sketch encoding these as a coding scheme follows this list):
    • Ethical concerns (discrimination, privacy, misinformation, misuse, labor displacement, environmental impact, loss of agency, etc.)
    • Stakeholder feedback (engagement or external resistance from employees, users, civil society, domain experts, activists)
    • Development lifecycle challenges (unable to measure target, data problems, labeling/ground truth issues, poor model performance, integration/evaluation problems)
    • Organizational dynamics (misalignment with strategy, leadership sponsorship, changing priorities, client refusal, IP/asset concerns)
    • Resource constraints (insufficient technical expertise, compute costs/availability, development/maintenance cost, timeline length, outsourcing incentives)
    • Legal/regulatory concerns (AI-specific regulation, data protection, domain rules, liability, lack of guidance)
  • Methodological emphasis: authors intentionally include grey literature, civil society resources, and journalism to capture non-academic, pragmatic drivers of abandonment that conventional academic searches miss.
  • Empirical finding: while RAI research often foregrounds ethical risks as a reason to avoid AI, their case data (incident database + practitioner survey) show that many abandonment decisions are driven by non-ethics factors (cost, technical infeasibility, stakeholder pushback, regulatory uncertainty) or by mixtures of reasons.
  • Framing: abandonment / non-development is not inherently bad — it can prevent harmful or poorly functioning systems, but can also foreclose beneficial systems that failed for solvable practical reasons.
  • Research gap: RAI work should expand focus from mitigation of harms in deployed systems to the incentives, processes, and tools that shape early-stage decisions to build or not build.
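
To make the taxonomy concrete as a coding instrument, here is a minimal sketch in Python, assuming one wants to tag abandonment cases with these categories. The six category names come from the paper, but the enum, the name AbandonmentFactor, and the example tagging are illustrative, not an artifact of the authors:

```python
from enum import Enum

class AbandonmentFactor(Enum):
    """The paper's six categories of factors behind AI non-development/abandonment."""
    ETHICAL_CONCERNS = "ethical concerns"
    STAKEHOLDER_FEEDBACK = "stakeholder feedback"
    LIFECYCLE_CHALLENGES = "development lifecycle challenges"
    ORGANIZATIONAL_DYNAMICS = "organizational dynamics"
    RESOURCE_CONSTRAINTS = "resource constraints"
    LEGAL_REGULATORY = "legal/regulatory concerns"

# A single case can carry several factors at once, matching the paper's finding
# that abandonment decisions often mix ethical and practical levers.
example_case_tags = {
    AbandonmentFactor.RESOURCE_CONSTRAINTS,
    AbandonmentFactor.ETHICAL_CONCERNS,
}
```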

Data & Methods

  • Scoping multivocal literature review combining academic databases (Google Scholar, ACM Digital Library, Semantic Scholar) and grey literature (industry reports, white papers, tech journalism, civil society repositories).
  • Search strategy: combinations of terms like [“not” OR “stop” OR “abandon” OR “halt”] AND [“develop” OR “build”] AND [“AI” OR “ML”], with an initial review of the first 50 results per source (see the query-construction sketch after this list).
  • Iterative process:
    • 48 initial sources selected
    • Snowballing added 32 sources
    • Category-specific searches added ~80 sources
    • Inductive thematic analysis by the lead researcher produced the six-category taxonomy; categories were iterated to team consensus
  • Empirical evidence:
    • Collected real-world abandonment cases via a public AI incident database and an online practitioner survey, used to compare the factors that drive abandonment pre- vs post-deployment (the paper contains full sample sizes and case-selection details).
  • Analysis: thematic coding of literature and cases to map factors and compare the prominence and timing (pre-deployment vs post-deployment) of different drivers.
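
As a rough illustration of how those term combinations expand into concrete queries, here is a minimal sketch. The three term groups are taken verbatim from the search-strategy bullet above; the function name and the quoted-AND query syntax are assumptions, since each database has its own operators:

```python
from itertools import product

# Term groups from the reported search strategy.
ACTION_TERMS = ["not", "stop", "abandon", "halt"]
BUILD_TERMS = ["develop", "build"]
SUBJECT_TERMS = ["AI", "ML"]

def search_queries():
    """Yield one quoted boolean query per combination of terms."""
    for action, build, subject in product(ACTION_TERMS, BUILD_TERMS, SUBJECT_TERMS):
        yield f'"{action}" AND "{build}" AND "{subject}"'

for query in search_queries():
    print(query)  # 4 * 2 * 2 = 16 candidate query strings
```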

Implications for AI Economics

  • Selection effects in deployed AI: abandonment shapes which AI systems reach the market. Economic assessments of AI impacts (productivity, labor displacement, sectoral change) must account for this censoring effect: many proposed systems never reach deployment because of costs, feasibility, stakeholder resistance, or regulation. Ignoring abandonment risks overestimating both diffusion and harms/benefits (a toy simulation of the censoring effect follows this list).
  • Investment and R&D allocation:
    • Resource constraints (compute costs, expertise gaps, long timelines) and uncertain regulation materially affect firms' investment choices. These act as private-sector brakes on some AI development paths, changing which innovations receive funding.
    • The paper suggests that cheaper/easier-to-outsource projects are more likely to be pursued, affecting the distribution of in-house capabilities and market concentration (e.g., reliance on external vendors/cloud providers).
  • Opportunity cost and sunk costs:
    • Abandoned projects represent sunk R&D expenditures and lost opportunity value (both social and private). Understanding drivers of abandonment can inform more efficient R&D portfolio choices and public subsidy design (e.g., when public funding should de-risk socially valuable but high-cost AI projects).
  • Labor markets & organizational behavior:
    • Organizational dynamics and stakeholder resistance influence which internal AI capabilities firms build vs. outsource or drop. This affects skill development, labor demand for AI roles, and the supply of maintenance expertise.
  • Regulatory design and signaling:
    • Regulatory uncertainty and liability concerns can deter development, producing a precautionary effect. Policymakers should balance preventing harmful systems with not stifling socially valuable innovation. Clear guidance, proportionate compliance costs, and pathway-specific risk assessments can reduce inefficient abandonment while enabling precaution where needed.
  • Governance levers for economists and policymakers:
    • Make non-development visible and reportable: standardized reporting on abandoned AI projects would improve measurement of AI diffusion, R&D efficiency, and the incidence of precautionary non-development.
    • Targeted subsidies or technical assistance: to address resource or technical barriers for socially beneficial projects that are abandoned for economic reasons (compute credits, datasets, labeling support).
    • Incentives for early-stage assessment: fund or require early impact assessments and stakeholder engagement to surface high-risk projects before costly development, aligning private incentives with social risk mitigation.
    • Support for alternative pathways: where abandonment is driven by client or stakeholder refusal, policies could support alternative, lower-risk solutions (human-in-the-loop systems, regulation enabling liability-limited pilots).
  • Research implications for AI economics:
    • Incorporate abandonment data into models of diffusion, welfare, and labor displacement to avoid biases from observing only deployed systems.
    • Study the relative elasticity of abandonment to changes in compute costs, regulation, or subsidies to estimate policy lever effectiveness.
    • Quantify the macroeconomic effects of systematic abandonment in particular sectors (e.g., public sector, healthcare) where social value vs private incentives diverge.
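
To see why this censoring matters, here is a toy simulation with entirely hypothetical numbers (nothing here is estimated from the paper): if proposals are abandoned whenever cost exceeds expected impact, then the average impact computed over deployed systems alone overstates the average across all proposed systems.

```python
import random

random.seed(0)

proposed, deployed = [], []
for _ in range(10_000):
    impact = random.gauss(1.0, 0.5)  # hypothetical net impact of a proposed system
    cost = random.gauss(1.0, 0.5)    # hypothetical development cost
    proposed.append(impact)
    # Stand-in abandonment rule: drop the project when cost outweighs impact
    # (a crude proxy for resource constraints, infeasibility, pushback, etc.).
    if impact > cost:
        deployed.append(impact)

def mean(xs):
    return sum(xs) / len(xs)

print(f"mean impact, all proposed systems: {mean(proposed):.2f}")   # ~1.0
print(f"mean impact, deployed systems only: {mean(deployed):.2f}")  # noticeably higher
```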

Suggested practical next steps for economists and policymakers:

  • Collect and standardize data on AI project starts, pivots, and abandonments, including reasons and costs (a minimal record schema is sketched below).
  • Model how different policy interventions (grants, regulatory clarity, liability rules) affect abandonment probabilities and welfare outcomes.
  • Design interventions that lower socially inefficient abandonment (e.g., for high-value but resource-intensive projects) while preserving mechanisms that prevent development of harmful systems.
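
For the first of these steps, here is a minimal sketch of what one standardized abandonment record could look like. Every field name and the example values are hypothetical; the paper calls for reporting but does not prescribe a schema:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class AbandonmentRecord:
    """One standardized report of an AI project stop or pivot (hypothetical schema)."""
    project_id: str
    sector: str
    stage: str                       # e.g., "pre-deployment" or "post-deployment"
    outcome: str                     # "abandoned", "pivoted", or "paused"
    factors: list[str] = field(default_factory=list)  # taxonomy categories cited
    sunk_cost_usd: Optional[float] = None

record = AbandonmentRecord(
    project_id="example-001",
    sector="healthcare",
    stage="pre-deployment",
    outcome="abandoned",
    factors=["resource constraints", "legal/regulatory concerns"],
    sunk_cost_usd=250_000.0,
)
```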

(For implementation details, sample sizes, and full case analyses see the paper’s methods and appendix.)

Assessment

Paper Type: descriptive

Evidence Strength: medium. Combines a systematic scoping review, thematic coding, case examples from an AI incident database, and a practitioner survey to triangulate patterns of AI abandonment, providing credible descriptive evidence that a diverse set of factors drives non-development. However, the analysis is non-causal, likely subject to selection and reporting biases (incident database and grey literature), and sample sizes/representativeness for the survey and cases are not reported, limiting how confidently findings can be generalized or quantified.

Methods Rigor: medium. Uses established qualitative methods (scoping review, thematic analysis) and supplements them with case data and survey evidence, which is appropriate for exploratory work; rigor is limited, though, by likely non-systematic elements in grey-literature inclusion, potential coder subjectivity (no mention of intercoder reliability), unclear survey recruitment and response rates, and a lack of quantitative validation or robustness checks.

Sample: a scoping review of academic literature, civil society outputs, journalism, and industry/grey reports; case examples drawn from an AI incident database (number of incidents and selection criteria not specified); and a practitioner survey of developers and decision-makers (sample size, sampling frame, and recruitment strategy not reported in the summary).

Themes: adoption, governance, org_design

Generalizability concerns:

  • Selection bias in the incident database (unrepresentative, skewed toward reported/high-profile failures)
  • Publication and reporter bias in grey-literature and journalism sources
  • An unknown and likely non-representative survey sample (self-selection, geographic and sectoral skew)
  • Temporal limits: findings may reflect a specific period or wave of AI development
  • Context dependence: organizational culture, regulatory regimes, and industry differences limit transferability

Claims (8)

  • Claim (Adoption Rate): Responsible AI research typically focuses on examining the use and impacts of deployed AI systems, and there is currently limited visibility into the pre-deployment decisions to pursue building such systems.
    Direction: negative · Confidence: high · Outcome: visibility into pre-deployment decision-making for AI development · Details: 0.18
  • Claim (Other): The authors performed a scoping review of academic literature, civil society resources, and grey literature (including journalism and industry reports) to identify factors influencing AI abandonment.
    Direction: positive · Confidence: high · Outcome: scope and composition of reviewed sources · Details: 0.3
  • Claim (Adoption Rate): Through thematic analysis of reviewed sources, the paper develops a taxonomy of six categories of factors contributing to AI abandonment: ethical concerns, stakeholder feedback, development lifecycle challenges, organizational dynamics, resource constraints, and legal/regulatory concerns.
    Direction: positive · Confidence: high · Outcome: categorization of factors driving AI abandonment · Details: 0.18
  • Claim (Adoption Rate): The authors collected data on real-world cases of AI system abandonment via an AI incident database and a practitioner survey to evidence and compare factors that drive abandonment both prior to and following system deployment.
    Direction: positive · Confidence: high · Outcome: observed drivers of AI abandonment in real-world cases (pre- vs post-deployment) · Details: 0.18
  • Claim (AI Safety and Ethics): Academic responsible AI communities often emphasize ethical risks as reasons to not develop AI.
    Direction: positive · Confidence: high · Outcome: frequency/degree of emphasis on ethical risks in responsible AI academic literature · Details: 0.18
  • Claim (Adoption Rate): Empirical analysis of cases demonstrates that diverse, and often non-ethics-related, levers motivate organizations to abandon AI development.
    Direction: mixed · Confidence: high · Outcome: distribution of reasons (ethical vs. non-ethical) cited for AI abandonment · Details: 0.18
  • Claim (Adoption Rate): Decisions taken in earlier stages of development shape which systems are ultimately released, representing potential points for intervention to influence AI deployment outcomes.
    Direction: positive · Confidence: high · Outcome: influence of early-stage development decisions on eventual system release/abandonment · Details: 0.18
  • Claim (Governance and Regulation): Synthesizing evidence, the paper identifies gaps and opportunities in current responsible AI research: (1) to engage with the diverse range of levers that influence organizations to abandon AI development, and (2) to better support appropriate engagement or disengagement with AI system development.
    Direction: positive · Confidence: high · Outcome: research and practical opportunities to influence AI development decisions · Details: 0.03
