The Commonplace

Transformative AI will likely be many, not one: diverse teams of specialized, epistemically distinct AI agents are better positioned to drive deep innovation than lone superintelligences because diversity broadens search, delays premature consensus, and enables unconventional solutions.

The Future of AI is Many, Not One
Daniel J. Singer, Luca Garzino Demo · March 30, 2026 · arXiv (Cornell University)
Source: OpenAlex · Type: theoretical · Evidence: n/a · Relevance: 7/10 · Source PDF
The authors argue that major scientific and intellectual breakthroughs are more likely to arise from epistemically diverse teams of collaborating AI agents than from single, monolithic superintelligent models.

The way we're thinking about generative AI right now is fundamentally individual. We see this not just in how users interact with models but also in how models are built, how they're benchmarked, and how commercial and research strategies using AI are defined. We argue that we should abandon this approach if we're hoping for AI to support groundbreaking innovation and scientific discovery. Drawing on research and formal results in complex systems, organizational behavior, and philosophy of science, we show why we should expect deep intellectual breakthroughs to come from epistemically diverse groups of AI agents working together rather than singular superintelligent agents. Having a diverse team broadens the search for solutions, delays premature consensus, and allows for the pursuit of unconventional approaches. Developing diverse AI teams also addresses AI critics' concerns that current models are constrained by past data and lack the creative insight required for innovation. The upshot, we argue, is that the future of transformative transformer-based AI is fundamentally many, not one.

Summary

Main Finding

The authors argue that transformative, discovery-oriented AI is more likely to emerge from epistemically diverse teams of AI agents working together than from a single, monolithic superintelligent model. They claim the field’s current individual-focused paradigm (build one bigger model, benchmark single models, plan for “the” AGI) is mis-specified: diversity in model architectures, training, heuristics, and inferential styles increases the chance of breakthrough innovation, delays premature consensus, and mitigates key objections to ambitious AI (lack of creativity, monoculture of thought, and explanation opacity).

Key Points

  • Current paradigm is individual-centric:
    • Industry practice, benchmarks, investment incentives, and alignment research are centered on singular foundation models (e.g., GPT-5.4, Claude Opus 4.6, Gemini 3 Pro).
    • Scaling laws have reinforced the incentive to build ever-larger single models.
  • Three canonical worries about singular AGI/ASI:
    • Lack of genuine innovation due to dependence on past training data.
    • Risk of scientific and intellectual monocultures—over‑privileging model-legible questions/methods.
    • Opacity of explanations—models cannot give trustworthy, human‑style reasons.
  • Evidence and theoretical backing for many-over-one:
    • Literature from complex systems, philosophy of science, organizational behavior, and computational social science (e.g., Hong & Page 2004; Grim et al.; Lazer & Friedman) shows epistemically diverse groups outperform single experts on hard problems.
    • Three specific group benefits apply to AI teams:
      • Hypothesis breadth — different agents explore different regions of the problem space, increasing the chance of finding novel solutions.
      • Sustained exploration — heterogeneity prevents premature convergence on misleading paths.
      • Parallel pursuit of competing aims — some agents can be conservative and accuracy-focused while others are speculative and innovative, so no single system has to bear the trade-off.
  • Historical case studies support the mechanism: COVID‑19 vaccines, CRISPR discoveries, DNA structure, peptic ulcer research — breakthroughs emerged from distributed, competing lines of inquiry rather than a single central agent.
  • Multi-agent or ensemble approaches currently in use do not fully capture the epistemic diversity the authors advocate; they discuss this gap and give practical takeaways for designing genuinely diverse AI teams.
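The Hong–Page mechanism behind these benefits can be made concrete with a small simulation. The sketch below is illustrative, not the authors' code: the ring landscape, step-size heuristics, and team size of nine are assumptions chosen to mirror the 2004 model, in which a relay of randomly drawn (diverse) problem solvers often outscores a relay of the individually best ones.

```python
import random
from itertools import permutations

random.seed(0)
N = 100                                   # points on a circular landscape
landscape = [random.random() for _ in range(N)]

def climb(start, heuristic):
    """Greedy search: from the current point, try each step size in
    order and take the first step that improves the landscape value,
    repeating until no step helps."""
    pos = start
    improved = True
    while improved:
        improved = False
        for step in heuristic:
            cand = (pos + step) % N
            if landscape[cand] > landscape[pos]:
                pos, improved = cand, True
                break
    return pos

def team_value(team, start):
    """Relay search: agents take turns improving the shared solution
    until no member can improve it further."""
    pos = start
    improved = True
    while improved:
        improved = False
        for h in team:
            new = climb(pos, h)
            if landscape[new] > landscape[pos]:
                pos, improved = new, True
    return landscape[pos]

def expected_value(team):
    # average quality reached over every possible starting point
    return sum(team_value(team, s) for s in range(N)) / N

# A heuristic is an ordered triple of distinct step sizes in 1..12.
heuristics = list(permutations(range(1, 13), 3))
ranked = sorted(heuristics, key=lambda h: expected_value([h]), reverse=True)

best_team = ranked[:9]                       # nine best solo performers
diverse_team = random.sample(heuristics, 9)  # nine random, diverse agents
print("team of best agents:", round(expected_value(best_team), 3))
print("random diverse team:", round(expected_value(diverse_team), 3))
```

On any single landscape either team may win; Hong and Page's theorem concerns expectations over landscapes, so the diversity advantage only shows up reliably when the comparison is averaged over many random seeds.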

Data & Methods

  • Method: conceptual synthesis and theoretical argumentation drawing on:
    • Formal results (notably Hong & Page’s diversity theorem and related models about problem-solving dynamics).
    • Literature review across disciplines: complex systems, philosophy of science, organizational behavior, computational social science.
    • Historical case examples (vaccines, CRISPR, DNA, peptic ulcers) to illustrate mechanisms in practice.
    • Discussion of agent-based and networked community modeling results (Zollman, Lazer & Friedman, Centola et al.).
  • No new large-scale empirical dataset or experimental intervention is reported; the paper builds an interdisciplinary theoretical case and connects it to ongoing AI development practices.
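The networked-community results cited above (e.g., Zollman's epistemic-network models) can likewise be sketched as a toy bandit simulation. Every concrete choice here — the payoffs, the optimistic pseudo-count prior, the ten-agent networks — is an illustrative assumption, not the paper's model. The mechanism of interest is that densely connected groups spread early unlucky evidence quickly and can collectively abandon the genuinely better option, while sparser networks preserve pockets of continued exploration.

```python
import random

def simulate(network, rounds=300):
    """Agents choose between a safe arm (known payoff 0.5) and a risky
    arm (true payoff 0.55). Each keeps success/trial pseudo-counts for
    the risky arm, pulls it only while its estimate beats 0.5, and
    shares every outcome with its network neighbors."""
    n = len(network)
    succ = [1.0] * n     # optimistic prior: estimate starts at 1/1 = 1.0
    tries = [1.0] * n
    for _ in range(rounds):
        outcomes = []
        for i in range(n):
            if succ[i] / tries[i] > 0.5:          # still backs risky arm
                outcomes.append((i, 1.0 if random.random() < 0.55 else 0.0))
        for i, payoff in outcomes:
            for j in [i] + network[i]:            # update self + neighbors
                succ[j] += payoff
                tries[j] += 1.0
    # fraction of agents ending up on the (truly better) risky arm
    return sum(1 for i in range(n) if succ[i] / tries[i] > 0.5) / n

def average(network, trials=100):
    return sum(simulate(network) for _ in range(trials)) / trials

random.seed(1)
n = 10
complete = [[j for j in range(n) if j != i] for i in range(n)]  # dense
cycle = [[(i - 1) % n, (i + 1) % n] for i in range(n)]          # sparse
print("dense network, mean fraction on better arm:", average(complete))
print("sparse cycle,  mean fraction on better arm:", average(cycle))
```

Whether the sparse network retains the better arm more often depends on the parameters; Zollman's point is qualitative — more connectivity is not always epistemically better — and this sketch only exhibits the transmission mechanism, not a calibrated result.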

Implications for AI Economics

  • Incentives and investment:
    • The current single-model race is driven by benchmarks and winner-take-all signals (leaderboards, press, valuation). Shifting to many-agent paradigms would require new metrics and market signals (benchmarks for teams/ensembles, valuation models for composed systems).
    • Funding may rotate from betting on “the” next foundation model to portfolios of heterogeneous agents or platforms that compose specialized agents—changing risk/return profiles and capital allocation strategies.
  • Product and market structure:
    • Economic value may accrue to platforms that enable easy composition, orchestration, and marketplace dynamics of diverse agents (agent-as-service, modular agent marketplaces), shifting rents from single-model IP to ecosystem and coordination services.
    • Specialization and comparative advantage: firms can offer vertically or horizontally specialized agents (explainers, hypothesis generators, evaluators), creating modular markets and more complementarities across firms.
  • Labor and R&D organization:
    • Teams of AI agents reduce some coordination costs humans face, enabling firms to invest in broader exploration with lower marginal costs—this could accelerate discovery while changing the skill sets demanded of human researchers (systems integration, agent design, governance).
    • R&D portfolio strategies should emphasize diversity of architectures, datasets, and training objectives rather than maximal scaling of one architecture.
  • Competition, regulation, and governance:
    • Alignment and safety economics must move beyond single-agent alignment to system-level risks (coordination failures, emergent collusion among agents, systemic bias amplification). Regulatory frameworks and insurance models will need to assess ensembles and marketplaces, not just single models.
    • Antitrust and competition policy may need to consider markets for agent composition and datasets, since network effects and platform control over orchestration could become economically powerful.
  • Innovation and social welfare:
    • A many-agent approach may increase the social returns to AI-driven discovery by reducing the probability of locked-in, incorrect paradigms and by accelerating diverse, parallel experimentation.
    • But there are costs: transaction/coordination costs, verification overheads, and potential new externalities (e.g., amplified misinformation through coordinated agents). Economic policy should aim to lower coordination costs for beneficial diversity while internalizing systemic risks.
  • Practical economic recommendations (implicit in the authors’ argument):
    • Develop benchmarks and business KPIs for ensembles and team-level performance.
    • Invest in platforms and standards that enable composability and provenance (to capture complementarities and manage risk).
    • Diversify R&D portfolios across architectures, datasets, and training objectives to capture optionality and reduce model‑centric monoculture risk.
    • Reframe alignment work to include multi-agent dynamics and market/ecosystem governance.

Summary takeaway: Economically and scientifically, treating AI as an ecosystem of complementary, diverse agents (rather than pursuing a single supreme model) changes incentive structures, product architectures, and governance needs in ways that likely increase the probability of transformative discovery while reducing several major epistemic and systemic risks.

Assessment

Paper Type: theoretical
Evidence Strength: n/a — The paper is a conceptual and theoretical argument drawing on existing literature and formal results rather than presenting original empirical causal evidence, so traditional evidence-strength judgments for causal inference do not apply.
Methods Rigor: medium — The authors synthesize results from complex systems, organizational behavior, and philosophy of science and reference formal results to support their claims, which shows intellectual rigor; however, the piece lacks original empirical tests, formalized models with empirical calibration, or simulations that would strengthen and operationalize the argument.
Sample: No original empirical sample; the paper is a literature- and theory-driven synthesis that draws on prior theoretical results and empirical studies from complex-systems research, organizational behavior, and philosophy of science rather than new data.
Themes: innovation, human_ai_collab, org_design, productivity
Generalizability:
  • Argument is conceptual and not empirically validated on deployed AI systems, so real-world applicability is uncertain.
  • How to operationalize "epistemic diversity" among agents is underspecified and may not scale across domains.
  • Practical constraints (compute costs, engineering complexity, coordination and incentive design) may limit adoption.
  • Claims may depend on domain characteristics (scientific discovery vs. routine tasks) and thus may not generalize across all AI uses.
  • Potential regulatory, safety, and interpretability concerns could restrict real-world implementation.

Claims (6)

  • Claim: The way we're thinking about generative AI right now is fundamentally individual (this appears in how users interact with models, how models are built, how they're benchmarked, and how commercial and research strategies using AI are defined).
    Outcome: Adoption Rate · Direction: negative · Confidence: high · 0.06
    Details: conceptual framing and practices around generative AI (individual-focused design and evaluation)
  • Claim: We should abandon the individual approach if we're hoping for AI to support groundbreaking innovation and scientific discovery.
    Outcome: Innovation Output · Direction: positive · Confidence: high · 0.02
    Details: ability of AI to support groundbreaking innovation and scientific discovery
  • Claim: Deep intellectual breakthroughs should be expected to come from epistemically diverse groups of AI agents working together rather than singular superintelligent agents.
    Outcome: Innovation Output · Direction: positive · Confidence: high · 0.02
    Details: occurrence of deep intellectual breakthroughs (scientific/innovative discoveries)
  • Claim: Having a diverse team broadens the search for solutions, delays premature consensus, and allows for the pursuit of unconventional approaches.
    Outcome: Innovation Output · Direction: positive · Confidence: high · 0.06
    Details: search breadth, timing of consensus formation, and pursuit of unconventional solutions
  • Claim: Developing diverse AI teams addresses critics' concerns that current models are constrained by past data and lack the creative insight required for innovation.
    Outcome: Creativity · Direction: positive · Confidence: high · 0.02
    Details: creative insight and capacity for innovation in AI systems
  • Claim: The future of transformative transformer-based AI is fundamentally many, not one.
    Outcome: Innovation Output · Direction: positive · Confidence: high · 0.02
    Details: architectural and organizational form of future transformative AI (multi-agent/diverse-team orientation versus single-agent/superintelligence)
