The Commonplace

A 'cognitive commonwealth' of humans and AI could amplify discovery and resilience by enabling transparent, diverse information exchange; sustaining it will require new verification and governance institutions to prevent erosion of trust.

A Case for Coevolution
Serge Dolgikh · March 30, 2026
Source: OpenAlex · Paper type: theoretical · Evidence strength: n/a · Relevance: 7/10 · DOI · Source · PDF
The paper proposes an 'informational and cognitive commonwealth' — a voluntary ecosystem in which human and artificial agents cooperatively exchange information to maximize adaptive capacity — and argues that preserving diversity and transparent verification systems is necessary to sustain collective discovery and trust.

Human and artificial agents are increasingly interacting within a shared informational environment that shapes economic activity, scientific discovery, governance, and collective decision making. As advanced artificial systems become more autonomous participants in these processes, the resulting interaction space begins to resemble a new kind of ecosystem in which diverse agents exchange information, cooperate, compete, and jointly explore complex adaptive landscapes. In this work we propose the concept of an informational and cognitive commonwealth: a voluntary ecosystem of free rational agents, human and artificial, who cooperate through transparent and fair exchange of information because such arrangements maximize their adaptive capacity and long-term well-being. Drawing on principles from information theory, adaptive systems, and collective intelligence, we argue that systems that preserve diversity of exploration while minimizing barriers to information exchange exhibit superior capacity for discovery and adaptation in complex environments. Sustaining such cooperative informational systems has historically proven difficult due to structural incentives that gradually erode transparency and trust. We therefore examine emerging opportunities for stabilizing these ecosystems through new forms of informational verification and monitoring made possible by advanced artificial agents. This framework outlines a pathway toward large-scale cooperative intelligence and offers a constructive perspective on the coevolution of human and artificial agents in the informational ecosystems of the future.

Summary

Main Finding

The paper argues for an "Informational and Cognitive Commonwealth": a voluntary, cooperative ecosystem of human and artificial informational agents that maximizes collective adaptive capacity by preserving diverse, agent-defined exploration while minimizing informational barriers. Such cooperative informational structures—supported by transparency, distributed verification, and fair exchange—yield higher rates of discovery and greater resilience than adversarial or highly centralized alternatives.

Key Points

  • Emergent ecosystem: Human and artificial agents are increasingly embedded in a shared informational environment that functions like an ecosystem of cooperating and competing agents.
  • Two possible trajectories: (a) adversarial/competitive dynamics with information hoarding, manipulation, and concentration of power; or (b) cooperative informational commons with broad information flows, voluntary exchange, and cumulative discovery.
  • Information flows:
    • Shared flows (public dissemination) build collective knowledge bases.
    • Transactional flows (bilateral/contractual exchanges) enable coordination and specialization.
  • Informational barriers: Any mechanism that reduces completeness, accuracy, or symmetry of information exchange (secrecy, withholding, censorship, privileged access, distortion) reduces effective information availability Ieff = I − B.
  • Agent-Defined Well-Being principle: In systems of autonomous agents, evaluative criteria (well-being) should be agent-defined rather than externally imposed, because externally imposed uniform goals create barriers and are structurally unstable as agents become more capable.
  • Diversity and adaptive potential: Collective rate of effective information production R increases with number of agents N, resources E, effective information availability Ieff, and exploratory diversity Deff. Formally sketched as R = f(N, E, Ieff, Deff) with ∂R/∂Ieff > 0 and ∂R/∂B < 0.
  • Stabilization via verification: Advanced artificial agents can provide new forms of informational verification and monitoring (provenance, reproducibility, transparency) that help stabilize cooperative informational ecosystems historically prone to erosion.
  • Trade-offs and fragility: Cooperative informational systems have superior adaptive capacity but are fragile under misaligned incentives and concentrated power; design of institutions and incentives is crucial to sustain them.
  • Scope: Conceptual/theoretical analysis; not empirical—motivated by literature in game theory, collective intelligence, AI alignment, and governance.
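
The sign conditions above can be made concrete with a toy functional form. The paper only constrains the signs of the partial derivatives (∂R/∂Ieff > 0, ∂R/∂B < 0); the specific multiplicative form below is our own illustrative assumption, not the paper's model:

```python
# Toy instantiation of the paper's qualitative relations:
#   I_eff = I - B,  R = f(N, E, I_eff, D_eff),  dR/dI_eff > 0,  dR/dB < 0.
# The multiplicative form is an illustrative assumption; the paper
# only specifies the signs of the partial derivatives.

def effective_information(I: float, B: float) -> float:
    """Effective information availability after barriers: I_eff = I - B."""
    return max(I - B, 0.0)

def discovery_rate(N: float, E: float, I: float, B: float, D: float) -> float:
    """Collective rate of effective information production R."""
    I_eff = effective_information(I, B)
    return (N ** 0.5) * (E ** 0.5) * I_eff * D

# Lowering barriers raises R (dR/dB < 0), all else equal:
low_barriers = discovery_rate(N=100, E=10, I=5.0, B=1.0, D=0.8)
high_barriers = discovery_rate(N=100, E=10, I=5.0, B=3.0, D=0.8)
assert low_barriers > high_barriers
```

Any functional form that is increasing in Ieff and Deff would serve equally well; the point is only that barriers enter with a negative sign through Ieff.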

Data & Methods

  • Methodological approach: conceptual and theoretical synthesis drawing on information theory, adaptive systems, collective intelligence, game theory, and AI alignment literature.
  • Formalization: simple analytical expressions to capture relations among variables:
    • Effective information: Ieff(t) = I(t) − B(t), where B aggregates informational barriers.
    • Collective discovery rate: R = f(N, E, Ieff, Deff). Partial derivatives used qualitatively (e.g., ∂R/∂Ieff > 0).
  • No empirical dataset or experimental results reported; the paper provides normative and analytical arguments and basic mathematical framing rather than econometric estimation or simulation evidence.
  • Limitations: non-peer-reviewed; relies on conceptual assumptions (e.g., autonomy of agents, measurability of Deff and B) and qualitative sign arguments rather than calibrated models.
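
The erosion dynamic that motivates the paper's stabilization argument — barriers B(t) accumulating until they outpace information production — can be sketched as a minimal discrete-time simulation. The growth and erosion rates below are arbitrary illustrative assumptions, not calibrated parameters:

```python
# Minimal discrete-time sketch of I_eff(t) = I(t) - B(t) under gradual
# barrier accumulation ("structural incentives that erode transparency").
# The rates are illustrative assumptions, not calibrated values.

def simulate_effective_information(steps: int, I0: float = 10.0,
                                   growth: float = 0.3,
                                   erosion: float = 0.5) -> list[float]:
    """Track I_eff as barriers accumulate faster than information grows."""
    I, B = I0, 0.0
    history = []
    for _ in range(steps):
        I += growth   # new information produced each step
        B += erosion  # barriers (secrecy, distortion, withholding) accumulate
        history.append(max(I - B, 0.0))
    return history

trajectory = simulate_effective_information(steps=20)
# Without countervailing verification institutions, I_eff declines:
assert trajectory[-1] < trajectory[0]
```

In this sketch, the paper's proposed remedy corresponds to mechanisms that reduce the effective erosion rate below the growth rate, reversing the decline.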

Implications for AI Economics

  • Innovation and growth: Reducing informational barriers and fostering shared informational goods (open datasets, shared benchmarks, reproducible methods) should increase the collective rate of discovery (R), accelerating innovation and economic growth.
  • Market structure and competition policy:
    • Concentration of informational control (platform dominance, proprietary datasets) represents an economic inefficiency and systemic fragility. Antitrust and data-governance policies that limit exclusive information rents can improve collective adaptive capacity.
    • Incentive design matters: reward structures that encourage withholding (short-term rents, secretive R&D) undermine social returns from discovery.
  • Public goods and funding:
    • Information has public-good properties; public or subsidized provision (e.g., open infrastructure, verification tools, provenance registries) can correct under-provision due to private incentives to hoard.
    • Funding verification, auditing, and reproducibility infrastructure (including AI tools that monitor provenance) is an economically efficient investment to sustain cooperation.
  • Governance and institutions:
    • Multi-stakeholder governance models and standards (interoperability, metadata provenance, reputation systems) can reduce B and increase Ieff.
    • Voluntary exchange frameworks and decentralized commons (open-source, scientific commons) can be economically sustainable if paired with mechanisms to prevent capture and ensure fair returns.
  • Labor and specialization:
    • Agent-defined well-being implies heterogeneity in objectives and specializations; economies that enable diverse exploratory strategies (diverse firms, research organizations, and interdisciplinary teams) capture more value from aggregate discovery.
  • Risk management and resilience:
    • Cooperative informational ecosystems are more resilient to global risks (misinformation, cascading failures) if transparency and distributed verification are in place.
    • However, reliance on shared informational infrastructure creates common-mode vulnerabilities; policies should include safeguards (redundancy, diverse verification channels).
  • Measurement and policy levers:
    • Economic policy should develop metrics for informational barriers (B), effective information flows (Ieff), and exploratory diversity (Deff) as leading indicators of innovation health.
    • Examples of levers: data-sharing mandates in regulated sectors, subsidies for open datasets, standards for machine-readable provenance, support for decentralized verification platforms.
  • Alignment and long-term stakes:
    • The agent-defined well-being principle reduces perverse attempts to externally control advanced agents; instead, alignment efforts should focus on incentive-compatible institutions that make cooperation individually rational.
    • For high-capability AI agents, embedding verification and transparent exchange into economic interactions can help avoid escalation and misaligned competition that threaten long-term economic stability.
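
The "machine-readable provenance" lever mentioned above can be sketched as a hash-chained record, in the spirit of the paper's provenance registries and decentralized verification platforms. The record schema and chaining scheme are our own illustrative assumptions; the paper proposes no concrete format:

```python
# Illustrative sketch of a hash-chained provenance record for shared
# informational goods (datasets, model outputs). The schema is an
# assumption for illustration; the paper specifies no concrete format.
import hashlib
import json

def record_hash(record: dict) -> str:
    """Deterministic digest of a provenance record."""
    payload = json.dumps(record, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_record(chain: list[dict], payload: dict) -> list[dict]:
    """Link a new record to the digest of the previous one."""
    prev = record_hash(chain[-1]) if chain else ""
    chain.append({"prev": prev, **payload})
    return chain

def verify(chain: list[dict]) -> bool:
    """Check that every record cites its predecessor's digest."""
    return all(chain[i]["prev"] == record_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain: list[dict] = []
append_record(chain, {"agent": "lab-a", "artifact": "dataset-v1"})
append_record(chain, {"agent": "model-b", "artifact": "derived-summary"})
assert verify(chain)
# Tampering with an earlier record breaks verification downstream:
chain[0]["artifact"] = "dataset-v1-forged"
assert not verify(chain)
```

This is the minimal property a verification infrastructure needs: tampering with any upstream record is detectable by every downstream consumer, which makes hoarding-by-distortion costly and transparent exchange individually rational.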

Overall, the paper provides a normative theoretical case that policies and market institutions that lower informational barriers, preserve exploratory diversity, and establish robust verification can increase societal adaptive potential and improve outcomes in AI-driven economies. Key next steps for AI economics include formal modeling, empirical measurement of information flows and barriers, and evaluation of policy interventions (e.g., open-data mandates, verification infrastructures, competition rules) in simulations and real-world pilots.

Assessment

  • Paper Type: theoretical
  • Evidence Strength: n/a — This is a conceptual/theoretical paper proposing a framework without empirical testing, causal identification, or quantitative evidence; therefore evidence strength is not applicable.
  • Methods Rigor: medium — The paper synthesizes established ideas from information theory, adaptive systems, and collective intelligence into a coherent normative framework, showing conceptual rigor; however, it lacks formal models, simulations, or empirical validation to test assumptions or demonstrate mechanism robustness.
  • Sample: No empirical sample or dataset is used; the work is a conceptual synthesis drawing on multidisciplinary literatures (information theory, adaptive systems, collective intelligence) and illustrative argumentation rather than original data analysis.
  • Themes: human_ai_collab, governance, innovation
  • Generalizability: Speculative — the framework is not empirically validated across settings; it assumes advanced AI capabilities and broad voluntary participation that may not materialize; it ignores heterogeneity in institutional, cultural, and legal contexts affecting adoption; it gives limited treatment of adversarial behavior, strategic incentives, and power asymmetries; and its implications for different economic sectors, firm sizes, and labor market segments are unclear.

Claims (7)

  • Claim: Human and artificial agents are increasingly interacting within a shared informational environment that shapes economic activity, scientific discovery, governance, and collective decision making.
    Outcome: Other · Direction: null_result · Confidence: high · Details: degree of interaction within a shared informational environment (0.12)
  • Claim: As advanced artificial systems become more autonomous participants in these processes, the resulting interaction space begins to resemble a new kind of ecosystem in which diverse agents exchange information, cooperate, compete, and jointly explore complex adaptive landscapes.
    Outcome: Other · Direction: null_result · Confidence: high · Details: structure of interaction space (ecosystem-like properties among agents) (0.02)
  • Claim: Systems that preserve diversity of exploration while minimizing barriers to information exchange exhibit superior capacity for discovery and adaptation in complex environments.
    Outcome: Research Productivity · Direction: positive · Confidence: high · Details: capacity for discovery and adaptation (0.02)
  • Claim: Sustaining such cooperative informational systems has historically proven difficult due to structural incentives that gradually erode transparency and trust.
    Outcome: Governance And Regulation · Direction: negative · Confidence: high · Details: persistence/stability of cooperative informational systems (affected by incentives, transparency, trust) (0.12)
  • Claim: Emerging opportunities exist for stabilizing these ecosystems through new forms of informational verification and monitoring made possible by advanced artificial agents.
    Outcome: Governance And Regulation · Direction: positive · Confidence: high · Details: stability of informational ecosystems via verification and monitoring tools (0.02)
  • Claim: A voluntary ecosystem of free rational agents, human and artificial, who cooperate through transparent and fair exchange of information maximizes their adaptive capacity and long-term well-being.
    Outcome: Organizational Efficiency · Direction: positive · Confidence: high · Details: adaptive capacity and long-term well-being of participating agents (0.02)
  • Claim: The proposed framework outlines a pathway toward large-scale cooperative intelligence and offers a constructive perspective on the coevolution of human and artificial agents in the informational ecosystems of the future.
    Outcome: Innovation Output · Direction: positive · Confidence: high · Details: emergence of large-scale cooperative intelligence (0.02)
