Evidence (4049 claims)

Claim counts by topic:
- Adoption: 5126
- Productivity: 4409
- Governance: 4049
- Human-AI Collaboration: 2954
- Labor Markets: 2432
- Org Design: 2273
- Innovation: 2215
- Skills & Training: 1902
- Inequality: 1286
Evidence Matrix
Claim counts by outcome category and direction of finding.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 369 | 105 | 58 | 432 | 972 |
| Governance & Regulation | 365 | 171 | 113 | 54 | 713 |
| Research Productivity | 229 | 95 | 33 | 294 | 655 |
| Organizational Efficiency | 354 | 82 | 58 | 34 | 531 |
| Technology Adoption Rate | 277 | 115 | 63 | 27 | 486 |
| Firm Productivity | 273 | 33 | 68 | 10 | 389 |
| AI Safety & Ethics | 112 | 177 | 43 | 24 | 358 |
| Output Quality | 228 | 61 | 23 | 25 | 337 |
| Market Structure | 105 | 118 | 81 | 14 | 323 |
| Decision Quality | 154 | 68 | 33 | 17 | 275 |
| Employment Level | 68 | 32 | 74 | 8 | 184 |
| Fiscal & Macroeconomic | 74 | 52 | 32 | 21 | 183 |
| Skill Acquisition | 85 | 31 | 38 | 9 | 163 |
| Firm Revenue | 96 | 30 | 22 | — | 148 |
| Innovation Output | 100 | 11 | 20 | 11 | 143 |
| Consumer Welfare | 66 | 29 | 35 | 7 | 137 |
| Regulatory Compliance | 51 | 61 | 13 | 3 | 128 |
| Inequality Measures | 24 | 66 | 31 | 4 | 125 |
| Task Allocation | 64 | 6 | 28 | 6 | 104 |
| Error Rate | 42 | 47 | 6 | — | 95 |
| Training Effectiveness | 55 | 12 | 10 | 16 | 93 |
| Worker Satisfaction | 42 | 32 | 11 | 6 | 91 |
| Task Completion Time | 71 | 5 | 3 | 1 | 80 |
| Wages & Compensation | 38 | 13 | 19 | 4 | 74 |
| Team Performance | 41 | 8 | 15 | 7 | 72 |
| Hiring & Recruitment | 39 | 4 | 6 | 3 | 52 |
| Automation Exposure | 17 | 15 | 9 | 5 | 46 |
| Job Displacement | 5 | 28 | 12 | — | 45 |
| Social Protection | 18 | 8 | 6 | 1 | 33 |
| Developer Productivity | 25 | 1 | 2 | 1 | 29 |
| Worker Turnover | 10 | 12 | — | 3 | 25 |
| Creative Output | 15 | 5 | 3 | 1 | 24 |
| Skill Obsolescence | 3 | 18 | 2 | — | 23 |
| Labor Share of Income | 7 | 4 | 9 | — | 20 |
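Read as data, each matrix row is a five-tuple of counts by direction. The sketch below (a few rows hand-transcribed from the table, with "—" treated as 0; the dictionary and function names are illustrative, not from an accompanying dataset) shows how a positive-finding share can be computed per outcome. Note that for some rows the four listed directions sum to slightly less than the stated total, which suggests the totals may include directions not shown in the table.

```python
# Illustrative sketch: a few rows hand-transcribed from the evidence matrix
# above ("—" treated as 0). Values are (positive, negative, mixed, null, total).
matrix = {
    "Firm Productivity": (273, 33, 68, 10, 389),
    "AI Safety & Ethics": (112, 177, 43, 24, 358),
    "Job Displacement": (5, 28, 12, 0, 45),
}

def positive_share(row):
    """Fraction of a category's total claims with a positive finding."""
    positive, _negative, _mixed, _null, total = row
    return positive / total

for name, row in matrix.items():
    listed = sum(row[:4])  # claims accounted for by the four listed directions
    print(f"{name}: {positive_share(row):.0%} positive, "
          f"{listed}/{row[4]} claims in listed directions")
```

For example, "Firm Productivity" comes out at roughly 70% positive, while "Job Displacement" is the clearest inversion of that pattern, with negative findings dominating.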
Governance
Claim: Standardized, machine-readable records enable credential portability and lower verification costs for employers and platforms.
Evidence: Theoretical argument in the paper's implications section; no empirical evidence or cost estimates are provided.

Claim: Digitized, cloud-hosted credential records would create high-quality administrative datasets that AI can use to model career trajectories, estimate returns to credentials, and automate verification, reducing signalling frictions in labour markets.
Evidence: Policy/AI-economics implications argued in the paper; a forward-looking claim based on expected properties of machine-readable administrative data, not empirical demonstration.

Claim: Regulators must balance innovation with consumer protection by mandating model auditability, fairness testing, and interoperable data standards to prevent systemic and algorithmic risks.
Evidence: Policy recommendation derived from a synthesis of algorithmic risk, model opacity, and fintech market dynamics; based on normative analysis and best-practice proposals rather than empirical testing.

Claim: Platform and market designers should not assume human-like conversational properties and may need protocols (e.g., provenance tagging, limits on template replies) to preserve information quality.
Evidence: Synthesis of observed structural features on Moltbook (high formulaicity, low alignment, introspection bias, coherence decay) and recommended interventions; a prescriptive implication derived from empirical patterns.

Claim: When pipelines are hierarchical (trees or series-parallel), decentralised pricing converges to stable equilibria, optimal allocations can be found efficiently, and agents have no incentive to misreport values within an epoch under the paper's mechanism.
Evidence: Combination of a theoretical model (mechanism design under quasilinear utilities and discrete slice items) and simulation results from the ablation study showing convergence and high allocation quality on hierarchical topologies; experiments used multiple random seeds per configuration within the 1,620-run suite.

Claim: Research agenda recommendations: develop evaluation metrics and benchmarks oriented to time-average and sample-path guarantees; study market/strategic interactions when agents optimize different objectives; incorporate non-ergodicity-aware objectives into economic models of AI adoption and regulation.
Evidence: Proposed research directions and agenda items listed in the paper; forward-looking recommendations rather than empirical claims.

Claim: Policy interventions that remove or limit non-reciprocal biases (e.g., enforcing interoperability, prohibiting exclusionary platform practices) can reduce the chance that fragile, luck-driven early advantages become entrenched monopolies.
Evidence: Policy inference based on model findings about the necessity of asymmetry for permanence; no empirical policy evaluation is provided in the paper.

Claim: Mechanisms that create non-reciprocal interaction advantages (exclusive contracts, platform APIs favoring incumbents, lock-in effects, asymmetric data access) are necessary strategic levers for converting transient leads into durable market dominance.
Evidence: Policy/strategy implication drawn from the model result that non-reciprocal bias is required for absorbing monopolies; a conceptual inference with no empirical testing in the paper.

Claim: By better controlling tail risk and rare catastrophic harms, RAD can reduce the expected social costs, liability exposure, and insurance premiums associated with high-impact AI failures.
Evidence: Economic argumentation in the paper linking reduced tail risk (from RAD) to lower social costs and liabilities; an extrapolation from method-level safety improvements rather than a direct empirical measurement of economic outcomes.
Claim: The framework formalizes complementarities between AI and managerial/human capital (e.g., exception handling, trust-driven adoption), suggesting empirical work should measure task reallocation rather than simple displacement.
Evidence: Conceptual claim and research agenda recommendations in the paper; no empirical measurement provided.

Claim: Staged, practice-oriented workflows lower upfront adoption costs and implementation risk for SMEs, increasing marginal adoption likelihood when organizational readiness and governance are explicit.
Evidence: Theoretical/economic implication derived from the framework and pilot rationale; not directly validated by large-scale empirical evidence in the paper (asserted implication).

Claim: AI-enabled analytics can increase firm-level decision value and productivity, improving capital allocation, speeding risk mitigation, and raising profitability in affected firms and sectors.
Evidence: Economic implication argued by the paper using theoretical reasoning; no firm-level empirical estimates, sample sizes, or causal identification strategies are reported (the paper suggests methods such as A/B tests or causal inference for future study).

Claim: High accuracy and reproducibility have been demonstrated on narrowly scoped tasks such as image interpretation, lesion measurement, triage ranking, documentation support, and drafting written communication.
Evidence: Synthesized empirical evaluations of CNNs in imaging (diagnosis, lesion measurement, triage) and benchmarking/medical assessment studies of LLMs for documentation and drafting; multiple cited empirical studies and benchmarks are included in the narrative review (no pooled quantitative estimate).

Claim: Effective policy should be comprehensive and sequenced: unlock data (clear ownership, safe-sharing frameworks), provide targeted investment incentives (matching grants, procurement commitments), run human-capital programs (upskilling, industry-university links), and build core infrastructure (sensors, connectivity, local compute).
Evidence: Policy synthesis derived from the institutional analysis and identification of interacting bottlenecks; recommendations based on theoretical best practices rather than causal evaluation.

Claim: Overall economic aim: lowering the hidden costs and power imbalances introduced by opaque AI systems so that data-intensive research remains ethically accountable, competitively efficient, and equitably beneficial across jurisdictions.
Evidence: Authors' stated conclusion and framing of implications for AI economics; a normative goal rather than an empirically tested outcome.

Claim: Policy levers could include harmonized cross-border data governance standards, procurement and funding conditionality for data-sovereignty guarantees, support for public or community-owned infrastructures, mandated disclosures from AI service providers, and subsidies for open-source alternatives and capacity building.
Evidence: Policy prescriptions synthesized from the paper's analysis of problems (opacity, fragmentation, unequal infrastructure); presented as recommended interventions, not empirically evaluated within the study.

Claim: To maintain autonomy and ethical standards, universities and research funders may need to invest in local infrastructure (on-premise compute, vetted open tools), a public good with implications for funding priorities and inequality across countries.
Evidence: Policy recommendation derived from the case study's identification of infrastructural inequalities and limited mitigation options; not empirically tested in the paper.

Claim: Implied policy recommendations: reinforce worker voice via required worker representation in AI impact assessments and protection of collective bargaining around technology use; mandate disclosure and standardized impact reporting of AI systems used for hiring, monitoring, promotion, or termination; and implement targeted, sector- or task-specific enforceable regulations.
Evidence: Normative policy prescriptions derived from the commentary's analysis of governance gaps and risks; not empirically tested within the paper.
Claim: The paper proposes user rights to opt out of nonessential generative-AI integration and to choose environmentally optimized models.
Evidence: Policy design section and candidate legislative amendments recommending consumer opt-out and choice rights.

Claim: The paper proposes mandatory model-level transparency requirements covering inference energy consumption, standardized benchmarks, and disclosure of compute locations.
Evidence: Policy design section: a normative proposal with drafted candidate legislative amendments (the authors' recommendations).

Claim: To align economic growth with equitable outcomes, Indonesia needs binding regulation (data protection, auditing, enforceable accountability), communication-rights-based safeguards, targeted protections for vulnerable groups, inclusive participatory policymaking, and mechanisms (impact assessments, transparency/reporting, independent oversight) that internalize externalities and redistribute benefits more fairly.
Evidence: Normative policy recommendation derived from the paper's discourse analysis, theoretical framing, and identified gaps in current governance instruments; not an empirically tested intervention within the paper.

Claim: Adoption of generative neural-network audiovisual tools is effectively inevitable.
Evidence: Narrative synthesis of technological trends and literature in the review; no original longitudinal adoption model or empirical adoption rates provided (a qualitative projection based on cited trends).

Claim: Policymakers may need to mandate minimum verification standards or standardize audit trails and provenance metadata in safety-critical domains to reduce information asymmetries and monitoring costs.
Evidence: Policy recommendation derived from risk- and externality-focused analysis; no policy impact evaluation or legal analysis presented.

Claim: Cognitive interlocks (e.g., mandatory proof artifacts, enforced testing gates, provenance/audit trails, verification quotas) make the verification burden explicit and non-bypassable, restoring the appropriate burden of proof.
Evidence: Architectural design proposal with illustrative usage scenarios; no implementation, field trials, or quantitative evaluation in the paper.

Claim: The Overton Framework, an architectural model embedding "cognitive interlocks" into development environments, can align throughput with verification by enforcing verification boundaries, restoring system integrity.
Evidence: Framework proposed and described conceptually; includes design principles and example interlocks, but no empirical prototypes, experiments, or effectiveness evaluations are reported.

Claim: Token taxes could slow displacement by increasing the effective cost of automation, buying time for retraining and redistribution.
Evidence: Theoretical claim in the implications section; no model simulations or empirical evidence provided.

Claim: Token taxes offer a new tax base tightly linked to digital value creation by AI, potentially restoring revenue lost to automation.
Evidence: Policy argument in the paper; conceptual reasoning about tax-base alignment and revenue potential; no empirical revenue estimates or calibration provided.
Claim: Token taxes are a practical, enforceable policy instrument for mitigating the major economic risks of AGI (shrinking tax bases, falling living standards, and citizen disempowerment).
Evidence: The author's central thesis, supported by conceptual argumentation, architecture proposals (an audit pipeline), and comparison to alternatives; no empirical validation or calibration.

Claim: Qualified digital endpoints and validated in silico markers create new markets and assets (digital biomarkers, validation services, certified datasets) with potential commercial value.
Evidence: Market and policy implications discussed in the review; a forward-looking argument based on regulatory pathways and observed demand for validation services (speculative, narrative).

Claim: The Reversal Register is an auditable institutional artifact that records, for each decision, the prevailing authority state, the trigger conditions causing transitions, and justificatory explanations, thereby supporting auditability and research.
Evidence: Design specification and instrumentation proposal in the paper; a description of required metadata fields and intended uses. No implemented dataset is presented.

Claim: Firms that build effective orchestration layers and integrate AI across pipelines may capture outsized gains, increasing winner-take-all dynamics and concentration.
Evidence: Authors' argument extrapolated from observed coordination benefits and frictions at Netlight and theory about returns to scale in platformized toolchains; no empirical market-concentration analysis provided.

Claim: Policy and firm responses should emphasize human-in-the-loop governance, training in evaluative/domain skills, data stewardship, and regulatory attention to IP, liability, competition, and robustness standards.
Evidence: Normative recommendations drawn from the review's synthesis of empirical benefits and limitations; based on identified failure modes (bias, hallucination, variable quality) and economic risks (concentration, mismeasurement).

Claim: Policy and regulation should emphasize transparency, auditability, and model-validation standards in finance to reduce systemic risks from misplaced trust or opaque algorithms.
Evidence: Authors' normative recommendation based on empirical identification of risks (misplaced trust, overreliance) from survey, interview, and operational data; the recommendation is prescriptive, not an empirical test within the study.

Claim: Public goods investments (digital infrastructure, interoperable local data ecosystems, and multilingual language technologies) are prerequisites for inclusive economic benefits from AI.
Evidence: Conceptual and policy literature review arguing for infrastructure and public data ecosystems; the paper does not provide original infrastructure impact analysis.

Claim: A culturally grounded responsible-AI governance framework based on Afro-communitarianism (Ubuntu) and stakeholder theory, emphasizing collective well-being and participatory governance, can help align AI deployment with inclusive and sustainable economic outcomes.
Evidence: Theoretical integration and framework development based on normative literature in ethics, Afro-communitarian thought, and stakeholder governance; the framework is conceptual and not empirically validated in this paper.

Claim: Public policy interventions (subsidies, accreditation incentives) may be justified when private investment underprovides broadly beneficial AI skills.
Evidence: Policy recommendation in the paper; argues a theoretical justification for subsidies and accreditation incentives; no empirical policy evaluation is included.
Claim: Embedded auditability and traceability lower the cost of regulatory compliance and enable third-party verification.
Evidence: Argued under regulation and compliance economics: auditable curricula reduce compliance costs and facilitate verification. The paper recommends measuring regulatory compliance costs but provides no empirical cost comparisons.

Claim: The framework can improve the career alignment and employability of learners.
Evidence: Claimed under "Advantages and Implications for AI Economics" (a better match between training and industry AI skill needs; improved placement rates and wage outcomes are suggested). Evidence is proposed as measurable (placement rate, wage outcomes), but no empirical results are presented.

Claim: Better-governed automations can reduce firms' systemic operational risk and may lower insurance premiums or capital charges; insurers and lenders will value documented governance when pricing risk.
Evidence: Hypothesized consequence grounded in risk-transfer logic and a suggested interaction with insurance and lending markets; presented as an implication rather than a demonstrated outcome; no insurer data provided.

Claim: Explainable EEG tools can shift clinician workflows by enabling faster decision-making and reducing the requirement for specialized interpretation, with implications for training, staffing, and productivity.
Evidence: Projected operational impacts discussed as implications of improved explainability; no longitudinal workflow study provided in the reviewed literature.

Claim: Building integrated One Health data platforms and interoperable metadata standards is a priority to enable child-centered AI applications, surveillance, and economic evaluation.
Evidence: Policy recommendation grounded in identified data fragmentation; the authors argue for investment and international cooperation based on the review's assessment of gaps.

Claim: Economic evaluations and AI-enabled allocation algorithms need to internalize cross-sector externalities (e.g., agricultural antibiotic use) and long-term child health and human-capital impacts to prioritize effective interventions.
Evidence: Recommendation based on a synthesis of AMR ecology, economics, and developmental-impact literature; a conceptual argument rather than an empirical demonstration.

Claim: Embedding an explicit, child-centered lens into One Health research, surveillance, governance, and interventions is necessary to protect child health and equity.
Evidence: Policy and normative argument built from the review synthesis; a recommendation rather than an empirically tested intervention, drawing on identified gaps in surveillance, governance, and evidence.

Claim: Policy interventions that encourage or mandate identity disclosure and explainable personalization in commercial chatbots are supported by these findings (to reduce deception risk and perceived manipulation).
Evidence: Interpretive implication based on experimental results showing that transparency and explainable personalization reduce perceived manipulation and increase trust; recommended as a policy implication.

Claim: Research gaps include the need for causal evaluations (RCTs or quasi-experiments) of bundled interventions (training + placement + income support), cross-country comparisons of informality's moderating role, and better data on platform employment dynamics.
Evidence: Research agenda and priorities summarized from the literature review and gap analysis in the paper; a recommendation rather than an empirical finding.

Claim: Empirical work on automation should distinguish task displacement from job displacement, measure platform algorithmic effects on labour demand, and quantify the fallback employment options available to displaced informal workers.
Evidence: Methodological recommendation based on gaps identified in the reviewed literature and limitations of existing studies; no new data collection presented.

Claim: Policy responses should go beyond reskilling to include mechanisms addressing informality and job quality (e.g., portable benefits, minimum standards for platforms, guaranteed work or public employment schemes, wage floors, and training linked to placement).
Evidence: Policy recommendation synthesized from literature on platform labour, social protection, and training program design; a normative prescription rather than an empirically validated intervention within this paper.

Claim: Unchecked shifts toward K_T-dominated production can amplify political risks (rising inequality, fiscal strain) that may fuel populism, protectionism, and demands for renegotiated social contracts.
Evidence: Theoretical political-economy discussion supported by historical analogies and model scenarios linking fiscal stress and distributional change to political-instability risks; qualitative case evidence.

Claim: To make AI a driver of structural change, policy interventions must link AI investment to comprehensive energy subsidy reform and accelerated development of the new and renewable energy sector.
Evidence: Policy recommendation based on an integrated analysis showing that subsidy burdens and import dependence limit AI's macro impact; the proposed linkage is derived from the study's scenario and logic assessment.