Evidence (2480 claims)
Claims by category:

| Category | Claims |
|---|---|
| Adoption | 5227 |
| Productivity | 4503 |
| Governance | 4100 |
| Human-AI Collaboration | 3062 |
| Labor Markets | 2480 |
| Innovation | 2320 |
| Org Design | 2305 |
| Skills & Training | 1920 |
| Inequality | 1311 |
Evidence Matrix
Claim counts by outcome category and direction of finding.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 373 | 105 | 59 | 439 | 984 |
| Governance & Regulation | 366 | 172 | 115 | 55 | 718 |
| Research Productivity | 237 | 95 | 34 | 294 | 664 |
| Organizational Efficiency | 364 | 82 | 62 | 34 | 545 |
| Technology Adoption Rate | 293 | 118 | 66 | 30 | 511 |
| Firm Productivity | 274 | 33 | 68 | 10 | 390 |
| AI Safety & Ethics | 117 | 178 | 44 | 24 | 365 |
| Output Quality | 231 | 61 | 23 | 25 | 340 |
| Market Structure | 107 | 123 | 85 | 14 | 334 |
| Decision Quality | 158 | 68 | 33 | 17 | 279 |
| Fiscal & Macroeconomic | 75 | 52 | 32 | 21 | 187 |
| Employment Level | 70 | 32 | 74 | 8 | 186 |
| Skill Acquisition | 88 | 31 | 38 | 9 | 166 |
| Firm Revenue | 96 | 34 | 22 | — | 152 |
| Innovation Output | 105 | 12 | 21 | 11 | 150 |
| Consumer Welfare | 68 | 29 | 35 | 7 | 139 |
| Regulatory Compliance | 52 | 61 | 13 | 3 | 129 |
| Inequality Measures | 24 | 68 | 31 | 4 | 127 |
| Task Allocation | 71 | 10 | 29 | 6 | 116 |
| Worker Satisfaction | 46 | 38 | 12 | 9 | 105 |
| Error Rate | 42 | 47 | 6 | — | 95 |
| Training Effectiveness | 55 | 12 | 11 | 16 | 94 |
| Task Completion Time | 76 | 5 | 4 | 2 | 87 |
| Wages & Compensation | 46 | 13 | 19 | 5 | 83 |
| Team Performance | 44 | 9 | 15 | 7 | 76 |
| Hiring & Recruitment | 39 | 4 | 6 | 3 | 52 |
| Automation Exposure | 18 | 16 | 9 | 5 | 48 |
| Job Displacement | 5 | 29 | 12 | — | 46 |
| Social Protection | 19 | 8 | 6 | 1 | 34 |
| Developer Productivity | 27 | 2 | 3 | 1 | 33 |
| Worker Turnover | 10 | 12 | — | 3 | 25 |
| Creative Output | 15 | 5 | 3 | 1 | 24 |
| Skill Obsolescence | 3 | 18 | 2 | — | 23 |
| Labor Share of Income | 8 | 4 | 9 | — | 21 |
Labor Markets
Claim: Security vulnerabilities and IP leakage create negative externalities; absent mechanisms to internalize them, social costs (breaches, legal disputes) may rise.
Evidence: Security analyses, documented incidents, and economic externality reasoning synthesized from the literature; empirical quantification of social cost is limited.

Claim: Generated code may incidentally reproduce copyrighted or licensed snippets from training data.
Evidence: Analyses detecting verbatim or near-verbatim reproductions of licensed or copyrighted code in model outputs in selected tests and audits; the evidence is heterogeneous and depends on prompts, models, and training data.
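A minimal sketch of the kind of overlap check such audits rely on, assuming access to a corpus of licensed snippets to compare against; the window size, threshold, and sample strings below are illustrative, not values from any cited study:

```python
# Minimal sketch of a near-verbatim reproduction check; the window size
# and flagging threshold are illustrative assumptions, not audit standards.

def ngrams(tokens, n=8):
    """All n-token windows in a token sequence."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def overlap_ratio(generated: str, licensed: str, n: int = 8) -> float:
    """Fraction of the generated code's n-grams that also occur in a
    licensed snippet; a high ratio suggests near-verbatim reproduction."""
    gen = ngrams(generated.split(), n)
    return len(gen & ngrams(licensed.split(), n)) / len(gen) if gen else 0.0

licensed = "for i in range(n // 2): a[i], a[n-i-1] = a[n-i-1], a[i]"  # hypothetical licensed snippet
output = "for i in range(n // 2): a[i], a[n-i-1] = a[n-i-1], a[i]"    # hypothetical model output

if overlap_ratio(output, licensed, n=4) > 0.5:  # 0.5 is an assumed threshold
    print("possible near-verbatim reproduction; flag for license review")
```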
Claim: Outputs often lack deep, project-level contextual reasoning (e.g., design tradeoffs, architecture constraints).
Evidence: Qualitative failure-mode analyses, user studies, and benchmark tasks showing limitations in system-level reasoning and context-aware design decisions; evidence comes from short-horizon lab studies and case studies.

Claim: There is a risk of shallow learning if learners over-rely on AI outputs without understanding fundamentals.
Evidence: Educational studies and observational analyses indicating reduced engagement with underlying concepts for some learners using AI assistance, plus qualitative reports from instructors; studies are often short-term.

Claim: There is a significant political-economy risk that dominant states or firms (an "AI superpower" veto) could block or undermine coordination on token taxes.
Evidence: Political-economy discussion identifying veto risks and possible deterrent mechanisms; conceptual argumentation without empirical probability estimates.

Claim: FLOP taxes face measurement, enforceability, and leakage challenges, and they tax inputs (compute) rather than where value is realized.
Evidence: Comparative critique presented in the paper; conceptual analysis without empirical measurement of FLOP-tax implementations.

Claim: Existing extrapolation-based projection systems understate AI's nonlinear, spillover, and augmentation effects and miss differential impacts across occupations, industries, regions, and demographic groups.
Evidence: Theoretical argument and literature-based reasoning in the paper; no quantitative demonstration comparing extrapolation systems to the proposed approach.

Claim: Traditional BLS projection methods are insufficient for forecasting labor market changes driven by rapid AI adoption.
Evidence: Conceptual critique and argumentation in the paper; no empirical evaluation or comparative forecast error statistics provided.

Claim: Rapid post-2020 advances in AI (LLMs and multimodal models) have already rendered some pre-2020 profession-level conclusions obsolete by 2025.
Evidence: Argument based on the observed acceleration in AI capabilities after 2020 (LLMs, multimodal systems) discussed in the paper; the evidence is a temporal comparison of current capabilities against the applicability of older exposure indices, not an empirical re-test of all prior predictions.

Claim: Productivity gains from deploying agentic AI may be overstated if alignment costs, monitoring overhead, and coordination inefficiencies are ignored.
Evidence: Conceptual economic accounting argument; recommends new accounting categories and empirical studies to quantify these factors.
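The accounting point can be made concrete with a toy calculation; every figure below is an assumption for illustration, not an estimate from the paper:

```python
# Toy net-productivity accounting for an agentic AI deployment.
# All dollar figures are illustrative assumptions.
gross_gain        = 500_000  # headline annual output gain ($)
alignment_cost    = 120_000  # spec work, evals, red-teaming ($/yr)
monitoring_cost   =  90_000  # human oversight, logging, review ($/yr)
coordination_loss =  60_000  # rework from agent-agent/agent-human friction ($/yr)

net_gain = gross_gain - alignment_cost - monitoring_cost - coordination_loss
print(f"gross gain: ${gross_gain:,}; net gain: ${net_gain:,} "
      f"({net_gain / gross_gain:.0%} of the headline figure)")
```

Under these assumed numbers, less than half of the headline gain survives, which is the overstatement the claim warns about.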
Claim: Agentic systems generate tail risks and endogenous systemic correlations (multiple systems converging on similar failure modes), creating new insurability challenges.
Evidence: Theoretical risk analysis and analogy to the systemic-risk literature; implications for insurance markets proposed but not empirically tested.

Claim: Coordination and control mechanisms (hierarchies, protocols, monitoring) face scalability and specification problems when agents generate unforeseen actions.
Evidence: Theoretical analysis and examples from multi-agent and organizational theory; no empirical measurement included.

Claim: Human cognitive learning processes (calibration, error-correction) may misalign with agentic AIs because humans and AIs learn from different signals and on different horizons.
Evidence: Conceptual argument supported by cross-disciplinary literature synthesis; empirical tests are proposed but not conducted in the paper.

Claim: Relational interaction mechanisms (trust, norms, mutual adjustment) can break down when AI objectives diverge or are opaque, reducing effective teaming.
Evidence: Argument drawing on the human factors and HAT literature; no new experimental data presented.

Claim: Agreement on bounded outputs (specifications, short-term goals) is insufficient for maintaining alignment with agentic AI.
Evidence: Theoretical critique of specification-based alignment approaches; literature on the limits of bounded specifications applied to open-ended systems.

Claim: Agentic AI undermines the key assumption that shared awareness will reliably stabilize coordinated action over time.
Evidence: Theoretical argument showing mismatches in representation, timescales, and learning dynamics between humans and agentic AIs; drawn from literature synthesis rather than empirical tests.

Claim: Under agentic conditions, alignment cannot be treated as a one-time agreement over bounded outputs; it must be continuously sustained as plans and priorities evolve.
Evidence: Conceptual argument and modeling in the paper; literature synthesis highlighting the limits of specification-based alignment approaches; no empirical validation presented.

Claim: Agentic AI creates a new kind of structural uncertainty for human–AI teaming (HAT).
Evidence: Theoretical/conceptual synthesis across the literature on HAT, Team Situation Awareness (Team SA), human factors, multi-agent systems, and AI alignment; no new empirical data.

Claim: Firms need complementary investments (data pipelines, monitoring tools, feedback loops, human oversight systems), which materially affect the economics of adoption.
Evidence: Industry case studies and practitioner reports synthesized in the review describing the necessary complementary investments; no quantified investment sample or ROI analysis provided here.

Claim: Regulatory attention is likely to focus on transparency, liability for factual errors, data privacy, and nondiscrimination; compliance and auditing will add to adoption costs.
Evidence: Policy and regulatory analyses aggregated in the review and references to ongoing regulatory discussions; no primary regulatory impact study conducted in this paper.

Claim: Generative AI currently lacks the genuine empathy and relational capabilities necessary for high-stakes or sensitive interactions.
Evidence: Conceptual analyses and practitioner case examples aggregated in the review; limited direct quantitative measurement cited in this brief review.

Claim: Generative models exhibit contextual misunderstandings and cannot reliably infer nuanced customer intent.
Evidence: Synthesis of empirical studies and practitioner observations documenting misinterpretation and intent-detection failures; no new testing reported in this review.

Claim: There is substitution risk: routine ideation and drafting tasks may be automated, altering task-level labor demand and wage structure.
Evidence: Task-automation literature and empirical studies of LLMs performing routine drafting and ideation tasks, summarized in the review; no long-run labor-market causality established in the paper.

Claim: Generative AI lacks reliable situational judgment on ambiguous problems and ethical trade-offs, making it insufficient for autonomous decision-making in such contexts.
Evidence: Case examples and experimental studies cited in the synthesis showing inconsistent or inappropriate responses to ambiguous or ethical scenarios; no large-scale causal evidence provided.

Claim: LLMs are prone to bias, mediocrity, and factual or logical errors when domain-specific context or experiential knowledge is absent.
Evidence: Review of empirical evaluations documenting biased outputs, superficial or mediocre suggestions, and factual errors in open-ended tasks and domain-specific prompts; evidence comes from multiple short-term studies and applied examples.

Claim: LLMs are predominantly recombinative: they tend to rework and recombine existing material rather than produce deeply novel insights.
Evidence: Analytical synthesis of output analyses and creativity assessments from multiple empirical studies demonstrating frequent recombination of existing concepts and lower rates of highly original novelty; studies and measures vary.

Claim: The proliferation of low-quality or biased AI-generated ideas creates externalities: increased filtering and reputational costs for firms, and risks of poor product designs, ethical lapses, or regulatory violations if evaluation is insufficient.
Evidence: Case studies and qualitative reports documenting filtering burdens and instances of biased or misleading outputs; theoretical reasoning about reputational and regulatory risks; direct quantification of these externalities is limited.

Claim: Standard productivity metrics (e.g., TFP) may undercount the value of ideation and creative augmentation provided by generative AI, making attribution between human and AI contributions difficult.
Evidence: Methodological discussion in the review, supported by heterogeneity in outcome measures across studies and challenges in measuring implemented idea quality and long-run impacts.
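The undercounting concern follows from how TFP is measured. In standard growth accounting, TFP growth is the Solow residual:

$$\Delta \ln \mathrm{TFP} \;=\; \Delta \ln Y \;-\; \alpha\,\Delta \ln K \;-\; (1-\alpha)\,\Delta \ln L$$

where $Y$ is measured output, $K$ capital, $L$ labor, and $\alpha$ the capital share. Ideation or quality gains that never appear in measured $Y$ therefore never enter the residual, and nothing in the identity separates the human from the AI contribution.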
Claim: Generative models exhibit recombination bias: they tend to remix existing patterns rather than produce deeply original, paradigm-shifting insights.
Evidence: Synthesis of output analyses across studies showing frequent recombination of known patterns and limited evidence of wholly novel, paradigm-changing ideas; the claim rests on qualitative and comparative analyses in the reviewed literature.

Claim: Integration complexity (data access, context continuity, privacy/security, workflow alignment) raises implementation costs and time-to-value.
Evidence: Deployment case studies and vendor reports documenting engineering effort, data plumbing, compliance work, and multi-month integration timelines; no aggregated cost meta-analysis provided.

Claim: Lack of genuine empathy and emotional intelligence undermines performance on complex or emotionally charged interactions.
Evidence: Qualitative assessments and noisy measurement from pilot studies and customer feedback in complex cases; limited experimental validation and heterogeneous metrics.

Claim: Market dominance by global platforms can stifle local entrants and distort competition; policies should address market power and data monopolies.
Evidence: Review of the platform-economics and competition-policy literature; policy argumentation rather than new empirical competition analysis in this paper.

Claim: If local data ownership, capacity, and governance are weak, economic gains from AI risk accruing to foreign firms and exacerbating income and wealth concentration.
Evidence: Conceptual synthesis referencing empirical studies on platform rents and data monetization; no original economic distribution analysis presented.

Claim: AI and automation can displace labour, particularly in routine tasks, heightening the need for retraining, active labour policies, and social protection.
Evidence: Review of the literature on automation and labour markets, combined with normative inference for African contexts; no primary labour-market data presented.

Claim: AI adoption raises a risk of digital colonialism: foreign control of data, platforms, and value capture may divert economic gains away from local actors.
Evidence: Conceptual analysis drawing on policy documents and the empirical literature on data flows, platform economics, and international investment; no original quantitative measurement in this paper.

Claim: Over-standardisation of curricula can create mismatches between certified competencies and firm-specific needs.
Evidence: Stated under Risks: the paper warns that overly standardised curricula may not fit firm-specific requirements. This is a conceptual caution, unsupported by within-paper empirical comparison.

Claim: High fixed costs may concentrate training capacity among a few providers, risking reduced competition.
Evidence: Listed under Risks to Watch: the paper warns that high fixed costs could concentrate capacity. This is a theoretical market-concentration risk; no empirical market analysis is provided.

Claim: Upfront and maintenance costs are substantial; economic evaluation should compare these costs to downstream benefits such as placement rates and productivity gains.
Evidence: The paper recommends economic evaluation and lists cost-per-curriculum and other cost metrics; this is presented as advice rather than results, with no empirical cost-benefit data provided.

Claim: Complexity and lock-in to specific standards may raise barriers to innovation and increase switching costs.
Evidence: Discussed under Regulation and compliance economics and Risks: the paper claims that standardisation and embedded processes could produce vendor/standard lock-in. This is a theoretical risk flagged by the authors, not supported by empirical data in the paper.

Claim: Upfront governance costs (policy, tooling, staff) become a key part of adoption cost and affect ROI calculations and payback periods for automation investments.
Evidence: Economic reasoning and implications discussed in the paper; no empirical cost data provided. The recommendation rests on practitioner experience and theoretical cost accounting.
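How governance costs shift a payback calculation can be shown with a toy example; all numbers are assumptions, not figures from the paper:

```python
# Toy payback-period calculation for an automation investment,
# with and without upfront governance costs. All numbers are assumed.
build_cost      = 300_000  # automation build ($)
governance_cost = 150_000  # policy work, tooling, oversight staffing ($)
monthly_benefit =  25_000  # net operating savings ($/month)

payback_without = build_cost / monthly_benefit
payback_with    = (build_cost + governance_cost) / monthly_benefit
print(f"payback: {payback_without:.0f} months ignoring governance, "
      f"{payback_with:.0f} months including it")
```

In this sketch the payback period stretches from 12 to 18 months, which is why the paper treats governance as a first-class line item in adoption ROI.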
Claim: Traditional automation governance is often ad hoc, underestimates security and compliance risks, and does not scale safely for mission-critical enterprise systems.
Evidence: Synthesis of industry best practices and practitioner-sourced lessons (qualitative observations and case illustrations); no systematic survey or quantitative incidence rates provided.

Claim: Prompt fraud reduces the marginal cost of producing convincing fraudulent artifacts, which may increase fraud frequency and expected losses absent mitigations.
Evidence: Economic reasoning and conceptual modeling of incentives; no empirical estimates of frequency or losses included.

Claim: The lack of prompt provenance, versioning, and validation practices increases organizational exposure to prompt fraud.
Evidence: Conceptual analysis and recommended controls (provenance/versioning) drawn from audit-framework comparisons and threat modeling.

Claim: Many workflows lack sufficient logging and traceability of prompts, responses, and model versions, creating a control weakness for detecting prompt fraud.
Evidence: Observations from the literature/regulatory review and the paper's threat/control mapping; asserted as a common operational gap (no systematic measurement).
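A sketch of the kind of audit record the provenance and logging controls in the previous two entries call for; the schema and field names are illustrative assumptions, not a standard:

```python
# Illustrative prompt audit record for traceability of prompts, responses,
# and model versions; field names and values are hypothetical.
from dataclasses import dataclass, field
from datetime import datetime, timezone
import hashlib

def sha256(text: str) -> str:
    """Content hash so prompts/responses can be verified without storing PII in the log."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

@dataclass(frozen=True)
class PromptAuditRecord:
    user_id: str              # authenticated principal who issued the prompt
    model_version: str        # exact model identifier used for the call
    prompt_hash: str          # hash of the full rendered prompt
    response_hash: str        # hash of the model response
    prompt_template_rev: str  # version of the approved prompt template
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

record = PromptAuditRecord(
    user_id="analyst-042",                    # hypothetical
    model_version="vendor-model-2025-01",     # hypothetical
    prompt_hash=sha256("...prompt text..."),
    response_hash=sha256("...response text..."),
    prompt_template_rev="invoice-summary@v3",  # hypothetical
)
print(record)
```

Appending such records to a tamper-evident store would give reviewers the trace the paper finds missing in many workflows.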
Claim: Shadow AI (unsanctioned, decentralized use of GenAI tools) amplifies prompt-fraud risk by bypassing central controls and audit trails.
Evidence: Conceptual analysis and organizational risk reasoning; references to common practices of unsanctioned tool use (no empirical prevalence data).

Claim: External actors can commit prompt fraud via customer-facing systems or social-engineering prompt chains.
Evidence: Conceptual threat scenarios and mapping of attack surfaces (customer-facing interfaces, input channels); illustrative examples provided.

Claim: Internal actors manipulating prompts within authorized AI workflows are a realistic and important threat vector for prompt fraud.
Evidence: Threat modeling and scenario-based analysis highlighting insiders with authorized access who can craft prompts.

Claim: Prompt fraud can defeat controls that rely on plausibility, standard formatting, or human review that trusts model-like language.
Evidence: Threat mapping and the literature on automation bias; illustrative vignettes showing how machine-like outputs mimic authoritative formats.

Claim: Prompt fraud lowers the entry cost of producing convincing fraudulent artifacts, increasing the ease with which attackers can create plausible forgeries.
Evidence: Economic reasoning and conceptual analysis based on GenAI behavior and illustrative scenarios (no empirical cost or frequency data).

Claim: Prompt fraud, the intentional manipulation of natural-language prompts to cause generative AI systems to produce misleading, fabricated, or deceptive artifacts that bypass internal controls, constitutes a novel, low-cost fraud vector that traditional IT- and process-focused controls are ill-equipped to detect or prevent.
Evidence: Conceptual analysis and threat modeling grounded in the literature/regulatory review and illustrative vignettes; no systematic empirical incidence data provided.