Evidence (2480 claims)

Claims by topic:
- Adoption: 5227 claims
- Productivity: 4503 claims
- Governance: 4100 claims
- Human-AI Collaboration: 3062 claims
- Labor Markets: 2480 claims
- Innovation: 2320 claims
- Org Design: 2305 claims
- Skills & Training: 1920 claims
- Inequality: 1311 claims
Evidence Matrix
Claim counts by outcome category and direction of finding; cells marked — contain no claims.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 373 | 105 | 59 | 439 | 984 |
| Governance & Regulation | 366 | 172 | 115 | 55 | 718 |
| Research Productivity | 237 | 95 | 34 | 294 | 664 |
| Organizational Efficiency | 364 | 82 | 62 | 34 | 545 |
| Technology Adoption Rate | 293 | 118 | 66 | 30 | 511 |
| Firm Productivity | 274 | 33 | 68 | 10 | 390 |
| AI Safety & Ethics | 117 | 178 | 44 | 24 | 365 |
| Output Quality | 231 | 61 | 23 | 25 | 340 |
| Market Structure | 107 | 123 | 85 | 14 | 334 |
| Decision Quality | 158 | 68 | 33 | 17 | 279 |
| Fiscal & Macroeconomic | 75 | 52 | 32 | 21 | 187 |
| Employment Level | 70 | 32 | 74 | 8 | 186 |
| Skill Acquisition | 88 | 31 | 38 | 9 | 166 |
| Firm Revenue | 96 | 34 | 22 | — | 152 |
| Innovation Output | 105 | 12 | 21 | 11 | 150 |
| Consumer Welfare | 68 | 29 | 35 | 7 | 139 |
| Regulatory Compliance | 52 | 61 | 13 | 3 | 129 |
| Inequality Measures | 24 | 68 | 31 | 4 | 127 |
| Task Allocation | 71 | 10 | 29 | 6 | 116 |
| Worker Satisfaction | 46 | 38 | 12 | 9 | 105 |
| Error Rate | 42 | 47 | 6 | — | 95 |
| Training Effectiveness | 55 | 12 | 11 | 16 | 94 |
| Task Completion Time | 76 | 5 | 4 | 2 | 87 |
| Wages & Compensation | 46 | 13 | 19 | 5 | 83 |
| Team Performance | 44 | 9 | 15 | 7 | 76 |
| Hiring & Recruitment | 39 | 4 | 6 | 3 | 52 |
| Automation Exposure | 18 | 16 | 9 | 5 | 48 |
| Job Displacement | 5 | 29 | 12 | — | 46 |
| Social Protection | 19 | 8 | 6 | 1 | 34 |
| Developer Productivity | 27 | 2 | 3 | 1 | 33 |
| Worker Turnover | 10 | 12 | — | 3 | 25 |
| Creative Output | 15 | 5 | 3 | 1 | 24 |
| Skill Obsolescence | 3 | 18 | 2 | — | 23 |
| Labor Share of Income | 8 | 4 | 9 | — | 21 |
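For readers reproducing the matrix: a minimal sketch of how such a tally can be built from a flat claims table, assuming hypothetical `outcome` and `direction` fields (the dashboard's actual schema is not shown):

```python
import pandas as pd

# Hypothetical flat claims table; the dashboard's real schema is not shown.
claims = pd.DataFrame({
    "outcome":   ["Firm Productivity", "Error Rate", "Firm Productivity", "Job Displacement"],
    "direction": ["Positive", "Negative", "Mixed", "Negative"],
})

# Tally claims by outcome category and direction, with row/column totals,
# mirroring the evidence matrix above.
matrix = pd.crosstab(claims["outcome"], claims["direction"],
                     margins=True, margins_name="Total")

# Keep the table's column order for whichever directions are present.
cols = [c for c in ["Positive", "Negative", "Mixed", "Null"] if c in matrix.columns]
print(matrix[cols + ["Total"]])
```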
Labor Markets
Each claim below is followed by a note on the evidence supporting it.
Differential adoption across firms (due to modular, scalable designs and data advantages) may create winner‑takes‑most effects and increase market concentration, benefiting early adopters with rich data/integration capabilities.
Market-structure claim supported by economic reasoning about scale and data advantages; no cross-firm empirical adoption study or market concentration time‑series is provided.
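The compounding mechanism behind this claim (more data → more adoption → more data) can be illustrated with a toy simulation; this is not the paper's model, and the `alpha` feedback parameter and all other numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

n_firms, n_periods, alpha = 20, 500, 1.2   # alpha > 1: superlinear returns to data (invented)
data_stock = np.ones(n_firms)              # firms start with equal data advantages

for _ in range(n_periods):
    # Each new workload goes to a firm with probability increasing in its
    # data stock; winning adds data, so the advantage compounds.
    weights = data_stock ** alpha
    winner = rng.choice(n_firms, p=weights / weights.sum())
    data_stock[winner] += 1.0

share = data_stock / data_stock.sum()
hhi = float((share ** 2).sum())            # Herfindahl-Hirschman index
print(f"top-firm share: {share.max():.2f}, HHI: {hhi:.3f}")
```

With superlinear returns (`alpha > 1`), one firm's early lead snowballs into a dominant share, which is the winner-takes-most intuition in miniature.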
Initial investment, integration, and ongoing maintenance/compliance costs can be substantial and affect short-term ROI.
Interviewed administrators and implementation reports citing upfront and recurring costs (integration, model maintenance, compliance); quantitative budget figures not standardized across sites in the paper.
Risk of deskilling or reduced empathy if human roles are overly automated.
Thematic analysis of staff interviews and surveys reporting concerns about loss of practice, reduced patient contact, and potential diminishment of empathetic skills; no longitudinal measures of skill loss presented.
Technical and organizational integration with legacy hospital IT systems is nontrivial.
Implementation reports and interviews describing integration work, time, and resource needs; descriptive accounts of technical and organizational barriers (no universal timelines/costs reported).
Algorithmic bias in NLP models can misclassify complaints from underrepresented groups.
Observations from system classification error analyses (disparities reported by demographic group) and corroborating qualitative concerns from staff and administrators; specific subgroup sample sizes and effect magnitudes not provided.
Data privacy and security risks arise from centralizing complaint text and metadata.
Stakeholder interviews, thematic coding of concerns, and risk assessment commentary based on centralized logs and metadata aggregation; no measured breach incidents reported here.
Feedback effects from physical capital and labor onto AI capital are weak: the estimated physical capital → AI and labor → AI coefficients are small in magnitude and weakly negative.
Estimated interaction coefficients from the 2016–2023 calibration showing small-magnitude, negative feedback terms from physical capital and labor onto AI.
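A sketch of the kind of accumulation equation such a calibration implies, with $A$, $K$, $L$ denoting AI capital, physical capital, and labor; the linear functional form is assumed for illustration, not taken from the paper:

```latex
\dot{A}_t \;=\; g_A A_t \;+\; \phi_K K_t \;+\; \phi_L L_t,
\qquad \phi_K < 0,\quad \phi_L < 0,\quad |\phi_K|,\,|\phi_L| \ll g_A
```

The claim corresponds to estimated $\phi_K$ and $\phi_L$ that are negative but small, so physical capital and labor exert only weak drag on AI capital accumulation.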
Introducing ‘agent capital’ (AI that reduces coordination frictions) compresses coordination costs inside firms (‘coordination compression’).
Definition and central assumption of the paper's formal task-based model; analytical setup assumes agent capital parametrically reduces coordination frictions.
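One minimal parametric form consistent with the stated assumption, where $n$ is the coordination span and $A_g$ is agent capital; the specific functional form is illustrative, not the paper's:

```latex
C(n; A_g) \;=\; \frac{c\, n^{\gamma}}{1 + \theta A_g},
\qquad \gamma > 1,\ \theta > 0
\quad\Longrightarrow\quad
\frac{\partial C}{\partial A_g} \;=\; -\,\frac{c\,\theta\, n^{\gamma}}{(1 + \theta A_g)^{2}} \;<\; 0
```

Coordination costs rise superlinearly in span $n$ but are compressed as agent capital grows, which is the ‘coordination compression’ channel in reduced form.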
Extractive industries often deliver limited local hiring and mainly generate rents rather than broad employment or skill spillovers.
Review of empirical studies and case evidence showing extractive FDI tends toward enclave production with low local hiring and limited upstream/downstream linkages; coverage varies by country and project.
FDI may increase within‑country wage inequality, especially when concentrated in extractive sectors or low‑skill activities.
Cross-study empirical results and theoretical arguments summarized in the review showing wage premia accruing to skilled workers and enclave effects in extractives; underlying studies vary in location, methods, and samples.
FDI may deepen labor market dualism: creating formal, higher‑paying jobs for a minority while many remain in precarious, low‑pay informal work.
Literature synthesis pointing to patterns where foreign investment produces enclave formal jobs while broader labor markets remain informal or precarious; evidence drawn from firm- and sector-level studies cited in the review.
A one standard-deviation increase in AI adoption lowers wages in the middle income quintile by 1.4%.
Panel of 38 OECD countries, 2019–2025; wage outcomes by income quintile using the AI Adoption Index and IV estimation; robustness checks reported.
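A sketch of the quintile-specific IV design the summary describes; the instrument $Z$, controls $X$, and fixed-effects structure are placeholders, since the summary does not specify them:

```latex
\text{first stage: } \mathrm{AI}_{c,t} \;=\; \pi' Z_{c,t} + \delta' X_{c,t} + \mu_c + \tau_t + u_{c,t}
\qquad
\text{second stage: } \ln w_{c,q,t} \;=\; \alpha_q + \beta_q\, \widehat{\mathrm{AI}}_{c,t} + \gamma_q' X_{c,t} + \mu_c + \tau_t + \varepsilon_{c,q,t}
```

With $\mathrm{AI}_{c,t}$ standardized to one-SD units, the claim corresponds to $\hat{\beta}_q \approx -0.014$ for the middle quintile.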
Unresolved liability and regulatory uncertainty increase malpractice risk and insurance costs, leading insurers and providers to favor conservative adoption and continued human-in-the-loop safeguards.
Regulatory/legal analysis and stakeholder behavior models discussed in the review; observed cautious deployment patterns in practice noted in the literature.
Regulatory pathways and approval standards are evolving but are not yet aligned with deployment of high-autonomy clinical systems.
Review of recent policy analyses and regulatory documents showing ongoing updates and gaps between current standards and requirements for high-autonomy AI deployment.
Insufficient regulation increases risks of negative externalities (privacy harms, biased hiring/management) that can reduce labor supply attachment or lower human capital investments.
Theoretical reasoning and synthesis of documented case studies and reports referenced in the commentary; not supported by new causal empirical analysis in the paper.
Absent strong worker voice or mandated impact assessments, AI-driven surveillance, algorithmic management and task reallocation are more likely, increasing risks of deskilling, displacement, and discriminatory outcomes.
Policy synthesis identifying plausible channels from AI system use to worker harms; supported by case-study reports in the symposium but no systematic empirical quantification in this commentary.
Weakening of organized labor and stalled worker-protection legislation raises the probability that AI adoption will increase employer bargaining power, potentially depressing wages and worsening job quality for affected occupations.
Analytic inference from labor economics theory and policy review; commentary does not present causal microdata linking AI adoption to wage or job-quality outcomes.
Export controls may constrain access to advanced models and hardware, affecting productivity gains unevenly across firms and sectors.
Policy analysis of current export control instruments and their potential economic effects; no firm- or sector-level quantitative analysis presented.
A conservative Supreme Court majority increases the risk of rulings that could further constrain organized labor and weaken labor’s power to negotiate AI-related workplace rules.
Legal analysis connecting Supreme Court composition and recent jurisprudence to possible effects on labor law and collective bargaining; predictive inference rather than empirical testing.
The incoming second Trump administration is moving to dismantle many Biden-era worker-protection initiatives (notably rescinding or undercutting the Biden Executive Order intended to hold employers accountable for AI impacts).
Policy/legal analysis referencing recent executive actions and reported rollbacks of Biden-era frameworks; synthesis of documents and news/administrative actions reviewed in the commentary; no original empirical sample.
Regulatory fragmentation increases compliance costs and stifles cross-border scale economies; international coordination and mutual recognition of standards can lower trade costs.
Comparative governance analysis and economic reasoning about cross-border trade and compliance; no cross-country causal estimates provided in the report.
Large incumbents with data/network advantages may entrench market power.
Policy and literature review noting data/network effects, observed tendencies in tech markets; sectoral examples discussed in the report.
Without targeted policy, AI can amplify winner-take-all dynamics (market concentration, superstar firms) and spatial inequalities (urban vs. rural).
Theoretical economic arguments and review of literature on data/network effects and concentration; comparative policy analysis that raises distributional concerns.
There is a persistent gap between policy intent (promises of ethical protection and economic opportunity) and lived experience, producing new forms of social exposure—especially for vulnerable groups.
Synthesis of qualitative findings from documents, ethics guidelines, industry statements, and stakeholder commentary indicating aspirational policy language contrasted with limited enforceable protections; specific lived-experience case data are not provided.
Lack of enforceable data-rights and accountability mechanisms strengthens incumbent platforms’ control over data markets, potentially reducing competition and hindering entry by smaller firms.
Qualitative review of regulatory texts and industry positioning showing limited enforceable data-rights provisions; theoretical market-structure inference without empirical market-share analysis.
Weak or non‑enforceable rules create conditions for negative externalities (data exploitation, discriminatory automation) that markets alone may not correct.
Argumentative synthesis from document analysis and theoretical framing (communication rights, market-failure logic); supported by examples in policy and industry discourse but not by empirical market-level measurement in the paper.
The dominant framing privileges economic imaginaries of competitiveness and development over communication rights, producing regulatory blind spots and reinforcing existing inequalities.
Interpretive analysis using communication-rights theory and SCOT applied to policy and industry discourse; comparison of economic-oriented language versus rights-oriented provisions in reviewed documents.
Regulatory attention typically overlooks vulnerable and marginalized populations (low-wage workers, women, rural communities), whose mobile communication practices and data are disproportionately exposed to harm.
Document-based qualitative analysis identifying patterns of inclusion/exclusion in regulatory texts and public debate; stakeholder commentary reviewed indicates limited consideration of these groups. (Sample count not provided.)
Indonesia’s governance of mobile-AI rests largely on soft‑law, aspirational instruments (guidelines, non‑binding ethics codes), which limits enforceability and accountability.
Qualitative discourse- and document-based analysis of key policy documents, national ethics guidelines, industry statements, and public stakeholder commentary related to mobile-AI in Indonesia. (The paper identifies dominant use of non‑binding instruments; exact number of documents reviewed is not specified.)
Widespread adoption of LLMs without adequate verification increases systemic cybersecurity risks with potential economic spillovers.
Synthesis of security incident case studies and risk analyses revealing vulnerabilities in generated code and potential downstream impacts.
Models lack deep contextual reasoning and may fail on tasks requiring long-term design thinking or deep domain knowledge.
Benchmark failures and user studies in the reviewed literature demonstrating degraded performance on complex architectural/design tasks and domain-specific reasoning problems.
Use of these tools can mask gaps in foundational computational skills among novices.
Pedagogical case studies and assessments indicating reliance on AI can produce superficial solutions and lower demonstrated understanding of core concepts.
Negative externalities from synthetic media (misinformation, reputational harm, verification costs) may justify public interventions such as provenance standards, mandatory labeling, penalties for malicious misuse, and public investment in verification infrastructure.
Policy analysis and normative recommendations based on identified externalities in the reviewed literature; no empirical policy evaluation in paper.
Compliance with IP, privacy and liability regimes will impose costs (monitoring, licensing, disclosure) that may raise barriers for smaller entrants and affect prices and diffusion of generative audiovisual models.
Regulatory and economic literature synthesized in the narrative review; policy/legal case citations included but no new cost estimates provided.
Proliferation of generated content may increase information supply but lower per-item attention and willingness-to-pay, potentially reducing monetization unless intermediaries solve discoverability and trust issues.
Theoretical arguments using attention-economy literature and secondary studies; narrative reasoning without new empirical quantification.
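A reduced-form statement of the attention-budget mechanism, assuming a fixed aggregate attention budget $T$ spread over $N$ items; this formalization is an illustrative assumption, not the paper's:

```latex
a_i \;=\; \frac{T}{N}, \qquad
\mathrm{WTP}_i \;=\; f(a_i),\ f' > 0
\quad\Longrightarrow\quad
\frac{\partial\, \mathrm{WTP}_i}{\partial N} \;<\; 0
```

As generated supply $N$ grows, per-item attention and willingness-to-pay fall unless discovery and trust intermediaries raise $f$.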
Platforms and firms that control model training data and deployment infrastructure will gain strategic advantage, increasing risks of vertical integration and market concentration.
Market-structure and firm-strategy analysis drawn from secondary literature and conceptual arguments in the paper.
Information-quality externalities from misinformation and reduced trust impose social costs that are not internalized by producers, justifying policy interventions such as liability rules or provenance standards.
Theoretical externality reasoning and policy literature reviewed; no social-welfare empirical quantification included in the paper.
Economies of scale, data-driven advantages, and compute costs may concentrate market power in a few platforms or studios, raising entry barriers.
Market-structure reasoning and referenced industry analyses in the literature review; no empirical market-concentration metrics computed in the paper.
Cross-border enforcement difficulties and divergent national rules produce legal fragmentation in regulation and judiciary responses to generative audiovisual AI.
Comparative review of international statutes and judicial approaches included in the paper; qualitative legal analysis rather than empirical cross-jurisdictional enforcement metrics.
Process-stage risks include concentration of capabilities among a few platforms/actors and deficits in control, governance and transparency (e.g., limited explainability and restricted model access).
Policy and market-structure literature reviewed; descriptive evidence of platform concentration cited qualitatively but no original market-share analysis or sample sizes.
Key data challenges in African contexts are measurement error, censoring, selection bias (informal actors absent from official datasets), privacy/ethical concerns, and limited digital trace coverage in some regions.
Methodological critique synthesized from the literature reviewed in the paper.
Accumulated latent defects from unchecked AI outputs create negative externalities across dependent systems, complicating pricing and insurance; liability and cyber insurance markets may need to adapt.
Policy and economics argumentation drawing on externality theory; no actuarial or insurance-market empirical analysis provided.
Measured productivity gains from AI-assisted development may overstate welfare gains if verification costs, defect externalities, and long-run fragility are omitted from accounting.
Economic reasoning and accounting argument; no empirical accounting studies or welfare analyses presented.
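A back-of-envelope version of the accounting argument; all numbers are invented for illustration, as the paper presents no empirical accounting:

```python
# Invented numbers for illustration; the paper presents no empirical accounting.
measured_gain     = 0.20   # measured throughput gain from AI assistance
verification_cost = 0.08   # added review/testing effort the gain induces
defect_external   = 0.07   # expected long-run cost of latent defects/fragility

net_welfare_gain = measured_gain - verification_cost - defect_external
print(f"measured gain: {measured_gain:.0%}; net of omitted costs: {net_welfare_gain:.0%}")
```

A headline 20% gain shrinks to 5% once verification effort and expected defect externalities are netted out; the point is the sign of the correction, not these particular magnitudes.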
The harm from latent defects is diffuse and slow-moving, making it easy for decision-makers to underweight these risks in adoption choices.
Descriptive argument drawing on behavioral economics concepts (discounting, salience); no empirical decision-making data included.
Small, unverified changes accumulate over time into system-level fragility, hidden bugs, and security vulnerabilities (latent risk accumulation).
Causal reasoning and illustrative examples; no longitudinal empirical measurement of defect accumulation presented.
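An illustrative tally of how small per-change risks compound; every rate below is invented:

```python
# Illustrative only: every rate below is invented, not measured.
changes_per_week  = 200     # AI-assisted changes merged per week
p_latent          = 0.02    # chance a change carries a latent defect
review_catch_rate = 0.60    # share of latent defects caught by review/tests

residual_per_week = changes_per_week * p_latent * (1 - review_catch_rate)

for weeks in (4, 26, 52):
    print(f"after {weeks:2d} weeks: ~{residual_per_week * weeks:.0f} undetected latent defects")
```

Even with review catching most latent defects, the residual accumulates roughly linearly, reaching dozens of undetected defects within a year at these rates.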
AI-assisted code generation produces a throughput asymmetry: generation capacity rises much faster than human or automated verification capacity.
Synthesis of conceptual arguments and illustrative scenarios; no quantitative empirical evidence or sample-based analysis included in the paper.
Verification (human review, testing, security analysis) does not scale at the same rate as AI-assisted generation and becomes the bottleneck.
Mechanism reasoning and qualitative argumentation; illustrative examples showing mismatch between generation and verification capacity. No empirical scaling measurements provided.
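A minimal sketch of the bottleneck dynamic, assuming (purely for illustration) that generation capacity compounds at 30% per period and verification at 5%:

```python
# Invented growth rates for illustration: generation compounds faster than
# verification, so the unreviewed backlog grows without bound.
gen, ver, backlog = 100.0, 100.0, 0.0   # units of code per period

for period in range(1, 11):
    gen *= 1.30      # generation capacity: +30% per period
    ver *= 1.05      # verification capacity: +5% per period
    backlog += max(gen - ver, 0.0)
    print(f"period {period:2d}: generated {gen:7.0f}, verified {ver:6.0f}, backlog {backlog:8.0f}")
```

The gap, and hence the unverified backlog, grows geometrically; verification becomes the binding constraint whenever generation compounds faster, regardless of the exact rates.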
Differences in access to AI tools and digital infrastructure could exacerbate global and within-country inequalities in research capacity and outputs.
Stated in the paper's ‘Distributional and Competitive Effects’ discussion, motivated by observed heterogeneity in infrastructure and access; the abstract does not provide empirical heterogeneity estimates or samples.
Institutions that adopt and integrate AI effectively may gain disproportionate advantages, increasing stratification in academic prestige and funding.
Presented as a distributional/competitive implication. Based on theory and possibly institutional case studies; no causal evidence or quantitative estimates provided in the abstract.
Overreliance on generative AI risks eroding worker critical thinking and loss of tacit expertise.
Conceptual arguments supported by observational reports and theoretical concerns in the literature synthesis; limited empirical evidence cited.