Evidence (1286 claims)

Claim counts by category:

- Adoption: 5126 claims
- Productivity: 4409 claims
- Governance: 4049 claims
- Human-AI Collaboration: 2954 claims
- Labor Markets: 2432 claims
- Org Design: 2273 claims
- Innovation: 2215 claims
- Skills & Training: 1902 claims
- Inequality: 1286 claims
Evidence Matrix
Claim counts by outcome category and direction of finding.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 369 | 105 | 58 | 432 | 972 |
| Governance & Regulation | 365 | 171 | 113 | 54 | 713 |
| Research Productivity | 229 | 95 | 33 | 294 | 655 |
| Organizational Efficiency | 354 | 82 | 58 | 34 | 531 |
| Technology Adoption Rate | 277 | 115 | 63 | 27 | 486 |
| Firm Productivity | 273 | 33 | 68 | 10 | 389 |
| AI Safety & Ethics | 112 | 177 | 43 | 24 | 358 |
| Output Quality | 228 | 61 | 23 | 25 | 337 |
| Market Structure | 105 | 118 | 81 | 14 | 323 |
| Decision Quality | 154 | 68 | 33 | 17 | 275 |
| Employment Level | 68 | 32 | 74 | 8 | 184 |
| Fiscal & Macroeconomic | 74 | 52 | 32 | 21 | 183 |
| Skill Acquisition | 85 | 31 | 38 | 9 | 163 |
| Firm Revenue | 96 | 30 | 22 | — | 148 |
| Innovation Output | 100 | 11 | 20 | 11 | 143 |
| Consumer Welfare | 66 | 29 | 35 | 7 | 137 |
| Regulatory Compliance | 51 | 61 | 13 | 3 | 128 |
| Inequality Measures | 24 | 66 | 31 | 4 | 125 |
| Task Allocation | 64 | 6 | 28 | 6 | 104 |
| Error Rate | 42 | 47 | 6 | — | 95 |
| Training Effectiveness | 55 | 12 | 10 | 16 | 93 |
| Worker Satisfaction | 42 | 32 | 11 | 6 | 91 |
| Task Completion Time | 71 | 5 | 3 | 1 | 80 |
| Wages & Compensation | 38 | 13 | 19 | 4 | 74 |
| Team Performance | 41 | 8 | 15 | 7 | 72 |
| Hiring & Recruitment | 39 | 4 | 6 | 3 | 52 |
| Automation Exposure | 17 | 15 | 9 | 5 | 46 |
| Job Displacement | 5 | 28 | 12 | — | 45 |
| Social Protection | 18 | 8 | 6 | 1 | 33 |
| Developer Productivity | 25 | 1 | 2 | 1 | 29 |
| Worker Turnover | 10 | 12 | — | 3 | 25 |
| Creative Output | 15 | 5 | 3 | 1 | 24 |
| Skill Obsolescence | 3 | 18 | 2 | — | 23 |
| Labor Share of Income | 7 | 4 | 9 | — | 20 |
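The direction counts above can be turned into per-outcome shares for comparison across outcomes. A minimal Python sketch, using a few rows transcribed from the matrix (the `positive_share` helper and the row selection are illustrative, not part of the source data; "—" cells are treated as 0, and shares use each row's stated Total):

```python
# Share of positive-direction claims per outcome, from the evidence matrix above.
# Tuples are (Positive, Negative, Mixed, Null, Total); "—" entries transcribed as 0.
rows = {
    "Firm Productivity":   (273, 33, 68, 10, 389),
    "AI Safety & Ethics":  (112, 177, 43, 24, 358),
    "Inequality Measures": (24, 66, 31, 4, 125),
    "Job Displacement":    (5, 28, 12, 0, 45),
}

def positive_share(pos, neg, mixed, null, total):
    """Fraction of claims with a positive finding, relative to the stated row total."""
    return pos / total

for outcome, counts in rows.items():
    print(f"{outcome}: {positive_share(*counts):.1%} positive")
```

The same pattern extends to negative or mixed shares by swapping the numerator; using the stated Total (rather than re-summing the four direction columns) keeps the shares consistent with the table as published.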
Inequality
AI tools embedded in everyday research infrastructures intensify — rather than reduce — ethical accountability burdens: they constrain researcher autonomy and undermine data sovereignty, especially in cross‑national settings where legal protections are fragmented or weaker.
Qualitative case study centered on environmental science research in Chile that uses GDPR as a normative framework; methods reported include interviews, observation, and mapping of data flows and third‑party dependencies (sample sizes not reported).
Insufficient regulation increases risks of negative externalities (privacy harms, biased hiring and management) that can reduce labor-force attachment or lower human-capital investment.
Theoretical reasoning and synthesis of documented case studies and reports referenced in the commentary; not supported by new causal empirical analysis in the paper.
Absent strong worker voice or mandated impact assessments, AI-driven surveillance, algorithmic management and task reallocation are more likely, increasing risks of deskilling, displacement, and discriminatory outcomes.
Policy synthesis identifying plausible channels from AI system use to worker harms; supported by case-study reports in the symposium but no systematic empirical quantification in this commentary.
Weakening of organized labor and stalled worker-protection legislation raises the probability that AI adoption will increase employer bargaining power, potentially depressing wages and worsening job quality for affected occupations.
Analytic inference from labor economics theory and policy review; commentary does not present causal microdata linking AI adoption to wage or job-quality outcomes.
Export controls may constrain access to advanced models and hardware, affecting productivity gains unevenly across firms and sectors.
Policy analysis of current export control instruments and their potential economic effects; no firm- or sector-level quantitative analysis presented.
A conservative Supreme Court majority increases the risk of rulings that could further constrain organized labor and weaken labor’s power to negotiate AI-related workplace rules.
Legal analysis connecting Supreme Court composition and recent jurisprudence to possible effects on labor law and collective bargaining; predictive inference rather than empirical testing.
The second Trump administration is dismantling many Biden-era worker-protection initiatives (notably rescinding or undercutting the Biden executive order intended to hold employers accountable for AI impacts).
Policy/legal analysis referencing recent executive actions and reported rollbacks of Biden-era frameworks; synthesis of documents and news/administrative actions reviewed in the commentary; no original empirical sample.
Regulatory fragmentation increases compliance costs and stifles cross-border scale economies; international coordination and mutual recognition of standards can lower trade costs.
Comparative governance analysis and economic reasoning about cross-border trade and compliance; no cross-country causal estimates provided in the report.
Large incumbents with data/network advantages may entrench market power.
Policy and literature review noting data/network effects, observed tendencies in tech markets; sectoral examples discussed in the report.
Without targeted policy, AI can amplify winner-take-all dynamics (market concentration, superstar firms) and spatial inequalities (urban vs. rural).
Theoretical economic arguments and review of literature on data/network effects and concentration; comparative policy analysis that raises distributional concerns.
There is a persistent gap between policy intent (promises of ethical protection and economic opportunity) and lived experience, producing new forms of social exposure—especially for vulnerable groups.
Synthesis of qualitative findings from documents, ethics guidelines, industry statements, and stakeholder commentary indicating aspirational policy language contrasted with limited enforceable protections; specific lived-experience case data are not provided.
Lack of enforceable data-rights and accountability mechanisms strengthens incumbent platforms’ control over data markets, potentially reducing competition and hindering entry by smaller firms.
Qualitative review of regulatory texts and industry positioning showing limited enforceable data-rights provisions; theoretical market-structure inference without empirical market-share analysis.
Weak or non‑enforceable rules create conditions for negative externalities (data exploitation, discriminatory automation) that markets alone may not correct.
Argumentative synthesis from document analysis and theoretical framing (communication rights, market-failure logic); supported by examples in policy and industry discourse but not by empirical market-level measurement in the paper.
The dominant framing privileges economic imaginaries of competitiveness and development over communication rights, producing regulatory blind spots and reinforcing existing inequalities.
Interpretive analysis using communication-rights theory and SCOT applied to policy and industry discourse; comparison of economic-oriented language versus rights-oriented provisions in reviewed documents.
Regulatory attention typically overlooks vulnerable and marginalized populations (low-wage workers, women, rural communities), whose mobile communication practices and data are disproportionately exposed to harm.
Document-based qualitative analysis identifying patterns of inclusion/exclusion in regulatory texts and public debate; stakeholder commentary reviewed indicates limited consideration of these groups. (Sample count not provided.)
Indonesia’s governance of mobile-AI rests largely on soft‑law, aspirational instruments (guidelines, non‑binding ethics codes), which limits enforceability and accountability.
Qualitative discourse- and document-based analysis of key policy documents, national ethics guidelines, industry statements, and public stakeholder commentary related to mobile-AI in Indonesia. (The paper identifies dominant use of non‑binding instruments; exact number of documents reviewed is not specified.)
There is evidence of problematic patterns in automated decision appeals and workflow interactions when AI is integrated into clinical processes.
Case studies, deployment reports, and observational analyses cited in the synthesis that document increased appeals, workflow friction, or unexpected interactions caused by automation.
Failing to retrain health workers for AI will produce structural labor-market mismatches, slow adoption, and reduce realized economic benefits.
Labor-market analysis and workforce readiness findings from the narrative synthesis and Delphi inputs; argument is inferential based on observed skill gaps and adoption barriers in the reviewed literature.
Indonesia risks technological dependency on foreign vendors if domestic capability, data governance, and procurement are not strengthened.
Market and policy assessment from the review, including procurement analyses and discussion in supplementary national reports and Delphi studies; based on observed market structures and procurement practices identified in the literature.
Approximately 58.7% of the relevant Indonesian health workforce lacks the AI competence or literacy needed for safe, scalable adoption.
Workforce readiness estimate derived from reviewed workforce assessments, Delphi consensus studies, and national reports included in the narrative synthesis; the summary does not specify sample frames or exact survey instruments that produced the 58.7% figure.
Indonesia’s AI healthcare maturity score is approximately 52/100, trailing regional peers (example comparators: Singapore ≈ 92, Malaysia ≈ 78).
Benchmarking performed in the review against regional maturity catalogues and international standards (EU AI Act, Singapore, Australia); maturity scoring method referenced in the paper but detailed scoring rubric and underlying metrics not fully reproduced in the summary.
Data‑driven agritech platforms exhibit network effects and potential for market power, implying a policy need for data portability and interoperability to preserve competition.
Economic reasoning, policy reports, and case study examples summarized in the review; the claim is grounded in market analysis rather than large‑scale causal studies.
If left unregulated and untargeted, AI and digital agritech platforms risk concentrating surplus with technology providers and capital owners, potentially increasing rural inequality and weakening smallholder bargaining power.
Theoretical market‑structure analysis, case studies of platform markets, and policy analyses cited in the paper; empirical causal evidence on long‑run distributional effects is limited.
Data ownership, lack of interoperability, privacy concerns, and concentration of digital agritech platforms create risks for competition and equitable value capture in agricultural value chains.
Policy reports, market analyses, and case studies discussed in the paper; the claim is supported by descriptive evidence and theoretical assessments rather than large causal estimates.
Existing extrapolation‑based projection systems understate AI’s nonlinear, spillover, and augmentation effects and miss differential impacts across occupations, industries, regions, and demographic groups.
Theoretical argument and literature-based reasoning in the paper; no quantitative demonstration comparing extrapolation systems to the proposed approach.
Traditional BLS projection methods are insufficient for forecasting labor market changes driven by rapid AI adoption.
Conceptual critique and argumentation in the paper; no empirical evaluation or comparative forecast error statistics provided.
Conversely, lack of standards or failed validation can create regulatory setbacks, reputational risk, and stranded R&D spending.
Case reports and regulatory analysis in the narrative review describing negative outcomes from failed validation or non-aligned AI tools (qualitative evidence).
Market dominance by global platforms can stifle local entrants and distort competition; policies should address market power and data monopolies.
Review of platform economics and competition policy literature; policy argumentation rather than new empirical competition analysis in this paper.
If local data ownership, capacity and governance are weak, economic gains from AI risk accruing to foreign firms and exacerbating income and wealth concentration.
Conceptual synthesis referencing empirical studies on platform rents and data monetization; no original economic distribution analysis presented.
AI and automation can displace labour—particularly routine tasks—heightening the need for retraining, active labour policies and social protection.
Review of literature on automation and labour markets combined with normative inference for African contexts; no primary labour market data presented.
AI adoption raises a risk of digital colonialism: foreign control of data, platforms, and value capture may divert economic gains away from local actors.
Conceptual analysis drawing on policy documents and empirical literature about data flows, platform economics, and international investment; no original quantitative measurement in this paper.
Biased training data or objective functions in AI models could perpetuate gender disparities by offering different products or risk scores to men and women.
Review of AI fairness literature and examples of algorithmic disparate impacts summarized in the paper (conceptual and case evidence; not an empirical test tied specifically to fintech products in the review).
AI systems trained on incomplete, adult-centric, or high-income–biased data risk perpetuating inequities in prediction, resource allocation, and policy recommendations for children and LMICs.
Data-justice and algorithmic fairness literature cited conceptually in the review; applies generalizable concerns about biased training data to the One Health/child-health context without empirical bias audits in this paper.
Data gaps, especially child-specific and cross-sectoral One Health data, reduce the reliability and fairness of AI-driven disease prediction and economic models.
Methodological argument grounded in the review of data availability; authors connect observed surveillance gaps to model limitations—no empirical model performance analyses presented.
Fragmented governance and funding structures hinder cross-sectoral prevention and response for child-centered One Health challenges.
Policy analyses and governance literature synthesized in the review; narrative evidence of siloed funding and governance limiting cross-sector action (no quantitative governance metrics provided).
Integrated One Health research and policy implementation are limited—particularly in LMICs—creating gaps in prevention and response for children.
Policy, programmatic, and academic literature reviewed; authors note under-representation of LMIC contexts and limited cross-sectoral integration in the published literature and surveillance systems.
Geographic ranges of many vectors and zoonoses are shifting (due to climate and land-use change), increasing children's exposure in new areas.
Ecological and epidemiological modeling studies and surveillance trends cited in the review indicating range shifts for some vectors/zoonoses; evidence is region- and agent-specific and heterogeneously reported.
Extreme weather events amplify children's exposure to pathogens and degrade health infrastructure and services.
Disaster and public-health case studies and surveillance reports summarized in the review documenting post-event increases in infectious disease exposure and disruptions to services; narrative evidence, context-dependent.
Climate change intensifies direct harms to children (heat injury, extreme weather injury) and indirect harms (food insecurity, mental health impacts, shifting disease ecologies).
Climate-health literature and sectoral reports synthesized; references to observational studies and modeling showing associations between climate events and the listed harms (no pooled effect sizes).
Pediatric and neonatal AMR pose distinct clinical and surveillance challenges compared to adult AMR.
Clinical literature and surveillance reports synthesized in the review highlighting differences in pathogen spectra, dosing, diagnostics, and reporting for pediatric/neonatal populations; narrative description without quantitative synthesis.
Children are disproportionately exposed to antimicrobial-resistant pathogens via clinical care, community transmission, food chains, and environmental contamination.
Synthesis of clinical studies, community surveillance reports, food-safety literature, and environmental microbiology studies; review notes pediatric and environmental sources but provides no pooled prevalence estimates.
Children's dependence on caregivers and local ecosystems (for nutrition, shelter, sanitation) increases vulnerability to ecosystem-level shocks.
Social and public-health literature integrated in the review describing caregiver-mediated dependence and ecosystem service reliance; qualitative and observational evidence rather than quantitative pooled estimates.
Children are uniquely vulnerable within the One Health nexus because physiological immaturity, developmental sensitivity, behavior-driven exposures, and ecosystem dependence make them disproportionately affected by AMR, climate change, and emerging zoonotic/vector-borne infections.
Narrative synthesis of interdisciplinary peer-reviewed studies, surveillance reports, and policy literature; biological and epidemiological reasoning rather than a pooled quantitative analysis; heterogeneous and cross-disciplinary evidence summarized by the authors.
Holding schools liable under federal civil‑rights statutes is sometimes possible but often insufficient to prevent or remediate harms caused by EdTech products.
Policy argumentation and doctrinal analysis with hypotheticals and illustrative cases demonstrating enforcement limitations when only schools are targeted (no empirical prevalence data).
Regulatory frameworks often lack tools for algorithmic accountability, data portability, and cross-border enforcement for platformed services.
Policy and regulatory studies reviewed in the paper; assessment based on gap analysis rather than new regulatory audit data.
Algorithmic bias—stemming from training data, feature selection, or proxy variables—can produce systematic discrimination (for example, gendered access to credit).
Reviewed empirical and methodological studies on algorithmic fairness; paper cites documented instances and outlines mechanisms but does not present original audit data.
Data asymmetry and differential digital footprints create information advantages for platforms and reinforce borrower segmentation.
Theoretical argument supported by literature on data externalities and platform information advantages; illustrated with case examples rather than new data analysis.
Differential digital literacy, device/infrastructure access, and biased data-driven decision rules can exclude or disadvantage groups.
Conceptual synthesis and references to documented cases of digital divides and algorithmic bias in existing literature; no new empirical measurement provided.
Without deliberate governance, platformization can amplify exclusion through data asymmetries, algorithmic bias, gendered barriers, infrastructure gaps, and market concentration.
Literature synthesis and illustrative examples of platform dynamics and algorithmic decision rules; no systematic causal estimates in the paper.
FinTech simultaneously creates new structural inequalities and systemic risks.
Argumentative synthesis of theoretical and empirical work across development finance and regulatory studies; illustrative case examples referenced (e.g., platform market effects and algorithmic decision-making).