Evidence (4333 claims)
- Adoption: 5539 claims
- Productivity: 4793 claims
- Governance: 4333 claims
- Human-AI Collaboration: 3326 claims
- Labor Markets: 2657 claims
- Innovation: 2510 claims
- Org Design: 2469 claims
- Skills & Training: 2017 claims
- Inequality: 1378 claims
Evidence Matrix
Claim counts by outcome category and direction of finding.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 402 | 112 | 67 | 480 | 1076 |
| Governance & Regulation | 402 | 192 | 122 | 62 | 790 |
| Research Productivity | 249 | 98 | 34 | 311 | 697 |
| Organizational Efficiency | 395 | 95 | 70 | 40 | 603 |
| Technology Adoption Rate | 321 | 126 | 73 | 39 | 564 |
| Firm Productivity | 306 | 39 | 70 | 12 | 432 |
| Output Quality | 256 | 66 | 25 | 28 | 375 |
| AI Safety & Ethics | 116 | 177 | 44 | 24 | 363 |
| Market Structure | 107 | 128 | 85 | 14 | 339 |
| Decision Quality | 177 | 76 | 38 | 20 | 315 |
| Fiscal & Macroeconomic | 89 | 58 | 33 | 22 | 209 |
| Employment Level | 77 | 34 | 80 | 9 | 202 |
| Skill Acquisition | 92 | 33 | 40 | 9 | 174 |
| Innovation Output | 120 | 12 | 23 | 12 | 168 |
| Firm Revenue | 98 | 34 | 22 | — | 154 |
| Consumer Welfare | 73 | 31 | 37 | 7 | 148 |
| Task Allocation | 84 | 16 | 33 | 7 | 140 |
| Inequality Measures | 25 | 77 | 32 | 5 | 139 |
| Regulatory Compliance | 54 | 63 | 13 | 3 | 133 |
| Error Rate | 44 | 51 | 6 | — | 101 |
| Task Completion Time | 88 | 5 | 4 | 3 | 100 |
| Training Effectiveness | 58 | 12 | 12 | 16 | 99 |
| Worker Satisfaction | 47 | 32 | 11 | 7 | 97 |
| Wages & Compensation | 53 | 15 | 20 | 5 | 93 |
| Team Performance | 47 | 12 | 15 | 7 | 82 |
| Automation Exposure | 24 | 22 | 9 | 6 | 62 |
| Job Displacement | 6 | 38 | 13 | — | 57 |
| Hiring & Recruitment | 41 | 4 | 6 | 3 | 54 |
| Developer Productivity | 34 | 4 | 3 | 1 | 42 |
| Social Protection | 22 | 10 | 6 | 2 | 40 |
| Creative Output | 16 | 7 | 5 | 1 | 29 |
| Labor Share of Income | 12 | 5 | 9 | — | 26 |
| Skill Obsolescence | 3 | 20 | 2 | — | 25 |
| Worker Turnover | 10 | 12 | — | 3 | 25 |
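The direction counts above can be summarized mechanically. A minimal Python sketch follows; the counts are copied from the table, but the positive-share metric is our own illustration, not something computed in the source matrix.

```python
# Counts copied from the evidence matrix above; the positive-share
# summary itself is illustrative and not part of the source table.
rows = {
    # outcome: (positive, negative, mixed, null)
    "Governance & Regulation": (402, 192, 122, 62),
    "AI Safety & Ethics": (116, 177, 44, 24),
    "Inequality Measures": (25, 77, 32, 5),
    "Job Displacement": (6, 38, 13, 0),
}

for outcome, (pos, neg, mixed, null) in rows.items():
    directional = pos + neg  # leave mixed/null out of the sign balance
    print(f"{outcome}: {pos / directional:.0%} of directional claims positive")
```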
Governance
Transparency about AI environmental impacts has declined even as deployments of generative models have accelerated, creating an information gap for regulators, users, and researchers.
Trend observations from collated operational datasets and cited empirical studies indicating reduced disclosure by providers alongside increased deployments; supported by regulatory mapping noting scant AI-specific reporting outside the EU.
The larger cumulative environmental impact of these generative models is driven primarily by inference-phase (online serving) energy consumption rather than by training-phase emissions.
Evidence synthesis and operational data analysis focusing on deployment/inference patterns and relative contribution of lifecycle phases in examined models.
Generative web-search and reasoning AI models deployed widely in 2025 impose substantially higher cumulative environmental costs than earlier AI generations, largely driven by inference at scale.
Evidence synthesis: collation of empirical studies and operational data comparing energy and emissions profiles of 2025-era model families and deployment patterns (paper-wide comparative accounting).
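A back-of-the-envelope sketch of the lifecycle accounting behind these two claims; every number below is a hypothetical placeholder, not a figure from the cited studies. The point is only that cumulative inference at deployment scale can dwarf a one-time training cost.

```python
# Back-of-the-envelope lifecycle energy accounting. Every number is a
# hypothetical placeholder, not a figure from the cited studies.
TRAINING_ENERGY_MWH = 1_000        # one-time training cost (assumed)
ENERGY_PER_QUERY_WH = 0.3          # per-inference cost (assumed)
QUERIES_PER_DAY = 100_000_000      # deployment scale (assumed)

daily_inference_mwh = QUERIES_PER_DAY * ENERGY_PER_QUERY_WH / 1e6  # Wh -> MWh
days_to_match = TRAINING_ENERGY_MWH / daily_inference_mwh

print(f"Inference: {daily_inference_mwh:.0f} MWh/day; "
      f"matches training energy after {days_to_match:.0f} days")
```

With these assumed inputs, inference overtakes the entire training budget in roughly a month, after which the cumulative total is dominated by serving.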
There is a persistent gap between policy intent (promises of ethical protection and economic opportunity) and lived experience, producing new forms of social exposure—especially for vulnerable groups.
Synthesis of qualitative findings from documents, ethics guidelines, industry statements, and stakeholder commentary indicating aspirational policy language contrasted with limited enforceable protections; specific lived-experience case data are not provided.
Lack of enforceable data-rights and accountability mechanisms strengthens incumbent platforms’ control over data markets, potentially reducing competition and hindering entry by smaller firms.
Qualitative review of regulatory texts and industry positioning showing limited enforceable data-rights provisions; theoretical market-structure inference without empirical market-share analysis.
Weak or non‑enforceable rules create conditions for negative externalities (data exploitation, discriminatory automation) that markets alone may not correct.
Argumentative synthesis from document analysis and theoretical framing (communication rights, market-failure logic); supported by examples in policy and industry discourse but not by empirical market-level measurement in the paper.
The dominant framing privileges economic imaginaries of competitiveness and development over communication rights, producing regulatory blind spots and reinforcing existing inequalities.
Interpretive analysis using communication-rights theory and SCOT applied to policy and industry discourse; comparison of economic-oriented language versus rights-oriented provisions in reviewed documents.
Regulatory attention typically overlooks vulnerable and marginalized populations (low-wage workers, women, rural communities), whose mobile communication practices and data are disproportionately exposed to harm.
Document-based qualitative analysis identifying patterns of inclusion/exclusion in regulatory texts and public debate; stakeholder commentary reviewed indicates limited consideration of these groups. (Sample count not provided.)
Indonesia’s governance of mobile-AI rests largely on soft‑law, aspirational instruments (guidelines, non‑binding ethics codes), which limits enforceability and accountability.
Qualitative discourse- and document-based analysis of key policy documents, national ethics guidelines, industry statements, and public stakeholder commentary related to mobile-AI in Indonesia. (The paper identifies dominant use of non‑binding instruments; exact number of documents reviewed is not specified.)
Fidelity-related biases risk concentrating harms among underrepresented populations, potentially increasing healthcare costs and welfare losses; economic evaluation and auditing for distributional impacts should be integrated into procurement and reimbursement decisions.
Interdisciplinary evidence synthesized from ML fairness studies, clinical validation reports, and health economics literature included in the review; the review recommends integrating distributional analysis in HTA based on documented risks of differential model performance.
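As a concrete illustration of the distributional auditing the review recommends, here is a minimal sketch that compares error rates across subgroups; the predictions, labels, and group assignments are synthetic placeholders, not data from the reviewed studies.

```python
# Synthetic distributional audit: compare error rates across subgroups
# before procurement. Predictions, labels, and groups are made up.
predictions = [1, 0, 1, 1, 0, 1, 0, 0]
labels      = [1, 0, 0, 1, 0, 0, 1, 0]
groups      = ["A", "A", "A", "A", "B", "B", "B", "B"]

for g in ("A", "B"):
    idx = [i for i, grp in enumerate(groups) if grp == g]
    errors = sum(predictions[i] != labels[i] for i in idx)
    print(f"Group {g}: error rate {errors / len(idx):.0%}")
```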
Weak or absent regulation increases uncertainty and may deter investment or lead to adoption of low-quality synthetic products with negative economic and clinical externalities.
Policy literature and implementation case studies summarized in the review that link regulatory gaps to investment risk and potential for low-quality product adoption; evidence is mostly inferential and descriptive.
Without improvements in fidelity and domain adaptation, synthetic data risks introducing bias and limiting clinical and economic benefits.
Integrated assessment from machine-learning evaluations, clinical validation studies, and implementation analyses within the review which link fidelity and domain mismatch to biased model outputs and reduced clinical utility; economic implications are inferred from cost-effectiveness and procurement literature cited in the review.
When AI is integrated into clinical processes, problematic patterns emerge in appeals against automated decisions and in human-AI workflow interactions.
Case studies, deployment reports, and observational analyses cited in the synthesis that document increased appeals, workflow friction, or unexpected interactions caused by automation.
Failing to retrain health workers for AI will produce structural labor-market mismatches, slow adoption, and reduce realized economic benefits.
Labor-market analysis and workforce readiness findings from the narrative synthesis and Delphi inputs; argument is inferential based on observed skill gaps and adoption barriers in the reviewed literature.
Indonesia risks technological dependency on foreign vendors if domestic capability, data governance, and procurement are not strengthened.
Market and policy assessment from the review, including procurement analyses and discussion in supplementary national reports and Delphi studies; based on observed market structures and procurement practices identified in the literature.
Approximately 58.7% of the relevant Indonesian health workforce lacks the AI competence or literacy needed for safe, scalable adoption.
Workforce readiness estimate derived from reviewed workforce assessments, Delphi consensus studies, and national reports included in the narrative synthesis; the summary does not specify sample frames or exact survey instruments that produced the 58.7% figure.
Indonesia’s AI healthcare maturity score is approximately 52/100, trailing regional peers (example comparators: Singapore ≈ 92, Malaysia ≈ 78).
Benchmarking performed in the review against regional maturity catalogues and international standards (EU AI Act, Singapore, Australia); maturity scoring method referenced in the paper but detailed scoring rubric and underlying metrics not fully reproduced in the summary.
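Because the scoring rubric is not reproduced in the summary, the following sketch shows only the general shape of a weighted maturity score; the dimensions, weights, and sub-scores are invented for illustration.

```python
# Sketch of weighted maturity scoring. Dimensions, weights, and
# sub-scores are invented; the paper's actual rubric is not reproduced
# in the summary.
rubric = {
    # dimension: (weight, sub-score out of 100)
    "regulation": (0.25, 40),
    "workforce readiness": (0.25, 41),  # cf. the 58.7% literacy-gap figure
    "data governance": (0.25, 55),
    "infrastructure": (0.25, 72),
}

score = sum(w * s for w, s in rubric.values())
print(f"Composite maturity score: {score:.0f}/100")  # 52 with these inputs
```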
Widespread adoption of LLMs without adequate verification increases systemic cybersecurity risks with potential economic spillovers.
Synthesis of security incident case studies and risk analyses revealing vulnerabilities in generated code and potential downstream impacts.
Models lack deep contextual reasoning and may fail on tasks requiring long-term design thinking or deep domain knowledge.
Benchmark failures and user studies in the reviewed literature demonstrating degraded performance on complex architectural/design tasks and domain-specific reasoning problems.
Use of these tools can mask gaps in foundational computational skills among novices.
Pedagogical case studies and assessments indicating reliance on AI can produce superficial solutions and lower demonstrated understanding of core concepts.
Negative externalities from synthetic media (misinformation, reputational harm, verification costs) may justify public interventions such as provenance standards, mandatory labeling, penalties for malicious misuse, and public investment in verification infrastructure.
Policy analysis and normative recommendations based on identified externalities in the reviewed literature; no empirical policy evaluation in paper.
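To make "provenance standards" concrete, a minimal hashing sketch of the core verification idea; real standards such as C2PA additionally bind cryptographically signed metadata, and nothing in this snippet comes from the paper.

```python
# Core idea behind provenance verification: publish a cryptographic
# fingerprint at creation time and recompute it on receipt. Real
# standards (e.g., C2PA) bind signed metadata; this is only the kernel.
import hashlib

def fingerprint(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

original = b"rendered audiovisual bytes..."   # placeholder content
registered = fingerprint(original)            # published by the creator

assert fingerprint(original) == registered         # verifier's check passes
assert fingerprint(original + b"x") != registered  # tampering is detected
print("Provenance checks passed")
```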
Compliance with IP, privacy and liability regimes will impose costs (monitoring, licensing, disclosure) that may raise barriers for smaller entrants and affect prices and diffusion of generative audiovisual models.
Regulatory and economic literature synthesized in the narrative review; policy/legal case citations included but no new cost estimates provided.
Proliferation of generated content may increase information supply but lower per-item attention and willingness-to-pay, potentially reducing monetization unless intermediaries solve discoverability and trust issues.
Theoretical arguments using attention-economy literature and secondary studies; narrative reasoning without new empirical quantification.
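The attention-economy argument reduces to simple arithmetic over a roughly fixed attention budget; all quantities in this toy sketch are assumed.

```python
# Toy attention-budget arithmetic; all values assumed. With aggregate
# attention roughly fixed, per-item attention and attention-linked
# revenue fall as the supply of items grows.
ATTENTION_BUDGET_HOURS = 1_000_000  # fixed aggregate attention (assumed)
REVENUE_PER_HOUR = 0.05             # monetization rate in $ (assumed)

for n_items in (10_000, 100_000, 1_000_000):
    per_item = ATTENTION_BUDGET_HOURS / n_items
    print(f"{n_items:>9} items -> {per_item:8.2f} h/item, "
          f"${per_item * REVENUE_PER_HOUR:.2f}/item")
```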
Platforms and firms that control model training data and deployment infrastructure will gain strategic advantage, increasing risks of vertical integration and market concentration.
Market-structure and firm-strategy analysis drawn from secondary literature and conceptual arguments in the paper.
Information-quality externalities from misinformation and reduced trust impose social costs that are not internalized by producers, justifying policy interventions such as liability rules or provenance standards.
Theoretical externality reasoning and policy literature reviewed; no social-welfare empirical quantification included in the paper.
Economies of scale, data-driven advantages, and compute costs may concentrate market power in a few platforms or studios, raising entry barriers.
Market-structure reasoning and referenced industry analyses in the literature review; no empirical market-concentration metrics computed in the paper.
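Since the paper computes no concentration metrics, the following sketch shows what such a check would look like, applying the standard Herfindahl-Hirschman Index to hypothetical market shares.

```python
# Herfindahl-Hirschman Index on hypothetical market shares (%), showing
# what an empirical concentration check would look like; the paper
# computes no such metric.
shares = [35, 30, 20, 10, 5]  # hypothetical platform shares, sum to 100

hhi = sum(s ** 2 for s in shares)
print(f"HHI = {hhi}")  # above 2500 is commonly treated as highly concentrated
```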
Cross-border enforcement difficulties and divergent national rules produce legal fragmentation in regulation and judiciary responses to generative audiovisual AI.
Comparative review of international statutes and judicial approaches included in the paper; qualitative legal analysis rather than empirical cross-jurisdictional enforcement metrics.
Process-stage risks include concentration of capabilities among a few platforms/actors and deficits in control, governance and transparency (e.g., limited explainability and restricted model access).
Policy and market-structure literature reviewed; descriptive evidence of platform concentration cited qualitatively but no original market-share analysis or sample sizes.
Key data challenges in African contexts are measurement error, censoring, selection bias (informal actors absent from official datasets), privacy/ethical concerns, and limited digital trace coverage in some regions.
Methodological critique synthesised from literature in the paper.
Key constraints on realized gains include governance complexity, model reliability limits (errors, brittleness, distribution shifts), orchestration challenges integrating agents across systems, and ongoing need for human oversight for safety, fairness, and quality control.
Qualitative observations and limitations reported from the Alfred AI deployments and authors' analysis of operational experience; evidence comes from live deployments but is descriptive rather than quantitative.
Data‑driven agritech platforms exhibit network effects and potential for market power, implying a policy need for data portability and interoperability to preserve competition.
Economic reasoning, policy reports, and case study examples summarized in the review; the claim is grounded in market analysis rather than large‑scale causal studies.
If left unregulated and untargeted, AI and digital agritech platforms risk concentrating surplus with technology providers and capital owners, potentially increasing rural inequality and weakening smallholder bargaining power.
Theoretical market‑structure analysis, case studies of platform markets, and policy analyses cited in the paper; empirical causal evidence on long‑run distributional effects is limited.
Data ownership, lack of interoperability, privacy concerns, and concentration of digital agritech platforms create risks for competition and equitable value capture in agricultural value chains.
Policy reports, market analyses, and case studies discussed in the paper; the claim is supported by descriptive evidence and theoretical assessments rather than large causal estimates.
Accumulated latent defects from unchecked AI outputs create negative externalities across dependent systems, complicating pricing and insurance; liability and cyber insurance markets may need to adapt.
Policy and economics argumentation drawing on externality theory; no actuarial or insurance-market empirical analysis provided.
Measured productivity gains from AI-assisted development may overstate welfare gains if verification costs, defect externalities, and long-run fragility are omitted from accounting.
Economic reasoning and accounting argument; no empirical accounting studies or welfare analyses presented.
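A worked accounting sketch of this argument, with every figure assumed for illustration: the measured (gross) gain shrinks sharply once verification effort and expected defect losses enter the ledger.

```python
# Worked accounting sketch; every figure is assumed. Net gains shrink
# once verification costs and expected defect losses are counted.
gross_gain = 100.0         # measured productivity gain (assumed units)
verification_cost = 35.0   # added review/testing effort (assumed)
defect_probability = 0.10  # chance a latent defect surfaces (assumed)
defect_loss = 300.0        # cost if it does (assumed)

net_gain = gross_gain - verification_cost - defect_probability * defect_loss
print(f"Gross gain: {gross_gain:.0f}; net gain: {net_gain:.0f}")  # 100 vs 35
```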
The harm from latent defects is diffuse and slow-moving, making it easy for decision-makers to underweight these risks in adoption choices.
Descriptive argument drawing on behavioral economics concepts (discounting, salience); no empirical decision-making data included.
Small, unverified changes accumulate over time into system-level fragility, hidden bugs, and security vulnerabilities (latent risk accumulation).
Causal reasoning and illustrative examples; no longitudinal empirical measurement of defect accumulation presented.
AI-assisted code generation produces a throughput asymmetry: generation capacity rises much faster than human or automated verification capacity.
Synthesis of conceptual arguments and illustrative scenarios; no quantitative empirical evidence or sample-based analysis included in the paper.
Verification (human review, testing, security analysis) does not scale at the same rate as AI-assisted generation and becomes the bottleneck.
Mechanism reasoning and qualitative argumentation; illustrative examples showing mismatch between generation and verification capacity. No empirical scaling measurements provided.
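A minimal simulation of the claimed asymmetry: if generation capacity compounds faster than verification capacity, the unreviewed backlog (the latent-risk pool described above) grows without bound. The growth rates here are assumptions, not measurements.

```python
# Minimal simulation: generation capacity compounds faster than
# verification capacity, so unreviewed output piles up. Growth rates
# are assumptions, not measurements.
gen_rate, ver_rate = 100.0, 100.0     # changes/week, initially matched
GEN_GROWTH, VER_GROWTH = 1.10, 1.02   # weekly growth factors (assumed)

backlog = 0.0
for week in range(52):
    backlog += max(gen_rate - ver_rate, 0.0)  # unverified changes accumulate
    gen_rate *= GEN_GROWTH
    ver_rate *= VER_GROWTH

print(f"Unverified backlog after a year: {backlog:.0f} changes")
```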
Overreliance on generative AI risks eroding workers' critical thinking and tacit expertise.
Conceptual arguments supported by observational reports and theoretical concerns in the literature synthesis; limited empirical evidence cited.
Security vulnerabilities and IP leakage create negative externalities; absent internalization, social costs (breaches, legal disputes) may rise.
Security analyses, documented incidents, and economic externality reasoning synthesized from the literature; empirical quantification of social cost is limited.
Generated code may incidentally reproduce copyrighted or licensed snippets from training data.
Analyses detecting verbatim or near-verbatim reproductions of licensed/copyrighted code in model outputs in selected tests and audits; evidence heterogeneous and depends on prompts and model/data.
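Audits of this kind often rely on token n-gram overlap. A minimal sketch follows, with invented snippets and an arbitrary choice of n=5; real audits match against large licensed-code corpora with more robust normalization.

```python
# Near-verbatim detection via token n-gram overlap. Snippets and the
# choice n=5 are illustrative; real audits match against large
# licensed-code corpora with more robust normalization.
def ngrams(tokens, n=5):
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

licensed  = "for i in range ( len ( xs ) ) : total += xs [ i ]".split()
generated = "for i in range ( len ( xs ) ) : s += xs [ i ]".split()

shared = ngrams(licensed) & ngrams(generated)
share = len(shared) / len(ngrams(generated))
print(f"{share:.0%} of generated 5-grams appear in the licensed snippet")
```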
Outputs often lack deep, project-level contextual reasoning (e.g., design tradeoffs, architecture constraints).
Qualitative failure-mode analyses, user studies, and benchmark tasks showing limitations in system-level reasoning and context-aware design decisions; evidence from short-horizon labs and case studies.
There is a risk of shallow learning if learners over-rely on AI outputs without understanding fundamentals.
Educational studies and observational analyses indicating reduced engagement with underlying concepts for some learners using AI assistance, plus qualitative reports from instructors; studies often short-term.
There is a significant political-economy risk that dominant states or firms (an "AI superpower" veto) could block or undermine coordination on token taxes.
Political-economy discussion identifying veto risks and possible deterrent mechanisms; conceptual argumentation without empirical probability estimates.
FLOP taxes face measurement, enforceability, and leakage challenges, and they tax compute inputs rather than the point where value is realized.
Comparative critique presented in the paper; conceptual analysis without empirical measurement of FLOP-tax implementations.
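The input-versus-output contrast can be made concrete with assumed magnitudes; nothing below is an estimate from the paper.

```python
# Assumed magnitudes contrasting a compute-input (FLOP) tax with an
# output-side (token) tax; nothing here is an estimate from the paper.
TRAINING_FLOPS = 1e25           # compute input (assumed)
FLOP_TAX_PER_EXAFLOP = 10.0     # $ per 1e18 FLOPs (assumed)
TOKENS_SERVED = 1e13            # output actually consumed (assumed)
TOKEN_TAX_PER_MILLION = 0.01    # $ per 1e6 tokens (assumed)

flop_tax = TRAINING_FLOPS / 1e18 * FLOP_TAX_PER_EXAFLOP   # owed up front
token_tax = TOKENS_SERVED / 1e6 * TOKEN_TAX_PER_MILLION   # tracks usage
print(f"FLOP tax: ${flop_tax:,.0f}; token tax: ${token_tax:,.0f}")
```

The FLOP levy is incurred whether or not the model ever earns revenue, whereas the token levy scales with realized usage; this is the "inputs versus realized value" distinction in the critique above.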
Conversely, lack of standards or failed validation can create regulatory setbacks, reputational risk, and stranded R&D spending.
Case reports and regulatory analysis in the narrative review describing negative outcomes from failed validation or non-aligned AI tools (qualitative evidence).
Productivity gains from deploying agentic AI may be overstated if alignment costs, monitoring overhead, and coordination inefficiencies are ignored.
Conceptual economic accounting argument; recommends new accounting categories and empirical studies to quantify these factors.
Agentic systems generate tail risks and endogenous systemic correlations (multiple systems converging on similar failure modes), creating new insurability challenges.
Theoretical risk analysis and analogy to systemic risk literature; proposed implications for insurance markets but no empirical testing.
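A toy Monte Carlo sketch of the endogenous-correlation point: a shared failure mode (for example, a common base model) makes joint failures vastly more likely than independence would suggest. All probabilities are assumed.

```python
# Toy Monte Carlo: a shared failure mode (e.g., a common base model)
# makes joint failures far likelier than independence suggests. All
# probabilities are assumed.
import random

random.seed(0)
N_SYSTEMS, P_IDIO, P_COMMON = 10, 0.01, 0.01

def systemic_event() -> bool:
    common = random.random() < P_COMMON  # shared mode fires for everyone
    fails = sum(common or random.random() < P_IDIO for _ in range(N_SYSTEMS))
    return fails >= N_SYSTEMS // 2       # at least half the systems fail

rate = sum(systemic_event() for _ in range(100_000)) / 100_000
print(f"Systemic failure rate with a shared mode: {rate:.2%}")
# Under independence, P(>=5 of 10 failing at 1% each) is ~2.4e-8.
```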
Coordination and control mechanisms (hierarchies, protocols, monitoring) face scalability and specification problems when agents generate unforeseen actions.
Theoretical analysis and examples from multi-agent/organizational theory; no empirical measurement included.