Evidence (5539 claims)
- Adoption: 5539 claims
- Productivity: 4793 claims
- Governance: 4333 claims
- Human-AI Collaboration: 3326 claims
- Labor Markets: 2657 claims
- Innovation: 2510 claims
- Org Design: 2469 claims
- Skills & Training: 2017 claims
- Inequality: 1378 claims
Evidence Matrix
Claim counts by outcome category and direction of finding.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 402 | 112 | 67 | 480 | 1076 |
| Governance & Regulation | 402 | 192 | 122 | 62 | 790 |
| Research Productivity | 249 | 98 | 34 | 311 | 697 |
| Organizational Efficiency | 395 | 95 | 70 | 40 | 603 |
| Technology Adoption Rate | 321 | 126 | 73 | 39 | 564 |
| Firm Productivity | 306 | 39 | 70 | 12 | 432 |
| Output Quality | 256 | 66 | 25 | 28 | 375 |
| AI Safety & Ethics | 116 | 177 | 44 | 24 | 363 |
| Market Structure | 107 | 128 | 85 | 14 | 339 |
| Decision Quality | 177 | 76 | 38 | 20 | 315 |
| Fiscal & Macroeconomic | 89 | 58 | 33 | 22 | 209 |
| Employment Level | 77 | 34 | 80 | 9 | 202 |
| Skill Acquisition | 92 | 33 | 40 | 9 | 174 |
| Innovation Output | 120 | 12 | 23 | 12 | 168 |
| Firm Revenue | 98 | 34 | 22 | — | 154 |
| Consumer Welfare | 73 | 31 | 37 | 7 | 148 |
| Task Allocation | 84 | 16 | 33 | 7 | 140 |
| Inequality Measures | 25 | 77 | 32 | 5 | 139 |
| Regulatory Compliance | 54 | 63 | 13 | 3 | 133 |
| Error Rate | 44 | 51 | 6 | — | 101 |
| Task Completion Time | 88 | 5 | 4 | 3 | 100 |
| Training Effectiveness | 58 | 12 | 12 | 16 | 99 |
| Worker Satisfaction | 47 | 32 | 11 | 7 | 97 |
| Wages & Compensation | 53 | 15 | 20 | 5 | 93 |
| Team Performance | 47 | 12 | 15 | 7 | 82 |
| Automation Exposure | 24 | 22 | 9 | 6 | 62 |
| Job Displacement | 6 | 38 | 13 | — | 57 |
| Hiring & Recruitment | 41 | 4 | 6 | 3 | 54 |
| Developer Productivity | 34 | 4 | 3 | 1 | 42 |
| Social Protection | 22 | 10 | 6 | 2 | 40 |
| Creative Output | 16 | 7 | 5 | 1 | 29 |
| Labor Share of Income | 12 | 5 | 9 | — | 26 |
| Skill Obsolescence | 3 | 20 | 2 | — | 25 |
| Worker Turnover | 10 | 12 | — | 3 | 25 |
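The directional counts above lend themselves to a simple positive-share summary. A minimal Python sketch, using a hand-copied subset of the matrix (the full table is not machine-readable here); note that the Total column can exceed the sum of the four directions shown, so shares below are computed over the four tallied directions only:

```python
# Subset of the Evidence Matrix; tuples are (positive, negative, mixed, null)
# claim counts copied from the table above.
matrix = {
    "Firm Productivity": (306, 39, 70, 12),
    "AI Safety & Ethics": (116, 177, 44, 24),
    "Job Displacement": (6, 38, 13, 0),
    "Task Completion Time": (88, 5, 4, 3),
}

def positive_share(counts):
    """Fraction of directional claims reporting a positive finding."""
    total = sum(counts)
    return counts[0] / total if total else 0.0

# Rank outcomes from most to least positive.
for outcome, counts in sorted(matrix.items(), key=lambda kv: -positive_share(kv[1])):
    share = positive_share(counts)
    print(f"{outcome}: {share:.0%} positive of {sum(counts)} directional claims")
```

This makes the skew visible at a glance: efficiency-style outcomes (Task Completion Time, Firm Productivity) lean heavily positive, while AI Safety & Ethics and Job Displacement lean negative.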
Adoption
The memo departs from the Department's prior lifecycle-assurance framework, replacing its standards and elevating vague criteria (e.g., 'model objectivity') without operational definitions or evaluation methods.
Primary source comparison: close reading of the January 2026 memo versus prior DoD lifecycle-assurance documents; identification of new/changed terminology and lack of accompanying operational definitions or test methods in the policy text.
By centralizing waiver decisions in a Barrier Removal Board, the memo converts baseline governance controls into exception-driven permissions (i.e., governance becomes something to be suspended rather than enforced).
Qualitative institutional analysis and primary-source reading of the memo establishing a centralized waiver process; mapping of how waiver mechanisms interact with existing assurance processes (ATO, T&E, contracting). No quantitative measurement of waiver frequency provided.
The memo explicitly frames governance and procurement speed as a zero-sum tradeoff and labels long-standing oversight mechanisms (Authorities to Operate, test & evaluation, contracting reviews) as 'blockers' eligible for waiver.
Primary source analysis: textual interpretation of the memo and accompanying contracting directives that characterize oversight mechanisms as impediments and make them eligible for waiver. Evidence is documentary (policy text); no quantitative sample.
Regulatory fragmentation increases compliance costs and stifles cross-border scale economies; international coordination and mutual recognition of standards can lower trade costs.
Comparative governance analysis and economic reasoning about cross-border trade and compliance; no cross-country causal estimates provided in the report.
Large incumbents with data/network advantages may entrench market power.
Policy and literature review noting data/network effects, observed tendencies in tech markets; sectoral examples discussed in the report.
Without targeted policy, AI can amplify winner-take-all dynamics (market concentration, superstar firms) and spatial inequalities (urban vs. rural).
Theoretical economic arguments and review of literature on data/network effects and concentration; comparative policy analysis that raises distributional concerns.
Without international coordination, providers may relocate compute or obscure compute locations to avoid stricter regimes; harmonized rules reduce these distortions.
Regulatory mapping and economic reasoning about geographic investment, regulatory arbitrage, and compute-location disclosure incentives.
Compliance and reporting requirements will impose additional costs on firms, with small providers likely disproportionately affected unless rules are proportionate.
Policy analysis of compliance and transaction costs (qualitative assessment of administrative burden and scale effects).
The facility-level focus and training-phase emphasis of current governance limit regulators' ability to monitor and mitigate the full environmental externalities of modern AI systems.
Synthesis of empirical findings on model/inference impacts combined with regulatory mapping showing gaps between impact locus and regulatory reach.
Transparency about AI environmental impacts has declined even as deployments of generative models have accelerated, creating an information gap for regulators, users, and researchers.
Trend observations from collated operational datasets and cited empirical studies indicating reduced disclosure by providers alongside increased deployments; supported by regulatory mapping noting scant AI-specific reporting outside the EU.
The larger cumulative environmental impacts of these generative models are primarily driven by inference-phase (online serving) energy consumption rather than training-phase emissions.
Evidence synthesis and operational data analysis focusing on deployment/inference patterns and relative contribution of lifecycle phases in examined models.
Generative web-search and reasoning AI models deployed widely in 2025 impose substantially higher cumulative environmental costs than earlier AI generations, largely driven by inference at scale.
Evidence synthesis: collation of empirical studies and operational data comparing energy and emissions profiles of 2025-era model families and deployment patterns (paper-wide comparative accounting).
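Whether inference or training dominates cumulative energy use is ultimately an arithmetic question of deployment scale. A back-of-envelope sketch; every figure below is a hypothetical placeholder for illustration, not a measurement from the studies the report collates:

```python
# Illustrative lifecycle energy accounting for a widely deployed generative model.
# All numbers are assumed placeholders, not reported values.
training_kwh = 1_000_000        # one-off training energy cost (assumed)
kwh_per_query = 0.003           # marginal serving energy per query (assumed)
queries_per_day = 50_000_000    # deployment scale (assumed)

# Days of serving until cumulative inference energy equals the training cost.
days_to_parity = training_kwh / (kwh_per_query * queries_per_day)
print(f"Inference matches training energy after {days_to_parity:.1f} days of serving")
```

Under these assumptions inference overtakes training within a week, which is the mechanism behind the claim: at web scale, per-query serving costs, not one-off training runs, drive cumulative impact.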
There is a persistent gap between policy intent (promises of ethical protection and economic opportunity) and lived experience, producing new forms of social exposure—especially for vulnerable groups.
Synthesis of qualitative findings from documents, ethics guidelines, industry statements, and stakeholder commentary indicating aspirational policy language contrasted with limited enforceable protections; specific lived-experience case data are not provided.
Lack of enforceable data-rights and accountability mechanisms strengthens incumbent platforms’ control over data markets, potentially reducing competition and hindering entry by smaller firms.
Qualitative review of regulatory texts and industry positioning showing limited enforceable data-rights provisions; theoretical market-structure inference without empirical market-share analysis.
Weak or non‑enforceable rules create conditions for negative externalities (data exploitation, discriminatory automation) that markets alone may not correct.
Argumentative synthesis from document analysis and theoretical framing (communication rights, market-failure logic); supported by examples in policy and industry discourse but not by empirical market-level measurement in the paper.
The dominant framing privileges economic imaginaries of competitiveness and development over communication rights, producing regulatory blind spots and reinforcing existing inequalities.
Interpretive analysis using communication-rights theory and SCOT applied to policy and industry discourse; comparison of economic-oriented language versus rights-oriented provisions in reviewed documents.
Regulatory attention typically overlooks vulnerable and marginalized populations (low-wage workers, women, rural communities), whose mobile communication practices and data are disproportionately exposed to harm.
Document-based qualitative analysis identifying patterns of inclusion/exclusion in regulatory texts and public debate; stakeholder commentary reviewed indicates limited consideration of these groups. (Sample count not provided.)
Indonesia’s governance of mobile-AI rests largely on soft‑law, aspirational instruments (guidelines, non‑binding ethics codes), which limits enforceability and accountability.
Qualitative discourse- and document-based analysis of key policy documents, national ethics guidelines, industry statements, and public stakeholder commentary related to mobile-AI in Indonesia. (The paper identifies dominant use of non‑binding instruments; exact number of documents reviewed is not specified.)
Fidelity-related biases risk concentrating harms among underrepresented populations, potentially increasing healthcare costs and welfare losses; economic evaluation and auditing for distributional impacts should be integrated into procurement and reimbursement decisions.
Interdisciplinary evidence synthesized from ML fairness studies, clinical validation reports, and health economics literature included in the review; the review recommends integrating distributional analysis in HTA based on documented risks of differential model performance.
Weak or absent regulation increases uncertainty and may deter investment or lead to adoption of low-quality synthetic products with negative economic and clinical externalities.
Policy literature and implementation case studies summarized in the review that link regulatory gaps to investment risk and potential for low-quality product adoption; evidence is mostly inferential and descriptive.
Without improvements in fidelity and domain adaptation, synthetic data risks introducing bias and limiting clinical and economic benefits.
Integrated assessment from machine-learning evaluations, clinical validation studies, and implementation analyses within the review which link fidelity and domain mismatch to biased model outputs and reduced clinical utility; economic implications are inferred from cost-effectiveness and procurement literature cited in the review.
There is evidence of problematic patterns in automated decision appeals and workflow interactions when AI is integrated into clinical processes.
Case studies, deployment reports, and observational analyses cited in the synthesis that document increased appeals, workflow friction, or unexpected interactions caused by automation.
Failing to retrain health workers for AI will produce structural labor-market mismatches, slow adoption, and reduce realized economic benefits.
Labor-market analysis and workforce readiness findings from the narrative synthesis and Delphi inputs; argument is inferential based on observed skill gaps and adoption barriers in the reviewed literature.
Indonesia risks technological dependency on foreign vendors if domestic capability, data governance, and procurement are not strengthened.
Market and policy assessment from the review, including procurement analyses and discussion in supplementary national reports and Delphi studies; based on observed market structures and procurement practices identified in the literature.
Approximately 58.7% of the relevant Indonesian health workforce lacks the AI competence or literacy needed for safe, scalable adoption.
Workforce readiness estimate derived from reviewed workforce assessments, Delphi consensus studies, and national reports included in the narrative synthesis; the summary does not specify sample frames or exact survey instruments that produced the 58.7% figure.
Indonesia’s AI healthcare maturity score is approximately 52/100, trailing regional peers (example comparators: Singapore ≈ 92, Malaysia ≈ 78).
Benchmarking performed in the review against regional maturity catalogues and international standards (EU AI Act, Singapore, Australia); maturity scoring method referenced in the paper but detailed scoring rubric and underlying metrics not fully reproduced in the summary.
Widespread adoption of LLMs without adequate verification increases systemic cybersecurity risks with potential economic spillovers.
Synthesis of security incident case studies and risk analyses revealing vulnerabilities in generated code and potential downstream impacts.
Models lack deep contextual reasoning and may fail on tasks requiring long-term design thinking or deep domain knowledge.
Benchmark failures and user studies in the reviewed literature demonstrating degraded performance on complex architectural/design tasks and domain-specific reasoning problems.
Use of these tools can mask gaps in foundational computational skills among novices.
Pedagogical case studies and assessments indicating reliance on AI can produce superficial solutions and lower demonstrated understanding of core concepts.
Negative externalities from synthetic media (misinformation, reputational harm, verification costs) may justify public interventions such as provenance standards, mandatory labeling, penalties for malicious misuse, and public investment in verification infrastructure.
Policy analysis and normative recommendations based on identified externalities in the reviewed literature; no empirical policy evaluation in paper.
Compliance with IP, privacy and liability regimes will impose costs (monitoring, licensing, disclosure) that may raise barriers for smaller entrants and affect prices and diffusion of generative audiovisual models.
Regulatory and economic literature synthesized in the narrative review; policy/legal case citations included but no new cost estimates provided.
Proliferation of generated content may increase information supply but lower per-item attention and willingness-to-pay, potentially reducing monetization unless intermediaries solve discoverability and trust issues.
Theoretical arguments using attention-economy literature and secondary studies; narrative reasoning without new empirical quantification.
Platforms and firms that control model training data and deployment infrastructure will gain strategic advantage, increasing risks of vertical integration and market concentration.
Market-structure and firm-strategy analysis drawn from secondary literature and conceptual arguments in the paper.
Information-quality externalities from misinformation and reduced trust impose social costs that are not internalized by producers, justifying policy interventions such as liability rules or provenance standards.
Theoretical externality reasoning and policy literature reviewed; no social-welfare empirical quantification included in the paper.
Economies of scale, data-driven advantages, and compute costs may concentrate market power in a few platforms or studios, raising entry barriers.
Market-structure reasoning and referenced industry analyses in the literature review; no empirical market-concentration metrics computed in the paper.
Cross-border enforcement difficulties and divergent national rules produce legal fragmentation in regulation and judiciary responses to generative audiovisual AI.
Comparative review of international statutes and judicial approaches included in the paper; qualitative legal analysis rather than empirical cross-jurisdictional enforcement metrics.
Process-stage risks include concentration of capabilities among a few platforms/actors and deficits in control, governance and transparency (e.g., limited explainability and restricted model access).
Policy and market-structure literature reviewed; descriptive evidence of platform concentration cited qualitatively but no original market-share analysis or sample sizes.
Key data challenges in African contexts are measurement error, censoring, selection bias (informal actors absent from official datasets), privacy/ethical concerns, and limited digital trace coverage in some regions.
Methodological critique synthesised from literature in the paper.
Short-term AI adoption costs and adjustment reduce firm profits during early adoption phases.
Theoretical model predictions from the differentiated Bertrand framework; the empirical component is reported as aligning with these short-run effects (no sample size or estimation details given in the summary).
This generation–verification mismatch produces a chronic bottleneck in development processes.
Analytic diagnosis and behavioral reasoning in the paper (design principles and system analysis); no empirical testing or simulation results provided.
AI-assisted software development creates a persistent structural imbalance: generation throughput (machine-produced code, tests, docs) outpaces human verification capacity.
Conceptual/theoretical argument and systems/architectural modeling in the paper; no empirical measurement, no sample size, no field data reported.
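The claimed imbalance can be made concrete with a toy throughput model. Both rates below are assumptions chosen for illustration, not figures from the paper:

```python
# Toy model of the generation/verification imbalance: unverified output
# accumulates whenever generation throughput exceeds review capacity.
# Both rates are hypothetical, for illustration only.
gen_rate = 500     # lines of AI-generated code per developer-day (assumed)
review_rate = 150  # lines a human can meaningfully verify per developer-day (assumed)

backlog = 0
for day in range(10):              # ten working days
    backlog += gen_rate - review_rate  # daily growth of unverified code

print(f"Unverified backlog after 10 days: {backlog} lines")
```

The point of the sketch is structural: the backlog grows linearly whenever gen_rate > review_rate, regardless of the absolute speed of either side, which is why the paper frames verification as a chronic rather than transient bottleneck.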
Data‑driven agritech platforms exhibit network effects and potential for market power, implying a policy need for data portability and interoperability to preserve competition.
Economic reasoning, policy reports, and case study examples summarized in the review; the claim is grounded in market analysis rather than large‑scale causal studies.
If left unregulated and untargeted, AI and digital agritech platforms risk concentrating surplus with technology providers and capital owners, potentially increasing rural inequality and weakening smallholder bargaining power.
Theoretical market‑structure analysis, case studies of platform markets, and policy analyses cited in the paper; empirical causal evidence on long‑run distributional effects is limited.
Data ownership, lack of interoperability, privacy concerns, and concentration of digital agritech platforms create risks for competition and equitable value capture in agricultural value chains.
Policy reports, market analyses, and case studies discussed in the paper; the claim is supported by descriptive evidence and theoretical assessments rather than large causal estimates.
Differences in access to AI tools and digital infrastructure could exacerbate global and within-country inequalities in research capacity and outputs.
Stated in the Distributional and Competitive Effects section; motivated by observed heterogeneity in infrastructure and access. The abstract provides no empirical heterogeneity estimates or samples.
Institutions that adopt and integrate AI effectively may gain disproportionate advantages, increasing stratification in academic prestige and funding.
Presented as a distributional/competitive implication. Based on theory and possibly institutional case studies; no causal evidence or quantitative estimates provided in the abstract.
Security vulnerabilities and IP leakage create negative externalities; absent internalization, social costs (breaches, legal disputes) may rise.
Security analyses, documented incidents, and economic externality reasoning synthesized from the literature; empirical quantification of social cost is limited.
Generated code may incidentally reproduce copyrighted or licensed snippets from training data.
Analyses detecting verbatim or near-verbatim reproductions of licensed/copyrighted code in model outputs in selected tests and audits; evidence heterogeneous and depends on prompts and model/data.
Outputs often lack deep, project-level contextual reasoning (e.g., design tradeoffs, architecture constraints).
Qualitative failure-mode analyses, user studies, and benchmark tasks showing limitations in system-level reasoning and context-aware design decisions; evidence from short-horizon labs and case studies.
There is a risk of shallow learning if learners over-rely on AI outputs without understanding fundamentals.
Educational studies and observational analyses indicating reduced engagement with underlying concepts for some learners using AI assistance, plus qualitative reports from instructors; studies often short-term.