Evidence (5126 claims)
Claim counts by topic.

| Topic | Claims |
|---|---|
| Adoption | 5126 |
| Productivity | 4409 |
| Governance | 4049 |
| Human-AI Collaboration | 2954 |
| Labor Markets | 2432 |
| Org Design | 2273 |
| Innovation | 2215 |
| Skills & Training | 1902 |
| Inequality | 1286 |
Evidence Matrix
Claim counts by outcome category and direction of finding.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 369 | 105 | 58 | 432 | 972 |
| Governance & Regulation | 365 | 171 | 113 | 54 | 713 |
| Research Productivity | 229 | 95 | 33 | 294 | 655 |
| Organizational Efficiency | 354 | 82 | 58 | 34 | 531 |
| Technology Adoption Rate | 277 | 115 | 63 | 27 | 486 |
| Firm Productivity | 273 | 33 | 68 | 10 | 389 |
| AI Safety & Ethics | 112 | 177 | 43 | 24 | 358 |
| Output Quality | 228 | 61 | 23 | 25 | 337 |
| Market Structure | 105 | 118 | 81 | 14 | 323 |
| Decision Quality | 154 | 68 | 33 | 17 | 275 |
| Employment Level | 68 | 32 | 74 | 8 | 184 |
| Fiscal & Macroeconomic | 74 | 52 | 32 | 21 | 183 |
| Skill Acquisition | 85 | 31 | 38 | 9 | 163 |
| Firm Revenue | 96 | 30 | 22 | — | 148 |
| Innovation Output | 100 | 11 | 20 | 11 | 143 |
| Consumer Welfare | 66 | 29 | 35 | 7 | 137 |
| Regulatory Compliance | 51 | 61 | 13 | 3 | 128 |
| Inequality Measures | 24 | 66 | 31 | 4 | 125 |
| Task Allocation | 64 | 6 | 28 | 6 | 104 |
| Error Rate | 42 | 47 | 6 | — | 95 |
| Training Effectiveness | 55 | 12 | 10 | 16 | 93 |
| Worker Satisfaction | 42 | 32 | 11 | 6 | 91 |
| Task Completion Time | 71 | 5 | 3 | 1 | 80 |
| Wages & Compensation | 38 | 13 | 19 | 4 | 74 |
| Team Performance | 41 | 8 | 15 | 7 | 72 |
| Hiring & Recruitment | 39 | 4 | 6 | 3 | 52 |
| Automation Exposure | 17 | 15 | 9 | 5 | 46 |
| Job Displacement | 5 | 28 | 12 | — | 45 |
| Social Protection | 18 | 8 | 6 | 1 | 33 |
| Developer Productivity | 25 | 1 | 2 | 1 | 29 |
| Worker Turnover | 10 | 12 | — | 3 | 25 |
| Creative Output | 15 | 5 | 3 | 1 | 24 |
| Skill Obsolescence | 3 | 18 | 2 | — | 23 |
| Labor Share of Income | 7 | 4 | 9 | — | 20 |
Adoption
The claims below are filtered to the Adoption topic; each claim is followed by a one-sentence summary of its supporting evidence.
Upfront costs for AI adoption are substantial: development, clinical validation, regulatory compliance, EHR integration, and ongoing monitoring.
Implementation and regulatory literature synthesized in the review documenting typical cost categories and reported expenditures for clinical AI projects.
Large language models (LLMs) suffer from hallucinations (fabricated facts), overconfidence, and unpredictable failure modes in open-ended tasks.
Technical papers and benchmarks on LLM factuality, calibration, and failure modes summarized in the review; empirical evaluations showing instances of fabricated outputs and calibration issues.
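The calibration issues these benchmarks probe are commonly summarized with expected calibration error (ECE). A minimal sketch, using hypothetical per-answer confidences and correctness flags rather than results from the reviewed studies:

```python
# Minimal sketch: expected calibration error (ECE) over per-answer confidences
# and correctness flags. The numbers below are hypothetical, not results from
# the reviewed benchmarks.
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Bin answers by confidence; average |accuracy - mean confidence| per bin, weighted by bin size."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            gap = abs(correct[in_bin].mean() - confidences[in_bin].mean())
            ece += in_bin.mean() * gap
    return ece

# Hypothetical evaluation: self-reported confidence per answer vs. factual correctness.
conf = [0.95, 0.90, 0.99, 0.60, 0.85, 0.97]
ok = [1, 0, 1, 1, 0, 0]
print(f"ECE = {expected_calibration_error(conf, ok):.3f}")  # a large gap signals overconfidence
```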
Contemporary AI systems have no capacity for physical examination, sensorimotor procedures, or direct patient-contact diagnostics.
Technical limitations of CNNs and LLMs described in the literature (lack of embodiment, no sensorimotor capabilities), together with the absence of credible empirical demonstrations of safe autonomous physical clinical procedures in the reviewed studies.
Current models exhibit poor out-of-distribution (OOD) generalization: performance degrades when inputs differ from training distributions.
Technical literature and robustness/domain-shift research reviewed in the paper documenting declines in model accuracy under domain shift and dataset changes.
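One mechanism behind such degradation is shortcut learning: a feature that is highly predictive in the training distribution but uninformative at deployment. A minimal synthetic sketch of that mechanism (scikit-learn assumed available; the data-generating process is invented for illustration):

```python
# Minimal synthetic sketch (scikit-learn assumed available): a classifier that
# leans on a shortcut feature present in training loses accuracy when that
# feature carries no signal at deployment. All data here is invented.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_data(n, shortcut_strength):
    signal = rng.normal(size=n)
    y = (signal > 0).astype(int)
    x_true = signal + rng.normal(scale=1.0, size=n)                             # noisy view of the real signal
    x_short = shortcut_strength * (2 * y - 1) + rng.normal(scale=0.5, size=n)   # shortcut feature
    return np.column_stack([x_true, x_short]), y

X_train, y_train = make_data(5000, shortcut_strength=2.0)   # shortcut present in training
X_iid, y_iid = make_data(1000, shortcut_strength=2.0)       # same distribution as training
X_shift, y_shift = make_data(1000, shortcut_strength=0.0)   # shortcut absent at deployment

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("in-distribution accuracy:", accuracy_score(y_iid, clf.predict(X_iid)))
print("shifted accuracy:        ", accuracy_score(y_shift, clf.predict(X_shift)))
```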
High upfront costs and lack of tailored financing instruments are significant financial constraints on SME AI adoption.
Case studies, finance sector reports, and SME surveys cited in the review showing cost barriers and financing gaps; evidence descriptive rather than causal.
Infrastructure deficits (unreliable power, inadequate broadband, limited local compute) materially constrain AI uptake by SMEs.
Policy reports and empirical studies in the literature documenting infrastructural limitations in LMIC contexts (including Botswana) that impede digital and AI deployment.
Skills shortages (AI literacy, data science, digital management) are a primary constraint on SME AI adoption in developing economies.
Consistent findings across surveys, interviews, and case studies in the reviewed literature highlighting skill gaps as a common barrier; authors note multiple empirical sources pointing to this constraint.
Heterogeneity in study designs and contexts within the literature limits direct comparability and generalizability of findings.
Limitation noted in the paper based on the authors' assessment of diversity across the 103 reviewed studies (varying methods, contexts, metrics).
Institutional inertia, fragmented governance structures, limited technical capacity, and weak data stewardship impede scale‑up of AI systems in the public sector.
Thematic synthesis of barriers reported across empirical studies and institutional reports within the systematic review (103 items).
Low‑ and middle‑income contexts face persistent gaps—infrastructure, data ecosystems, and talent retention—that slow AI adoption in public governance.
Consistent findings across multiple studies in the 103‑item corpus reporting infrastructure deficits, weak data ecosystems, and brain drain/retention issues in LMIC settings.
On-Premise RAG requires internal technical capabilities (MLOps, infrastructure engineers) to maintain and update the system.
Organizational evaluation and implementation discussion noting operational responsibilities and skill requirements for on-prem deployment.
On-Premise RAG incurs higher latency compared with cloud RAG.
Technology evaluations included measured system latency comparisons between architectures; exact latency values and statistical details not provided in summary.
On-Premise RAG requires upfront capital expenditure (hardware) and ongoing maintenance (operations, model updates, staff).
Organizational evaluations / cost accounting and implementation discussion indicating hardware, operations, and personnel requirements for on-prem deployment; specific cost figures not provided in summary.
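A simple way to put such latency comparisons on a common footing is to time identical queries against both deployments. A minimal sketch, assuming hypothetical endpoint URLs, a /query route, and a JSON payload; the reviewed evaluations do not specify their interfaces:

```python
# Minimal sketch: timing identical questions against an on-premise and a cloud
# RAG endpoint. URLs, route, and payload schema are assumptions for illustration.
import statistics
import time

import requests

ENDPOINTS = {
    "on_prem": "http://rag.internal.example:8080/query",  # hypothetical on-prem service
    "cloud": "https://rag.cloud.example.com/query",       # hypothetical managed service
}

def measure_latency(url, question, n_runs=20):
    """Send the same question repeatedly and record wall-clock seconds per call."""
    samples = []
    for _ in range(n_runs):
        start = time.perf_counter()
        response = requests.post(url, json={"question": question}, timeout=30)
        response.raise_for_status()
        samples.append(time.perf_counter() - start)
    return samples

for name, url in ENDPOINTS.items():
    latencies = sorted(measure_latency(url, "What is our data-retention policy?"))
    print(f"{name}: median={statistics.median(latencies) * 1000:.0f} ms, "
          f"p95={latencies[int(0.95 * len(latencies)) - 1] * 1000:.0f} ms")
```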
The January 2026 DoD AI Strategy memorandum establishes a Barrier Removal Board that provides expanded authority to waive established governance controls.
Primary source analysis: close reading of the Department of Defense January 2026 AI Strategy memorandum and related policy text (policy language describing the Barrier Removal Board and its waiver authorities). No sample size required; based on document text.
Risks include bias and discrimination, opacity in decision-making, privacy and cybersecurity threats, liability gaps, and uneven distribution of benefits that can exacerbate inequality.
Compilation from academic and policy literature, regulatory gap analyses, and examples of problematic AI use cases identified in the report's sectoral review.
AI creates significant ethical, legal and distributional risks.
Review of policy documents, academic and policy literature, and documented examples of AI deployment across multiple sectors highlighting harms (bias, privacy breaches, liability gaps, unequal benefits).
Except for the EU, jurisdictions surveyed generally lack AI-specific energy-disclosure requirements.
Comparative analysis across eleven jurisdictions identifying presence/absence of AI-specific energy disclosure rules; EU singled out as having such requirements.
Regulatory regimes in the surveyed jurisdictions focus on training emissions more than on inference-phase energy consumption.
Regulatory mapping and lifecycle-phase analysis showing which phases (training vs inference) are covered by existing rules in the eleven jurisdictions.
Current environmental governance across the eleven jurisdictions mapped in the paper is predominantly facility-level (data-center focused) rather than model-level.
Regulatory mapping: comparative legal/policy analysis across eleven jurisdictions identifying locus of existing rules (facility vs model).
Reliance on imperfect data and model assumptions can produce biased or misleading forecasts; careful validation, transparency about assumptions, and governance are necessary.
Risks & governance discussion in the paper raising this limitation and recommending practices (qualitative argumentation).
Practical adoption challenges in African settings are substantial: limited digital infrastructure, sparse local computing capacity, weak regulatory frameworks for synthetic data use, and clinician skepticism about model validity.
Implementation and governance analyses, policy reports, and qualitative studies summarized in the review document infrastructural and regulatory barriers as well as clinician attitudes; evidence is interdisciplinary and largely descriptive, with varied geographic coverage and few large-scale empirical deployment studies.
Fidelity gaps in synthetic data (missing rare events, distributional shifts, artefacts) create risks of misclassification and biased outcomes when models are deployed in real-world African clinical settings.
Synthesis of machine-learning evaluations and clinical validation studies identified in the literature review that document instances of missing rare events, distributional mismatch, and data artefacts in synthetic datasets; these studies link such fidelity gaps to degraded performance and biased predictions in downstream models. The review highlights case examples but does not provide pooled quantitative estimates.
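Fidelity gaps of this kind are typically checked by comparing rare-event prevalence and per-feature distributions between real and synthetic cohorts. A minimal sketch, with hypothetical file and column names (pandas and SciPy assumed available):

```python
# Minimal sketch (pandas and SciPy assumed available; file and column names are
# hypothetical): two fidelity checks described in the validation literature,
# rare-event prevalence and per-feature distributional shift.
import pandas as pd
from scipy.stats import ks_2samp

def fidelity_report(real: pd.DataFrame, synthetic: pd.DataFrame, rare_event_col: str):
    # 1. Rare-event coverage: generators often under-sample minority outcomes.
    print(f"{rare_event_col}: real prevalence={real[rare_event_col].mean():.3%}, "
          f"synthetic={synthetic[rare_event_col].mean():.3%}")

    # 2. Per-feature shift: two-sample Kolmogorov-Smirnov statistic per numeric column.
    for col in real.select_dtypes("number").columns:
        stat, p_value = ks_2samp(real[col].dropna(), synthetic[col].dropna())
        flag = "  <-- possible shift" if stat > 0.1 else ""
        print(f"{col}: KS={stat:.3f} (p={p_value:.3g}){flag}")

# Hypothetical usage:
# fidelity_report(pd.read_csv("real_patients.csv"),
#                 pd.read_csv("synthetic_patients.csv"),
#                 rare_event_col="sepsis_flag")
```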
Significant financial and implementation barriers (infrastructure, staff, validation) risk worsening access inequities between well-resourced and low-resource providers.
Economic analyses, stakeholder surveys, and deployment trend reports synthesized in the paper showing higher upfront costs and validation burdens for adopters; no randomized trials.
Regulatory fragmentation and lack of harmonized standards increase compliance complexity for healthcare AI deployments.
Policy analyses, regulatory reviews, and industry reports synthesized in the paper describing divergent national/regional regulatory approaches and their operational consequences.
Both open-source and proprietary approaches carry risks of algorithmic bias and fairness violations, especially when models are uncontrolled or poorly validated across populations.
Multiple peer-reviewed studies and audit reports summarized in the literature synthesis documenting bias/fairness issues across model types and populations.
Rural digital divides and uneven infrastructure constrain the reach of AI health solutions and risk exacerbating health inequities unless explicitly addressed.
Synthesis of infrastructure and equity literature, national connectivity data referenced in reviewed documents, and policy analyses included in the review period 2020–2025.
Regulatory and governance frameworks for health AI in Indonesia are fragmented, with limited requirements for transparency/explainability and weak procurement/governance mechanisms.
Thematic analysis of national policy papers, SATUSEHAT governance reports, and regulatory documents identified in the 42 supplementary documents and literature review (2020–2025).
AI-generated code can introduce security vulnerabilities and raise licensing/intellectual-property concerns.
Case studies of security incidents, analyses of generated code provenance, and vulnerability-detection studies synthesized in the review.
LLMs sometimes generate incorrect, nonsensical, or insecure code (hallucinations).
Multiple benchmarks, code-generation accuracy tests, and incident case studies documented in the empirical literature showing incorrect or fabricated outputs.
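A common example of the insecure patterns these audits report is SQL built by string interpolation. The snippet below is illustrative rather than taken from any cited incident; it contrasts the vulnerable form with the parameterized query reviewers would expect:

```python
# Illustrative snippet (not from any cited incident): string-interpolated SQL of
# the kind flagged in code audits, next to the parameterized form.
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Interpolating untrusted input directly into SQL permits injection.
    query = f"SELECT id, email FROM users WHERE name = '{username}'"  # vulnerable
    return conn.execute(query).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # A parameterized query lets the driver handle escaping.
    return conn.execute(
        "SELECT id, email FROM users WHERE name = ?", (username,)
    ).fetchall()

# An input such as "x' OR '1'='1" returns every row via the unsafe version
# and no rows via the parameterized one.
```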
Data security, privacy risks, unequal gains, and regulatory shortfalls can undermine the benefits of AI/robotics adoption.
Policy and risk analyses from secondary literature, case studies, and institutional reports synthesized in the paper; examples cited but no original incident-level dataset or incidence rates provided.
Transition frictions and skills mismatches are important barriers to workers moving into newly created AI‑related roles.
Qualitative review of workforce and skills literature, case studies, and sector reports; evidence comes from secondary sources with varied methodologies; the paper does not report pooled quantitative estimates.
International and national legal approaches to the input, process, and output stages of generative AI are fragmented, creating uncertainty in IP, privacy, liability, and evidence law.
Comparative review of international and national legal approaches and judicial responses cited in the paper (secondary legal sources).
Output-stage risks include authenticity/deception concerns, attribution and reuse-rights disputes, reputational harms, and broader societal impacts from abundant generated media.
Review of empirical studies on media authenticity, legal cases, and policy analyses included in the narrative review.
Process-stage risks include governance of model development, control over deployment, transparency, auditing, and operational safety.
Conceptual synthesis of technical governance literature and policy reports cited in the narrative review.
Input-stage risks include concerns about consent, copyright, representativeness, bias, provenance and data ownership for training material.
Synthesis of legal and policy literature and documented legal cases/statutes related to training data and IP/privacy issues (secondary sources only).
Generative audiovisual AI poses material ethical, control, transparency and legal challenges across three stages — input (training data), process (development & deployment), and output (use of artifacts).
Conceptual three-stage framework built from comparative review of literature, legal cases/statutes and policy reports described in the paper.
Limitations of the study include potential selection bias in reviewed sources and contingency of conclusions on evolving legal decisions and technology developments.
Author-stated limitations section within the paper; qualitative acknowledgement rather than empirical bias assessment.
Output-stage risks include challenges to authenticity and provenance, erosion of trust (deepfakes and misinformation), and potential legal liability for harms caused by generated content.
Synthesis of technical papers on deepfakes, legal analyses of liability, and policy reports referenced in the review; no original incident dataset or quantitative prevalence estimate included.
Input-stage risks include copyright infringement, lack of consent, poor data provenance, and biases/representational harms encoded in training datasets.
Review and synthesis of academic and legal literature on training data issues; examples and case law discussed, but no original dataset audit or sample counts provided.
Use of these models faces significant ethical, control, transparency, and legal challenges across three stages—input (training data), process (development/control), and output (generated artifacts).
Framework constructed from interdisciplinary literature (technical, ethical, legal sources) and review of statutes/judicial approaches; qualitative synthesis rather than primary data.
High environmental constraints in many African regions (poor infrastructure, challenging geography, frequent climate shocks) materially affect logistics, resilience, and supply-chain performance.
Review of literature on infrastructure, geography, and climate impacts in the conceptual paper.
Africa is abundant in natural resources but captures relatively little development value from them, creating resource-allocation and value-capture problems relevant to OSCM.
Development economics and regional studies literature cited in the paper's synthesis; conceptual claim without new empirical testing.
Africa has a large informal economy and many informal organizations that shape supply-chain behavior and market functioning.
Literature synthesis citing development and institutional studies (no primary data collection in the paper).
Results reflect small-scale e-commerce use cases; external validity to larger firms, other sectors, or more complex tasks is not established.
Scope of deployments limited to small-scale e-commerce settings as stated in methods; no cross-sector or large-firm samples reported in summary.
The study's evidence is observational rather than randomized controlled trials, so causal estimates about productivity impacts are suggestive rather than definitive.
Declared study design: applied experimentation and observational analysis of deployments (no randomized assignment); methods section explicitly notes observational limitation.
High upfront costs, weak digital/physical infrastructure, limited access to credit, low digital literacy, insecure land tenure, and sociocultural factors (including gendered access) limit uptake of digital and precision technologies among smallholders.
Consistent findings across program evaluations, qualitative stakeholder interviews, participatory assessments, and case studies cited in the synthesis.
Limited access to capital, data, digital infrastructure, skills, and insecure land tenure reduce adoption rates for advanced innovations among smallholders.
Multiple empirical studies and program evaluations synthesized in the review documenting adoption barriers; policy review identifying structural constraints across regions.
Integrating AI raises questions of accountability, transparency, fairness, privacy, and bias; managerial responsibility includes governance design, validation, and audit of AI decisions.
Normative and governance-focused synthesis citing ethical frameworks and illustrative cases; identifies governance tasks and validation/audit needs rather than empirical prevalence rates.
Generated code can introduce security vulnerabilities.
Security analyses and code audits documenting examples where LLM-generated code contains known vulnerability patterns; incident-oriented case studies and controlled experiments assessing vulnerability incidence.
LLMs can produce plausible-looking but incorrect or insecure code (so-called 'hallucinations').
Benchmarks and controlled tests demonstrating incorrect outputs; security analyses and replicated examples showing erroneous or insecure snippets produced by LLMs across multiple models and prompts.
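To make the vulnerability point concrete with a second pattern, the sketch below (a hypothetical snippet, not from the cited studies) shows shell-string command construction, which permits command injection, next to the argument-list form that avoids it:

```python
# Illustrative snippet (not from the cited studies): shell-string command
# construction, which allows command injection, versus the argument-list form.
import subprocess

def archive_logs_unsafe(path_from_user: str):
    # A value like "logs; rm -rf ~" would run both commands through the shell.
    subprocess.run(f"tar czf backup.tgz {path_from_user}", shell=True, check=True)  # vulnerable

def archive_logs_safe(path_from_user: str):
    # Without a shell, the untrusted value stays a single literal argument.
    subprocess.run(["tar", "czf", "backup.tgz", path_from_user], check=True)
```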