Evidence (4333 claims)
- Adoption (5539 claims)
- Productivity (4793 claims)
- Governance (4333 claims)
- Human-AI Collaboration (3326 claims)
- Labor Markets (2657 claims)
- Innovation (2510 claims)
- Org Design (2469 claims)
- Skills & Training (2017 claims)
- Inequality (1378 claims)
Evidence Matrix
Claim counts by outcome category and direction of finding. Row totals can exceed the sum of the listed directions where some claims could not be assigned a direction.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 402 | 112 | 67 | 480 | 1076 |
| Governance & Regulation | 402 | 192 | 122 | 62 | 790 |
| Research Productivity | 249 | 98 | 34 | 311 | 697 |
| Organizational Efficiency | 395 | 95 | 70 | 40 | 603 |
| Technology Adoption Rate | 321 | 126 | 73 | 39 | 564 |
| Firm Productivity | 306 | 39 | 70 | 12 | 432 |
| Output Quality | 256 | 66 | 25 | 28 | 375 |
| AI Safety & Ethics | 116 | 177 | 44 | 24 | 363 |
| Market Structure | 107 | 128 | 85 | 14 | 339 |
| Decision Quality | 177 | 76 | 38 | 20 | 315 |
| Fiscal & Macroeconomic | 89 | 58 | 33 | 22 | 209 |
| Employment Level | 77 | 34 | 80 | 9 | 202 |
| Skill Acquisition | 92 | 33 | 40 | 9 | 174 |
| Innovation Output | 120 | 12 | 23 | 12 | 168 |
| Firm Revenue | 98 | 34 | 22 | — | 154 |
| Consumer Welfare | 73 | 31 | 37 | 7 | 148 |
| Task Allocation | 84 | 16 | 33 | 7 | 140 |
| Inequality Measures | 25 | 77 | 32 | 5 | 139 |
| Regulatory Compliance | 54 | 63 | 13 | 3 | 133 |
| Error Rate | 44 | 51 | 6 | — | 101 |
| Task Completion Time | 88 | 5 | 4 | 3 | 100 |
| Training Effectiveness | 58 | 12 | 12 | 16 | 99 |
| Worker Satisfaction | 47 | 32 | 11 | 7 | 97 |
| Wages & Compensation | 53 | 15 | 20 | 5 | 93 |
| Team Performance | 47 | 12 | 15 | 7 | 82 |
| Automation Exposure | 24 | 22 | 9 | 6 | 62 |
| Job Displacement | 6 | 38 | 13 | — | 57 |
| Hiring & Recruitment | 41 | 4 | 6 | 3 | 54 |
| Developer Productivity | 34 | 4 | 3 | 1 | 42 |
| Social Protection | 22 | 10 | 6 | 2 | 40 |
| Creative Output | 16 | 7 | 5 | 1 | 29 |
| Labor Share of Income | 12 | 5 | 9 | — | 26 |
| Skill Obsolescence | 3 | 20 | 2 | — | 25 |
| Worker Turnover | 10 | 12 | — | 3 | 25 |
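As an illustration (not part of the review itself), direction shares implied by any row of the matrix can be computed directly from the counts; a minimal Python sketch using three rows copied from the table, with "—" cells treated as zero:

```python
# Direction counts per outcome, copied from the Evidence Matrix above.
# Rows shown here are examples; "—" cells are treated as zero.
matrix = {
    "Firm Productivity": {"positive": 306, "negative": 39, "mixed": 70, "null": 12},
    "Job Displacement": {"positive": 6, "negative": 38, "mixed": 13, "null": 0},
    "Task Completion Time": {"positive": 88, "negative": 5, "mixed": 4, "null": 3},
}

def positive_share(counts: dict) -> float:
    """Share of claims with a positive direction, over all classified claims."""
    total = sum(counts.values())
    return counts["positive"] / total if total else 0.0

for outcome, counts in matrix.items():
    print(f"{outcome}: {positive_share(counts):.0%} positive")
```

Note that shares computed this way use only the classified directions, so they may differ slightly from shares based on the table's printed row totals.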
Governance
- Interoperability and ethical‑by‑design requirements influence vendor lock‑in, competition, and the emergence of platform providers in markets for public‑sector AI solutions.
  - Evidence: Policy and market analyses within the reviewed literature that link technical standards and ethical design requirements to market structure and vendor dynamics (synthesized from the 103 items).
- Predictive analytics and AI enable anticipatory policy design (early intervention, forecasting), but they raise normative and governance questions about acceptable levels of prediction‑driven intervention.
  - Evidence: Thematic findings from the review's mapping of predictive analytics use cases and accompanying ethical/governance discussions across the 103‑item corpus.
- Human–AI interaction issues—such as automation bias and shifting public servant roles—affect decision quality and legitimacy, creating a need for human‑in‑the‑loop processes.
  - Evidence: Multiple empirical and theoretical contributions in the reviewed literature identified automation bias and role shifts; the recommendation for human‑in‑the‑loop processes emerges from a synthesis of these studies.
- Legal frameworks like the EU GDPR provide a useful normative benchmark, but their protections do not automatically translate across jurisdictions; cross‑border research encounters gaps and asymmetries in enforcement and rights.
  - Evidence: Normative and legal analysis contrasting GDPR principles with the Chilean/regional regulatory context and observed cross‑border data flow practices in the case study.
- State-level divergence in AI-related regulation will create geographic heterogeneity in adoption costs and labor protections, potentially inducing firm and worker sorting across states and making national inference about AI’s effects more difficult.
  - Evidence: Comparative policy review across states described in the commentary; an inferential claim without presented empirical migration or firm-location data.
- Regulatory uncertainty (rollbacks and a patchwork of rules) can raise compliance and political-risk costs, causing some firms to accelerate private governance and self-regulation while others delay investment or relocate activities.
  - Evidence: Theoretical and policy reasoning based on a review of regulatory signals and the firm-behavior literature; no empirical firm-level study or sample provided in the commentary.
- Regulatory volatility and fragmentation will shape firms’ AI investment decisions, workplace practices (surveillance, task allocation), and the distributional consequences of AI for wages, employment, and bargaining power.
  - Evidence: Analytic synthesis linking observed policy instability and the jurisdictional patchwork to likely firm responses and labor-market outcomes; conceptual inference rather than causal empirical evidence.
- Standards, certification, and accountability mechanisms reduce information asymmetries and can unlock markets for 'trustworthy' AI, but they impose compliance costs that may slow diffusion—especially for smaller firms and low-income countries.
  - Evidence: Economic and policy analysis discussing trade-offs between market signals and regulatory compliance burdens; a synthesis of observed and potential impacts across jurisdictions.
- In healthcare, AI can improve diagnostics and reduce costs, but liability rules, data-sharing frameworks, and equity of access will determine welfare outcomes.
  - Evidence: Healthcare case studies, literature on medical AI deployments, and policy analysis of legal/regulatory determinants; no large-scale empirical welfare estimates in the report.
- In financial services, algorithmic credit scoring and automated trading can improve access and efficiency but also concentrate risk and create systemic vulnerabilities.
  - Evidence: Sectoral case studies and literature reviewed in the report; regulatory discussion recommending a balance between innovation (e.g., sandboxes) and prudential safeguards.
- Privacy rules and data localization can alter data market frictions, raise compliance costs, and affect cross-border services and trade.
  - Evidence: Comparative policy analysis of privacy and data localization proposals and economic reasoning about trade and compliance costs; no primary trade-impact quantification provided.
- Automation risks vary by task and sector; policies should prioritize reskilling, lifelong learning, and sectoral training programs to mitigate displacement and capture productivity gains.
  - Evidence: Literature review and sectoral case studies highlighting heterogeneous automation exposure by task and sector; policy analysis recommending workforce interventions.
- In Africa, AI is reshaping privacy debates around data sovereignty, cross-border flows, surveillance, and the need to tailor governance to local social, legal, and economic conditions.
  - Evidence: Comparative analysis of national laws, draft regulations, regional instruments, and policy discussions from a growing set of African policy responses presented in the report.
- Regulatory uncertainty and reputational risks from rights violations can distort investment and innovation incentives—either dampening responsible investment or encouraging regulatory arbitrage by firms favoring lax regimes.
  - Evidence: Policy-document discourse analysis and theoretical argument about firm behavior under regulatory uncertainty; no firm-level investment data included.
- National and industry narratives frame AI primarily as an engine of economic growth (aligned with the Golden Indonesia 2045 vision), a framing that can obscure structural risks such as algorithmic bias, surveillance, and data exploitation.
  - Evidence: Discourse analysis of policy documents and industry statements showing recurrent growth-focused rhetoric linked to national development goals; theoretical interpretation that this framing sidelines risk discourse.
- Synthetic data can reduce the costs and logistical barriers of collecting large clinical datasets, lowering data-acquisition and privacy-compliance expenses, but high-fidelity synthetic generation and validation require upfront investment in modelling expertise and compute.
  - Evidence: Economic and technical analyses synthesized from the reviewed literature and policy reports; assertions are based on commonly discussed cost components (data collection vs. modelling/compute), and the paper notes limited empirical economic evaluations in the literature.
- Economic outcomes of healthcare AI depend critically on governance design: policies and technical architectures (e.g., federated learning, certification standards, tiered risk management) will determine whether mixed open/proprietary ecosystems yield broad welfare gains or entrench inequities and concentrated market power.
  - Evidence: High-level economic reasoning and a synthesis of empirical and theoretical literature on governance, market structure, and technology adoption; a prescriptive conclusion based on aggregated evidence rather than causal testing within the paper.
- Reliable, well-integrated AI may raise clinical productivity and shift labor toward higher-value tasks, but misaligned deployments risk increased administrative burden (e.g., appeals, oversight).
  - Evidence: Mixed evidence from pilot studies, observational reports, and stakeholder feedback synthesized in the paper; heterogeneity across settings and limited long-term outcome data noted.
- Proprietary models concentrate costs into vendor payments and can lower internal operational burden for providers.
  - Evidence: Industry reports and economic synthesis comparing vendor-managed proprietary offerings with self-managed alternatives; based on reported vendor pricing models and operational roles.
- Open-source models lower licensing fees but can shift costs toward in-house engineering, governance, and validation.
  - Evidence: Cost-structure analyses and industry reports aggregated in the synthesis, comparing licensing against internal operational costs across deployment models.
- Open-source models are approaching parity with proprietary models on a narrow but growing set of diagnostic tasks.
  - Evidence: Synthesis of peer-reviewed comparative studies and benchmark reports indicating comparable diagnostic accuracy on limited tasks; the authors note heterogeneity across studies and a lack of long-term clinical trials.
- Implementing strong transparency, explainability, and safety requirements increases initial compliance costs but builds trust and improves long-run adoption, avoiding costly recalls or litigation.
  - Evidence: Regulatory-economics argument supported by international precedents and literature cited in the review (comparisons to EU AI Act principles and other jurisdictions); a forward-looking policy-economic claim rather than a measured empirical result in Indonesia.
- Firms can realize productivity gains from adopting LLMs, but net value depends on verification, security remediation, and IP-management costs.
  - Evidence: Firm-level case studies and productivity measurements in the literature showing time savings but also nontrivial verification/remediation effort; the synthesis emphasizes that the net effect is conditional on these costs.
- Automation displaces some routine jobs but creates demand for roles in programming, data science, system maintenance, and higher‑order cognitive tasks.
  - Evidence: Synthesis of labor‑market literature and sectoral case studies summarized in the review; relies on secondary empirical studies rather than new microdata analysis, with sample sizes and study designs varying by referenced work.
- Potential policy levers include mandatory provenance metadata, liability rules, taxes/subsidies to internalize harms, antitrust actions to limit concentration, and funding for public verification tools; each choice will shape incentives, innovation rates, and market outcomes.
  - Evidence: Policy options and scenario analysis summarized from the legal/policy literature; presented as hypothetical levers rather than empirically tested interventions.
- Economic returns may shift toward owners of data, model capacity, and verification technology, while traditional creators may demand new compensation mechanisms (e.g., data-use royalties, collective licensing).
  - Evidence: Conceptual economic analysis and synthesis of stakeholder- and rights-based literature in the narrative review.
- Abundant synthetic media may erode the signaling value of standard digital content and create demand for authentication services, certification markets, and premium 'human-made' labels.
  - Evidence: Conceptual analysis grounded in the signaling and market-for-authenticity literature reviewed in the paper (no primary willingness-to-pay studies included).
- Large productivity gains in content production could reduce marginal costs and compress prices for many creative goods, potentially displacing some human labor while raising demand for high-skill oversight, curation, and novel creative inputs.
  - Evidence: Economic reasoning and literature review on automation/productivity effects; no new empirical estimates presented (narrative inference).
- Social acceptance is uncertain: some studies find that people rate AI-generated content as equal or superior to human-created content, while the proliferation of artificial media could also spur distrust or rejection of digital media.
  - Evidence: Cited empirical studies on content perception and trust summarized in the narrative review (no primary data; exact sample sizes and studies vary by citation).
- If consumers prefer AI-generated content, demand shifts could lower prices and increase consumption volume for certain media types; alternatively, trust erosion could reduce overall demand for digital content.
  - Evidence: Reference to empirical studies with mixed results (the paper notes 'some studies show higher ratings for AI content') and economic scenario modeling in the discussion; the paper does not report sample sizes or meta-analytic statistics.
- Ambiguities in copyright and dataset licensing will affect value capture (original creators versus model operators) and may create new rent opportunities from provenance/authentication services or certified 'human-made' labels.
  - Evidence: Legal and economic literature synthesized in the review, plus policy discussion; no empirical royalty or rent-share data provided.
- Generative audiovisual models pose displacement risk for creative and production roles, but also create demand for new skills (prompt engineering, curation, verification) and complementarities in oversight and post-production.
  - Evidence: Economic argumentation and citations to labor-impact literature and case examples in the review; no original labor-market empirical study or sample statistics provided.
- Rapid population growth and large informal labor pools in Africa provide settings to study long-run labor reallocation under AI adoption, wage dynamics, and skill-biased technological change where formal schooling is limited.
  - Evidence: Theoretical argument drawing on demographic and labor-economics literature as presented in the paper.
- Socio-cultural diversity and data sparsity in Africa create challenges and opportunities for fairness-aware machine learning and external-validity testing of AI economic models across population subgroups.
  - Evidence: Argumentative synthesis connecting diversity and data limitations with the ML fairness literature.
- Managing factor-market rivalry (competition for labor, land, and capital amid informality) is a phenomenon relevant to operations and supply chain management (OSCM) that African contexts can illuminate.
  - Evidence: Synthesis of labor and land market literature within the paper's conceptual framework.
- Africa’s population growth potential and demographic dynamics are important contextual factors for OSCM research and long-run labor-market outcomes.
  - Evidence: Summarized demographic literature within the conceptual review (no primary demographic data analysis).
- Traditional and survival-oriented cultures in parts of Africa influence firm and household decision-making relevant to OSCM.
  - Evidence: Theoretical synthesis and references to regional social-science literature (no primary data).
- Socio-cultural diversity and complexity across African contexts significantly affect OSCM phenomena (e.g., demand heterogeneity, governance norms).
  - Evidence: Conceptual review of cross-disciplinary literature; no new empirical analysis.
- Africa’s distinctive contextual features (a large informal economy, socio-cultural diversity, weak formal institutions, abundant but underutilized resources, and high environmental constraints) create unique OSCM phenomena that both challenge existing OSCM theory and offer fertile ground for novel theoretical contributions.
  - Evidence: Conceptual synthesis and literature review across OSCM, development studies, institutional economics, and regional studies; no primary empirical data collected in this paper.
- The effectiveness and safety of AI agents require structured guardrails and human-in-the-loop designs; AI agents function as scalable cognitive infrastructure only conditional on such governance.
  - Evidence: Synthesis of deployment experience and analysis of constraints; the recommendation is grounded in observed model reliability issues, governance complexity, and oversight needs from the Alfred AI experiments.
- Deployment of AI agents shifts demand toward roles focused on oversight, orchestration, prompt/agent engineering, and governance, creating new types of labor that may offset some direct labor reductions.
  - Evidence: The authors' inference based on the observed need for human oversight and orchestration in deployments; not quantitatively measured in the study (no headcount or labor-share data reported).
- AI‑enabled risk assessment (weather, pests, price forecasts) can improve index insurance and credit scoring for smallholders, lowering financing costs and increasing investment — but it also raises concerns about data bias and exclusion.
  - Evidence: Pilot programs and modeling studies on index insurance and credit scoring, combined with policy analyses documenting equity and bias risks; primary empirical work is limited to pilots and simulations.
- Returns to AI investments depend on complementary investments in farmer knowledge, extension services, and local institutions; AI tends to amplify returns to managerial skills and digital literacy.
  - Evidence: Empirical studies and randomized/quasi‑experimental trials showing complementarity effects, plus qualitative evidence from stakeholder interviews; cited studies report larger impacts where complementary services exist.
- Impacts of technology‑ecology integration are heterogeneous: they vary by farm size, crop type, local infrastructure, and farmer skills; smallholders can benefit substantially but are more constrained by liquidity, information, and market access.
  - Evidence: Observational econometric analyses and randomized/quasi‑experimental studies reporting heterogeneous treatment effects, supplemented by qualitative interviews and case studies documenting constraints faced by smallholders.
- Data governance, platform market structure, and inclusive policy design determine whether gains from AI/IoT are widely shared or captured by large firms.
  - Evidence: Policy review, conceptual analysis, and case studies of platform markets that document capture risks and distributional outcomes linked to data ownership and market concentration.
- Innovations can reduce emissions and resource use per unit of output but risk lock‑in to input‑heavy models unless ecological principles and monitoring are integrated.
  - Evidence: Case-study and pilot evidence showing reduced input or emissions intensity in some interventions; conceptual discussion and examples highlighting trade‑offs and the potential for input‑intensive lock‑in absent ecological safeguards.
- Labor-market consequences will involve reallocation effects: routine-task automation, rising returns to managerial and technical skills, and potential within-firm wage dispersion.
  - Evidence: Synthesis of labor-economics theory and prior empirical work on automation; the book recommends matched employer-employee panel studies to trace these effects but does not report such new panel results.
- AI’s effects vary by industry, task composition, and firm capabilities; high-data, standardized-task sectors see faster, deeper impacts.
  - Evidence: Cross-sector examples and theoretical arguments about task routineness and data intensity; calls for heterogeneity-aware empirical designs (e.g., difference-in-differences with staggered adoption).
- Automation of routine tasks raises demand for cognitive, interpersonal, and technical skills; firms face reskilling needs and changing task allocation between humans and machines.
  - Evidence: Task-level analytic framework and literature review on automation effects; the book recommends empirical approaches (e.g., occupation and job-task data) to quantify these changes but does not present a single large empirical estimate.
- Managers shift from routine decision execution to tasks involving oversight, interpretation, strategic design, and ethical stewardship of AI systems.
  - Evidence: Qualitative case studies and literature review of task-level research; suggests task-analytic methods rather than reporting a specific empirical task dataset.