Evidence (5539 claims)
- Adoption: 5539 claims
- Productivity: 4793 claims
- Governance: 4333 claims
- Human-AI Collaboration: 3326 claims
- Labor Markets: 2657 claims
- Innovation: 2510 claims
- Org Design: 2469 claims
- Skills & Training: 2017 claims
- Inequality: 1378 claims
Evidence Matrix
Claim counts by outcome category and direction of finding (— indicates zero). In several rows the four direction columns sum to less than the total, suggesting some claims carry an unclassified direction.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 402 | 112 | 67 | 480 | 1076 |
| Governance & Regulation | 402 | 192 | 122 | 62 | 790 |
| Research Productivity | 249 | 98 | 34 | 311 | 697 |
| Organizational Efficiency | 395 | 95 | 70 | 40 | 603 |
| Technology Adoption Rate | 321 | 126 | 73 | 39 | 564 |
| Firm Productivity | 306 | 39 | 70 | 12 | 432 |
| Output Quality | 256 | 66 | 25 | 28 | 375 |
| AI Safety & Ethics | 116 | 177 | 44 | 24 | 363 |
| Market Structure | 107 | 128 | 85 | 14 | 339 |
| Decision Quality | 177 | 76 | 38 | 20 | 315 |
| Fiscal & Macroeconomic | 89 | 58 | 33 | 22 | 209 |
| Employment Level | 77 | 34 | 80 | 9 | 202 |
| Skill Acquisition | 92 | 33 | 40 | 9 | 174 |
| Innovation Output | 120 | 12 | 23 | 12 | 168 |
| Firm Revenue | 98 | 34 | 22 | — | 154 |
| Consumer Welfare | 73 | 31 | 37 | 7 | 148 |
| Task Allocation | 84 | 16 | 33 | 7 | 140 |
| Inequality Measures | 25 | 77 | 32 | 5 | 139 |
| Regulatory Compliance | 54 | 63 | 13 | 3 | 133 |
| Error Rate | 44 | 51 | 6 | — | 101 |
| Task Completion Time | 88 | 5 | 4 | 3 | 100 |
| Training Effectiveness | 58 | 12 | 12 | 16 | 99 |
| Worker Satisfaction | 47 | 32 | 11 | 7 | 97 |
| Wages & Compensation | 53 | 15 | 20 | 5 | 93 |
| Team Performance | 47 | 12 | 15 | 7 | 82 |
| Automation Exposure | 24 | 22 | 9 | 6 | 62 |
| Job Displacement | 6 | 38 | 13 | — | 57 |
| Hiring & Recruitment | 41 | 4 | 6 | 3 | 54 |
| Developer Productivity | 34 | 4 | 3 | 1 | 42 |
| Social Protection | 22 | 10 | 6 | 2 | 40 |
| Creative Output | 16 | 7 | 5 | 1 | 29 |
| Labor Share of Income | 12 | 5 | 9 | — | 26 |
| Skill Obsolescence | 3 | 20 | 2 | — | 25 |
| Worker Turnover | 10 | 12 | — | 3 | 25 |
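Mechanically, the matrix above is a crosstab of claim records over outcome category and direction. A minimal sketch of how such a tally can be produced, using a handful of invented placeholder records rather than the real dataset:

```python
from collections import Counter

# Invented placeholder records; the real dataset behind the matrix is not shown.
claims = [
    ("Firm Productivity", "Positive"),
    ("Firm Productivity", "Mixed"),
    ("Error Rate", "Positive"),
    ("Error Rate", "Negative"),
    ("Job Displacement", "Negative"),
]

DIRECTIONS = ["Positive", "Negative", "Mixed", "Null"]

# Tally into a category x direction crosstab; Counter returns 0 for empty cells.
counts = Counter(claims)

for cat in sorted({c for c, _ in claims}):
    row = {d: counts[(cat, d)] for d in DIRECTIONS}
    print(cat, row, "total =", sum(row.values()))
```

Run over the full claim set, the same loop reproduces each row of the table, including the zero cells shown as dashes.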
Adoption
In Africa, AI is reshaping privacy debates, raising concerns about data sovereignty, cross-border data flows, and surveillance, and underscoring the need to tailor governance to local social, legal, and economic conditions.
Comparative analysis of national laws, draft regulations, regional instruments, and policy discussions, drawn from the growing set of African policy responses surveyed in the report.
Regulatory uncertainty and reputational risks from rights violations can distort investment and innovation incentives—either dampening responsible investment or encouraging regulatory arbitrage by firms favoring lax regimes.
Policy-document discourse analysis and theoretical argument about firm behavior under regulatory uncertainty; no firm-level investment data included.
National and industry narratives frame AI primarily as an engine of economic growth (aligned with the Golden Indonesia 2045 vision), a framing that can obscure structural risks such as algorithmic bias, surveillance, and data exploitation.
Discourse analysis of policy documents and industry statements showing recurrent growth-focused rhetoric linked to national development goals (Golden Indonesia 2045); theoretical interpretation that this framing sidelines risk discourse.
Synthetic data can reduce costs and logistical barriers of collecting large clinical datasets, lowering data-acquisition and privacy-compliance expenses, but high-fidelity synthetic generation and validation require upfront investment in modelling expertise and compute.
Economic and technical analyses synthesized from the reviewed literature and policy reports; assertions are based on cost components commonly discussed (data collection vs. modelling/compute) but the paper notes limited empirical economic evaluations in the literature.
Economic outcomes of healthcare AI depend critically on governance design: policies and technical architectures (e.g., federated learning, certification standards, tiered risk management) will determine whether mixed open/proprietary ecosystems yield broad welfare gains or entrench inequities and concentrated market power.
High-level economic reasoning and synthesis of empirical and theoretical literature on governance, market structure, and technology adoption; prescriptive conclusion based on aggregated evidence rather than causal testing within the paper.
Reliable, well-integrated AI may raise clinical productivity and shift labor toward higher-value tasks, but misaligned deployments risk increased administrative burden (e.g., appeals, oversight).
Mixed evidence from pilot studies, observational reports, and stakeholder feedback synthesized in the paper; heterogeneity across settings and limited long-term outcome data noted.
Proprietary models concentrate costs into vendor payments and may lower providers' internal operational burden.
Industry reports and economic synthesis comparing vendor-managed proprietary offerings with self-managed alternatives; based on reported vendor pricing models and operational roles.
Open-source lowers licensing fees but can shift costs toward in-house engineering, governance, and validation.
Cost-structure analyses and industry reports aggregated in the synthesis comparing licensing vs. internal operational costs across deployment models.
Open-source models show narrow but growing parity with proprietary models on some diagnostic tasks.
Synthesis of peer-reviewed comparative studies and benchmark reports indicating comparable diagnostic accuracy in limited tasks; authors note heterogeneity across studies and lack of long-term clinical trials.
Implementing strong transparency, explainability, and safety requirements increases initial compliance costs but builds trust and improves long-run adoption, avoiding costly recalls or litigation.
Regulatory economics argument supported by international precedents and literature cited in the review (comparisons to EU AI Act principles and other jurisdictions); this is a forward-looking policy-economic claim rather than a measured empirical result in Indonesia.
Firms can realize productivity gains from adopting LLMs, but net value depends on verification, security remediation, and IP-management costs.
Firm-level case studies and productivity measurements in the literature showing time savings but also nontrivial verification/remediation effort; synthesis emphasizes net effect conditional on costs.
Automation displaces some routine jobs but creates demand for roles in programming, data science, system maintenance, and higher‑order cognitive tasks.
Synthesis of labor‑market literature and sectoral case studies summarized in the review; relies on secondary empirical studies rather than new microdata analysis; sample sizes and study designs vary by referenced work.
Potential policy levers include mandatory provenance metadata, liability rules, taxes/subsidies to internalize harms, antitrust actions to limit concentration, and funding for public verification tools; each policy choice will shape incentives, innovation rates and market outcomes.
Policy options and scenario analysis summarized from legal/policy literature; presented as hypothetical levers rather than empirically tested interventions.
Economic returns may shift toward owners of data, model capacity and verification technology, while traditional creators may demand new compensation mechanisms (e.g., data-use royalties, collective licensing).
Conceptual economic analysis and synthesis of stakeholder- and rights-based literature in the narrative review.
Abundant synthetic media may erode the signaling value of standard digital content and create demand for authentication services, certification markets and premium 'human-made' labels.
Conceptual analysis grounded in signaling and market-for-authenticity literature reviewed in the paper (no primary WTP studies included).
Large productivity gains in content production could reduce marginal costs and compress prices for many creative goods, potentially displacing some human labor while raising demand for high-skill oversight, curation and novel creative inputs.
Economic reasoning and literature review on automation/productivity effects; no new empirical estimates presented (narrative inference).
Social acceptance is uncertain: some studies find that people rate AI-generated content as equal or superior to human-created content, while the proliferation of artificial media could also spur distrust or wholesale rejection of digital media.
Cited empirical studies on content perception and trust summarized in the narrative review (no primary data; exact sample sizes and studies vary by citation).
If consumers prefer AI-generated content, demand shifts could lower prices and increase consumption volume for certain media types; alternatively, trust erosion could reduce overall demand for digital content.
Reference to empirical studies with mixed results (paper notes 'some studies show higher ratings for AI content') and economic scenario modeling in the discussion; the paper does not report sample sizes or meta-analytic statistics.
Ambiguities in copyright and dataset licensing will affect value capture (original creators versus model operators) and may create new rent opportunities from provenance/authentication services or certified 'human-made' labels.
Legal and economic literature synthesized in the review, plus policy discussion; no empirical royalty or rent-share data provided.
Generative audiovisual models pose displacement risk for creative and production roles, but also create demand for new skills (prompt engineering, curation, verification) and complementarities in oversight and post-production.
Economic argumentation and citations to labor-impact literature and case examples in the review; no original labor-market empirical study or sample statistics provided.
Rapid population growth and large informal labor pools in Africa provide settings to study long-run labor reallocation under AI adoption, wage dynamics, and skill-biased technological change where formal schooling is limited.
Theoretical argument drawing on demographic and labor-economics literature as presented in the paper.
Socio-cultural diversity and data sparsity in Africa create challenges and opportunities for fairness-aware machine learning and external validity testing of AI economic models across population subgroups.
Argumentative synthesis connecting diversity/data limitations with ML fairness literature.
Managing factor market rivalry (competition for labor, land, and capital amid informality) is an OSCM-relevant phenomenon that African contexts can illuminate.
Synthesis of labor and land market literature within the paper's conceptual framework.
Africa’s population growth potential and demographic dynamics are important contextual factors for OSCM research and long-run labor market outcomes.
Summarized demographic literature within the conceptual review (no primary demographic data analysis).
Traditional and survival-oriented cultures in parts of Africa influence firm and household decision-making relevant to OSCM.
Theoretical synthesis and references to regional social-science literature (no primary data).
Socio-cultural diversity and complexity across African contexts significantly affect OSCM phenomena (e.g., demand heterogeneity, governance norms).
Conceptual review of cross-disciplinary literature; no new empirical analysis.
Africa’s distinctive contextual features (large informal economy, socio-cultural diversity, weak formal institutions, abundant but underutilized resources, and high environmental constraints) create unique operations and supply chain management (OSCM) phenomena that both challenge existing OSCM theory and offer fertile ground for novel theoretical contributions.
Conceptual synthesis and literature review across OSCM, development studies, institutional economics, and regional studies; no primary empirical data collected in this paper.
Adoption outcomes are shaped not only by technology and costs but also by customer perceptions, worker acceptance, and managerial actions; thus stakeholder-centered strategies are needed for successful deployment.
Synthesis of theoretical results from the evolutionary game (Essay 2) and the differentiated competition framework (Essay 1), supported by simulation experiments highlighting the role of perceptions and incentives. This is an interpreted conclusion rather than a direct empirical finding.
Adoption likelihood is sensitive to initial conditions and to parameters such as employee sensitivity to robots, training costs, perceived risks, marketing influence, and labor efficiency.
Sensitivity analysis within the MATLAB simulations varying parameters (training costs, perceived risk, marketing strength, labor efficiency) and initial states; evolutionary game theoretical structure showing path dependence.
Stakeholder attitudes toward AI service robots evolve strategically; widespread positive adoption requires favorable initial conditions and appropriate incentives (e.g., lower training costs, higher labor efficiency, effective marketing).
Analytical framework: three-player evolutionary game theory modeling hotel owners, employees, and customers; computational evidence from MATLAB simulations and sensitivity analysis that vary initial states and parameters (training costs, perceived risks, marketing strength, labor efficiency) to map dynamic trajectories and basins of attraction.
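As a rough illustration of the dynamic just described, the sketch below integrates replicator dynamics for three interacting populations (owners, employees, customers). The payoff structure and parameter values are invented placeholders standing in for the training-cost, perceived-risk, marketing, and labor-efficiency parameters, not the dissertation's MATLAB model:

```python
# Invented placeholder parameters: training cost, perceived risk,
# marketing strength, labor efficiency.
c_t, r, m, e = 0.3, 0.2, 0.5, 0.8

def step(x, y, z, dt=0.01):
    """One Euler step of three-population replicator dynamics.
    x: share of owners adopting, y: employees supporting, z: customers accepting."""
    # Illustrative payoff advantages of adopting/supporting/accepting.
    dx = x * (1 - x) * (e * y + m * z - c_t)
    dy = y * (1 - y) * (e * x - c_t)
    dz = z * (1 - z) * (m * x - r)
    return x + dt * dx, y + dt * dy, z + dt * dz

# Favorable initial conditions: trajectories converge toward full adoption.
x, y, z = 0.6, 0.5, 0.5
for _ in range(5000):
    x, y, z = step(x, y, z)
print(round(x, 2), round(y, 2), round(z, 2))
```

The toy system is path dependent in the way the sensitivity analysis describes: starting all three shares at 0.1 instead sends it to the rejection equilibrium, and raising the training-cost parameter shrinks the basin of attraction of adoption.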
AI adoption has a U-shaped effect on hospitality firm profit: short-term costs and adjustment can reduce profits, while longer-term gains from differentiation and productivity raise profits.
Combined theoretical analysis (differentiated Bertrand competition model incorporating demand-side differentiation and productivity mechanisms) and an empirical firm-level analysis reported in the dissertation that links AI adoption measures to profit, demand, and productivity indicators. (Sample size and specific datasets not reported in the provided summary.)
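The demand-side mechanism can be made concrete with a textbook differentiated Bertrand duopoly; the sketch below treats AI adoption as a marginal-cost reduction for one firm. All parameters are invented, and the short-run adjustment costs that generate the U-shape would be netted off the adopter's profit in a dynamic version:

```python
# Invented demand and cost parameters for a textbook differentiated
# Bertrand duopoly with linear demand q_i = a - p_i + b * p_j.
def bertrand_profits(a=10.0, b=0.5, c1=2.0, c2=2.0):
    """Closed-form Nash equilibrium prices and the resulting profits."""
    # Best responses p_i = (a + c_i + b * p_j) / 2, solved simultaneously.
    p1 = (2 * (a + c1) + b * (a + c2)) / (4 - b * b)
    p2 = (2 * (a + c2) + b * (a + c1)) / (4 - b * b)
    q1 = a - p1 + b * p2
    q2 = a - p2 + b * p1
    return (p1 - c1) * q1, (p2 - c2) * q2

# AI adoption modeled as a marginal-cost cut for firm 1.
base, _ = bertrand_profits()
adopted, rival = bertrand_profits(c1=1.0)
print(f"pre-adoption profit {base:.1f}, adopter {adopted:.1f}, rival {rival:.1f}")
```

Under these placeholder values the adopter's equilibrium profit rises (from 36.0 to about 41.8) while the rival's falls, capturing the long-run gain side of the U-shape.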
Adoption frictions—integration costs, data access, reliability, and regulatory compliance—may slow diffusion of AI agents and create heterogeneity in economic value across firms and sectors.
Theoretical implication supported by observed orchestration and governance challenges in deployments; recommendation/interpretation rather than direct causal measurement.
Implementation heterogeneity (how guardrails, human oversight, and orchestration are configured) likely drives outcome variation across deployments.
Observed heterogeneity in Alfred AI deployments and stated limitation that configuration differences affect outcomes; based on deployment comparisons and qualitative analysis (sample size/configurations unspecified).
Net productivity gains may be smaller once indirect costs—governance, monitoring, error-correction, orchestration—are accounted for; standard productivity accounting should include these costs.
Conceptual argument supported by observational documentation of governance and monitoring burdens in deployments; no precise cost accounting reported in summary.
Autonomous agents are likely to substitute for routine, structured cognitive tasks while complementing higher-level managerial and strategic tasks, accelerating task reallocation within firms.
Synthesis of prior literature (generative AI productivity findings) and observational deployment patterns from Alfred AI indicating substitution of routine tasks and continued human involvement in oversight/strategy.
Realized productivity gains from AI agents are materially constrained by governance complexity, model reliability limits (errors, hallucinations, edge cases), orchestration challenges across tools/data/human teams, and continued need for human-in-the-loop oversight.
Qualitative operational impacts and deployment observations from Alfred AI implementations, documented frictions in policies, safety constraints, error handling, and orchestration; evidence drawn from observational deployments and operational logs.
AI‑enabled risk assessment (weather, pests, price forecasts) can improve index insurance and credit scoring for smallholders, lowering financing costs and increasing investment — but it also raises concerns about data bias and exclusion.
Pilot programs and modeling studies on index insurance and credit scoring, combined with policy analyses documenting equity and bias risks; primary empirical work is limited to pilots and simulations.
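For concreteness, index insurance pays out on an observable index (e.g., rainfall) rather than assessed losses, which is what makes AI-based weather, pest, and price forecasts useful for pricing it. A minimal sketch of a standard linear payout rule, with invented trigger, exit, and sum-insured values:

```python
# Invented trigger/exit/sum-insured values for a rainfall-index contract.
def index_payout(rainfall_mm, trigger=100.0, exit_level=40.0, sum_insured=500.0):
    """Linear payout between trigger and exit: no payout at or above the
    trigger, full sum insured at or below the exit level."""
    if rainfall_mm >= trigger:
        return 0.0
    if rainfall_mm <= exit_level:
        return sum_insured
    return sum_insured * (trigger - rainfall_mm) / (trigger - exit_level)

print(index_payout(120))  # above trigger -> 0.0
print(index_payout(70))   # halfway between trigger and exit -> 250.0
print(index_payout(30))   # below exit -> 500.0
```

Because payouts track the index rather than actual losses, a mispriced or biased index translates directly into the basis-risk and exclusion concerns the claim notes.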
Returns to AI investments depend on complementary investments in farmer knowledge, extension services, and local institutions; AI tends to amplify returns to managerial skills and digital literacy.
Empirical studies and randomized/quasi‑experimental trials showing complementarity effects, and qualitative evidence from stakeholder interviews; cited studies report larger impacts where complementary services exist.
Impacts of technology‑ecology integration are heterogeneous: they vary by farm size, crop type, local infrastructure, and farmer skills; smallholders can benefit substantially but are more constrained by liquidity, information, and market access.
Observational econometric analyses and randomized/quasi‑experimental studies reporting heterogeneous treatment effects, supplemented by qualitative interviews and case studies documenting constraints faced by smallholders.
Data governance, platform market structure, and inclusive policy design determine whether gains from AI/IoT are widely shared or captured by large firms.
Policy review, conceptual analysis, and case studies of platform markets that document capture risks and distributional outcomes linked to data ownership and market concentration.
Innovations can reduce emissions and resource use per unit of output but risk lock‑in to input‑heavy models unless ecological principles and monitoring are integrated.
Case study and pilot evidence showing reduced input intensity or emissions intensity in some interventions; conceptual discussion and examples highlighting trade‑offs and potential for input‑intensive lock‑in absent ecological safeguards.
Labor-market consequences will involve reallocation effects: routine-task automation, rising returns to managerial and technical skills, and potential within-firm wage dispersion.
Synthesis of labor economics theory and prior empirical work on automation; book recommends matched employer-employee panel studies to trace these effects but does not report such new panel results.
AI’s effects vary by industry, task composition, and firm capabilities; high-data, standardized-task sectors see faster, deeper impacts.
Cross-sector examples and theoretical arguments about task routineness and data intensity; calls for heterogeneity-aware empirical designs (e.g., difference-in-differences with staggered adoption).
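The recommended difference-in-differences design with staggered adoption can be sketched on simulated data; everything here (panel size, adoption dates, the true effect tau) is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated firm-by-period panel with staggered AI adoption and true effect tau.
n_firms, n_periods, tau = 50, 10, 2.0
adopt_period = rng.integers(3, 9, size=n_firms)   # staggered adoption dates
firm_fe = rng.normal(0, 1, n_firms)
time_fe = np.linspace(0, 1, n_periods)

rows = []
for i in range(n_firms):
    for t in range(n_periods):
        treated = float(t >= adopt_period[i])
        y = firm_fe[i] + time_fe[t] + tau * treated + rng.normal(0, 0.5)
        rows.append((i, t, treated, y))

i_idx, t_idx, D, y = map(np.array, zip(*rows))

# Two-way fixed-effects OLS: regress y on treatment plus firm and time dummies.
X = np.column_stack([
    D,
    (i_idx[:, None] == np.arange(n_firms)).astype(float),
    (t_idx[:, None] == np.arange(n_periods)).astype(float),
])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"estimated treatment effect: {beta[0]:.2f}")  # close to tau = 2.0
```

With heterogeneous or dynamic treatment effects, this plain two-way fixed-effects estimator can be biased under staggered adoption, which is one reason heterogeneity-aware designs are emphasized.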
Automation of routine tasks raises demand for cognitive, interpersonal, and technical skills; firms face reskilling needs and changing task allocation between humans and machines.
Task-level analytic framework and literature review on automation effects; book recommends empirical approaches (e.g., occupation and job-task data) to quantify these changes but does not present a single large empirical estimate.
Managers shift from routine decision execution to tasks involving oversight, interpretation, strategic design, and ethical stewardship of AI systems.
Qualitative case studies and literature review of task-level research; suggested task-analytic methods rather than reporting a specific empirical task dataset.
AI complements some researcher tasks (idea generation, analysis, writing) and substitutes for others (routine editing, literature searches), changing skill demand and training priorities.
Stated under Labor Market Effects. Supported conceptually and likely by task-level studies or surveys; abstract doesn't cite specific empirical evidence or measurement details.
Impacts of AI adoption are broad, affecting individual researcher productivity, team workflows, and institutional outcomes in scholarly communication and digital scholarship.
Key Points summary. Basis likely includes mixed-methods evidence (surveys/interviews at individual and team levels, case studies, platform usage data) synthesized in the paper; abstract lacks detail on scope and samples.
Routine, boilerplate, and debugging tasks are most automatable or complemented by LLMs, shifting value toward design, verification, and systems thinking.
Task-level analyses, observational studies, and synthesized findings showing larger gains on repetitive or templated tasks versus high-level design tasks.
Liability and intellectual-property ownership around AI-assisted code are unresolved practical and legal concerns.
Legal and policy analyses, practitioner reports, and qualitative interviews noting ambiguous legal frameworks and unresolved questions about ownership and liability for AI-assisted code.
A robust empirical pattern in the literature is that AI’s effects vary by skill level: displacement risk is concentrated among lower-skilled tasks while augmentation and wage gains are more likely for higher-skilled tasks.
Empirical findings and syntheses cited (Brynjolfsson et al., 2023; Chen et al., 2024) that report task- and skill-differentiated effects on employment and wages; evidence comprises cross-sectional exposure analyses and panel studies in the cited literature.