Evidence (1920 claims)

Claims by topic:

- Adoption: 5227
- Productivity: 4503
- Governance: 4100
- Human-AI Collaboration: 3062
- Labor Markets: 2480
- Innovation: 2320
- Org Design: 2305
- Skills & Training: 1920
- Inequality: 1311
Evidence Matrix
Claim counts by outcome category and direction of finding. Some row totals exceed the sum of the four directions shown, suggesting additional direction labels not broken out here.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 373 | 105 | 59 | 439 | 984 |
| Governance & Regulation | 366 | 172 | 115 | 55 | 718 |
| Research Productivity | 237 | 95 | 34 | 294 | 664 |
| Organizational Efficiency | 364 | 82 | 62 | 34 | 545 |
| Technology Adoption Rate | 293 | 118 | 66 | 30 | 511 |
| Firm Productivity | 274 | 33 | 68 | 10 | 390 |
| AI Safety & Ethics | 117 | 178 | 44 | 24 | 365 |
| Output Quality | 231 | 61 | 23 | 25 | 340 |
| Market Structure | 107 | 123 | 85 | 14 | 334 |
| Decision Quality | 158 | 68 | 33 | 17 | 279 |
| Fiscal & Macroeconomic | 75 | 52 | 32 | 21 | 187 |
| Employment Level | 70 | 32 | 74 | 8 | 186 |
| Skill Acquisition | 88 | 31 | 38 | 9 | 166 |
| Firm Revenue | 96 | 34 | 22 | — | 152 |
| Innovation Output | 105 | 12 | 21 | 11 | 150 |
| Consumer Welfare | 68 | 29 | 35 | 7 | 139 |
| Regulatory Compliance | 52 | 61 | 13 | 3 | 129 |
| Inequality Measures | 24 | 68 | 31 | 4 | 127 |
| Task Allocation | 71 | 10 | 29 | 6 | 116 |
| Worker Satisfaction | 46 | 38 | 12 | 9 | 105 |
| Error Rate | 42 | 47 | 6 | — | 95 |
| Training Effectiveness | 55 | 12 | 11 | 16 | 94 |
| Task Completion Time | 76 | 5 | 4 | 2 | 87 |
| Wages & Compensation | 46 | 13 | 19 | 5 | 83 |
| Team Performance | 44 | 9 | 15 | 7 | 76 |
| Hiring & Recruitment | 39 | 4 | 6 | 3 | 52 |
| Automation Exposure | 18 | 16 | 9 | 5 | 48 |
| Job Displacement | 5 | 29 | 12 | — | 46 |
| Social Protection | 19 | 8 | 6 | 1 | 34 |
| Developer Productivity | 27 | 2 | 3 | 1 | 33 |
| Worker Turnover | 10 | 12 | — | 3 | 25 |
| Creative Output | 15 | 5 | 3 | 1 | 24 |
| Skill Obsolescence | 3 | 18 | 2 | — | 23 |
| Labor Share of Income | 8 | 4 | 9 | — | 21 |
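The direction shares implicit in the matrix can be computed mechanically. A minimal sketch, using counts copied from three rows above; the denominators are the sum of the four listed directions, so rows whose stated totals are larger will differ slightly:

```python
# Minimal sketch: direction shares for a few rows of the evidence matrix.
# Counts are copied from the table above; denominators are the sum of the
# four listed directions (some stated row totals are slightly larger).
rows = {
    # outcome: (positive, negative, mixed, null)
    "Firm Productivity": (274, 33, 68, 10),
    "Job Displacement": (5, 29, 12, 0),
    "Task Completion Time": (76, 5, 4, 2),
}

def direction_shares(counts):
    """Each direction's share of the row's classified claims, rounded."""
    total = sum(counts)
    return tuple(round(c / total, 3) for c in counts)

for outcome, counts in rows.items():
    pos, neg, mixed, null = direction_shares(counts)
    print(f"{outcome}: positive {pos:.1%}, negative {neg:.1%}")
```

For example, Task Completion Time is the most lopsidedly positive row in the sample above, at roughly 87% positive.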
Filtered by: Skills & Training
**Claim:** ISP automation shifts labor demand toward higher-skill roles (data governance, analytics, cross-functional coordination) and reduces demand for routine forecasting and manual reconciliation tasks.
**Evidence:** Interview reports and authors' task-based inference across cases, supplemented by economic reasoning about task reallocation.

**Claim:** ISP is relevant across multiple sectors (FMCG, manufacturing, retail), but outcomes and capabilities are heterogeneous by firm size and legacy IT footprint.
**Evidence:** Sample composition includes firms from FMCG, manufacturing, and retail; authors report cross-case heterogeneity linked to firm characteristics and IT legacy.

**Claim:** Technology alone is insufficient; successful ISP requires cross-functional collaboration and continuous process improvement to realize gains from digital integration.
**Evidence:** Cross-case interview evidence showing cases where digital tools did not produce expected benefits until processes and collaboration were changed; authors' synthesis of recurring barriers and enablers across the five cases.

**Claim:** Integrated Supply Planning (ISP) improves resilience and competitive performance only when advanced technologies (notably AI-enabled forecasting and ERP integration) are combined with organizational alignment, leadership commitment, and a data-driven culture.
**Evidence:** Qualitative multi-case study (n = 5 medium-to-large organizations across FMCG, manufacturing, retail); cross-case comparison of semi-structured interviews with supply chain professionals reporting instances where technology adoption produced gains only alongside organizational enablers.

**Claim:** Standardized explainability requirements (audits, disclosure mandates) will affect market entry, favor incumbents with the resources to meet standards, and create demand for third-party auditors and certification services.
**Evidence:** Policy- and regulatory-focused literature synthesized in the review; claims are deductive implications from governance proposals and descriptive accounts rather than empirical causal tests.

**Claim:** Implementing explainability increases upfront development costs (tooling, documentation, UIs, training) and ongoing compliance/monitoring costs, but can lower downstream costs from litigation, audits, and reputational harm.
**Evidence:** Synthesis of economic and policy literature in the review describing cost components and trade-offs; statements are conceptual and based on reviewed case studies and analyses rather than primary cost accounting.

**Claim:** Firm returns to AI adoption depend crucially on sociotechnical investments (training, redesign, knowledge infrastructure), so AI price/performance alone is an incomplete predictor of adoption returns.
**Evidence:** Conceptual claim grounded in organizational literature synthesized in the paper; no firm-level econometric evidence presented within the paper itself.

**Claim:** Economic models of AI impact should move beyond simple task-automation/substitution frameworks to incorporate team-level complementarities and cognitive-process primitives (reasoning, memory, attention).
**Evidence:** Theoretical recommendation for economists based on the paper's framework; supported by conceptual arguments rather than empirical re-specification or estimation shown in the paper.

**Claim:** Sociotechnical determinants — team composition, trust calibration, shared mental models, training regimes, and task structure — materially shape human–AI team effectiveness beyond algorithmic performance alone.
**Evidence:** Integrative review of multiple literatures (organizational behavior, human–computer interaction, psychology); presented as conceptual determinants; no empirical quantification provided in the paper.

**Claim:** Task reallocation: demand will fall for routine, automatable tasks and rise for complementary, cognitive, and governance tasks.
**Evidence:** Task-level decomposition and theoretical arguments about comparative advantage between AI and humans; no quantitative labor-market estimates.

**Claim:** Overall, AI will be augmentative: many roles will transform rather than disappear; transition costs and task reallocation are the primary labor-market challenges.
**Evidence:** Synthesis of task-based automation/complementarity analysis and scenario reasoning; the paper explicitly notes a lack of large-sample causal evidence.

**Claim:** Within the next five years, AI will become an embedded, augmentative co-pilot across software development and adjacent tech professions, shifting daily work from manual, task-level activities to higher-order, idea-driven collaboration with intelligent systems.
**Evidence:** Conceptual, forward-looking analysis synthesizing current AI capability trends, illustrative examples of existing AI assistants, and scenario reasoning; no empirical longitudinal data or sample size reported.

**Claim:** Macroeconomic and structural conditions (domestic savings, labor supply, infrastructure, human capital) shape countries' absorptive capacity for FDI benefits.
**Evidence:** Theoretical synthesis and cross-study empirical patterns cited in the review showing that structural conditions mediate the translation of FDI into local benefits; underlying studies vary in design and scope.

**Claim:** Skills formation occurs through on-the-job training and formal training investments associated with FDI, but training opportunities are often skewed toward higher-skill workers.
**Evidence:** Firm-level and micro studies synthesized in the review documenting training by foreign firms alongside evidence that benefits are concentrated among more skilled employees; precise magnitudes vary by study.

**Claim:** Overall interpretation: AI acts as skill-biased and task-displacing technological change, complementing higher-order cognitive and interpersonal skills while substituting for many routine cognitive tasks.
**Evidence:** Synthesis of empirical findings: negative effects on routine cognitive employment, positive effects on complex/interpersonal employment, and differential wage impacts across income quintiles from IV estimates on the 38-country panel.

**Claim:** Countries with strong active labor market policies (ALMPs) and portable benefits experienced smaller employment shocks and faster workforce reallocation following AI adoption.
**Evidence:** Heterogeneity/interaction analyses in the 38-country panel interacting the AI Adoption Index with country-level measures of ALMP strength and portable benefits; materially smoother transitions reported in these countries.
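Several of the wage and employment claims above rest on instrumental-variables estimates from a 38-country panel. As a rough intuition pump, not the papers' actual specification, a single-instrument IV estimate reduces to a ratio of covariances; the sketch below uses entirely synthetic data with a known true effect of 2.0:

```python
import random

random.seed(0)

# Toy illustration of the IV logic behind panel estimates like those
# described above. All data are synthetic; the instrument z shifts
# adoption x but is independent of the confounder u, so cov(z, y) / cov(z, x)
# recovers the true effect while plain OLS does not.
n = 5000
z = [random.gauss(0, 1) for _ in range(n)]                  # instrument
u = [random.gauss(0, 1) for _ in range(n)]                  # unobserved confounder
x = [zi + ui + random.gauss(0, 1) for zi, ui in zip(z, u)]  # endogenous adoption
y = [2.0 * xi + 3.0 * ui + random.gauss(0, 1)               # true effect = 2.0
     for xi, ui in zip(x, u)]

def cov(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b)) / (len(a) - 1)

beta_ols = cov(x, y) / cov(x, x)   # biased upward by the confounder
beta_iv = cov(z, y) / cov(z, x)    # consistent for the true effect
print(f"OLS {beta_ols:.2f} vs IV {beta_iv:.2f}")
```

The OLS estimate drifts well above 2.0 because adoption is correlated with the confounder, while the IV ratio stays near the true effect; real panel applications add country and year fixed effects on top of this basic logic.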
**Claim:** AI adoption increases wage dispersion and has distributional consequences, raising top-end wages while compressing or reducing middle-income outcomes.
**Evidence:** Observed differential wage effects across income quintiles (top +3.8%, middle −1.4%) from IV estimates on 38 OECD countries; interpretation drawn from quintile-specific wage results.

**Claim:** Short-run accounting and measurement approaches may miss long-run gains from improved decision quality or fraud reduction attributable to digital/AI systems.
**Evidence:** Conceptual discussion and selected longitudinal case examples in the literature; the review highlights measurement horizons as a methodological limitation.

**Claim:** AI is capital–skill complementary in the public sector: returns to AI investments depend critically on workforce capabilities and managerial practices.
**Evidence:** Theoretical arguments and some empirical/case evidence cited in the review indicating complementarities between technology and skills/management; systematic quantification across contexts is limited.

**Claim:** In practice, such productivity gains are frequently muted or uneven across contexts.
**Evidence:** Across the reviewed literature, multiple case studies and evaluations report mixed or limited net productivity improvements; the review notes heterogeneity by country, sector, and maturity of implementation, and no pooled causal estimates are available.

**Claim:** AI adoption shifts demand toward higher-skill tasks and complementary human capital, creating short-term displacement risks but also opportunities for upskilling and higher-value employment if policies and training align.
**Evidence:** Labor-economics literature, theoretical models, and some empirical examples synthesized in the review; robust, long-run causal evidence in LMIC SME settings is limited.

**Claim:** If AI diffusion is broad and SMEs possess absorptive capacity, AI can contribute to firm-level productivity improvements and sectoral diversification, potentially supporting aggregate growth; without capacity building, gains may concentrate among better-resourced firms.
**Evidence:** Synthesis of theoretical arguments (diffusion theory, RBV) and case-based empirical observations; limited causal quantification in LMIC contexts in the reviewed literature.

**Claim:** AI adoption by SMEs in developing economies (illustrated using Botswana) can materially enhance operational efficiency, customer personalization, innovation capacity, and competitive advantage, supporting sustainable economic diversification — but meaningful uptake is constrained by skills, infrastructure, finance, and fragmented data governance.
**Evidence:** Structured narrative literature review synthesizing empirical studies (case studies, surveys), conceptual frameworks, and policy reports; illustrative examples and contextual analysis focused on Botswana; no new primary causal estimates produced, and sample sizes across cited studies are heterogeneous or unspecified.

**Claim:** Automation bias and changing work processes imply re-skilling needs for public servants and potential shifts in public-sector employment composition.
**Evidence:** Findings and recommendations in multiple studies within the review documenting automation effects on workflows and workforce skill requirements (from the 103-item corpus).

**Claim:** Predictive governance can change fiscal timing (earlier interventions) and alter uncertainty profiles for public budgets, requiring economists to model dynamic fiscal impacts and risks from algorithmic failure or bias.
**Evidence:** Implication drawn in the review from case studies and economic reasoning present in the literature; recommendation for fiscal modeling based on synthesized evidence across the 103 items.

**Claim:** Interoperability and ethical-by-design requirements influence vendor lock-in, competition, and the emergence of platform providers in markets for public-sector AI solutions.
**Evidence:** Policy and market analyses within the reviewed literature that link technical standards and ethical design requirements to market structure and vendor dynamics (synthesized from the 103 items).

**Claim:** Predictive analytics and AI enable anticipatory policy design (early intervention, forecasting), but they raise normative and governance questions about acceptable levels of prediction-driven intervention.
**Evidence:** Thematic findings from the review's mapping of predictive-analytics use cases and accompanying ethical/governance discussions across the 103-item corpus.

**Claim:** Human–AI interaction issues, such as automation bias and shifting public-servant roles, affect decision quality and legitimacy, creating a need for human-in-the-loop processes.
**Evidence:** Multiple empirical and theoretical contributions in the reviewed literature identified automation bias and role shifts; the recommendation for human-in-the-loop processes emerges from a synthesis of these studies.

**Claim:** Implementing strong transparency, explainability, and safety requirements increases initial compliance costs but builds trust and improves long-run adoption, avoiding costly recalls or litigation.
**Evidence:** Regulatory-economics argument supported by international precedents and literature cited in the review (comparisons to EU AI Act principles and other jurisdictions); this is a forward-looking policy-economic claim rather than a measured empirical result in Indonesia.

**Claim:** Firms can realize productivity gains from adopting LLMs, but net value depends on verification, security remediation, and IP-management costs.
**Evidence:** Firm-level case studies and productivity measurements in the literature showing time savings but also nontrivial verification/remediation effort; the synthesis emphasizes that the net effect is conditional on these costs.
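The net-value framing in the last claim can be made concrete with back-of-envelope arithmetic; every number below is hypothetical, chosen only to show how overheads can erode gross savings:

```python
# Hypothetical back-of-envelope: gross LLM time savings net of the
# verification, remediation, and IP-management overheads named above.
hours_saved_weekly = 4.0   # gross time saved per developer per week (assumed)
verification_hours = 1.5   # reviewing and testing generated code (assumed)
remediation_hours = 0.5    # security fixes for accepted suggestions (assumed)
ip_review_hours = 0.25     # license / IP screening of generated code (assumed)

net_hours = (hours_saved_weekly - verification_hours
             - remediation_hours - ip_review_hours)
hourly_cost = 60.0         # fully loaded cost per developer-hour (assumed)
print(f"net hours/week: {net_hours}, net value: ${net_hours * hourly_cost:.0f}")
```

Under these assumed figures, more than half of the gross savings is consumed by overhead, which is the point of the claim: headline time savings overstate net value unless the complementary costs are counted.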
**Claim:** Automation displaces some routine jobs but creates demand for roles in programming, data science, system maintenance, and higher-order cognitive tasks.
**Evidence:** Synthesis of labor-market literature and sectoral case studies summarized in the review; relies on secondary empirical studies rather than new microdata analysis; sample sizes and study designs vary by referenced work.

**Claim:** Labor-market consequences will involve reallocation effects: routine-task automation, rising returns to managerial and technical skills, and potential within-firm wage dispersion.
**Evidence:** Synthesis of labor-economics theory and prior empirical work on automation; the book recommends matched employer-employee panel studies to trace these effects but does not report such new panel results.

**Claim:** AI's effects vary by industry, task composition, and firm capabilities; high-data, standardized-task sectors see faster, deeper impacts.
**Evidence:** Cross-sector examples and theoretical arguments about task routineness and data intensity; calls for heterogeneity-aware empirical designs (e.g., difference-in-differences with staggered adoption).
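The recommended design can be illustrated with the canonical two-group, two-period difference-in-differences contrast; staggered-adoption settings require more careful estimators, but the basic comparison, on synthetic group means rather than figures from any cited study, is:

```python
# Canonical 2x2 difference-in-differences on synthetic group means:
# the adopters' change net of the non-adopters' common trend.
means = {
    ("adopter", "pre"): 100.0, ("adopter", "post"): 112.0,
    ("non_adopter", "pre"): 98.0, ("non_adopter", "post"): 103.0,
}

def did(m):
    treated_change = m[("adopter", "post")] - m[("adopter", "pre")]
    control_change = m[("non_adopter", "post")] - m[("non_adopter", "pre")]
    return treated_change - control_change

print(did(means))  # effect estimate under the parallel-trends assumption
```

With these made-up means, adopters improve by 12 and non-adopters by 5, so the design attributes the remaining 7 to adoption, provided the two groups would otherwise have trended in parallel.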
**Claim:** Automation of routine tasks raises demand for cognitive, interpersonal, and technical skills; firms face reskilling needs and changing task allocation between humans and machines.
**Evidence:** Task-level analytic framework and literature review on automation effects; the book recommends empirical approaches (e.g., occupation and job-task data) to quantify these changes but does not present a single large empirical estimate.

**Claim:** Managers shift from routine decision execution to tasks involving oversight, interpretation, strategic design, and ethical stewardship of AI systems.
**Evidence:** Qualitative case studies and literature review of task-level research; task-analytic methods are suggested rather than a specific empirical task dataset reported.

**Claim:** Investment in governance and training is a necessary cost to realize sustained returns from generative AI; these costs influence adoption timing and the distribution of benefits.
**Evidence:** Conceptual argument from the review supported by case examples and economic reasoning about complementary investments.

**Claim:** There is a risk of wage polarization: increased returns to AI-complementary skills and potential downward pressure on wages for automatable tasks.
**Evidence:** Theoretical synthesis drawing on economic models of skill-biased technological change and early empirical observations; no definitive causal wage studies reported.

**Claim:** Generative AI will drive occupational reallocation by substituting for routine cognitive tasks while complementing higher-order cognitive and monitoring skills.
**Evidence:** Theoretical labor-economics arguments synthesized with early empirical examples; no large-scale causal labor-market study provided in the review.

**Claim:** Routine, boilerplate, and debugging tasks are the most automatable or complemented by LLMs, shifting value toward design, verification, and systems thinking.
**Evidence:** Task-level analyses, observational studies, and synthesized findings showing larger gains on repetitive or templated tasks versus high-level design tasks.

**Claim:** Liability and intellectual-property ownership around AI-assisted code are unresolved practical and legal concerns.
**Evidence:** Legal and policy analyses, practitioner reports, and qualitative interviews noting ambiguous legal frameworks and unresolved questions about ownership and liability for AI-assisted code.

**Claim:** A robust empirical pattern in the literature is that AI's effects vary by skill level: displacement risk is concentrated among lower-skilled tasks, while augmentation and wage gains are more likely for higher-skilled tasks.
**Evidence:** Empirical findings and syntheses cited (Brynjolfsson et al., 2023; Chen et al., 2024) that report task- and skill-differentiated effects on employment and wages; the evidence comprises cross-sectional exposure analyses and panel studies in the cited literature.

**Claim:** TGAIF clarifies where GenAI acts as a complement (augmenting consultant capability) versus where it risks substitution.
**Evidence:** Conceptual distinction and mapping presented in the TGAIF, derived from practitioner accounts; theoretical/qualitative, not empirically quantified across tasks.

**Claim:** TGAIF implies reallocation of work away from GenAI-suitable subtasks (routine synthesis, drafting, summarization) toward tasks where human judgment and client interaction add the most value.
**Evidence:** Based on the authors' inductive analysis of practitioner interviews describing which subtasks firms consider suitable for GenAI and which require human oversight; qualitative, with reallocation not quantitatively tracked.

**Claim:** Aligning consulting tasks with generative-AI capabilities via a Task–GenAI Fit (TGAIF) framework can unlock substantial efficiency gains while containing key risks (notably hallucinations and loss of skill retention).
**Evidence:** Inductive framework developed from qualitative, interpretive interviews with practitioners at leading German management-consulting firms. The abstract does not report sample size, interview protocol, or quantitative validation; evidence is based on practitioner reports and the authors' synthesis.

**Claim:** AI substitutes for routine coding tasks but complements higher-order tasks such as system architecture, integration, and orchestration.
**Evidence:** Interpretation from qualitative evidence at Netlight, where practitioners used AI for routine chores while retaining control of higher-order design tasks; no quantitative task-time displacement data presented.

**Claim:** Human roles are shifting toward oversight, curation, specification, and orchestration of multiple AI components and tools.
**Evidence:** Synthesized from practitioner descriptions and changing task allocations observed in the Netlight fieldwork (interviews/observations); no longitudinal measurement of role changes reported.

**Claim:** Welfare effects of democratized access to AI-assisted ideation are ambiguous: access could democratize innovation but also amplify low-quality outputs and misinformation absent proper curation.
**Evidence:** Theoretical discussion and empirical examples of misinformation and low-quality outputs from LLMs cited in the review; no comprehensive welfare accounting provided.

**Claim:** Net gains in innovation from increased idea volume depend on complementary human capacity for curation and development; raw increases in ideas do not automatically translate into higher-quality innovation.
**Evidence:** Synthesis noting studies where idea quantity rose but downstream quality or successful development did not necessarily increase; the review highlights heterogeneity across workflows and dependence on human integration.

**Claim:** The most effective deployment model is a "cognitive co-pilot" in which AI expands and challenges the idea space while humans provide curation, strategic evaluation, and experiential judgment.
**Evidence:** Prescriptive conclusion drawn from a synthesis of studies where human-AI collaboration (human curation/selection) produced better downstream outcomes than AI-alone outputs; the evidence is heterogeneous and largely short-term.

**Claim:** Generative AI functions as a dual-purpose cognitive tool: a high-volume catalyst for divergent idea generation and a structured assistant for decomposing complex problems.
**Evidence:** Nano-review/synthesis of existing empirical literature on LLM-assisted creativity and problem-solving, drawing on experimental ideation tasks, design/ideation studies, and applied case evidence; no original dataset or new experiments in this paper.