Evidence (4333 claims)
- Adoption: 5539 claims
- Productivity: 4793 claims
- Governance: 4333 claims
- Human-AI Collaboration: 3326 claims
- Labor Markets: 2657 claims
- Innovation: 2510 claims
- Org Design: 2469 claims
- Skills & Training: 2017 claims
- Inequality: 1378 claims
Evidence Matrix
Claim counts by outcome category and direction of finding.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 402 | 112 | 67 | 480 | 1076 |
| Governance & Regulation | 402 | 192 | 122 | 62 | 790 |
| Research Productivity | 249 | 98 | 34 | 311 | 697 |
| Organizational Efficiency | 395 | 95 | 70 | 40 | 603 |
| Technology Adoption Rate | 321 | 126 | 73 | 39 | 564 |
| Firm Productivity | 306 | 39 | 70 | 12 | 432 |
| Output Quality | 256 | 66 | 25 | 28 | 375 |
| AI Safety & Ethics | 116 | 177 | 44 | 24 | 363 |
| Market Structure | 107 | 128 | 85 | 14 | 339 |
| Decision Quality | 177 | 76 | 38 | 20 | 315 |
| Fiscal & Macroeconomic | 89 | 58 | 33 | 22 | 209 |
| Employment Level | 77 | 34 | 80 | 9 | 202 |
| Skill Acquisition | 92 | 33 | 40 | 9 | 174 |
| Innovation Output | 120 | 12 | 23 | 12 | 168 |
| Firm Revenue | 98 | 34 | 22 | — | 154 |
| Consumer Welfare | 73 | 31 | 37 | 7 | 148 |
| Task Allocation | 84 | 16 | 33 | 7 | 140 |
| Inequality Measures | 25 | 77 | 32 | 5 | 139 |
| Regulatory Compliance | 54 | 63 | 13 | 3 | 133 |
| Error Rate | 44 | 51 | 6 | — | 101 |
| Task Completion Time | 88 | 5 | 4 | 3 | 100 |
| Training Effectiveness | 58 | 12 | 12 | 16 | 99 |
| Worker Satisfaction | 47 | 32 | 11 | 7 | 97 |
| Wages & Compensation | 53 | 15 | 20 | 5 | 93 |
| Team Performance | 47 | 12 | 15 | 7 | 82 |
| Automation Exposure | 24 | 22 | 9 | 6 | 62 |
| Job Displacement | 6 | 38 | 13 | — | 57 |
| Hiring & Recruitment | 41 | 4 | 6 | 3 | 54 |
| Developer Productivity | 34 | 4 | 3 | 1 | 42 |
| Social Protection | 22 | 10 | 6 | 2 | 40 |
| Creative Output | 16 | 7 | 5 | 1 | 29 |
| Labor Share of Income | 12 | 5 | 9 | — | 26 |
| Skill Obsolescence | 3 | 20 | 2 | — | 25 |
| Worker Turnover | 10 | 12 | — | 3 | 25 |
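The matrix can be summarized by the share of claims per outcome that report a positive finding. A minimal sketch below, using three rows transcribed from the table above ("—" entries treated as zero); the `matrix` dict and `positive_share` helper are illustrative names, not part of any released tooling.

```python
# Illustrative rows transcribed from the evidence matrix above:
# (positive, negative, mixed, null, total) claim counts per outcome.
matrix = {
    "Innovation Output":   (120, 12, 23, 12, 168),
    "Inequality Measures": (25, 77, 32, 5, 139),
    "Job Displacement":    (6, 38, 13, 0, 57),   # "—" entries treated as 0
}

def positive_share(row):
    """Fraction of an outcome's claims with a positive direction."""
    pos, _neg, _mixed, _null, total = row
    return pos / total

shares = {k: round(positive_share(v), 2) for k, v in matrix.items()}
# Innovation Output skews positive (~0.71); Inequality Measures (~0.18)
# and Job Displacement (~0.11) skew away from positive findings.
```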
Governance
- Investment in governance and training is a necessary cost of realizing sustained returns from generative AI; these costs shape adoption timing and the distribution of benefits.
  Evidence: Conceptual argument from the review, supported by case examples and economic reasoning about complementary investments.
- There is a risk of wage polarization: increased returns to AI‑complementary skills and potential downward pressure on wages for automatable tasks.
  Evidence: Theoretical synthesis drawing on economic models of skill‑biased technological change and early empirical observations; no definitive causal wage studies reported.
- Generative AI will drive occupational reallocation by substituting for routine cognitive tasks while complementing higher‑order cognitive and monitoring skills.
  Evidence: Theoretical labor-economics arguments synthesized with early empirical examples; no large‑scale causal labor-market study provided in the review.
- Routine, boilerplate, and debugging tasks are the most automatable or LLM-complemented, shifting value toward design, verification, and systems thinking.
  Evidence: Task-level analyses, observational studies, and synthesized findings showing larger gains on repetitive or templated tasks than on high-level design tasks.
- Liability and intellectual-property ownership for AI-assisted code remain unresolved practical and legal concerns.
  Evidence: Legal and policy analyses, practitioner reports, and qualitative interviews noting ambiguous legal frameworks and unresolved questions about ownership and liability for AI-assisted code.
- Token taxes reduce some geographic tax arbitrage relative to input taxes but do not eliminate cross-border avoidance; international coordination and trade/regulatory levers are crucial.
  Evidence: Political-economy analysis and recommendations in the paper; no international case studies or empirical coordination outcomes provided.
- The framework quantitatively captures trade-offs between public-health outcomes and economic stability across macroscopic scenarios and different LLM backends.
  Evidence: Quantitative analysis reported across scenarios and model variants, tracking trade-off metrics between health (infection curves) and economic outcomes (aggregate activity). The summary notes cross-backend comparisons but does not report numerical effect sizes.
- When coupled with an epidemic–economic model, the LLM-PDA framework robustly generates divergent macro trajectories across scenarios.
  Evidence: Coupled epidemic and economic modules in simulation; experiments run across diverse macroscopic scenarios (varying transmissibility, shocks, policy regimes) with metrics tracked at the macro scale (infection prevalence over time, aggregate economic indicators). The number of scenarios and runs is not specified in the summary.
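The kind of epidemic–economic coupling described can be sketched as a toy feedback loop: prevalence suppresses aggregate activity, and reduced activity in turn lowers effective transmission. This is a minimal illustration of divergent trajectories under varying transmissibility; the SIR-style dynamics, parameter names, and sensitivity values are assumptions for exposition, not the paper's actual LLM-PDA model.

```python
def simulate(beta, gamma=0.1, days=120, activity_sensitivity=0.9):
    """Toy coupled epidemic-economic loop (daily Euler steps).

    Returns (peak infection prevalence, minimum aggregate activity).
    All parameters are illustrative assumptions.
    """
    s, i, r = 0.99, 0.01, 0.0
    path = []
    for _ in range(days):
        activity = 1.0 - activity_sensitivity * i  # econ module: prevalence dampens activity
        beta_eff = beta * activity                 # epi module: less activity, fewer contacts
        new_inf = beta_eff * s * i
        rec = gamma * i
        s, i, r = s - new_inf, i + new_inf - rec, r + rec
        path.append((i, activity))
    return max(p[0] for p in path), min(p[1] for p in path)

# Divergent macro trajectories across scenarios of varying transmissibility:
mild = simulate(beta=0.15)    # low-transmissibility scenario
severe = simulate(beta=0.4)   # high-transmissibility scenario
```

Running both scenarios shows the trade-off the framework tracks: the high-transmissibility run produces a higher infection peak and a deeper trough in aggregate activity than the mild run.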
- DAR implies changes to labor and contracting: reversible AI leadership reshapes task boundaries and demand for oversight skills, and should be reflected in contracts and procurement through explicit authority-reversal rules and audit obligations.
  Evidence: Theoretical/normative argument in the implications section; no empirical labor or contract data included.
- AI substitutes for routine coding tasks but complements higher-order tasks such as system architecture, integration, and orchestration.
  Evidence: Interpretation of qualitative evidence at Netlight, where practitioners used AI for routine chores while retaining control of higher-order design tasks; no quantitative task-time displacement data presented.
- Human roles are shifting toward oversight, curation, specification, and orchestration of multiple AI components and tools.
  Evidence: Synthesized from practitioner descriptions and changing task allocations observed in the Netlight fieldwork (interviews/observations); no longitudinal measurement of role changes reported.
- Short-run consumer gains from faster, cheaper service can be undermined by trust losses from hallucinations or perceived deception, reducing long-term consumer surplus.
  Evidence: Conceptual welfare analysis and cited case examples in the literature; no longitudinal consumer-surplus measurement provided in this review.
- Conventional productivity metrics (e.g., handle time) may misstate value because they do not capture multi-dimensional impacts such as quality and trust.
  Evidence: Conceptual critique and synthesis of measurement challenges discussed in the literature; no empirical measurement study presented in this review.
- There is potential for substantial cost savings and throughput gains in repetitive, high-volume interactions, but these are offset by costs of integration, monitoring, and error remediation.
  Evidence: Industry case examples and conceptual cost–benefit reasoning aggregated in the review; the paper contains no new quantitative cost estimates or sample-based measurements.
- Generative AI will substitute for routine service tasks while complementing skilled workers on escalations and complex problem solving, shifting labor demand toward supervisory and relationship-focused roles.
  Evidence: Economic and labor-market analyses synthesized in the review; projections are inferential and based on heterogeneous secondary sources, not primary labor-market experiments.
- Full automation of customer service is suboptimal because persistent risks (hallucinations, contextual errors, lack of genuine empathy, integration complexity) remain; hybrid human–AI systems achieve the best outcomes.
  Evidence: Synthesis of documented failure modes and practitioner case examples from the literature; no primary experimental data or controlled trials in this review. Inference is based on heterogeneous empirical reports and conceptual analyses.
- Welfare effects of democratized access to AI-assisted ideation are ambiguous: access could democratize innovation but also amplify low-quality outputs and misinformation absent proper curation.
  Evidence: Theoretical discussion and empirical examples of misinformation and low-quality LLM outputs cited in the review; no comprehensive welfare accounting provided.
- Net gains in innovation from increased idea volume depend on complementary human capacity for curation and development; raw increases in ideas do not automatically translate into higher-quality innovation.
  Evidence: Synthesis noting studies where idea quantity rose but downstream quality or successful development did not necessarily increase; the review highlights heterogeneity across workflows and dependence on human integration.
- The most effective deployment model is a 'cognitive co-pilot' in which AI expands and challenges the idea space while humans provide curation, strategic evaluation, and experiential judgment.
  Evidence: Prescriptive conclusion drawn from synthesis of studies where human–AI collaboration (human curation/selection) produced better downstream outcomes than AI-alone outputs; evidence is heterogeneous and largely short-term.
- Generative AI functions as a dual-purpose cognitive tool: a high-volume catalyst for divergent idea generation and a structured assistant for decomposing complex problems.
  Evidence: Nano-review/synthesis of existing empirical literature on LLM-assisted creativity and problem-solving, drawing on experimental ideation tasks, design/ideation studies, and applied case evidence; no original dataset or new experiments in this paper.
- Net value from generative AI is contingent: gains are largest where breadth of ideas and rapid iteration matter, and smaller or riskier where deep domain expertise, tacit knowledge, or high-stakes judgments are required.
  Evidence: Synthesis of heterogeneous empirical results showing task-dependent benefits; the argument is grounded in observed differences across lab and field contexts and documented limitations in domain-specific performance.
- Generative AI raises measurable productivity (lower marginal cost per interaction) but introduces quality and trust externalities; optimal deployment balances these trade-offs.
  Evidence: Pilot cost analyses and operational reports showing lower marginal costs per interaction alongside documented quality/trust issues; primarily observational and model-based reasoning.
- Full automation produces trade-offs unfavorable to complex service quality and trust; hybrid models with human-in-the-loop control are preferable.
  Evidence: Synthesis of case studies, pilot results, and conceptual reasoning comparing fully automated routing to hybrid/human-in-the-loop deployments; limited randomized comparisons.
- Generative AI can materially improve customer-service productivity through 24/7 automation, scalable personalization, and agent augmentation, but it is not a substitute for humans.
  Evidence: Synthesis of deployments, pilot studies, vendor reports, and some experimental A/B tests described in the paper; no pooled sample size provided, and much of the evidence is short-run or observational.
- Data-driven HRM reinforces skill-biased technological change: routine HR tasks are being substituted by automation while demand rises for analytical and interpersonal skills.
  Evidence: Theoretical implication and synthesis across studies in the review noting automation of routine tasks and increased demand for analytic/interpersonal skills.
- Blockchain and decentralized fintech tools could increase transparency and access to alternative assets for women, but practical adoption barriers remain.
  Evidence: Qualitative assessment of blockchain capabilities and uptake surveys/case studies cited in the article (product analyses and early adoption data; no large‑scale causal evidence).
- Governance reduces downside risk (compliance fines, outages) but raises implementation costs; economic assessments must weigh risk-adjusted returns.
  Evidence: Conceptual economic argument in the paper, supported by reasoning and practitioner experience but not by empirical cost–benefit studies within the article.
- When evaluating GenAI investments, firms should treat prompt-fraud controls and monitoring as persistent operating costs rather than one-time setup costs.
  Evidence: Practical recommendation informed by conceptual cost and governance analysis; not supported by longitudinal cost studies in the paper.
- Smaller firms or departments using shadow AI may realize productivity gains but face outsized fraud exposure due to weaker controls.
  Evidence: Theoretical trade-off analysis in the implications section; no empirical firm-level comparisons or experiments presented.
- Safer scaling of automation may increase substitution of routine ERP/CRM tasks while governance and oversight roles create complementary high-skill positions (e.g., compliance engineers, auditors, prompt engineers).
  Evidence: Labor-market implications presented as theoretical reasoning about how governance and automation interact; informed by practitioner observation but not empirically tested in the paper.
- Overall, secure and resilient cloud infrastructure supported by SECaaS facilitates broader and safer diffusion of AI but creates economic trade-offs (market concentration, externalities, liability) that require empirical study and policy responses.
  Evidence: Synthesis of the chapter's literature review, case studies, and theoretical arguments; calls for empirical methods (regressions, event studies, structural models) to quantify effects.
- Outsourcing via SECaaS shifts demand from in-house security labor to vendor-side security professionals, altering labor-market composition and the geographic distribution of expertise.
  Evidence: Labor-market reasoning and some survey evidence on outsourcing trends; the chapter recommends empirical study (e.g., labor data, regional analyses) but does not present a specific dataset.
- Tools such as secure enclaves, differential privacy, federated learning, and MPC influence the feasibility and cost of privacy-preserving AI; SECaaS providers offering these capabilities can change competitive dynamics.
  Evidence: Technical literature and vendor feature sets describing these technologies; theoretical implications for cost and competition discussed in the chapter.
- Cyber-insurance markets interact with SECaaS adoption; insurers may incentivize or require specific controls, altering firms' security choices and underwriting practices.
  Evidence: Industry reports on cyber-insurance requirements, surveys of insurer underwriting practices, and theoretical interaction effects; empirical analyses proposed (linking adoption to premiums).
- Network effects in threat intelligence and telemetry can lead to winner-take-most outcomes but also increase the social value of shared defenses.
  Evidence: Theoretical arguments about network effects, empirical observation of aggregation benefits in threat-sharing initiatives, and literature on the public-good aspects of shared threat intelligence.
- Pricing and contract design of SECaaS shape firm investment in complementary capabilities (data governance, secure model deployment).
  Evidence: Theoretical economic arguments and structural market models suggested in the chapter; empirical tests proposed (e.g., regressions, structural estimation) but no definitive empirical sample presented.
- Decentralized governance can foster a more pluralistic ecosystem but may produce fragmentation and underinvestment in public‑goods data infrastructure.
  Evidence: Inferential implication based on U.S. texts showing plural institutional actors and literature on decentralized-governance trade‑offs; not empirically measured in this study.
- Decentralized, rights‑based regimes (e.g., the U.S.) may preserve individual and institutional controls that increase transactional frictions but support market entry via clearer procedural safeguards.
  Evidence: Inferential implication from the U.S. policy texts' emphasis on rights, transparency, and procedural safeguards; based on coded document content rather than observed market outcomes.
- Centralized, sovereignty‑oriented regimes (e.g., China) may enable large, state‑facilitated data-aggregation projects that lower data costs for favored actors but restrict cross‑border flows and outsider access.
  Evidence: Inferential implication drawn from the Chinese policy texts' developmentalist and techno‑sovereignty framing, together with literature on state‑led data aggregation (no empirical measurement of outcomes in this study).
- Openness and security are better understood as co‑evolving, layered institutional processes rather than strict, mutually exclusive binaries.
  Evidence: Conceptual synthesis grounded in the document-coding results and an extension of the modular coordination theory developed in the paper.
- Urbanization and biodiversity loss alter host–pathogen dynamics in ways that affect pediatric infection risk.
  Evidence: Ecology and urban-health literature synthesized narratively; observational and theoretical studies referenced without pooled effect-size estimates.
- Schools would likely change procurement practices to favor vendors who can certify compliance or offer contractual warranties, increasing demand for compliance services and raising transaction costs in procurement.
  Evidence: Predictive policy/economic argumentation grounded in procurement-behavior theory; no empirical procurement dataset provided.
- Vendors will likely assert defenses that they are mere contractors or third parties and not 'recipients'; the Article addresses these defenses by showing how federal funds and control relationships can bring vendors within the statutes' reach.
  Evidence: Anticipatory doctrinal rebuttals based on precedent and statutory interpretation; analysis of common contractor doctrines in administrative law (no empirical testing).
- Emerging AI-driven strain optimization reduces design costs and may concentrate advantage with firms holding large proprietary datasets and compute resources, creating platform effects.
  Evidence: Economic argument supported by observed uses of proprietary datasets and ML in reviewed technical studies, and conceptual analysis of platform economics and data-driven advantage discussed in the paper.
- Algorithmic credit scoring and AI can improve risk assessment but may encode historical biases or use proxies that disadvantage marginalized groups.
  Evidence: Synthesis of empirical examples and methodological literature on machine learning in credit scoring; the paper recommends audit methods but does not present new model evaluations.
- The overall social outcome of FinTech adoption depends on technological capabilities, institutional quality, and regulatory design.
  Evidence: Analytical framing and political-economy model presented in the literature review; supported by cross-case comparisons rather than new empirical estimation.
- AI-enabled macro and fiscal models can improve policy testing and contingency planning but require transparency, validation, and safeguards against overreliance.
  Evidence: Conceptual argument and illustrative examples; no empirical trials or model-performance metrics reported.
- AI shifts the locus of economic governance from static rules to living systems that anticipate shocks and adapt in real time.
  Evidence: Policy-analytic framing and scenario-based reasoning within the book; supported by illustrative examples rather than empirical measurement.
- White‑box mandates can constrain some high‑performance black‑box models and thereby incentivize research into explainable AI and new feature-engineering approaches compatible with rights protections.
  Evidence: Argument in "Innovation vs. compliance tradeoffs" linking regulatory constraints to R&D incentives; theoretical reasoning without empirical validation.
- Enforced non‑discrimination and explainability requirements may change model design (fewer opaque proxies, constrained feature use), altering risk assessment and possibly increasing measured lending costs in the short run.
  Evidence: Theoretical modeling of model-design incentives and pricing effects in the compendium; no empirical estimation provided.