Evidence (7953 claims)
Claim counts by topic area.

| Topic | Claims |
|---|---|
| Adoption | 5539 |
| Productivity | 4793 |
| Governance | 4333 |
| Human-AI Collaboration | 3326 |
| Labor Markets | 2657 |
| Innovation | 2510 |
| Org Design | 2469 |
| Skills & Training | 2017 |
| Inequality | 1378 |
Evidence Matrix
Claim counts by outcome category and direction of finding.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 402 | 112 | 67 | 480 | 1076 |
| Governance & Regulation | 402 | 192 | 122 | 62 | 790 |
| Research Productivity | 249 | 98 | 34 | 311 | 697 |
| Organizational Efficiency | 395 | 95 | 70 | 40 | 603 |
| Technology Adoption Rate | 321 | 126 | 73 | 39 | 564 |
| Firm Productivity | 306 | 39 | 70 | 12 | 432 |
| Output Quality | 256 | 66 | 25 | 28 | 375 |
| AI Safety & Ethics | 116 | 177 | 44 | 24 | 363 |
| Market Structure | 107 | 128 | 85 | 14 | 339 |
| Decision Quality | 177 | 76 | 38 | 20 | 315 |
| Fiscal & Macroeconomic | 89 | 58 | 33 | 22 | 209 |
| Employment Level | 77 | 34 | 80 | 9 | 202 |
| Skill Acquisition | 92 | 33 | 40 | 9 | 174 |
| Innovation Output | 120 | 12 | 23 | 12 | 168 |
| Firm Revenue | 98 | 34 | 22 | — | 154 |
| Consumer Welfare | 73 | 31 | 37 | 7 | 148 |
| Task Allocation | 84 | 16 | 33 | 7 | 140 |
| Inequality Measures | 25 | 77 | 32 | 5 | 139 |
| Regulatory Compliance | 54 | 63 | 13 | 3 | 133 |
| Error Rate | 44 | 51 | 6 | — | 101 |
| Task Completion Time | 88 | 5 | 4 | 3 | 100 |
| Training Effectiveness | 58 | 12 | 12 | 16 | 99 |
| Worker Satisfaction | 47 | 32 | 11 | 7 | 97 |
| Wages & Compensation | 53 | 15 | 20 | 5 | 93 |
| Team Performance | 47 | 12 | 15 | 7 | 82 |
| Automation Exposure | 24 | 22 | 9 | 6 | 62 |
| Job Displacement | 6 | 38 | 13 | — | 57 |
| Hiring & Recruitment | 41 | 4 | 6 | 3 | 54 |
| Developer Productivity | 34 | 4 | 3 | 1 | 42 |
| Social Protection | 22 | 10 | 6 | 2 | 40 |
| Creative Output | 16 | 7 | 5 | 1 | 29 |
| Labor Share of Income | 12 | 5 | 9 | — | 26 |
| Skill Obsolescence | 3 | 20 | 2 | — | 25 |
| Worker Turnover | 10 | 12 | — | 3 | 25 |
Firms will need to invest in new control technologies, governance structures, and personnel (AI auditors, red teams), increasing the total cost of GenAI adoption.
Economic reasoning and implications section; no empirical cost estimates or survey data; projection based on anticipated control needs.
Malicious insiders, external actors (vendors, consultants, customers), shadow AI (unsanctioned consumer-grade GenAI use), and supply-chain/third-party prompt templates are plausible attack vectors for prompt fraud.
Threat taxonomy and scenario mapping with case-style examples; conceptual identification of actors rather than documented incident attribution.
Poor logging, weak prompt governance, and over-reliance on machine-generated artifacts increase organizational vulnerability to prompt fraud.
Control gap analysis and prescriptive argumentation; examples of weak controls used to illustrate exploitability; no empirical measurement of effect sizes.
Because prompt fraud operates at the linguistic/procedural surface rather than the network/technical surface, existing control frameworks are ill-prepared to address this new attack surface.
Control gap analysis comparing conventional internal controls to the linguistic attack surface; conceptual rather than empirical evaluation.
Upfront governance costs (policy, tooling, staff) become a key part of adoption cost and affect ROI calculations and payback periods for automation investments.
Economic reasoning and implications discussed in the paper; no empirical cost data provided—recommendation based on practitioner experience and theoretical cost accounting.
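To make the ROI point concrete, a minimal payback-period sketch; all figures are hypothetical, since the paper reports no cost data:

```python
# Minimal payback-period sketch showing how upfront governance costs stretch
# the break-even horizon. All figures are hypothetical; the paper reports none.

def payback_years(upfront_cost: float, annual_net_benefit: float) -> float:
    """Years until cumulative net benefits cover the upfront investment."""
    return upfront_cost / annual_net_benefit

automation_capex = 500_000   # hypothetical tooling and integration spend
governance_capex = 150_000   # hypothetical policy, audit tooling, red-team staffing
annual_benefit = 200_000     # hypothetical yearly savings from the automation

print(f"Payback without governance costs: {payback_years(automation_capex, annual_benefit):.2f} years")
print(f"Payback with governance costs:    {payback_years(automation_capex + governance_capex, annual_benefit):.2f} years")
```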
Traditional automation governance is often ad hoc, underestimates security and compliance risks, and does not scale safely for mission-critical enterprise systems.
Synthesis of industry best practices and practitioner-sourced lessons (qualitative observations and case illustrations). No systematic survey or quantitative incidence rates provided.
Prompt fraud reduces the marginal cost of producing convincing fraudulent artifacts, which may increase fraud frequency and expected losses absent mitigations.
Economic reasoning and conceptual modeling of incentives; no empirical estimates of frequency or losses included.
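The incentive argument reduces to a simple expected-loss identity; a sketch with hypothetical parameters (the paper gives no frequency or loss estimates):

```python
# Illustrative expected-loss identity: expected loss = attempts x success rate
# x loss per success. If GenAI cuts the marginal cost of a convincing artifact,
# attempt volume can rise sharply. All parameters are hypothetical.

def expected_annual_loss(attempts: float, success_rate: float, loss_per_success: float) -> float:
    return attempts * success_rate * loss_per_success

baseline = expected_annual_loss(attempts=10, success_rate=0.05, loss_per_success=50_000)
low_cost = expected_annual_loss(attempts=100, success_rate=0.05, loss_per_success=50_000)
print(baseline, low_cost)  # 25000.0 250000.0: a tenfold rise driven by attempt volume alone
```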
Lack of prompt provenance, versioning, and validation practices increases organizational exposure to prompt fraud.
Conceptual analysis and recommended controls (provenance/versioning) drawn from audit-framework comparisons and threat modeling.
There is insufficient logging/traceability of prompts, responses, and model versions in many workflows, creating a control weakness for detecting prompt fraud.
Observations from literature/regulatory review and the paper's threat/control mapping; asserted as a common operational gap (no systematic measurement).
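One illustration of the missing control: a hypothetical per-call audit record capturing prompt, response, and model version in a tamper-evident hash chain. The schema is assumed for illustration, not specified in the paper.

```python
# Hypothetical prompt audit-log record covering the provenance fields the
# claims above describe as commonly missing: prompt, response, model version,
# and a hash chain for tamper evidence. Schema is illustrative.
import hashlib, json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class PromptAuditRecord:
    user_id: str
    model_version: str
    prompt: str
    response: str
    timestamp: str
    prev_hash: str  # hash of the previous record, forming a chain

    def digest(self) -> str:
        return hashlib.sha256(json.dumps(asdict(self), sort_keys=True).encode()).hexdigest()

rec = PromptAuditRecord(
    user_id="u123",
    model_version="model-2024-06",
    prompt="Summarize the attached invoice.",
    response="...",
    timestamp=datetime.now(timezone.utc).isoformat(),
    prev_hash="0" * 64,
)
print(rec.digest())
```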
Shadow AI — unsanctioned, decentralized use of GenAI tools — amplifies prompt-fraud risk by bypassing central controls and audit trails.
Conceptual analysis and organizational risk reasoning; references to common practices of unsanctioned tool use (no empirical prevalence data).
External actors can commit prompt fraud via customer-facing systems or social-engineering prompt chains.
Conceptual threat scenarios and mapping of attack surfaces (customer-facing interfaces, input channels); illustrative examples provided.
Internal actors manipulating prompts within authorized AI workflows are a realistic and important threat vector for prompt fraud.
Threat modeling and scenario-based analysis highlighting insiders with authorized access who can craft manipulative prompts within approved workflows.
Prompt fraud can defeat controls that rely on plausibility, standard formatting, or human review that trusts model-like language.
Threat mapping and literature on automation bias; illustrative vignettes showing how machine-like outputs mimic authoritative formats.
Prompt fraud lowers the entry cost of producing convincing fraudulent artifacts, increasing the ease with which attackers can create plausible forgeries.
Economic reasoning and conceptual analysis based on GenAI behavior and illustrative scenarios (no empirical cost or frequency data).
Prompt fraud — the intentional manipulation of natural-language prompts to cause generative AI systems to produce misleading, fabricated, or deceptive artifacts that bypass internal controls — constitutes a novel, low-cost fraud vector that traditional IT- and process-focused controls are ill-equipped to detect or prevent.
Conceptual analysis and threat modeling grounded in literature/regulatory review and illustrative vignettes; no systematic empirical incidence data provided.
Secure infrastructure (including SECaaS-provided tools) affects the availability and trustworthiness of AI training data and models; breaches reduce returns to AI R&D via direct losses and reduced trust.
Conceptual linkage supported by case studies of data/model theft and technical literature on secure enclaves, differential privacy, federated learning; no broad quantitative estimate provided.
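Of the techniques cited, differential privacy is the simplest to show concretely; a minimal sketch of the Laplace mechanism, with illustrative parameters:

```python
# Minimal Laplace mechanism for differential privacy: noise with scale
# sensitivity/epsilon is added to a query result before release.
# Parameters are illustrative; the chapter specifies no settings.
import numpy as np

rng = np.random.default_rng(0)

def laplace_release(true_value: float, sensitivity: float, epsilon: float) -> float:
    return true_value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# Releasing a count query (sensitivity 1) under epsilon = 0.5:
print(laplace_release(true_value=1234, sensitivity=1.0, epsilon=0.5))
```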
Security externalities (one firm's breach raising ecosystem risk) complicate private incentives and may justify policy interventions such as standards or mandatory reporting.
Economic theory on externalities, case studies showing spillovers from breaches, and policy analyses recommending interventions.
Concentration among large cloud/SECaaS providers can create market power, platform dependency, and affect competition in AI markets.
Market-structure theory, observed concentration patterns in industry reports, and qualitative case studies; no causal estimates provided in the chapter.
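Concentration of this kind is conventionally summarized with the Herfindahl-Hirschman Index; the market shares below are placeholders, not figures from the chapter:

```python
# Herfindahl-Hirschman Index: the sum of squared market shares (in percent).
# Shares below are placeholders; the chapter reports no specific figures.
def hhi(shares_pct):
    return sum(s ** 2 for s in shares_pct)

hypothetical_cloud_shares = [40, 30, 20, 10]
print(hhi(hypothetical_cloud_shares))  # 3000: above the 2500 mark older US merger guidelines treated as highly concentrated
```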
Latency and integration frictions can limit the suitability of SECaaS for specialized workloads, including some AI pipelines.
Technical evaluations and benchmarks that measure latency/resource overhead; reports and case studies noting integration challenges for high-throughput or low-latency workloads.
Reliance on a small set of major cloud/SECaaS providers creates vendor lock-in, concentration risk, and systemic vulnerability if a major provider is compromised.
Market-structure discussions, observed provider outages and incidents (case studies), and theoretical arguments about concentration; no single causally identified empirical estimate provided.
Without improvements in robustness, consistency, and neuroscientific validity of explanations, clinical uptake will be constrained, slowing commercialization and reducing returns for developers focused only on performance.
Synthesis and forward-looking argument linking methodological deficits documented in the literature to likely reduced market adoption; no direct empirical market impact measurement provided.
Weak or inconsistent explanations increase regulatory and medico-legal risk; standardized, validated XAI can lower compliance costs and liability exposure.
Logical inference connecting explanation reliability to regulatory scrutiny and liability concerns, presented as an implication in the review (no direct empirical legal analysis provided).
Preprocessing pipelines (filtering, artifact removal such as ICA, re-referencing, segmentation) materially affect XAI outputs.
Review cites multiple studies and methodological notes showing explanation maps vary with preprocessing choices; effect reported qualitatively across papers.
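A toy illustration of this sensitivity, using synthetic data and a fixed linear model in place of any real EEG classifier; the sampling rate, band choices, and gradient-times-input attribution are all assumptions:

```python
# Toy demonstration that preprocessing changes attribution outputs: the same
# synthetic "EEG" epoch is band-pass filtered two ways, then attributed under
# a fixed linear model (gradient-times-input). Entirely synthetic and
# illustrative, not the review's analysis.
import numpy as np
from scipy.signal import butter, filtfilt

rng = np.random.default_rng(0)
fs = 250                                     # sampling rate (Hz), assumed
x = rng.standard_normal((8, fs * 2))         # 8-channel, 2 s synthetic epoch
w = rng.standard_normal(x.size)              # fixed linear model over all samples

def saliency(x, low, high):
    b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
    xf = filtfilt(b, a, x, axis=1).ravel()
    return w * xf                            # gradient-times-input attribution

s_broad = saliency(x, 1, 40)                 # broadband preprocessing
s_alpha = saliency(x, 8, 13)                 # alpha-band-only preprocessing
print(np.corrcoef(s_broad, s_alpha)[0, 1])   # well below 1: the maps disagree
```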
There is a scarcity of human/clinical validation studies testing whether explanations improve clinician decision-making or align with clinical reasoning.
Observation from literature survey: few reviewed works include clinician studies or longitudinal/clinical impact evaluations.
Identified methodological limitations include sensitivity of explanations to hyperparameters and preprocessing choices, inconsistent explanations across similar inputs, and poor correlation with known neurophysiology.
Synthesis of reported failure modes and limitations from multiple EEG-XAI studies reviewed in the paper.
Most studies focus on qualitative visualizations (e.g., heatmaps) rather than quantitative, reproducible metrics for explanation quality; few evaluate neuroscientific validity or clinical usefulness, and robustness to noise and preprocessing is often untested.
Review-level assessment of evaluation practices across papers, noting prevalence of visual inspection and scarcity of standardized quantitative metrics or clinical validation.
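An example of the kind of reproducible metric the review finds scarce: a robustness score measuring how stable an attribution map is under small input perturbations. The explain() stand-in and parameters are assumptions, not a method from the reviewed papers.

```python
# Sketch of a quantitative robustness metric: average rank correlation between
# the attribution of a clean input and attributions of noisy copies. The
# explain() function is a stand-in for any attribution method.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

def explain(x, w):
    return w * x  # placeholder: gradient-times-input for a linear model

def robustness(x, w, noise_sd=0.1, trials=20):
    base = explain(x, w)
    rhos = [spearmanr(base, explain(x + rng.normal(0, noise_sd, x.shape), w))[0]
            for _ in range(trials)]
    return float(np.mean(rhos))

x = rng.standard_normal(64)
w = rng.standard_normal(64)
print(robustness(x, w))  # nearer 1.0 = more stable explanation under noise
```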
Current explainability methods for EEG frequently lack robustness, consistency, and alignment with neuroscientific knowledge, limiting their trustworthiness and practical utility.
Aggregate observations from reviewed EEG-XAI studies noting inconsistent attributions, sensitivity to analysis choices, and few studies that validate explanations against neuroscientific markers or clinical endpoints.
Divergent governance regimes increase the risk of data localization, interoperability frictions, and regulatory fragmentation, raising costs for multinational AI development and limiting global model generalizability.
Policy-level comparative inference from contrasting national approaches identified in the document analysis and related literature on cross-border data governance; no direct measurement of costs or model generalizability in the paper.
State-led coordination can rapidly mobilize resources and scale national champions, altering competitive dynamics and potentially creating winner-take-most outcomes.
Theoretical inference from document evidence of state mobilization and developmentalist goals in Chinese texts, combined with literature on state coordination and industrial scaling (no empirical competition measures in the paper).
AI systems trained on incomplete, adult-centric, or high-income–biased data risk perpetuating inequities in prediction, resource allocation, and policy recommendations for children and LMICs.
Data-justice and algorithmic fairness literature cited conceptually in the review; applies generalizable concerns about biased training data to the One Health/child-health context without empirical bias audits in this paper.
Data gaps, especially child-specific and cross-sectoral One Health data, reduce the reliability and fairness of AI-driven disease prediction and economic models.
Methodological argument grounded in the review of data availability; authors connect observed surveillance gaps to model limitations—no empirical model performance analyses presented.
Fragmented governance and funding structures hinder cross-sectoral prevention and response for child-centered One Health challenges.
Policy analyses and governance literature synthesized in the review; narrative evidence of siloed funding and governance limiting cross-sector action (no quantitative governance metrics provided).
Integrated One Health research and policy implementation are limited—particularly in LMICs—creating gaps in prevention and response for children.
Policy, programmatic, and academic literature reviewed; authors note under-representation of LMIC contexts and limited cross-sectoral integration in the published literature and surveillance systems.
Geographic ranges of many vectors and zoonoses are shifting (due to climate and land-use change), increasing children's exposure in new areas.
Ecological and epidemiological modeling studies and surveillance trends cited in the review indicating range shifts for some vectors/zoonoses; evidence is region- and agent-specific and heterogeneously reported.
Extreme weather events amplify children's exposure to pathogens and degrade health infrastructure and services.
Disaster and public-health case studies and surveillance reports summarized in the review documenting post-event increases in infectious disease exposure and disruptions to services; narrative evidence, context-dependent.
Climate change intensifies direct harms to children (heat injury, extreme weather injury) and indirect harms (food insecurity, mental health impacts, shifting disease ecologies).
Climate-health literature and sectoral reports synthesized; references to observational studies and modeling showing associations between climate events and the listed harms (no pooled effect sizes).
Pediatric and neonatal AMR pose distinct clinical and surveillance challenges compared to adult AMR.
Clinical literature and surveillance reports synthesized in the review highlighting differences in pathogen spectra, dosing, diagnostics, and reporting for pediatric/neonatal populations; narrative description without quantitative synthesis.
Children are disproportionately exposed to antimicrobial-resistant pathogens via clinical care, community transmission, food chains, and environmental contamination.
Synthesis of clinical studies, community surveillance reports, food-safety literature, and environmental microbiology studies; review notes pediatric and environmental sources but provides no pooled prevalence estimates.
Children's dependence on caregivers and local ecosystems (for nutrition, shelter, sanitation) increases vulnerability to ecosystem-level shocks.
Social and public-health literature integrated in the review describing caregiver-mediated dependence and ecosystem service reliance; qualitative and observational evidence rather than quantitative pooled estimates.
Children are uniquely vulnerable within the One Health nexus because physiological immaturity, developmental sensitivity, behavior-driven exposures, and ecosystem dependence make them disproportionately affected by AMR, climate change, and emerging zoonotic/vector-borne infections.
Narrative synthesis of interdisciplinary peer-reviewed studies, surveillance reports, and policy literature; biological and epidemiological reasoning rather than a pooled quantitative analysis; heterogeneous and cross-disciplinary evidence summarized by the authors.
Holding schools liable under federal civil-rights statutes is sometimes possible but often insufficient to prevent or remediate harms caused by EdTech products.
Policy argumentation and doctrinal analysis with hypotheticals and illustrative cases demonstrating enforcement limitations when only schools are targeted (no empirical prevalence data).
Resource-rich labs and firms are likely to adopt LLM orchestration faster, which could widen gaps in research capacity between institutions and countries unless mitigated by policy choices.
Equity and diffusion argument based on resource requirements (compute, data, validation); no adoption-rate data or cross-institution comparisons provided.
There is potential for 'winner-take-most' market outcomes if a few players combine superior models, instrument control software, and exclusive datasets.
Economics reasoning about network effects and data concentration; no empirical market concentration metrics specific to microscopy provided.
Upfront investments required for compute, data labeling, validation, and safety testing may raise entry costs and favor incumbents.
Economic logic about fixed costs and scale advantages; no measured entry-cost or firm-dynamics data provided.
There is a risk of deskilling in some technical roles, with implications for training and workforce development.
Theoretical reasoning about automation-induced deskilling; no empirical study or measured skill changes provided.
There is a nonlinear 'Digital Exclusion Trap': fiscal support is ineffective or harmful in places below a critical level of digital infrastructure.
Nonlinear/threshold tests and heterogeneous-effect analyses in the DID framework showing that treatment effects on cultural employment vary by digital infrastructure level, with null or negative effects below an estimated threshold (analysis on 280 cities, 2008–2021).
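A schematic of such a heterogeneous-effect DID specification on a synthetic panel; the 40-city sample, the built-in 0.4 threshold, and all variable names are placeholders, not the paper's design:

```python
# Schematic heterogeneous-effect DID with a treatment-by-infrastructure
# interaction, echoing the threshold analysis above. Data are synthetic.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for c in range(40):
    infra = rng.uniform(0, 1)            # city digital-infrastructure index
    treated = c < 20                     # half the cities receive fiscal support
    for y in range(2008, 2022):
        post = y >= 2015
        effect = 1.5 * max(infra - 0.4, 0) if (treated and post) else 0.0
        rows.append(dict(city=c, year=y, infra=infra,
                         treat_post=int(treated and post),
                         emp=effect + rng.normal(0, 0.5)))
df = pd.DataFrame(rows)

# infra's main effect is absorbed by the city fixed effects.
m = smf.ols("emp ~ treat_post + treat_post:infra + C(city) + C(year)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["city"]})
print(m.params[["treat_post", "treat_post:infra"]])  # positive interaction: gains rise with infrastructure
```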
Stronger negative sentiment (measured by aggregated VADER scores of complaint narratives) is significantly associated with near-term stock price declines.
VADER sentiment applied to individual complaint narratives then aggregated to firm–month sentiment; fixed-effects panel models find statistically significant negative relationships between more negative aggregated VADER scores and subsequent abnormal returns across the 261-firm monthly sample (2018–2023).
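A sketch of the described pipeline: VADER scoring, firm-month aggregation, and a two-way fixed-effects panel fit via linearmodels. All data below are synthetic, with a negative sentiment-return relationship built in for illustration:

```python
# Sketch: score complaint narratives with VADER, aggregate to firm-month,
# then fit an entity/time fixed-effects panel. Data are synthetic
# placeholders, not the study's 261-firm sample.
import numpy as np
import pandas as pd
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
from linearmodels.panel import PanelOLS

analyzer = SentimentIntensityAnalyzer()
score = lambda text: analyzer.polarity_scores(text)["compound"]  # in [-1, 1]
print(score("Billing error was never fixed and support was rude."))  # negative

# Synthetic firm-month panel standing in for aggregated complaint sentiment
# and abnormal returns (true coefficient of 0.02 built in).
rng = np.random.default_rng(0)
idx = pd.MultiIndex.from_product(
    [range(30), pd.date_range("2018-01", periods=24, freq="MS")],
    names=["firm", "month"])
sentiment = rng.uniform(-1, 0.5, len(idx))
panel = pd.DataFrame({"sentiment": sentiment,
                      "abn_ret": 0.02 * sentiment + rng.normal(0, 0.03, len(idx))},
                     index=idx)

res = PanelOLS.from_formula("abn_ret ~ sentiment + EntityEffects + TimeEffects",
                            data=panel).fit(cov_type="clustered", cluster_entity=True)
print(res.params["sentiment"])  # positive: more negative sentiment, lower abnormal returns
```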
Optional LLM access without training was associated with shorter written answers compared with no LLM access.
Measured answer length in the randomized trial (n = 164); comparison between untrained optional-access arm and no-access arm showed shorter answers in the untrained-access group.
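The comparison reduces to a two-sample test on word counts; a sketch on synthetic stand-in data (the trial's actual statistics are not reproduced here):

```python
# Two-sample comparison of answer lengths between the untrained optional-access
# arm and the no-access arm. Word counts are synthetic stand-ins; the trial's
# data (n = 164) are not reproduced here.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
words_no_llm = rng.normal(220, 60, 82).clip(20)          # hypothetical word counts
words_llm_untrained = rng.normal(170, 60, 82).clip(20)

stat, p = mannwhitneyu(words_llm_untrained, words_no_llm, alternative="less")
print(f"medians: no-LLM {np.median(words_no_llm):.0f}, "
      f"untrained LLM {np.median(words_llm_untrained):.0f} words; p = {p:.3g}")
```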
Regulatory frameworks often lack tools for algorithmic accountability, data portability, and cross-border enforcement for platformed services.
Policy and regulatory studies reviewed in the paper; assessment based on gap analysis rather than new regulatory audit data.
Algorithmic bias—stemming from training data, feature selection, or proxy variables—can produce systematic discrimination (for example, gendered access to credit).
Reviewed empirical and methodological studies on algorithmic fairness; paper cites documented instances and outlines mechanisms but does not present original audit data.
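A minimal audit sketch for the gendered-credit example: group approval rates and the four-fifths disparate-impact ratio, computed on placeholder data:

```python
# Simple audit for the credit example: compare approval rates by group and
# compute the disparate-impact ratio (min rate / max rate; 0.8 is the common
# "four-fifths" benchmark). Data are synthetic; the paper reports no audit.
import pandas as pd

decisions = pd.DataFrame({
    "gender": ["f"] * 100 + ["m"] * 100,
    "approved": [1] * 55 + [0] * 45 + [1] * 70 + [0] * 30,  # hypothetical outcomes
})
rates = decisions.groupby("gender")["approved"].mean()
print(rates.to_dict())            # {'f': 0.55, 'm': 0.7}
print(rates.min() / rates.max())  # ~0.786: below the 0.8 benchmark
```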