Evidence (4049 claims shown; Governance filter applied)

Claim counts by topic:

| Topic | Claims |
|---|---|
| Adoption | 5126 |
| Productivity | 4409 |
| Governance | 4049 |
| Human-AI Collaboration | 2954 |
| Labor Markets | 2432 |
| Org Design | 2273 |
| Innovation | 2215 |
| Skills & Training | 1902 |
| Inequality | 1286 |
Evidence Matrix
Claim counts by outcome category and direction of finding.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 369 | 105 | 58 | 432 | 972 |
| Governance & Regulation | 365 | 171 | 113 | 54 | 713 |
| Research Productivity | 229 | 95 | 33 | 294 | 655 |
| Organizational Efficiency | 354 | 82 | 58 | 34 | 531 |
| Technology Adoption Rate | 277 | 115 | 63 | 27 | 486 |
| Firm Productivity | 273 | 33 | 68 | 10 | 389 |
| AI Safety & Ethics | 112 | 177 | 43 | 24 | 358 |
| Output Quality | 228 | 61 | 23 | 25 | 337 |
| Market Structure | 105 | 118 | 81 | 14 | 323 |
| Decision Quality | 154 | 68 | 33 | 17 | 275 |
| Employment Level | 68 | 32 | 74 | 8 | 184 |
| Fiscal & Macroeconomic | 74 | 52 | 32 | 21 | 183 |
| Skill Acquisition | 85 | 31 | 38 | 9 | 163 |
| Firm Revenue | 96 | 30 | 22 | — | 148 |
| Innovation Output | 100 | 11 | 20 | 11 | 143 |
| Consumer Welfare | 66 | 29 | 35 | 7 | 137 |
| Regulatory Compliance | 51 | 61 | 13 | 3 | 128 |
| Inequality Measures | 24 | 66 | 31 | 4 | 125 |
| Task Allocation | 64 | 6 | 28 | 6 | 104 |
| Error Rate | 42 | 47 | 6 | — | 95 |
| Training Effectiveness | 55 | 12 | 10 | 16 | 93 |
| Worker Satisfaction | 42 | 32 | 11 | 6 | 91 |
| Task Completion Time | 71 | 5 | 3 | 1 | 80 |
| Wages & Compensation | 38 | 13 | 19 | 4 | 74 |
| Team Performance | 41 | 8 | 15 | 7 | 72 |
| Hiring & Recruitment | 39 | 4 | 6 | 3 | 52 |
| Automation Exposure | 17 | 15 | 9 | 5 | 46 |
| Job Displacement | 5 | 28 | 12 | — | 45 |
| Social Protection | 18 | 8 | 6 | 1 | 33 |
| Developer Productivity | 25 | 1 | 2 | 1 | 29 |
| Worker Turnover | 10 | 12 | — | 3 | 25 |
| Creative Output | 15 | 5 | 3 | 1 | 24 |
| Skill Obsolescence | 3 | 18 | 2 | — | 23 |
| Labor Share of Income | 7 | 4 | 9 | — | 20 |
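The matrix above can be queried programmatically. A minimal sketch (a few rows copied from the table; the function name and structure are illustrative, not part of the source data) computing each category's share of negative findings from the direction columns:

```python
# Rows are (outcome, positive, negative, mixed, null) tuples taken from
# the evidence matrix above.
rows = [
    ("Firm Productivity", 273, 33, 68, 10),
    ("AI Safety & Ethics", 112, 177, 43, 24),
    ("Job Displacement", 5, 28, 12, 0),
]

def negative_share(positive, negative, mixed, null):
    """Fraction of direction-coded claims whose finding is negative."""
    total = positive + negative + mixed + null
    return negative / total

for outcome, pos, neg, mix, nul in rows:
    print(f"{outcome}: {negative_share(pos, neg, mix, nul):.0%} negative")
```

Note the share is computed from the four direction columns rather than the Total column, since some row totals include claims not shown in these four directions.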
Governance
Rapid skill obsolescence in AI necessitates frequent curriculum updates and responsive governance.
Identified as a risk: the paper notes the rapid rate of change in AI-related skills and recommends frequent curriculum updates and governance mechanisms. This aligns with general domain knowledge; the paper does not empirically measure obsolescence rates.
Aligning multiple standards is complex, posing a disadvantage and implementation risk.
Stated explicitly in Disadvantages/Risks: complexity of aligning multiple standards is listed. This is a reasoned observation in the paper rather than empirically demonstrated.
Implementing this framework requires significant resources and continuous updating.
Stated explicitly under Main Finding and Disadvantages/Risks; paper lists cost/time metrics to track (cost-per-curriculum, time-to-update) and highlights resource intensity. Support is descriptive/analytic rather than empirical.
Algorithmic bias, unequal digital financial literacy, caregiving time constraints, and limited access to personalized solutions can sustain or reproduce gender investment gaps if not addressed.
Synthesis of literature on barriers to financial inclusion and AI fairness concerns, plus platform report observations (review of empirical and conceptual studies; not a single empirical test).
In some settings, women exhibit statistically greater risk aversion than men.
Summary of empirical survey and experimental studies on gender differences in risk attitudes discussed in the review (multiple cross‑sectional and lab/field experiments referenced).
The digital divide (lack of reliable electricity and connectivity) constrains adoption of MIS and AI, creating geographic and regional inequities in who benefits from the framework.
Infrastructure constraint argument presented in the paper; no quantified coverage maps or population-level access statistics included.
AI-driven equivalency systems carry risks including algorithmic bias, opaque decisions without explainability, and potential reinforcement of inequities when training data under-represents some regions/institutions.
Risk assessment drawing on established AI ethics literature; no empirical bias audit from the proposed system is provided.
The major disadvantage of an MIS is dependency on reliable electricity and internet, creating systemic vulnerability due to the digital divide.
Paper notes infrastructure dependency as a constraint; assertion grounded in common infrastructural realities but no measured connectivity or outage statistics from DRC/SA are provided.
Key audit/control weaknesses with respect to prompt fraud include lack of provenance for inputs/prompts and model outputs, inadequate access controls, and missing or ineffective monitoring and anomaly detection for AI outputs.
Qualitative control analysis and adaptation of established auditing principles to GenAI workflows; recommendations based on threat modeling rather than field data.
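One control responding to the provenance gap named above is a tamper-evident log of prompt/output pairs. A minimal sketch, assuming a simple hash-chained record (all function and field names here are hypothetical, not from the paper):

```python
import hashlib
import json
import time

def record_entry(log, user, prompt, output):
    """Append a hash-chained provenance record for one prompt/output pair.

    Each entry commits to the previous entry's hash, so any later edit to
    an earlier record breaks the chain and is detectable.
    """
    prev = log[-1]["hash"] if log else "0" * 64
    body = {"user": user, "prompt": prompt, "output": output,
            "ts": time.time(), "prev": prev}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append({**body, "hash": digest})
    return log

def verify(log):
    """Recompute every hash; return True iff the chain is intact."""
    prev = "0" * 64
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "hash"}
        if body["prev"] != prev:
            return False
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != entry["hash"]:
            return False
        prev = entry["hash"]
    return True
```

A real deployment would also anchor the chain externally (e.g., periodic digests to a system the prompt author cannot write to), since an attacker who can rewrite the whole log can rebuild the chain.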
GenAI outputs can be tailored to mimic corporate styles, templates, and evidence artifacts (e.g., summaries, memos, audit trails), which increases their credibility to auditors, managers, or customers.
Illustrative examples and scenario mapping demonstrating templated output mimicry; no controlled experiments or corpus analysis provided.
Large language models produce fluent, human-like outputs that can mask falsehoods (hallucinations) as facts, making prompt fraud effective.
Well-established LLM behavior cited conceptually and supported in the paper by illustrative examples; no new empirical measurement in this article.
Prompt fraud does not require system intrusion, credential theft, or software exploits; it operates at the reasoning/language layer of large language models and therefore can be executed without technical breaches.
Logical/technical argumentation built from properties of LLMs and illustrative hypothetical attack chains; threat modeling rather than empirical attack logs.
Prompt fraud is a new, distinct fraud modality in which adversaries intentionally craft natural-language prompts (or manipulate prompt inputs) to steer generative AI outputs into producing misleading, fabricated, or compliance-evading artifacts that bypass traditional internal controls.
Conceptual definition presented by the paper based on threat taxonomy and scenario mapping; illustrated with case-style examples. No empirical incident dataset or prevalence statistics provided.
Potential limitations include limited methodological detail on case selection and measurement, possible selection and reporting bias from practitioner-sourced examples, and variable generalizability to small firms or highly regulated industries.
Authors' self-reported limitations in the Methods/Limitations section (qualitative assessment).
Prompt fraud exploits the natural-language interface of large language models (LLMs) to produce outputs that appear authoritative (reports, audit trails, explanations) without system intrusion, credential theft, or software exploitation.
Definition and threat-model description using conceptual examples and case vignettes; literature/regulatory review to position the threat relative to traditional fraud vectors.
Data privacy and cross-border compliance issues arise from using cloud and SECaaS, complicating legal compliance for firms.
Regulatory analyses and compliance reports; documented examples in case studies and industry guidance on cross-border data flows.
The cloud shared responsibility model creates potential ambiguities in liability between providers and customers.
Regulatory guidance, legal analyses, and documented post-incident case studies showing confusion over responsibilities.
China manages the openness–security trade-off through a centralized, developmentalist, techno‑sovereignty approach that privileges coordinated state direction and control.
Qualitative content analysis of national‑level policy texts: 18 Chinese policy documents coded across four analytical dimensions (coordination objectives, institutional actors, governance mechanisms, stakeholder legitimacy).
Antibiotic use in humans and animals, along with environmental antibiotic residues, generates converging selection pressures that drive AMR relevant to children.
Well-established ecological and microbiological literature summarized in the review showing cross-sector selection pressures; narrative integration rather than new empirical analysis.
Child behaviors (hand-to-mouth activity, play, outdoor exposure) increase contact with environmental and animal reservoirs and therefore exposure risk.
Behavioral and exposure studies synthesized narratively; observational evidence from exposure assessments and pediatric environmental health studies cited in review (no meta-analysis).
Developmental windows imply early-life exposures can have long-term consequences for health and human capital.
Developmental and epidemiologic literature integrated in the review; narrative citations of studies linking early exposures to later health and cognitive outcomes (no single longitudinal dataset presented).
Physiological and immunological immaturity (including neonatal risks) increases children's susceptibility to infectious disease and related harms.
Established biological and clinical literature synthesized in the review; references to neonatal clinical risks and immunological immaturity across pediatric literature (no pooled effect sizes reported).
Automation and LLM-driven orchestration add opacity; errors in instrument control or analysis could propagate quickly, raising liability, insurance, and reproducibility concerns.
Analytical discussion of risks and analogies to automated systems in other domains; no incident-level empirical data from microscopy given.
Ethical and governance issues related to LLM-driven microscopy include accountability, reproducibility, access inequities, data privacy, and concentration of capabilities in large providers.
Policy-oriented synthesis and analogies to governance challenges observed in other AI deployments; no new empirical measurement in microscopy contexts.
Integration of LLMs with microscopes faces challenges including safety and reliability of instrument control, verification of scientific outputs, data provenance, and alignment with experimental constraints.
Analytical discussion based on known reliability and safety issues in automated systems and AI tool use; no empirical incident data from microscopy provided.
There is substantial uncertainty in economic forecasts due to possible scale-up failures, regulatory constraints, feedstock price volatility, and path‑dependent lock‑in effects.
Synthesis of technical failure modes, regulatory uncertainty, and sensitivity analyses reported in TEA/LCA literature and economic modeling sections of the review.
Regulatory and biosafety concerns (including environmental release risks and dual‑use issues) increase fixed costs and create entry barriers that shape industry structure and diffusion.
Policy and governance literature reviewed alongside technical case studies; citations of regulatory requirements, biosafety frameworks, and examples of compliance costs affecting project viability.
Engineering and economic challenges—scale‑up hurdles, process robustness, feedstock cost, and downstream purification—limit industrial deployment of many bio-based processes.
Case study TEA/LCA summaries and process reports in the review highlighting scale-up failures or increased costs at larger scales, purification complexity for low‑concentration products, and sensitivity to feedstock prices.
Technical biological limitations—metabolic burden, pathway crosstalk, byproduct formation, and genetic instability—remain major constraints on strain performance and scalability.
Multiple experimental reports and method papers cited in the review documenting decreased growth/productivity due to engineered pathway burden, unintended interactions between pathways, accumulation of byproducts, and genetic mutations during production runs.
Empirical validation is concentrated on the Agora-12 corpus; generalizability to other architectures, scales, or deployment contexts is unproven and identified as a limitation.
Authors' own limitations section and scope of empirical tests (analyses limited to Agora-12 and four clinical cases).
Platforms benefit from data-driven scalability and network effects, creating barriers to entry and affecting consumer surplus, innovation incentives, and pricing.
Economic theory of platforms and empirical cases from platform markets synthesized in the literature review; argument supported by secondary empirical studies cited.
Market concentration and network effects create platform power that may squeeze smaller providers, raise costs, or lock users into ecosystems.
Platform economics literature and case examples reviewed in the paper; conceptual and theoretical support with illustrative empirical instances from secondary sources.
Infrastructure gaps (connectivity, electricity, identity systems) limit who benefits from digital finance.
Cross-country and development literature synthesized in the paper highlighting correlations between infrastructure availability and digital finance uptake; no primary empirical analysis in the paper.
Implementing the governed hyperautomation pattern raises upfront costs (governance tooling, monitoring, validation, compliance processes).
Economic and cost-structure discussion in the paper, based on qualitative reasoning and industry experience; no quantified cost estimates or sample-based cost analysis provided.
Use of standardized (non-adaptive) dialogues limits ecological validity relative to live adaptive chatbots.
Limitations section acknowledges that standardized (non-adaptive) experimental dialogues reduce ecological validity compared with live/adaptive chatbot interactions.
Platform KPIs (e.g., eCPM) can diverge from social welfare metrics (consumer surplus, privacy harms), creating metric misalignment.
Conceptual critique with examples of common platform metrics versus welfare economics; not accompanied by a quantitative comparison dataset.
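The divergence can be made concrete with toy numbers (all figures hypothetical, for illustration only): the ad maximizing eCPM need not maximize a welfare score that also prices in consumer surplus and privacy harm.

```python
ads = [
    # (name, revenue_per_impression, consumer_surplus, privacy_harm) — hypothetical
    ("retargeted", 0.012, 0.002, 0.010),
    ("contextual", 0.008, 0.006, 0.001),
]

def ecpm(revenue_per_impression):
    """Platform metric: revenue per 1000 impressions."""
    return revenue_per_impression * 1000

def welfare(revenue, surplus, harm):
    """Crude per-impression welfare proxy: revenue + surplus - privacy harm."""
    return revenue + surplus - harm

best_ecpm = max(ads, key=lambda a: ecpm(a[1]))
best_welfare = max(ads, key=lambda a: welfare(a[1], a[2], a[3]))
print(best_ecpm[0], best_welfare[0])  # the two rankings disagree
```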
Privacy constraints reduce observability and necessitate privacy-preserving study designs that complicate estimation.
Methodological analysis referencing differential privacy, federated learning and their effects on statistical power/observability; no experimental power analyses with sample sizes presented here.
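The observability cost has a standard concrete form: under the Laplace mechanism of differential privacy, released counts carry noise with scale inversely proportional to the privacy budget epsilon, so tighter privacy widens confidence intervals. A minimal sketch (not tied to any specific system in the paper):

```python
import math
import random

def laplace_count(true_count, epsilon, sensitivity=1.0):
    """Release a count with Laplace(0, sensitivity/epsilon) noise added.

    Smaller epsilon means stronger privacy and noisier estimates — the
    loss of statistical power described above.
    """
    scale = sensitivity / epsilon
    u = random.random() - 0.5  # uniform on [-0.5, 0.5)
    # Inverse-CDF sampling of a Laplace(0, scale) variate.
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_count + noise
```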
Data access asymmetries (platforms holding proprietary logs) limit external auditability and replication of advertising research.
Empirical and institutional observation about industry data practices; supported by calls for privacy-preserving shared datasets in the paper; no quantified survey sample included.
Attribution complexity — multi-touch, cross-device, and delayed conversions — confounds causal inference in advertising measurement.
Methodological discussion referencing causal inference challenges and standard problems in attribution; widely-documented in the literature though not re-measured in this paper.
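The sensitivity to attribution rules can be illustrated on a single hypothetical conversion path: last-touch and linear multi-touch assign the same conversion to channels very differently.

```python
path = ["search_ad", "social_ad", "email", "search_ad"]  # hypothetical touchpoints

def last_touch(touchpoints):
    """Assign all conversion credit to the final touchpoint."""
    return {touchpoints[-1]: 1.0}

def linear(touchpoints):
    """Split credit equally across touchpoints (repeat touches accumulate)."""
    share = 1.0 / len(touchpoints)
    credit = {}
    for t in touchpoints:
        credit[t] = credit.get(t, 0.0) + share
    return credit

print(last_touch(path))
print(linear(path))
```

Neither rule is causal; both are accounting conventions, which is why the text treats attribution as a confound rather than a solution.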
Complex automated systems make attribution and responsibility harder when harms occur (Automation vs accountability trade-off).
Qualitative institutional analysis and case-study reasoning about multi-agent automated pipelines and opaque model decisions; no single empirical incident dataset provided.
Richer personalization depends on granular data and cross-device identity, creating privacy externalities and compliance risks (Personalization vs privacy trade-off).
Data source inventory and privacy literature review; supported by observational industry trends (move to first-party identity) rather than a quantified sample in the paper.
Federated infrastructures introduce adversarial risks (model/data poisoning, inference attacks on updates) that require robust aggregation, anomaly detection, and other defenses.
Threat modeling and taxonomy of adversarial/privacy threats with mapped mitigations (robust aggregation, anomaly detection, DP). Evidence is conceptual and based on standard threat frameworks; no empirical attack/defense experiments reported at scale.
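One of the mitigations named above, robust aggregation, can be sketched as a coordinate-wise median over client updates, which bounds how far a single poisoned update can move any coordinate (an illustrative sketch, not the paper's implementation):

```python
def coordinate_median(updates):
    """Coordinate-wise median of a list of client model updates.

    Unlike a plain mean, one adversarial client cannot drag any
    coordinate arbitrarily far — the robustness property the
    defense relies on.
    """
    dim = len(updates[0])
    agg = []
    for j in range(dim):
        col = sorted(u[j] for u in updates)
        mid = len(col) // 2
        agg.append(col[mid] if len(col) % 2 else 0.5 * (col[mid - 1] + col[mid]))
    return agg

honest = [[0.1, -0.2], [0.12, -0.18], [0.09, -0.21]]
poisoned = honest + [[100.0, 100.0]]   # one adversarial update
print(coordinate_median(poisoned))      # stays near the honest updates
```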
Delayed and sparse feedback (clicks/conversions) in advertising complicates credit assignment and timely model updates, degrading learning unless specific methods for delayed/sparse signals are used.
Analytical discussion of learning dynamics with delayed/sparse labels; conceptual solutions suggested (credit assignment methods). No large-scale empirical evaluation presented.
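One standard correction for delayed conversions weights a not-yet-converted impression as a negative example by the probability that a true conversion would already have arrived. A minimal sketch, assuming an exponential delay with a known rate (a simplification of delayed-feedback models from the literature, not a method from this paper):

```python
import math

def negative_weight(elapsed_days, delay_rate=0.5):
    """Weight for treating an unconverted impression as a negative label.

    Under an assumed Exponential(delay_rate) conversion delay, a true
    converter would have converted within `elapsed_days` with probability
    1 - exp(-rate * t). Young impressions get little negative weight
    because their label may still flip to positive.
    """
    return 1.0 - math.exp(-delay_rate * elapsed_days)

for t in (0.5, 2.0, 10.0):
    print(t, round(negative_weight(t), 3))
```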
Non-IID and heterogeneous data distributions across devices and publishers impair convergence and degrade personalization unless addressed with algorithmic adaptations.
Analytical modeling of convergence under non-IID conditions; threat/robustness discussion; prototype/simulation illustrations. This claim is supported by established literature and the paper's analytic treatment.
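The mechanics at issue are FedAvg-style rounds: clients take local steps toward their own optima and the server averages the results weighted by sample counts, so heterogeneous local optima pull the average away from any one client's descent direction. A toy scalar sketch (all data hypothetical):

```python
def fedavg_step(global_w, clients, lr=0.1):
    """One FedAvg round on a toy scalar least-squares model.

    Each client holds a local optimum w_k with n samples; it runs one
    local gradient step on (w - w_k)^2 from the current global weight,
    and the server averages the results weighted by sample counts.
    """
    total = sum(n for _, n in clients)
    new_w = 0.0
    for w_k, n in clients:
        local = global_w - lr * 2.0 * (global_w - w_k)  # grad of (w - w_k)^2
        new_w += (n / total) * local
    return new_w

clients = [(1.0, 100), (-3.0, 50)]  # widely separated local optima = non-IID
w = 0.0
for _ in range(50):
    w = fedavg_step(w, clients)
print(round(w, 3))  # settles at the sample-weighted mean of local optima
```

In this toy case the iterates converge to the weighted mean of the client optima rather than any client's own optimum, which is the personalization-degradation effect the claim describes.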
AI automates routine and some mid-skill tasks, reducing employment in those occupations.
Empirical task-based exposure measures mapping AI capabilities to occupational task content, microdata analyses of employment by occupation using household/employer/administrative datasets, and panel regressions/decompositions that document within-occupation declines and between-occupation shifts.
Relying on secondary literature limits the paper's ability to make causal inferences and constrains empirical generalizability to all sectors or countries.
Stated limitations in the paper's Data & Methods section acknowledging scope and inferential constraints.
Increases in K_T reduce employment levels in affected firms and industries even when aggregate productivity rises.
Panel econometric estimates at firm and industry levels relating K_T intensity to employment outcomes, controlling for demand, input prices, and firm characteristics; difference-in-differences specifications and instrumental-variable robustness checks; corroborated by sectoral case studies.
Rising technological capital (K_T) — proxied by robot/automation density, software and intangible capital accumulation, AI adoption surveys, and AI-related patenting — leads to a decline in labor’s share of output.
Firm- and industry-level panel regressions linking constructed K_T intensity measures to labor shares, supported by macro growth-accounting decompositions; robustness checks include difference-in-differences and instrumenting adoption with plausibly exogenous shocks (e.g., cross-border technology diffusion, trade shocks); validated with cross-country comparisons and case studies.
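The difference-in-differences specification mentioned above can be sketched on synthetic firm panel data; plain OLS on treatment, post, and their interaction recovers the adoption effect. The data-generating numbers below are hypothetical, not the paper's estimates:

```python
import random

def did_estimate(panel):
    """OLS of y on [1, treated, post, treated*post]; returns the DiD coefficient.

    Solves the normal equations (X'X) b = X'y by Gaussian elimination,
    so no external libraries are needed.
    """
    X = [[1.0, t, p, t * p] for t, p, _ in panel]
    y = [yi for _, _, yi in panel]
    k = 4
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    for col in range(k):  # forward elimination with partial pivoting
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * k
    for r in reversed(range(k)):  # back substitution
        beta[r] = (b[r] - sum(A[r][c] * beta[c] for c in range(r + 1, k))) / A[r][r]
    return beta[3]  # coefficient on treated*post = DiD effect

# Synthetic panel: employment falls by 2.0 in treated firms after adoption.
random.seed(1)
panel = [(t, p, 10.0 + 1.0 * t + 0.5 * p - 2.0 * t * p + random.gauss(0, 0.1))
         for t in (0, 1) for p in (0, 1) for _ in range(200)]
print(round(did_estimate(panel), 2))  # close to the true effect of -2.0
```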
Fuel subsidy reform imposed an enormous fiscal burden that peaked at 2.8% of GDP in 2022, limiting the macroeconomic leverage of AI-driven efficiency gains.
Reported fiscal statistic in the paper (2.8% of GDP in 2022) and its role in analysis of why AI savings do not translate into large macro gains.
The oil and gas trade balance remained in deficit at -1.55 billion USD in May 2025 and -1.58 billion USD in July 2025 despite an overall national trade surplus.
Reported trade-balance figures in the paper (monthly trade statistics for May and July 2025).