Evidence (3492 claims; Innovation filter applied)

Claim counts by category:

| Category | Claims |
|---|---|
| Adoption | 7395 |
| Productivity | 6507 |
| Governance | 5877 |
| Human-AI Collaboration | 5157 |
| Innovation | 3492 |
| Org Design | 3470 |
| Labor Markets | 3224 |
| Skills & Training | 2608 |
| Inequality | 1835 |
Evidence Matrix
Claim counts by outcome category and direction of finding.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 609 | 159 | 77 | 736 | 1615 |
| Governance & Regulation | 664 | 329 | 160 | 99 | 1273 |
| Organizational Efficiency | 624 | 143 | 105 | 70 | 949 |
| Technology Adoption Rate | 502 | 176 | 98 | 78 | 861 |
| Research Productivity | 348 | 109 | 48 | 322 | 836 |
| Output Quality | 391 | 120 | 44 | 40 | 595 |
| Firm Productivity | 385 | 46 | 85 | 17 | 539 |
| Decision Quality | 275 | 143 | 62 | 34 | 521 |
| AI Safety & Ethics | 183 | 241 | 59 | 30 | 517 |
| Market Structure | 152 | 154 | 109 | 20 | 440 |
| Task Allocation | 158 | 50 | 56 | 26 | 295 |
| Innovation Output | 178 | 23 | 38 | 17 | 257 |
| Skill Acquisition | 137 | 52 | 50 | 13 | 252 |
| Fiscal & Macroeconomic | 120 | 64 | 38 | 23 | 252 |
| Employment Level | 93 | 46 | 96 | 12 | 249 |
| Firm Revenue | 130 | 43 | 26 | 3 | 202 |
| Consumer Welfare | 99 | 51 | 40 | 11 | 201 |
| Inequality Measures | 36 | 105 | 40 | 6 | 187 |
| Task Completion Time | 134 | 18 | 6 | 5 | 163 |
| Worker Satisfaction | 79 | 54 | 16 | 11 | 160 |
| Error Rate | 64 | 78 | 8 | 1 | 151 |
| Regulatory Compliance | 69 | 64 | 14 | 3 | 150 |
| Training Effectiveness | 81 | 15 | 13 | 18 | 129 |
| Wages & Compensation | 70 | 25 | 22 | 6 | 123 |
| Team Performance | 74 | 16 | 21 | 9 | 121 |
| Automation Exposure | 41 | 48 | 19 | 9 | 120 |
| Job Displacement | 11 | 71 | 16 | 1 | 99 |
| Developer Productivity | 71 | 14 | 9 | 3 | 98 |
| Hiring & Recruitment | 49 | 7 | 8 | 3 | 67 |
| Social Protection | 26 | 14 | 8 | 2 | 50 |
| Creative Output | 26 | 14 | 6 | 2 | 49 |
| Skill Obsolescence | 5 | 37 | 5 | 1 | 48 |
| Labor Share of Income | 12 | 13 | 12 | — | 37 |
| Worker Turnover | 11 | 12 | — | 3 | 26 |
| Industry | — | — | — | 1 | 1 |
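The matrix above lends itself to simple derived statistics such as the share of positive findings per outcome. Note that for some rows (e.g. Other, Governance & Regulation) the published Total exceeds the sum of the four listed directions, which suggests an unlisted direction category; the sketch below divides by the four-way sum rather than the Total column. Row values are copied from the matrix; the summary function itself is illustrative, not part of the source.

```python
# Minimal sketch: positive-finding share per outcome, using the four
# listed direction counts (positive, negative, mixed, null) from the
# Evidence Matrix above.

rows = {
    # outcome: (positive, negative, mixed, null)
    "Firm Productivity":    (385, 46, 85, 17),
    "AI Safety & Ethics":   (183, 241, 59, 30),
    "Job Displacement":     (11, 71, 16, 1),
    "Task Completion Time": (134, 18, 6, 5),
}

def positive_share(counts):
    """Fraction of claims with a positive direction of finding."""
    return counts[0] / sum(counts)

shares = {k: round(positive_share(v), 2) for k, v in rows.items()}
# e.g. Task Completion Time skews strongly positive, Job Displacement strongly negative
```

This kind of normalization makes the asymmetries in the matrix directly comparable across outcomes with very different claim volumes.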
Innovation
Digital financial technologies (online trading platforms, commission‑free brokers, fractional shares, and mobile apps) lower entry barriers and make investing more accessible to women who were previously underrepresented in markets.
Synthesis of platform feature descriptions and cross‑sectional platform usage studies cited in the literature review (observational comparisons of user demographics on retail platforms; no single pooled sample size reported).
SECaaS lowers fixed-cost barriers for firms to adopt secure cloud infrastructure and AI services, enabling smaller firms to participate in AI deployment.
Economic reasoning supported by cost–benefit analyses and surveys of adoption patterns; the authors recommend empirical methods (cross-sectional/panel regressions) to validate the claim.
Governance and policy levers (SLAs, incident response plans, certifications, audits, regulation) are essential complements to technical security solutions.
Policy literature, industry best practices, and case studies showing improved outcomes when governance mechanisms are used alongside technical controls.
SECaaS can offer potential cost savings relative to building internal teams and tools, particularly for small and medium enterprises (SMEs).
Cost–benefit analyses and vendor pricing comparisons cited in industry reports; survey evidence on security spend allocation (heterogeneous findings across studies).
SECaaS gives firms access to specialized expertise and up-to-date threat feeds they might not maintain internally.
Vendor offerings and industry analyses; surveys reporting reliance on external expertise and threat intelligence services.
SECaaS provides scalability and rapid deployment of new defenses compared with building equivalent in‑house capabilities.
Industry reports and vendor benchmarks on deployment times and scalability; case studies and surveys of firm experiences (no single pooled sample size reported).
Processing and using 3D volumetric data requires substantial storage and GPU/TPU compute, creating demand for cloud compute services and managed ML platforms.
Authors note the resource requirements of 3D volumetric data processing as a practical consideration; general technical knowledge supports this claim, though no resource-consumption measurements are provided in the paper.
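The storage pressure is easy to see with back-of-envelope arithmetic: a dense cubic volume grows with the cube of its side length. The resolution, voxel dtype, and specimen count below are illustrative assumptions, not figures from the paper.

```python
# Back-of-envelope sketch of why dense 3D volumetric data strains storage
# and GPU memory. All concrete numbers here are hypothetical.

def volume_bytes(side_voxels: int, bytes_per_voxel: int = 4) -> int:
    """Memory for one dense cubic volume (default: float32 voxels)."""
    return side_voxels ** 3 * bytes_per_voxel

one_scan = volume_bytes(1024)        # one 1024^3 float32 volume = 4 GiB
archive = 10_000 * one_scan          # a hypothetical 10k-specimen archive

one_scan_gib = one_scan / 2**30      # 4.0 GiB per scan, before compression
archive_tib = archive / 2**40        # ~39 TiB for the hypothetical archive
```

Even modest per-scan sizes compound quickly at collection scale, which is the practical basis for the cloud compute and managed ML platform demand noted above.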
The dataset and its standardization are intended to support automated segmentation, landmarking, feature extraction, and benchmarking for computer-vision and ML methods on biological 3D data.
Authors describe the acquisition and metadata design as 'automation-ready' and suitable for downstream automated/ML workflows.
Phenomic (3D scans) data are linked/paired to ongoing genome sequencing projects to create multimodal phenome–genome resources.
Paper reports links to genome projects where available and describes pairing of phenomic data with genome sequencing efforts.
Sampling is global and broadly covers ant phylogeny.
Authors state global sampling and intended phylogenetic breadth; taxonomic counts across genera/species presented to support breadth.
Legitimacy economies matter: public trust and stakeholder legitimacy influence willingness to share data and participate in collaborative research, with direct economic consequences for data‑intensive innovation.
Argument grounded in coded references to stakeholder legitimacy in the documents and theoretical literature linking legitimacy/trust to participation; the paper does not present empirical measures of trust or sharing behavior.
Extending civil‑rights liability to vendors provides a clear regulatory signal that discrimination risks in algorithmic systems are materially consequential, which could spur broader governance practices across AI product markets.
Policy argument about regulatory signaling effects; theoretical, not empirically tested in the Article.
Treating vendors as recipients would internalize externalities by shifting responsibility for discriminatory harms from schools onto EdTech firms, aligning private incentives with nondiscriminatory product design.
Policy and economic reasoning (theoretical argumentation about incentives), not empirical measurement.
Most EdTech vendors can be brought within the scope of federal financial assistance rules under three theories: (1) direct recipients (federal contracts/grants), (2) intended indirect recipients (intended beneficiaries of pass‑through federal funds), and (3) controllers of a federally funded program (firms exercising controlling authority).
Close reading of statutory language and administrative/judicial precedent applied to procurement and control relationships; doctrinal reasoning and illustrative examples (no empirical sampling).
Treating EdTech vendors as recipients would make the companies themselves directly liable for discrimination harms in schools.
Statutory interpretation of nondiscrimination obligations (Title VI/Title IX/Section 504) and precedent about recipient obligations; doctrinal reasoning and illustrative case law.
EdTech companies that provide tools like automated grading or plagiarism detection can — and should — be treated as “recipients” of federal financial assistance under existing federal education civil‑rights statutes.
Doctrinal legal analysis and policy argumentation drawing on statutory text, administrative guidance, and illustrative case law (no empirical dataset or sample size).
Research priorities for economists should include assembling integrated datasets (strain performance, TEA/LCA, patents/funding, compute/data assets) and building scenario TEA/LCA models under varying yield/productivity and regulatory assumptions.
Prescriptive recommendation based on identified gaps in the literature and the heterogeneity of existing case studies; justified by the review’s mapping of missing cross‑disciplinary datasets and methodological heterogeneity.
High‑throughput screening, microfluidics, and automated lab infrastructure materially increase the throughput of DBTL cycles and reduce time per iteration.
Aggregate experimental reports demonstrating use of droplet microfluidics, automated liquid-handling, and high-throughput assays enabling larger combinatorial libraries to be tested more rapidly in several published studies.
Integration of synthetic chemistry with engineered biology enables hybrid chemo‑bio manufacturing routes that can fill gaps where biological access alone is insufficient.
Examples in the review where biological steps produce advanced intermediates that are then completed by chemical steps (or vice versa), improving overall route efficiency or enabling transformations difficult for either domain alone.
Cell‑free synthetic platforms provide rapid prototyping and a decoupled route for bioproduction that can shorten design timelines.
Reports of cell-free pathway prototyping enabling quick testing of enzyme combinations, kinetics, and pathway flux before cellular implementation; experimental demonstrations at bench scale described in reviewed literature.
Machine learning and AI methods (sequence-to-function, phenotype prediction) significantly accelerate DBTL cycles and improve hit rates in strain optimization.
Cited studies using ML models to predict enzyme activity, rank pathway variants, and prioritize constructs for experimental testing; reported reductions in screening burden and improved selection of productive variants across several examples.
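The "predict, rank, prioritize" step those studies describe can be sketched in miniature: a surrogate model scores candidate pathway variants so only the top-ranked constructs go to wet-lab testing. The scoring function, feature names, and weights below are toy stand-ins, not any model from the reviewed literature.

```python
# Minimal sketch of surrogate-model-driven prioritization in a DBTL cycle.
# Features and weights are illustrative assumptions.

from typing import Dict, List

def score(variant: Dict[str, float]) -> float:
    # Toy linear surrogate standing in for a trained sequence-to-function model.
    weights = {"expression": 0.6, "enzyme_activity": 0.3, "toxicity": -0.5}
    return sum(weights[k] * variant.get(k, 0.0) for k in weights)

def prioritize(variants: List[Dict[str, float]], budget: int) -> List[int]:
    """Return indices of the top-`budget` variants by predicted score."""
    order = sorted(range(len(variants)),
                   key=lambda i: score(variants[i]), reverse=True)
    return order[:budget]

candidates = [
    {"expression": 0.9, "enzyme_activity": 0.2, "toxicity": 0.8},
    {"expression": 0.5, "enzyme_activity": 0.9, "toxicity": 0.1},
    {"expression": 0.1, "enzyme_activity": 0.1, "toxicity": 0.0},
]
picked = prioritize(candidates, budget=2)  # only these go to the wet lab
```

The reported reduction in screening burden comes precisely from this kind of triage: the experimental budget is spent on the constructs the model ranks highest rather than on the full combinatorial library.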
Biological production routes can achieve higher product specificity (e.g., for complex stereochemistry) than many traditional chemical syntheses for certain targets.
Case studies and examples where biosynthetic pathways produce stereochemically complex natural products and chiral intermediates that are difficult or multi‑step to access by classical chemistry; comparisons in the review between biosynthetic access and synthetic-chemistry challenges.
Experimental results on ICML and ACL 2025 abstracts produced coherent clusters that map to problem formulations, methodological contributions, and empirical contexts.
Reported experiments on ICML and ACL 2025 abstracts with qualitative analyses and cluster-coherence evaluations showing clusters aligning with problem types, methods, and empirical settings. (Exact counts/metrics not provided in summary.)
The framework treats an LLM as a fixed semantic inference operator guided by structured soft prompts to normalize abstracts into compact semantic representations that reduce stylistic variability while preserving conceptual content.
Described pipeline step: application of an LLM with structured soft prompts to transform raw abstracts into normalized semantic representations; qualitative claims about reduced stylistic noise and preserved core concepts (no quantitative metrics reported in summary).
Prompt-driven semantic normalization using large language models, combined with geometric (embedding + density-based clustering) analysis, provides a scalable, model-agnostic unsupervised framework that discovers coherent, human-interpretable research themes in large scientific corpora.
Method implemented and demonstrated on ICML and ACL 2025 abstracts using: (1) LLM-based semantic normalization with structured soft prompts; (2) embedding of normalized representations; (3) density-based clustering; evaluation via qualitative and cluster-coherence analyses. (Number of abstracts not specified in provided summary.)
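The normalize → embed → density-cluster pipeline can be sketched end to end. Here the LLM normalization and embedding stages are replaced by stand-in 2D vectors, and the density-based step is a small pure-Python DBSCAN (the paper's summary does not name its specific clustering algorithm, so DBSCAN is an assumption for illustration).

```python
# Toy density-based clustering over stand-in "embeddings" of normalized
# abstracts. Real runs would use an LLM for normalization and a
# sentence-embedding model; the points below are illustrative.
import math
from typing import List, Tuple

Point = Tuple[float, float]

def neighbors(pts: List[Point], i: int, eps: float) -> List[int]:
    return [j for j, q in enumerate(pts) if math.dist(pts[i], q) <= eps]

def dbscan(pts: List[Point], eps: float = 0.5, min_pts: int = 2) -> List[int]:
    labels: list = [None] * len(pts)  # None = unvisited, -1 = noise
    cluster = -1
    for i in range(len(pts)):
        if labels[i] is not None:
            continue
        nbrs = neighbors(pts, i, eps)
        if len(nbrs) < min_pts:
            labels[i] = -1            # provisional noise
            continue
        cluster += 1
        labels[i] = cluster
        queue = [j for j in nbrs if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cluster   # noise absorbed as border point
            if labels[j] is not None:
                continue
            labels[j] = cluster
            jn = neighbors(pts, j, eps)
            if len(jn) >= min_pts:
                queue.extend(jn)      # core point: keep expanding
    return labels

# Two tight "themes" plus one outlier abstract.
emb = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1),
       (5.0, 5.0), (5.1, 5.0),
       (10.0, 0.0)]
labels = dbscan(emb)  # two clusters and one noise point
```

Density-based methods fit the framework's goals because they do not fix the number of themes in advance and leave genuinely unclustered abstracts as noise rather than forcing them into a cluster.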
Firms, regulators, and asset managers can operationalize complaint-topic and sentiment monitoring for early risk detection, prioritizing investigations, and as complementary features in forecasting or factor models.
Practical takeaway informed by empirical results showing complaint features predict short-term returns and topic-specific signals indicate reputational/operational risk; recommendations provided but no deployed field trial.
Including complaint-derived features in supervised machine-learning models improves out-of-sample prediction of abnormal returns relative to models using standard financial predictors alone.
Supervised learning experiments compare baseline financial-predictor models to augmented models that add complaint volume, topic prevalences (LDA), and aggregated VADER sentiment; augmented models show higher out-of-sample predictive accuracy for abnormal returns.
Relatively simple NLP tools (LDA for topics and VADER for sentiment) yield economically meaningful signals related to stock returns.
Pipeline: preprocessing + LDA topic extraction + VADER sentiment scoring on CFPB complaint narratives; resulting features show statistically significant associations with abnormal returns in panel models and improve ML predictive performance on the 261-firm monthly sample (2018–2023).
Topic-specific complaint trends (from LDA) provide additional predictive power for short-term abnormal returns beyond aggregate volume and sentiment.
Unsupervised LDA used to extract complaint topics at the firm–month level; inclusion of topic prevalence/trend variables in panel/ML models improves in-sample explanatory power and out-of-sample prediction accuracy relative to models using only volume and sentiment.
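The firm-month feature construction underlying the panel and ML models above (complaint volume, LDA topic prevalences, aggregated VADER sentiment) can be sketched directly. The complaint records, topic assignments, and compound scores below are hypothetical stand-ins for the CFPB narratives processed by the actual LDA + VADER pipeline.

```python
# Minimal sketch of firm-month feature assembly from complaint records.
# Each record is (firm, month, lda_topic_id, vader_compound); all values
# here are illustrative.
from collections import defaultdict

complaints = [
    ("BankA", "2021-03", 0, -0.62),
    ("BankA", "2021-03", 1, -0.10),
    ("BankA", "2021-03", 0, -0.45),
    ("BankB", "2021-03", 2, 0.05),
]

def firm_month_features(records, n_topics: int = 3):
    """Volume, topic-prevalence vector, and mean sentiment per firm-month."""
    groups = defaultdict(list)
    for firm, month, topic, sent in records:
        groups[(firm, month)].append((topic, sent))
    feats = {}
    for key, items in groups.items():
        n = len(items)
        prev = [sum(1 for t, _ in items if t == k) / n for k in range(n_topics)]
        feats[key] = {"volume": n,
                      "topic_prev": prev,
                      "sentiment": sum(s for _, s in items) / n}
    return feats

feats = firm_month_features(complaints)
```

These per-firm-month features are exactly what gets appended to the standard financial predictors in the augmented models, so the topic-prevalence vector is what carries the incremental signal beyond raw volume and sentiment.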
Findings are robust to standard model specifications and inclusion of macroeconomic controls.
Authors report robustness checks across alternative specifications and models that include controls (e.g., GDP per capita, trade openness, human capital, institutional quality) with consistent positive effects of the technology variables.
Complementarities: interaction effects among FinTech, AI readiness, and Blockchain activity are positive — simultaneous development/use of multiple technologies produces larger SDG gains than isolated adoption.
Panel regression models estimated with interaction terms (e.g., AI × FinTech, AI × Blockchain, three-way interactions) on G20 2015–2023 data; reported positive and statistically significant interaction coefficients implying supra-additive effects.
AI readiness exhibits the largest individual association with national SDG performance among the three technologies (FinTech, AI, Blockchain).
Comparison of estimated coefficients from the same panel regression framework (FinTech, AI, Blockchain included separately); AI coefficient reported as largest in magnitude and statistically significant.
National-level Blockchain activity positively and significantly predicts improved national SDG performance across G20 economies (2015–2023).
Cross-country panel regression with a blockchain activity indicator on G20 country-year data (2015–2023); reported statistically significant positive coefficient controlling for standard macro variables.
National AI readiness positively and significantly predicts improved national SDG performance across G20 economies (2015–2023).
Cross-country panel regressions using an AI readiness indicator on G20 country-year data (2015–2023); reported statistically significant positive association controlling for macro covariates.
National-level FinTech adoption positively and significantly predicts improved national Sustainable Development Goal (SDG) performance across G20 economies (2015–2023).
Cross-country panel regression analysis of G20 country-year data from 2015–2023; FinTech adoption indicator included as a main independent variable; models report statistically significant positive coefficient for FinTech after including macro controls.
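The interaction-term design described in these panel specifications can be sketched as a data-preparation step: each country-year row carries the three technology indices plus pairwise and three-way products, which are the regressors whose positive coefficients imply supra-additive effects. Country codes, years, and index values below are illustrative, not the paper's data.

```python
# Sketch of building interaction regressors for a country-year panel row
# before estimation (e.g. with a fixed-effects OLS). All values are
# hypothetical.

def with_interactions(row: dict) -> dict:
    """Augment one country-year row with pairwise and three-way products."""
    f, a, b = row["fintech"], row["ai"], row["blockchain"]
    return {**row,
            "ai_x_fintech": a * f,
            "ai_x_blockchain": a * b,
            "fintech_x_blockchain": f * b,
            "three_way": a * f * b}

panel = [
    {"country": "IND", "year": 2019, "fintech": 0.7, "ai": 0.5, "blockchain": 0.2},
    {"country": "DEU", "year": 2019, "fintech": 0.6, "ai": 0.8, "blockchain": 0.4},
]
design = [with_interactions(r) for r in panel]
```

A positive coefficient on, say, `ai_x_fintech` means the marginal SDG association of AI readiness is larger where FinTech adoption is also high, which is the supra-additive complementarity the paper reports.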
Aid and infrastructure investment (digital public goods, AI capacity building) act as economic channels of influence that shape recipient countries' technological trajectories and participation in AI value chains.
Qualitative examples of development initiatives and technology transfer cited in the comparative case work and literature review; no new cross‑national statistical analysis provided.
AI technologies are core instruments of smart power, affecting productivity, industrial competitiveness, and the ability to project influence via platforms, surveillance systems, and information controls.
Theoretical argument supported by literature on AI's economic and strategic effects; no new quantitative dataset provided in the paper.
Both states and non‑state actors (tech firms, NGOs, international organisations) can exercise smart power; balance and instruments vary by polity and strategic aims.
Comparative qualitative evidence from the paper's four case studies and secondary empirical studies cited in the literature review; examples of tech firms and IOs in policy documents and public diplomacy cases.
Smart power transcends simple compulsion/attraction binaries by foregrounding legitimacy, cooperative security, and governance as central mechanisms for durable influence.
Theoretical model building and interpretive synthesis of IR literature and illustrative case material from the four case studies; qualitative argumentation rather than new empirical estimation.
In the digital era, states and non‑state actors operationalise smart power through three primary channels: diplomacy, development, and technology.
Comparative qualitative case studies of four actors (United States, China, European Union, Russia) plus synthesis of policy documents, public diplomacy examples, development initiatives, and technology behaviour drawn from the literature review.
Smart power integrates hard power (coercion) and soft power (attraction) into a single legitimacy‑based model of global influence.
Conceptual/theoretical analysis built from a systematic literature review of classical and contemporary IR and strategic studies; model development in the paper (no original quantitative data).
Transparent, auditable AI systems and governance mechanisms are necessary to maintain public trust and democratic oversight.
Normative and governance-focused argument in the book; supported by conceptual reasoning rather than empirical public-opinion or audit studies in the blurb.
Designing AI systems with participation and accessibility at their core is essential to prevent concentration of gains and widening inequalities.
Normative recommendation based on equity concerns and policy analysis; not empirically tested or quantified in the blurb.
AI platforms can materially improve efficiency and resilience of supply chains, altering comparative advantage and regional integration dynamics.
Illustrative vignette (logistics optimization) and policy-analytic reasoning; no empirical supply-chain studies or measured efficiency gains reported in the blurb.
Labor-market policy should emphasize reskilling, algorithmic job-matching, and social safety nets to account for rapid compositional changes enabled by AI platforms.
Policy recommendation grounded in scenario analysis and applied-AI descriptions; no empirical evaluation or quantified labor market impact provided in the blurb.
Policymakers need new institutional capacities to integrate AI-driven foresight into fiscal, trade, and labor policymaking.
Policy analysis and prescriptive argument in the book; illustrated with scenario reasoning but lacking empirical measurement of capacity gaps or interventions.
Rather than replacing human judgment, AI augments foresight and adaptation, enabling resilient, inclusive, and participatory governance if guided by deliberate policy design.
Normative and conceptual argumentation with illustrative vignettes (e.g., policymaker vignette); no empirical validation or sample sizes reported.
AI is transforming economic decision-making, governance, and value creation across sectors and countries.
Conceptual synthesis presented in the book/blurb; no empirical study or sample reported—claim supported by cross-sector examples and narrative argumentation.
A certification/audit industry is likely to emerge (market for algorithm auditors, explainability tools, compliance software).
Market-outcome inference in the economics implications section; forecast based on anticipated demand for compliance/audit services following white‑box mandates.
The protocol projects and systematizes 16 anticipated constitutional rulings by the SCJN to create enforceable standards.
Legal-methodological approach described in the compendium: explicit projection and systematization of 16 anticipated SCJN rulings to derive standards.