Evidence (5267 claims)

- Adoption: 5267 claims
- Productivity: 4560 claims
- Governance: 4137 claims
- Human-AI Collaboration: 3103 claims
- Labor Markets: 2506 claims
- Innovation: 2354 claims
- Org Design: 2340 claims
- Skills & Training: 1945 claims
- Inequality: 1322 claims
Evidence Matrix
Claim counts by outcome category and direction of finding.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 378 | 106 | 59 | 455 | 1007 |
| Governance & Regulation | 379 | 176 | 116 | 58 | 739 |
| Research Productivity | 240 | 96 | 34 | 294 | 668 |
| Organizational Efficiency | 370 | 82 | 63 | 35 | 553 |
| Technology Adoption Rate | 296 | 118 | 66 | 29 | 513 |
| Firm Productivity | 277 | 34 | 68 | 10 | 394 |
| AI Safety & Ethics | 117 | 177 | 44 | 24 | 364 |
| Output Quality | 244 | 61 | 23 | 26 | 354 |
| Market Structure | 107 | 123 | 85 | 14 | 334 |
| Decision Quality | 168 | 74 | 37 | 19 | 301 |
| Fiscal & Macroeconomic | 75 | 52 | 32 | 21 | 187 |
| Employment Level | 70 | 32 | 74 | 8 | 186 |
| Skill Acquisition | 89 | 32 | 39 | 9 | 169 |
| Firm Revenue | 96 | 34 | 22 | — | 152 |
| Innovation Output | 106 | 12 | 21 | 11 | 151 |
| Consumer Welfare | 70 | 30 | 37 | 7 | 144 |
| Regulatory Compliance | 52 | 61 | 13 | 3 | 129 |
| Inequality Measures | 24 | 68 | 31 | 4 | 127 |
| Task Allocation | 75 | 11 | 29 | 6 | 121 |
| Training Effectiveness | 55 | 12 | 12 | 16 | 96 |
| Error Rate | 42 | 48 | 6 | — | 96 |
| Worker Satisfaction | 45 | 32 | 11 | 6 | 94 |
| Task Completion Time | 78 | 5 | 4 | 2 | 89 |
| Wages & Compensation | 46 | 13 | 19 | 5 | 83 |
| Team Performance | 44 | 9 | 15 | 7 | 76 |
| Hiring & Recruitment | 39 | 4 | 6 | 3 | 52 |
| Automation Exposure | 18 | 17 | 9 | 5 | 50 |
| Job Displacement | 5 | 31 | 12 | — | 48 |
| Social Protection | 21 | 10 | 6 | 2 | 39 |
| Developer Productivity | 29 | 3 | 3 | 1 | 36 |
| Worker Turnover | 10 | 12 | — | 3 | 25 |
| Skill Obsolescence | 3 | 19 | 2 | — | 24 |
| Creative Output | 15 | 5 | 3 | 1 | 24 |
| Labor Share of Income | 10 | 4 | 9 | — | 23 |
Adoption
- Claim: Value will shift toward software, data infrastructure, and integration layers relative to hardware; microscopes may become platforms that generate ongoing subscription or model-related revenues.
  Evidence: Market-structure reasoning and analogies to platformization trends in other industries; no market-share or revenue data presented.
- Claim: LLM-driven orchestration could lower the marginal cost and time per experiment by automating protocol design, instrument tuning, and analysis, thereby raising lab-level productivity.
  Evidence: Theoretical economic reasoning and analogy to automation benefits; no randomized trials or empirical throughput measurements provided.
- Claim: LLMs can integrate contextual knowledge, experimental intent, and multi-step reasoning to coordinate sensors, actuators, and analysis tools.
  Evidence: Conceptual argument supported by literature on LLM context modeling and tool orchestration; some proof-of-concept integrations mentioned in related work but no systematic evaluation or sample sizes.
- Claim: Potential applications of LLM orchestration in microscopy include conversational microscope control, adaptive experimental workflows, automated data-processing pipelines, and hypothesis generation/exploratory analysis.
  Evidence: Illustrative use cases and system-architecture proposals synthesized from related work and authors' analysis; these are proposed applications rather than empirically demonstrated at scale.
- Claim: LLMs offer emergent capabilities in reasoning, abstraction, and tool coordination that make them natural interfaces between users and complex experimental systems.
  Evidence: Review of foundation-model literature demonstrating emergent reasoning and tool-use behaviors and conceptual arguments about fit with instrument orchestration; no experimental validation in microscopy contexts provided.
- Claim: LLMs enable conversational control and multi-step workflow supervision that go beyond task-specific ML models.
  Evidence: Argument based on documented emergent LLM capabilities (reasoning, tool use) and illustrative prototypes from the literature; no controlled comparisons to task-specific ML models provided.
- Claim: Large language models (LLMs) can serve as cognitive and orchestration layers for modern optical microscopy, bridging experiment design, instrument control, data analysis, and knowledge integration.
  Evidence: Conceptual synthesis and perspective drawing on recent literature about LLM capabilities, computational imaging, and illustrative proof-of-concept integrations reported in related work; no controlled experimental evaluation or quantitative sample size reported.
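The orchestration-layer idea in these claims can be made concrete with a toy control loop. The sketch below is purely illustrative: the tool names (`set_exposure`, `autofocus`, `acquire_image`) and the keyword-matching `plan` function are hypothetical stand-ins; a real system would route the request through an actual LLM with tool/function calling rather than this lookup.

```python
# Minimal sketch of an "LLM as orchestrator" loop for instrument control.
# All tool names here are invented placeholders, not a real microscope API.

def set_exposure(ms: int) -> str:
    return f"exposure set to {ms} ms"

def autofocus() -> str:
    return "autofocus complete"

def acquire_image() -> str:
    return "image acquired"

# Registry mapping step names to callable tools.
TOOLS = {"exposure": lambda: set_exposure(50),
         "focus": autofocus,
         "acquire": acquire_image}

def plan(request: str) -> list:
    """Stand-in for the LLM planning step: map user intent to tool calls."""
    steps = [name for name in ("exposure", "focus", "acquire")
             if name in request]
    return steps or ["acquire"]

def run(request: str) -> list:
    """Execute the planned steps in order and collect their results."""
    return [TOOLS[step]() for step in plan(request)]

print(run("set exposure, focus, then acquire"))
```

The point of the sketch is the division of labor the claims describe: the language model supplies planning and context integration, while deterministic instrument drivers do the actual actuation.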
- Claim: Research priorities for economists should include assembling integrated datasets (strain performance, TEA/LCA, patents/funding, compute/data assets) and building scenario TEA/LCA models under varying yield/productivity and regulatory assumptions.
  Evidence: Prescriptive recommendation based on identified gaps in the literature and the heterogeneity of existing case studies; justified by the review’s mapping of missing cross‑disciplinary datasets and methodological heterogeneity.
- Claim: High‑throughput screening, microfluidics, and automated lab infrastructure materially increase the throughput of DBTL cycles and reduce time per iteration.
  Evidence: Aggregate experimental reports demonstrating use of droplet microfluidics, automated liquid-handling, and high-throughput assays enabling larger combinatorial libraries to be tested more rapidly in several published studies.
- Claim: Integration of synthetic chemistry with engineered biology enables hybrid chemo‑bio manufacturing routes that can fill gaps where biological access alone is insufficient.
  Evidence: Examples in the review where biological steps produce advanced intermediates that are then completed by chemical steps (or vice versa), improving overall route efficiency or enabling transformations difficult for either domain alone.
- Claim: Cell‑free synthetic platforms provide rapid prototyping and a decoupled route for bioproduction that can shorten design timelines.
  Evidence: Reports of cell-free pathway prototyping enabling quick testing of enzyme combinations, kinetics, and pathway flux before cellular implementation; experimental demonstrations at bench scale described in reviewed literature.
- Claim: Machine learning and AI methods (sequence-to-function, phenotype prediction) significantly accelerate DBTL cycles and improve hit rates in strain optimization.
  Evidence: Cited studies using ML models to predict enzyme activity, rank pathway variants, and prioritize constructs for experimental testing; reported reductions in screening burden and improved selection of productive variants across several examples.
- Claim: Biological production routes can achieve higher product specificity (e.g., for complex stereochemistry) than many traditional chemical syntheses for certain targets.
  Evidence: Case studies and examples where biosynthetic pathways produce stereochemically complex natural products and chiral intermediates that are difficult or multi‑step to access by classical chemistry; comparisons in the review between biosynthetic access and synthetic-chemistry challenges.
- Claim: Experimental results on ICML and ACL 2025 abstracts produced coherent clusters that map to problem formulations, methodological contributions, and empirical contexts.
  Evidence: Reported experiments on ICML and ACL 2025 abstracts with qualitative analyses and cluster-coherence evaluations showing clusters aligning with problem types, methods, and empirical settings. (Exact counts/metrics not provided in summary.)
- Claim: The framework treats an LLM as a fixed semantic inference operator guided by structured soft prompts to normalize abstracts into compact semantic representations that reduce stylistic variability while preserving conceptual content.
  Evidence: Described pipeline step: application of an LLM with structured soft prompts to transform raw abstracts into normalized semantic representations; qualitative claims about reduced stylistic noise and preserved core concepts (no quantitative metrics reported in summary).
- Claim: Prompt-driven semantic normalization using large language models, combined with geometric (embedding + density-based clustering) analysis, provides a scalable, model-agnostic unsupervised framework that discovers coherent, human-interpretable research themes in large scientific corpora.
  Evidence: Method implemented and demonstrated on ICML and ACL 2025 abstracts using: (1) LLM-based semantic normalization with structured soft prompts; (2) embedding of normalized representations; (3) density-based clustering; evaluation via qualitative and cluster-coherence analyses. (Number of abstracts not specified in provided summary.)
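The geometric stage of this pipeline (steps 2 and 3) can be sketched in a few lines. Real inputs would be sentence-encoder embeddings of the LLM-normalized abstracts; the 2-D vectors below are invented so the density-based clustering step is runnable in isolation.

```python
# Toy version of the embed-then-density-cluster stage: two tight groups
# of "abstract embeddings" plus one isolated outlier.
import numpy as np
from sklearn.cluster import DBSCAN

X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
              [5.0, 5.0], [5.1, 5.0], [5.0, 5.1],
              [20.0, 0.0]])

# Density-based clustering: points with >= min_samples neighbors within
# eps form clusters; isolated points are labeled -1 (noise).
labels = DBSCAN(eps=0.5, min_samples=2).fit_predict(X)
print(labels)
```

A density-based method is a natural fit here because the number of research themes is not known in advance and off-topic abstracts can be left unassigned as noise rather than forced into a cluster.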
- Claim: Practical outputs include open-source tooling (Neural MRI), standardized reporting formats (M-CARE), and clinical-style indices for behavioral profiling released alongside the paper.
  Evidence: Authors report open-source toolkit and standardized instruments in the paper (implementation and release claimed).
- Claim: Combined imaging (Neural MRI) and profiling can localize dysfunctions in models and support predictive claims about future model behavior, as shown in the case-based demonstrations.
  Evidence: Four clinical case studies plus analyses within the Agora-12 experimental domain demonstrating localization and predictive uses of imaging + profiling.
- Claim: A behavioral genetics approach decomposes variance in agent behavior into heritable (Core) versus environmental and Shell-level influences, formalized in the Four Shell Model.
  Evidence: Analytical method described and applied to the Agora-12 dataset (variance-decomposition analyses analogous to behavioral genetics).
- Claim: Neural MRI was validated on four clinical case studies that showcase imaging, comparison, localization, and prediction capabilities.
  Evidence: Case-based demonstrations reported in the paper (n = 4 clinical cases used to validate the toolkit and diagnostic pipeline).
- Claim: The Four Shell Model (v3.3) explains model behavior as emergent from interactions between a Core and multiple Shell layers.
  Evidence: Theoretical formalization (behavioral-genetics-style framework) plus empirical grounding using analyses from the Agora-12 program (see supporting experiments).
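The variance-decomposition idea behind the behavioral-genetics framing can be illustrated with a between/within split. This is a generic analysis-of-variance sketch, not the paper's actual method: the "Core" is the grouping factor, Shell/environment supplies within-group variation, and all numbers are invented.

```python
# Toy between-Core ("heritable") vs within-Core ("environmental")
# variance decomposition over invented behavior scores.
import numpy as np

# Rows = 3 Core variants, columns = 4 environments (Shells) each.
scores = np.array([[1.0, 1.2, 0.9, 1.1],    # Core A
                   [3.0, 3.1, 2.9, 3.0],    # Core B
                   [5.0, 5.2, 4.8, 5.0]])   # Core C

grand = scores.mean()
core_means = scores.mean(axis=1)

# With equal group sizes, total variance splits exactly into a
# between-group and a within-group component (law of total variance).
between = ((core_means - grand) ** 2).mean()
within = ((scores - core_means[:, None]) ** 2).mean()
total = ((scores - grand) ** 2).mean()

print(round(between / total, 3))  # share of variance attributable to Core
```

Here almost all variation is between Cores, so by analogy the behavior would be scored as highly "heritable"; a dataset where environments dominated would flip that share.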
- Claim: On the supply side, digital platforms reduced intermediaries and enabled direct, flexible gigs, increasing platform-mediated cultural work.
  Evidence: Evidence from inferred measures of platform-mediated activity and interaction effects between digital infrastructure indicators and treatment status on employment outcomes in the DID models (280 cities, 2008–2021).
- Claim: On the demand side, combined government funding and digital channels boosted cultural consumption, increasing labor demand.
  Evidence: Analysis of government funding/procurement measures and digital channel proxies interacting with employment outcomes in the city-level panel; DID identification with fixed effects across 280 cities (2008–2021).
- Claim: Fiscal-Digital Synergy: government funding combined with digital platforms amplified cultural demand and disintermediated supply, driving employment effects.
  Evidence: Mechanism tests linking fiscal transfers/procurement variables and measures of digital infrastructure/usage to employment outcomes within the DID framework; interaction/heterogeneity analyses showing larger effects where digital infrastructure and procurement intensity are higher (280 cities, 2008–2021).
- Claim: Growth manifested through flexible, platform-enabled labor and government-procured gigs rather than firm-based expansion (termed 'De-organized Growth').
  Evidence: Inferred platform-mediated work activity and analysis of government procurement patterns in the city-panel data; mechanism tests linking increases in government funding/procurement and proxies for platform-mediated activity to cultural employment gains (2008–2021, 280 cities).
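The DID logic these claims rest on reduces, in its simplest 2x2 form, to comparing treated and control changes over time. The employment numbers below are invented for illustration; the actual study estimates this on a 280-city panel (2008–2021) with city and year fixed effects rather than two group means.

```python
# Minimal 2x2 difference-in-differences sketch: treated vs. control
# cities, pre vs. post policy. All values are made up.
treated_pre, treated_post = 10.0, 14.0
control_pre, control_post = 9.0, 10.5

# The control group's change proxies the counterfactual trend; the
# treatment effect is the excess change among treated cities.
did = (treated_post - treated_pre) - (control_post - control_pre)
print(did)
```

In the paper's setting, the interaction and heterogeneity analyses effectively run this comparison within subsamples (e.g., high vs. low digital infrastructure) to probe mechanisms.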
- Claim: Firms, regulators, and asset managers can operationalize complaint-topic and sentiment monitoring for early risk detection, prioritizing investigations, and as complementary features in forecasting or factor models.
  Evidence: Practical takeaway informed by empirical results showing complaint features predict short-term returns and topic-specific signals indicate reputational/operational risk; recommendations provided but no deployed field trial.
- Claim: Including complaint-derived features in supervised machine-learning models improves out-of-sample prediction of abnormal returns relative to models using standard financial predictors alone.
  Evidence: Supervised learning experiments compare baseline financial-predictor models to augmented models that add complaint volume, topic prevalences (LDA), and aggregated VADER sentiment; augmented models show higher out-of-sample predictive accuracy for abnormal returns.
- Claim: Relatively simple NLP tools (LDA for topics and VADER for sentiment) yield economically meaningful signals related to stock returns.
  Evidence: Pipeline: preprocessing + LDA topic extraction + VADER sentiment scoring on CFPB complaint narratives; resulting features show statistically significant associations with abnormal returns in panel models and improve ML predictive performance on the 261-firm monthly sample (2018–2023).
- Claim: Topic-specific complaint trends (from LDA) provide additional predictive power for short-term abnormal returns beyond aggregate volume and sentiment.
  Evidence: Unsupervised LDA used to extract complaint topics at the firm–month level; inclusion of topic prevalence/trend variables in panel/ML models improves in-sample explanatory power and out-of-sample prediction accuracy relative to models using only volume and sentiment.
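The topic stage of this pipeline can be sketched with scikit-learn's LDA implementation. The tiny corpus below is invented (the paper uses CFPB complaint narratives, with VADER sentiment computed alongside); the output `theta` is the per-document topic-prevalence matrix that would be aggregated to firm-month features.

```python
# Sketch of LDA topic extraction over toy complaint-like texts.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = ["late fee charged on closed credit card account",
        "credit card fee charged twice after late payment",
        "mortgage servicer lost escrow payment records",
        "escrow records wrong after mortgage transfer"]

# Bag-of-words counts, then a 2-topic LDA fit.
X = CountVectorizer(stop_words="english").fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
theta = lda.fit_transform(X)  # rows = documents, columns = topic shares

print(theta.shape)  # (4, 2); each row sums to 1
```

In the paper's design these topic shares, averaged per firm and month, enter the panel and ML models next to complaint volume and aggregate sentiment.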
- Claim: Findings are robust to standard model specifications and inclusion of macroeconomic controls.
  Evidence: Authors report robustness checks across alternative specifications and models that include controls (e.g., GDP per capita, trade openness, human capital, institutional quality) with consistent positive effects of the technology variables.
- Claim: Complementarities: interaction effects among FinTech, AI readiness, and Blockchain activity are positive — simultaneous development/use of multiple technologies produces larger SDG gains than isolated adoption.
  Evidence: Panel regression models estimated with interaction terms (e.g., AI × FinTech, AI × Blockchain, three-way interactions) on G20 2015–2023 data; reported positive and statistically significant interaction coefficients implying supra-additive effects.
- Claim: AI readiness exhibits the largest individual association with national SDG performance among the three technologies (FinTech, AI, Blockchain).
  Evidence: Comparison of estimated coefficients from the same panel regression framework (FinTech, AI, Blockchain included separately); AI coefficient reported as largest in magnitude and statistically significant.
- Claim: National-level Blockchain activity positively and significantly predicts improved national SDG performance across G20 economies (2015–2023).
  Evidence: Cross-country panel regression with a blockchain activity indicator on G20 country-year data (2015–2023); reported statistically significant positive coefficient controlling for standard macro variables.
- Claim: National AI readiness positively and significantly predicts improved national SDG performance across G20 economies (2015–2023).
  Evidence: Cross-country panel regressions using an AI readiness indicator on G20 country-year data (2015–2023); reported statistically significant positive association controlling for macro covariates.
- Claim: National-level FinTech adoption positively and significantly predicts improved national Sustainable Development Goal (SDG) performance across G20 economies (2015–2023).
  Evidence: Cross-country panel regression analysis of G20 country-year data from 2015–2023; FinTech adoption indicator included as a main independent variable; models report statistically significant positive coefficient for FinTech after including macro controls.
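The interaction-term design behind the complementarity claim can be sketched with ordinary least squares on synthetic data that has a known supra-additive effect built in. This is a generic illustration, not the paper's estimation: the data are simulated, and the actual models also include country/year structure and macro controls.

```python
# Recovering a positive AI x FinTech interaction from simulated data
# with plain least squares (numpy only).
import numpy as np

rng = np.random.default_rng(0)
n = 500
ai = rng.normal(size=n)
fintech = rng.normal(size=n)

# True data-generating process: positive main effects plus a positive
# interaction (the "supra-additive" complementarity).
sdg = (1.0 + 0.5 * ai + 0.4 * fintech + 0.3 * ai * fintech
       + 0.1 * rng.normal(size=n))

# Design matrix: intercept, AI, FinTech, AI x FinTech.
X = np.column_stack([np.ones(n), ai, fintech, ai * fintech])
beta, *_ = np.linalg.lstsq(X, sdg, rcond=None)

print(np.round(beta, 2))  # approximately [1.0, 0.5, 0.4, 0.3]
```

A positive coefficient on the product term is what "simultaneous development produces larger gains than isolated adoption" means in regression form: the marginal effect of AI rises with the level of FinTech, and vice versa.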
- Claim: The observed score improvement of 0.27 grade points corresponds roughly to one-third of a letter grade.
  Evidence: Reported effect size (0.27 grade points) and author interpretation equating that magnitude to approximately one-third of a letter grade.
- Claim: Aid and infrastructure investment (digital public goods, AI capacity building) act as economic channels of influence that shape recipient countries' technological trajectories and participation in AI value chains.
  Evidence: Qualitative examples of development initiatives and technology transfer cited in the comparative case work and literature review; no new cross‑national statistical analysis provided.
- Claim: AI technologies are core instruments of smart power, affecting productivity, industrial competitiveness, and the ability to project influence via platforms, surveillance systems, and information controls.
  Evidence: Theoretical argument supported by literature on AI's economic and strategic effects; no new quantitative dataset provided in the paper.
- Claim: Both states and non‑state actors (tech firms, NGOs, international organisations) can exercise smart power; balance and instruments vary by polity and strategic aims.
  Evidence: Comparative qualitative evidence from the paper's four case studies and secondary empirical studies cited in the literature review; examples of tech firms and IOs in policy documents and public diplomacy cases.
- Claim: Smart power transcends simple compulsion/attraction binaries by foregrounding legitimacy, cooperative security, and governance as central mechanisms for durable influence.
  Evidence: Theoretical model building and interpretive synthesis of IR literature and illustrative case material from the four case studies; qualitative argumentation rather than new empirical estimation.
- Claim: In the digital era, states and non‑state actors operationalise smart power through three primary channels: diplomacy, development, and technology.
  Evidence: Comparative qualitative case studies of four actors (United States, China, European Union, Russia) plus synthesis of policy documents, public diplomacy examples, development initiatives, and technology behaviour drawn from the literature review.
- Claim: Smart power integrates hard power (coercion) and soft power (attraction) into a single legitimacy‑based model of global influence.
  Evidence: Conceptual/theoretical analysis built from a systematic literature review of classical and contemporary IR and strategic studies; model development in the paper (no original quantitative data).
- Claim: Transparent, auditable AI systems and governance mechanisms are necessary to maintain public trust and democratic oversight.
  Evidence: Normative and governance-focused argument in the book; supported by conceptual reasoning rather than empirical public-opinion or audit studies in the blurb.
- Claim: Designing AI systems with participation and accessibility at their core is essential to prevent concentration of gains and widening inequalities.
  Evidence: Normative recommendation based on equity concerns and policy analysis; not empirically tested or quantified in the blurb.
- Claim: AI platforms can materially improve efficiency and resilience of supply chains, altering comparative advantage and regional integration dynamics.
  Evidence: Illustrative vignette (logistics optimization) and policy-analytic reasoning; no empirical supply-chain studies or measured efficiency gains reported in the blurb.
- Claim: Labor-market policy should emphasize reskilling, algorithmic job-matching, and social safety nets to account for rapid compositional changes enabled by AI platforms.
  Evidence: Policy recommendation grounded in scenario analysis and applied-AI descriptions; no empirical evaluation or quantified labor-market impact provided in the blurb.
- Claim: Policymakers need new institutional capacities to integrate AI-driven foresight into fiscal, trade, and labor policymaking.
  Evidence: Policy analysis and prescriptive argument in the book; illustrated with scenario reasoning but lacking empirical measurement of capacity gaps or interventions.
- Claim: Rather than replacing human judgment, AI augments foresight and adaptation, enabling resilient, inclusive, and participatory governance if guided by deliberate policy design.
  Evidence: Normative and conceptual argumentation with illustrative vignettes (e.g., policymaker vignette); no empirical validation or sample sizes reported.
- Claim: AI is transforming economic decision-making, governance, and value creation across sectors and countries.
  Evidence: Conceptual synthesis presented in the book/blurb; no empirical study or sample reported; claim supported by cross-sector examples and narrative argumentation.
- Claim: Policy interventions—investments in digital infrastructure, vocational and continuing education, and incentives for firm-level training—amplify AI benefits, particularly in lower-income countries.
  Evidence: Policy-relevant heterogeneous treatment effects and simulated counterfactuals showing larger productivity gains in contexts with better infrastructure and training; empirical interaction terms between policy proxies and adoption effects.