Evidence (2432 claims)
- Adoption: 5126 claims
- Productivity: 4409 claims
- Governance: 4049 claims
- Human-AI Collaboration: 2954 claims
- Labor Markets: 2432 claims
- Org Design: 2273 claims
- Innovation: 2215 claims
- Skills & Training: 1902 claims
- Inequality: 1286 claims
Evidence Matrix
Claim counts by outcome category and direction of finding. Row totals can exceed the sum of the four listed directions where some claims carry a direction not broken out here.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 369 | 105 | 58 | 432 | 972 |
| Governance & Regulation | 365 | 171 | 113 | 54 | 713 |
| Research Productivity | 229 | 95 | 33 | 294 | 655 |
| Organizational Efficiency | 354 | 82 | 58 | 34 | 531 |
| Technology Adoption Rate | 277 | 115 | 63 | 27 | 486 |
| Firm Productivity | 273 | 33 | 68 | 10 | 389 |
| AI Safety & Ethics | 112 | 177 | 43 | 24 | 358 |
| Output Quality | 228 | 61 | 23 | 25 | 337 |
| Market Structure | 105 | 118 | 81 | 14 | 323 |
| Decision Quality | 154 | 68 | 33 | 17 | 275 |
| Employment Level | 68 | 32 | 74 | 8 | 184 |
| Fiscal & Macroeconomic | 74 | 52 | 32 | 21 | 183 |
| Skill Acquisition | 85 | 31 | 38 | 9 | 163 |
| Firm Revenue | 96 | 30 | 22 | — | 148 |
| Innovation Output | 100 | 11 | 20 | 11 | 143 |
| Consumer Welfare | 66 | 29 | 35 | 7 | 137 |
| Regulatory Compliance | 51 | 61 | 13 | 3 | 128 |
| Inequality Measures | 24 | 66 | 31 | 4 | 125 |
| Task Allocation | 64 | 6 | 28 | 6 | 104 |
| Error Rate | 42 | 47 | 6 | — | 95 |
| Training Effectiveness | 55 | 12 | 10 | 16 | 93 |
| Worker Satisfaction | 42 | 32 | 11 | 6 | 91 |
| Task Completion Time | 71 | 5 | 3 | 1 | 80 |
| Wages & Compensation | 38 | 13 | 19 | 4 | 74 |
| Team Performance | 41 | 8 | 15 | 7 | 72 |
| Hiring & Recruitment | 39 | 4 | 6 | 3 | 52 |
| Automation Exposure | 17 | 15 | 9 | 5 | 46 |
| Job Displacement | 5 | 28 | 12 | — | 45 |
| Social Protection | 18 | 8 | 6 | 1 | 33 |
| Developer Productivity | 25 | 1 | 2 | 1 | 29 |
| Worker Turnover | 10 | 12 | — | 3 | 25 |
| Creative Output | 15 | 5 | 3 | 1 | 24 |
| Skill Obsolescence | 3 | 18 | 2 | — | 23 |
| Labor Share of Income | 7 | 4 | 9 | — | 20 |
Labor Markets
Researchers construct AI exposure indices at the task level to measure how susceptible individual tasks are to AI automation or augmentation.
Cited examples (Felten et al., 2023; Eloundou et al., 2023) develop task-level scores; the evidence basis is methodological papers that publish indices and mapping procedures (often using O*NET tasks, expert labeling, or model-based scoring).
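These index papers are methodological, so none of their actual scores appear here. As a rough illustration of the approach they describe, the sketch below aggregates hypothetical task-level exposure scores into an occupation-level index using importance weights; the tasks, weights, and scores are invented, and the weighting scheme is an assumption, not Felten et al.'s or Eloundou et al.'s procedure.

```python
# Minimal sketch: aggregate hypothetical task-level AI exposure scores
# (0 = unexposed, 1 = fully exposed) into an occupation-level index,
# weighting each task by its importance within the occupation.
# The task list and all numbers are illustrative, not from O*NET.

tasks = [
    # (task description, importance weight, exposure score)
    ("Draft routine correspondence", 0.40, 0.9),
    ("Schedule and coordinate meetings", 0.35, 0.6),
    ("Resolve novel client disputes in person", 0.25, 0.1),
]

def occupation_exposure(task_rows):
    """Importance-weighted mean of task exposure scores."""
    total_weight = sum(w for _, w, _ in task_rows)
    return sum(w * s for _, w, s in task_rows) / total_weight

print(f"Occupation-level exposure index: {occupation_exposure(tasks):.2f}")
```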
Commonly used data sources for measuring AI exposure include job postings and descriptions, occupational task databases (O*NET-style), employer/household surveys, administrative payroll data, and firm-level productivity measures.
List of data sources compiled in the paper; evidence is a methodological summary of datasets used across the cited literature rather than novel data collection.
Many studies rely on static assumptions (fixed comparative advantage, no adaptation) and theoretical models, which limits causal inference and makes projections model-dependent.
Methodological critique cited in the paper (e.g., critique of Acemoglu & Restrepo, 2022; Webb, 2020) and the paper's survey of common modeling choices (static equilibrium or representative-agent models); evidence basis is theoretical critique and literature review rather than new causal estimates.
Task-level approaches capture within-occupation heterogeneity in automation and augmentation risk that occupation-level analyses miss.
Empirical and methodological work cited (Felten et al., 2023; Eloundou et al., 2023) constructs task-level exposure indices and shows variation across tasks within the same occupation; evidence is based on task mappings from O*NET-style databases and job descriptions.
Recent research in AI–labor economics has shifted from occupation-level analysis to task-level analysis, mapping task-by-task exposure to AI.
Synthesis of recent literature cited in the paper (e.g., Felten et al., 2023; Eloundou et al., 2023) which develop task-level exposure mappings using occupational task databases (O*NET-style) and job-posting text; evidence is bibliographic and methodological rather than a single new empirical dataset.
The paper proposes measurable metrics such as projection congruence indices, alignment persistence measures, monitoring/oversight burden, and outcome variability/tail risks attributable to agentic autonomy.
Explicit metric proposals in the methods and metrics section of the paper; presented as part of a research agenda rather than empirically implemented.
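Since the paper proposes rather than implements these metrics, the following is only a speculative sketch of what one of them might look like: outcome variability plus a lower-tail risk measure (CVaR) compared across scripted and agentic runs. The simulation, the 5% divergence probability, and the CVaR formulation are all assumptions.

```python
# Illustrative sketch (not the paper's implementation): quantify outcome
# variability and tail risk attributable to agentic autonomy by comparing
# simulated outcome distributions with and without autonomous behavior.
import random
import statistics

random.seed(0)

def simulate_outcomes(n, base=100.0, noise=5.0, autonomy_shock=0.0):
    """Synthetic outcome draws; autonomy_shock adds rare large deviations."""
    draws = []
    for _ in range(n):
        x = random.gauss(base, noise)
        if autonomy_shock and random.random() < 0.05:  # rare divergent trajectory
            x -= autonomy_shock
        draws.append(x)
    return draws

def cvar(draws, alpha=0.95):
    """Mean of the worst (1 - alpha) share of outcomes (lower tail)."""
    tail = sorted(draws)[: max(1, int(len(draws) * (1 - alpha)))]
    return statistics.mean(tail)

scripted = simulate_outcomes(10_000)
agentic = simulate_outcomes(10_000, autonomy_shock=40.0)

print(f"std  scripted={statistics.stdev(scripted):.2f} agentic={statistics.stdev(agentic):.2f}")
print(f"CVaR scripted={cvar(scripted):.2f} agentic={cvar(agentic):.2f}")
```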
The paper proposes specific empirical and analytic follow-ups — multi-agent simulations, lab experiments with humans and adaptive agents, field case studies, econometric analyses, and formal economic models — to test the conceptual claims.
Explicit methods and research agenda listed in the paper; these are recommended future methods, not evidence.
Agentic AI is characterized by three properties that drive structural uncertainty: open-ended action trajectories, generative representations/outputs, and evolving objectives.
Definitions and taxonomy developed in the paper based on conceptual synthesis; presented as framing rather than empirically measured properties.
Another important gap is quantifying complementarities between AI and different skill types (evaluative vs. generative tasks).
Review observation that existing empirical work has not systematically quantified how AI productivity gains vary with worker skill composition and complementary roles.
Key research gaps include a lack of long-run causal evidence on the effects of LLMs on firm-level innovation rates, business formation, and industry structure.
Explicit identification of gaps in the literature within the nano-review; the review states that most studies are short-term, task-level, or descriptive.
High-priority research includes randomized controlled trials on hybrid vs. automated routing, long-run studies on labor markets in service sectors, and models quantifying trust externalities and governance costs.
Paper's stated research agenda based on identified evidence gaps and limitations (lack of randomized long-run studies).
Current evidence is promising but early: case studies, pilot deployments, and short-run experiments dominate; long-run causal evidence on labor and welfare effects is limited.
Explicit methodological assessment in the paper noting source types (deployments, pilots, vendor reports, short-run experiments) and limitations (heterogeneity, lack of randomized controls, short horizons).
Measurement and research gaps (data scarcity, informality) complicate robust economic assessment of AI impacts; improved metrics, granular labour and firm‑level data, and mixed‑methods evaluation are required.
Methodological critique based on reviewed literature and identified gaps; no new data collection in the paper.
Recommended research designs to estimate impacts include RCTs, quasi-experimental methods (difference-in-differences, regression discontinuity, matching), and longitudinal cohort tracking.
Paper explicitly lists these evaluation designs as appropriate methods for causal inference and long-term outcomes measurement. This is a methodological recommendation rather than an empirical claim.
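As a minimal illustration of one recommended design, the sketch below estimates a canonical 2x2 difference-in-differences on synthetic data; the data-generating process and effect size are invented, and a real application would need parallel-trends diagnostics.

```python
# Hedged sketch of one recommended design: a 2x2 difference-in-differences
# on synthetic data. The DiD estimate is the coefficient on treated:post.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 2_000
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),   # 1 = adopting unit
    "post": rng.integers(0, 2, n),      # 1 = after adoption
})
true_effect = 3.0
df["y"] = (
    10
    + 2 * df["treated"]                 # level difference between groups
    + 1 * df["post"]                    # common time trend
    + true_effect * df["treated"] * df["post"]
    + rng.normal(0, 1, n)
)

model = smf.ols("y ~ treated * post", data=df).fit()
print(model.params["treated:post"])     # should be close to 3.0
```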
There is a need for empirical research to quantify net economic impact (productivity gains vs governance costs), effects on employment composition and wages, and market outcomes from alternative governance architectures.
Explicit research gaps listed in the paper; recommendation for future empirical strategies (difference-in-differences, event studies, randomized pilots, instrumental variables) and suggested data sources.
The article’s evidence is predominantly practitioner-driven and illustrative, relying on qualitative case evidence rather than systematic quantitative causal estimates.
Explicit statement in the paper’s Data & Methods section describing nature of evidence and limitations; methods listed include synthesis, comparative analysis, illustrative architectures, and anecdotal cases.
Key technical components of the pattern include low-code platforms for rapid governed app development, RPA for deterministic process automation and legacy integration, and generative AI for document understanding, conversational interfaces, and decision support — with guardrails.
Paper’s component list and rationale based on practitioner experience and multi-sector examples; presented as recommended components in the reference architecture; no experimental validation of component selection given.
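As a hedged illustration of what a generative-AI guardrail with human-in-the-loop routing could look like inside such a pattern, the sketch below uses entirely hypothetical names, terms, and thresholds; it is not taken from the paper's reference architecture.

```python
# Illustrative sketch (hypothetical names, no real vendor API): a guardrail
# wrapper that routes low-confidence or policy-flagged generative outputs
# to human review before they reach a downstream RPA step.
from dataclasses import dataclass

@dataclass
class Draft:
    text: str
    confidence: float  # model-reported or externally estimated

BANNED_TERMS = {"guaranteed approval", "no risk"}   # invented policy list
CONFIDENCE_FLOOR = 0.80                             # invented threshold

def violates_policy(draft: Draft) -> bool:
    return any(term in draft.text.lower() for term in BANNED_TERMS)

def route(draft: Draft) -> str:
    """Return 'auto' to pass to RPA, or 'human_review' to queue for a person."""
    if draft.confidence < CONFIDENCE_FLOOR or violates_policy(draft):
        return "human_review"
    return "auto"

print(route(Draft("Your claim is approved per policy 7.2.", 0.93)))  # auto
print(route(Draft("Guaranteed approval, no risk!", 0.97)))           # human_review
```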
The proposed layered deployment pattern integrates organizational governance (roles, policies, decision rights), technical architecture (platforms, APIs, data flows), and AI risk management (controls, monitoring, human-in-the-loop).
Design and architectural proposal within the paper; described via illustrative deployment patterns and reference architectures. This is a descriptive claim about the proposed pattern rather than an empirical effect.
Recommended next steps for validation include controlled pilots, before-after studies on operational metrics, and cross-firm panel analyses to estimate economic impacts and risk reductions.
Authors' explicit recommendations for empirical validation in the Data & Methods and Implications sections.
There is no reported large-scale quantitative evaluation (e.g., productivity gains, cost-benefit metrics, or causal impact estimates) supporting the framework in the paper.
Explicit limitation noted by the authors stating absence of large-scale quantitative evaluation.
The evidence base for the paper is qualitative: a synthesis of industry best practices and lessons from multi-sector enterprise implementations; methods used include conceptual framework development, architecture design, and case-based illustration.
Explicit methodological statement in the Data & Methods section of the paper.
The article is largely qualitative and prescriptive rather than empirical; it does not provide systematic incidence estimates or large-scale measured losses from prompt fraud and identifies empirical validation as needed.
Authors' stated methods and limitations: conceptual analysis, threat modeling, literature review, illustrative vignettes; the paper explicitly notes that systematic empirical data are absent.
SECaaS offerings commonly include threat intelligence, managed detection & response (MDR), endpoint protection, IAM, CASB, security orchestration/automation, and compliance-as-a-service.
Survey of SECaaS product categories in industry reports and vendor catalogs; technical benchmarks describing typical feature sets.
Achieving CIA in the cloud requires technical controls (encryption, access controls, IAM, MFA, zero-trust), resilience measures (backups, redundancy, DR/BCP), and continuous monitoring (logging, SIEM, EDR/XDR).
Synthesis of technical best practices and vendor/industry guidance; supported by technical evaluations and case studies in the literature.
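As one concrete illustration of how a single technical control can serve two CIA goals at once, the sketch below uses AES-GCM authenticated encryption from the Python cryptography package: the ciphertext provides confidentiality and the authentication tag provides integrity. The sample data and context string are invented, and key management is deliberately out of scope.

```python
# Hedged sketch: authenticated encryption (AES-GCM) covers two of the CIA
# goals at once -- confidentiality (ciphertext) and integrity (auth tag).
# Requires the 'cryptography' package; key management is out of scope here.
import os
from cryptography.exceptions import InvalidTag
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)
aead = AESGCM(key)
nonce = os.urandom(12)   # never reuse a nonce with the same key

ciphertext = aead.encrypt(nonce, b"customer record", b"tenant-42")  # AAD binds context
print(aead.decrypt(nonce, ciphertext, b"tenant-42"))                # b'customer record'

tampered = ciphertext[:-1] + bytes([ciphertext[-1] ^ 1])            # flip one bit
try:
    aead.decrypt(nonce, tampered, b"tenant-42")
except InvalidTag:
    print("integrity check failed: tampering detected")
```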
Core cloud security goals remain confidentiality, integrity, and availability (CIA).
Canonical security literature and standards cited in the chapter; general consensus across technical controls and industry best-practice frameworks (e.g., NIST, ISO).
The authors recommend empirical approaches for future work including randomized controlled trials in labs, before-after adoption studies, and collection of microdata on instrument usage, model versions, and provenance to measure impacts.
Explicit methodological recommendations in the Measurement and empirical research agenda section; these are proposals rather than executed studies.
There is a need for rigorous evaluation metrics and benchmarks for safety, reproducibility, and empirical studies quantifying productivity or scientific impact of LLM-driven instrument control.
Identified research gaps and recommended empirical research agenda described by the authors; these are recommendations rather than empirical findings.
The evidence presented consists mainly of qualitative arguments drawn from documented advances and discussion of prototypes; no controlled experimental evaluation is presented.
Authors' own description in the Data & Methods section about the nature of evidence supporting their perspective.
This paper is a conceptual perspective/review rather than an original empirical study.
Explicit statement in the Data & Methods section that the contribution is a perspective synthesizing literature and illustrative examples with no controlled experimental evaluation.
Modern microscopes are increasingly software-driven and data-intensive, while existing ML tools for microscopy are task-specific and fragmented.
Synthesis of recent literature on optical microscopes, detectors, and task-specific ML for image analysis referenced in the perspective (descriptive claim; no new empirical data collected).
Techno‑economic assessments (TEA) and life‑cycle analyses (LCA) are necessary research tools to compare bio‑routes to incumbent chemical synthesis on cost and emissions, and current literature is incomplete in this regard.
Review notes the presence of some TEA/LCA studies but highlights gaps and heterogeneity in methods and results across case studies; many processes lack published TEA/LCA at commercial scales.
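To make the TEA/LCA comparison concrete, here is a toy calculation of levelized production cost and cradle-to-gate emissions for a hypothetical bio-route versus an incumbent chemical route; every number is synthetic and chosen only to show the structure of such a comparison.

```python
# Toy TEA/LCA-style comparison (all figures synthetic, for illustration only):
# levelized production cost and cradle-to-gate emissions per kg of product.

def levelized_cost(capex, lifetime_kg, opex_per_kg, feedstock_per_kg):
    """Capital charge spread over lifetime output plus per-kg operating costs."""
    return capex / lifetime_kg + opex_per_kg + feedstock_per_kg

routes = {
    #            capex($)  lifetime(kg)  opex($/kg)  feedstock($/kg)  kgCO2e/kg
    "bio":       (8e6,     2e7,          0.60,       0.90,            1.2),
    "chemical":  (5e6,     2e7,          0.35,       0.70,            3.5),
}

for name, (capex, life, opex, feed, co2) in routes.items():
    cost = levelized_cost(capex, life, opex, feed)
    print(f"{name:9s} cost ~ ${cost:.2f}/kg, emissions ~ {co2:.1f} kgCO2e/kg")
```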
Robustness checks include city and year fixed effects and heterogeneous-effect examinations by digital infrastructure level.
Reported robustness analyses in the paper: models controlling for city and time fixed effects and tests of heterogeneity by digital infrastructure level, reported as supporting the main findings (sample: 280 cities, 2008–2021).
The study's identification strategy treats the Demonstration Zone designation as a quasi-natural experiment using a staggered, multi-period DID across 280 prefecture-level cities (2008–2021).
Stated research design: multi-period difference-in-differences exploiting variation in timing of designation; sample comprises 280 prefecture-level cities over 2008–2021; results include city and time fixed effects.
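A minimal sketch of a staggered two-way fixed-effects DiD of this kind follows, on a small synthetic city-year panel rather than the paper's data; note the caveat (raised elsewhere in this evidence list) that TWFE can be biased under heterogeneous treatment effects or dynamics.

```python
# Hedged sketch of a staggered two-way fixed-effects DiD on a synthetic
# city-year panel (small for readability; the paper's panel is 280 cities,
# 2008-2021). 'd' switches on in a city's designation year and stays on.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
rows = []
for c in range(30):
    adopt_year = rng.choice([2012, 2015, 2018, 9999])   # 9999 = never treated
    for t in range(2008, 2022):
        d = int(t >= adopt_year)
        y = 1.0 * c + 0.2 * (t - 2008) + 2.0 * d + rng.normal(0, 1)
        rows.append((c, t, d, y))
df = pd.DataFrame(rows, columns=["city", "year", "d", "y"])

twfe = smf.ols("y ~ d + C(city) + C(year)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["city"]}
)
print(twfe.params["d"])   # should recover ~2.0 under homogeneous effects
```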
The employment increase occurred without a corresponding increase in counts of formal cultural enterprises.
Secondary outcome analysis in the same DID framework on formal enterprise counts in the cultural sector using the 280-city panel (2008–2021); reported null effect on number of formal cultural enterprises.
Findings are estimated for Chinese cities and require replication in other institutional contexts to assess external validity.
Scope statement in the paper — primary empirical sample limited to 274 Chinese cities; authors note generalizability limits and call for replication elsewhere.
The paper’s AI exposure index — capturing automation and service-sector transformation — is important for robust measurement in empirical work on AI’s macro and environmental effects.
Methodological claim justified by the paper's construction of the index and its use in the main and robustness regressions; robustness checks reported using alternative index specifications.
The paper constructs an AI exposure index that captures both industrial automation (robots) and AI-enabled transformation of service-sector jobs/tasks.
Methodological construction described in the paper combining measures of industrial robot adoption (sectoral push) and AI-driven changes in service-sector job/task content.
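As an illustration of one plausible way to combine the two components, the sketch below standardizes synthetic robot-penetration and service-transformation measures and averages them; the 50/50 weights and the z-score normalization are assumptions, not the paper's reported construction.

```python
# Illustrative sketch (synthetic data): combine a robot-adoption measure and
# a service-sector AI task-transformation measure into one exposure index by
# standardizing each component and averaging.
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
df = pd.DataFrame({
    "robot_penetration": rng.gamma(2.0, 1.5, size=274),  # robots per 1k workers
    "service_ai_share": rng.beta(2, 5, size=274),        # share of AI-affected tasks
})

z = (df - df.mean()) / df.std()
df["ai_exposure_index"] = 0.5 * z["robot_penetration"] + 0.5 * z["service_ai_share"]
print(df["ai_exposure_index"].describe())
```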
The study uses a panel of 274 Chinese cities from 2007–2021 as the primary empirical sample.
Descriptive dataset information reported in the paper — city-level panel covering 274 cities and the years 2007 through 2021.
The paper's empirical approach is primarily qualitative and interpretive: a systematic literature review plus comparative qualitative case studies, using policy documents, public diplomacy examples, development initiatives, technology export and standards behaviour, and secondary empirical studies as evidence.
Methods section of the paper explicitly states the approach and evidence types; sample of four comparative cases (US, China, EU, Russia) is specified.
The paper demonstrates different mixes and institutional practices of smart power in practice by applying the framework to the United States, China, the European Union, and Russia.
Explicit comparative qualitative case studies of four major international actors (sample size: four cases) using policy documents, public diplomacy examples, and development/technology initiatives as illustrative evidence.
Empirical validation of the book’s proposals would require complementary case studies, model documentation, and outcome measurements.
Author/reviewer recommendation in the blurb about methodological limitations and next steps; not an empirical finding.
The book is predominantly conceptual and policy-analytic and uses illustrative case vignettes rather than presenting a single empirical study.
Explicit methodological description in the Data & Methods blurb: synthesis of technical ideas, governance requirements, and illustrative vignettes; no empirical sample or experimental protocol described.
The evidence base is qualitative: the study uses conceptual framework synthesis, comparative analysis of multi-sector implementations, and case examples rather than randomized or large-sample empirical evaluation.
Methods and limitations section of the paper explicitly describing the evidence base and methods (qualitative synthesis, pattern extraction, cross-case lessons).
The paper presents a deployment pattern intended to be adapted by sector and regulatory context rather than a one-size-fits-all blueprint.
Explicit statement in the paper and the described pattern design; based on qualitative pattern extraction and prescriptive guidance.
Methodological claim: combining fixed-effects panel estimation, mediation analysis, and panel threshold models is an effective multi-method approach to (a) estimate average effects, (b) unpack causal channels, and (c) detect nonlinear stage-dependent impacts.
The paper's applied methodology: fixed-effects panel regressions, mediation framework, and panel threshold modeling on the 2012–2022 provincial panel.
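A hedged sketch of the mediation step on synthetic data follows: it recovers the indirect effect as the product of the x-to-mediator and mediator-to-outcome path coefficients, which is one standard formulation rather than necessarily the paper's exact estimator.

```python
# Hedged sketch of the mediation step (synthetic data, not the paper's
# variables): estimate the indirect effect of digitalization (x) on the
# outcome (y) through a mediator (m) as the product of path coefficients.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 5_000
x = rng.normal(size=n)                      # digitalization index
m = 0.6 * x + rng.normal(size=n)            # mediator, e.g., technology diffusion
y = 0.4 * m + 0.2 * x + rng.normal(size=n)  # outcome

a = sm.OLS(m, sm.add_constant(x)).fit().params[1]                        # x -> m
b = sm.OLS(y, sm.add_constant(np.column_stack([x, m]))).fit().params[2]  # m -> y | x
print("indirect effect a*b =", a * b)       # should be near 0.6 * 0.4 = 0.24
```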
The paper constructs a multidimensional digitalization index composed of digital infrastructure, digital service capacity, and the digital development environment.
Index construction described in data/methods: composite indicator combining measures of connectivity/broadband (infrastructure), e-commerce/digital finance (service capacity), and policy/institutional/human capital indicators (development environment).
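For concreteness, the sketch below builds a three-pillar composite index with entropy weighting, a common choice in this literature; whether this paper uses entropy weights is an assumption, and the provincial data are synthetic.

```python
# Illustrative sketch: composite digitalization index from three pillars with
# entropy weighting (an assumption; the paper's weighting scheme is not
# specified here). Data are synthetic values for 31 provinces.
import numpy as np
import pandas as pd

rng = np.random.default_rng(11)
df = pd.DataFrame({
    "infrastructure": rng.uniform(10, 90, 31),    # e.g., broadband, connectivity
    "service_capacity": rng.uniform(1, 100, 31),  # e.g., e-commerce, digital finance
    "environment": rng.uniform(20, 80, 31),       # e.g., policy, human capital
})

norm = (df - df.min()) / (df.max() - df.min()) + 1e-9   # min-max, avoid log(0)
p = norm / norm.sum()
entropy = -(p * np.log(p)).sum() / np.log(len(df))      # one value per pillar
weights = (1 - entropy) / (1 - entropy).sum()
df["digitalization_index"] = (norm * weights).sum(axis=1)
print(weights.round(3).to_dict())
```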
The study is observational (panel) and subject to limitations: residual confounding is possible; two-way fixed-effects estimators can be biased with heterogeneous treatment timing or dynamics; external validity beyond China and non-grain crops is not established.
Authors' stated limitations and caveats in the paper regarding identification and generalizability of results from the CLDS 2014–2018 observational panel.
The study uses two-way fixed-effects (household and year) models as the primary identification strategy and employs propensity score matching (PSM) as a robustness check.
Methods section of the paper describing estimation strategy applied to the CLDS 2014–2018 panel of grain-producing households.
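A minimal sketch of the PSM robustness check on synthetic data (not the CLDS sample) follows: logistic propensity scores, one-to-one nearest-neighbor matching, and a simple ATT comparison.

```python
# Hedged sketch of a PSM robustness check on synthetic data: estimate
# propensity scores, match each treated household to its nearest control
# on the score, and compare outcomes (ATT).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(5)
n = 4_000
x = rng.normal(size=(n, 3))                            # household covariates
treat = rng.binomial(1, 1 / (1 + np.exp(-x[:, 0])))    # selection on x[:, 0]
y = 1.5 * treat + x[:, 0] + rng.normal(size=n)         # true effect = 1.5

ps = LogisticRegression().fit(x, treat).predict_proba(x)[:, 1]
controls = np.where(treat == 0)[0]
nn = NearestNeighbors(n_neighbors=1).fit(ps[controls].reshape(-1, 1))
_, idx = nn.kneighbors(ps[treat == 1].reshape(-1, 1))
matched = controls[idx.ravel()]

att = y[treat == 1].mean() - y[matched].mean()
print("ATT =", att)                                    # should be near 1.5
```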
The regional average minimum cost of salaried labor (MCSL) was 43.1% of GDP per worker in 2023.
Computed for the same 19-country Latin American and Caribbean sample (baseline year 2023) as the NWC estimate below, using country-specific statutory employer obligations and reporting MCSL relative to GDP per worker, following the updated IDB approach.
The regional average non-wage cost of salaried labor (NWC) in Latin America and the Caribbean was 51.1% of formal wages in 2023.
Calculated for a sample of 19 Latin American and Caribbean countries for baseline year 2023 by compiling country-specific statutory employer obligations (payroll taxes, social contributions, mandated benefits, severance, etc.) and expressing employer non-wage costs relative to formal wages using the updated IDB methodology.
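To show the arithmetic behind such a figure, here is a stylized example in which invented statutory components are chosen to sum to the reported regional average of 51.1%; the component values are illustrative, not any country's actual schedule.

```python
# Toy illustration of the NWC calculation (stylized statutory components,
# not an actual country's schedule): employer non-wage costs expressed as a
# share of the formal wage, following the logic of the IDB approach.
wage = 100.0                       # formal monthly wage (index)
components = {
    "payroll_taxes": 12.0,
    "social_security": 18.0,
    "mandated_benefits": 13.0,     # e.g., bonuses, paid leave accruals
    "expected_severance": 8.1,     # annualized expected severance cost
}
nwc_share = sum(components.values()) / wage
print(f"NWC = {nwc_share:.1%} of the formal wage")   # 51.1% in this stylized case
```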