Evidence (4137 claims)

Claim counts by category:

- Adoption: 5267 claims
- Productivity: 4560 claims
- Governance: 4137 claims
- Human-AI Collaboration: 3103 claims
- Labor Markets: 2506 claims
- Innovation: 2354 claims
- Org Design: 2340 claims
- Skills & Training: 1945 claims
- Inequality: 1322 claims
Evidence Matrix
Claim counts by outcome category and direction of finding.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 378 | 106 | 59 | 455 | 1007 |
| Governance & Regulation | 379 | 176 | 116 | 58 | 739 |
| Research Productivity | 240 | 96 | 34 | 294 | 668 |
| Organizational Efficiency | 370 | 82 | 63 | 35 | 553 |
| Technology Adoption Rate | 296 | 118 | 66 | 29 | 513 |
| Firm Productivity | 277 | 34 | 68 | 10 | 394 |
| AI Safety & Ethics | 117 | 177 | 44 | 24 | 364 |
| Output Quality | 244 | 61 | 23 | 26 | 354 |
| Market Structure | 107 | 123 | 85 | 14 | 334 |
| Decision Quality | 168 | 74 | 37 | 19 | 301 |
| Fiscal & Macroeconomic | 75 | 52 | 32 | 21 | 187 |
| Employment Level | 70 | 32 | 74 | 8 | 186 |
| Skill Acquisition | 89 | 32 | 39 | 9 | 169 |
| Firm Revenue | 96 | 34 | 22 | — | 152 |
| Innovation Output | 106 | 12 | 21 | 11 | 151 |
| Consumer Welfare | 70 | 30 | 37 | 7 | 144 |
| Regulatory Compliance | 52 | 61 | 13 | 3 | 129 |
| Inequality Measures | 24 | 68 | 31 | 4 | 127 |
| Task Allocation | 75 | 11 | 29 | 6 | 121 |
| Training Effectiveness | 55 | 12 | 12 | 16 | 96 |
| Error Rate | 42 | 48 | 6 | — | 96 |
| Worker Satisfaction | 45 | 32 | 11 | 6 | 94 |
| Task Completion Time | 78 | 5 | 4 | 2 | 89 |
| Wages & Compensation | 46 | 13 | 19 | 5 | 83 |
| Team Performance | 44 | 9 | 15 | 7 | 76 |
| Hiring & Recruitment | 39 | 4 | 6 | 3 | 52 |
| Automation Exposure | 18 | 17 | 9 | 5 | 50 |
| Job Displacement | 5 | 31 | 12 | — | 48 |
| Social Protection | 21 | 10 | 6 | 2 | 39 |
| Developer Productivity | 29 | 3 | 3 | 1 | 36 |
| Worker Turnover | 10 | 12 | — | 3 | 25 |
| Skill Obsolescence | 3 | 19 | 2 | — | 24 |
| Creative Output | 15 | 5 | 3 | 1 | 24 |
| Labor Share of Income | 10 | 4 | 9 | — | 23 |
Governance
The study analyzes panel data covering Chinese A-share listed companies from 2007 to 2021.
Description of dataset in the paper: panel of Chinese A-share listed companies spanning the years 2007–2021 (sample period stated).
The analysis extends the dynamic taxation setup of Slavik and Yazici (2014).
Methodological claim: the model and solution approach build on and modify the framework from Slavik and Yazici (2014) (reference to prior theoretical framework rather than empirical data).
We characterize the optimal tax policy in an economy with human manual and cognitive labor, physical capital, and artificial intelligence (AI).
Theoretical/analytical work: the paper develops and analyzes a dynamic general-equilibrium model that includes manual and cognitive human labor, physical capital, and AI. (No empirical sample; model-based characterization.)
The field study used a 44-item questionnaire with 45 participants to measure comprehension, reported behavior change/adoption, and perceptions of volunteer legitimacy.
Methodological description provided in the paper: instrument and sample sizes explicitly reported.
No original quantitative dataset or controlled evaluation is reported in this paper.
Methodological description in the paper stating reliance on prior literature, conceptual analysis, and prescriptive recommendations; paper does not present new experiments.
The paper is a position/normative paper (not an empirical study) that uses conceptual analysis, literature synthesis, and prescriptive roadmapping rather than new quantitative experiments or datasets.
Explicit methodological statement in the paper summarizing genre and methods used; absence of reported original data or controlled evaluations.
There is a need for longitudinal and cross‑country empirical research to measure how hybrid work and AI tools affect promotion rates, network centrality, productivity, privacy harms, trust, and long‑term career trajectories.
Statement of research gaps derived from the paper's methodological approach (conceptual synthesis and secondary case studies) and absence of longitudinal/cross‑cultural primary data.
Robustness checks include mediator tests (costs, tariffs, logistics) and firm‑level subgroup analyses to establish heterogeneous responses and support mechanism claims.
Paper reports robustness strategy involving mediation analysis and subgroup DID estimations across multiple mediator variables and firm size groups using the stated databases.
Empirical identification relies on treating CAFTA as an exogenous shock and applying a difference‑in‑differences (DID) design on firm and customs data from 2000–2014.
Methodological description in the paper: DID strategy with treated vs control comparisons; data sources explicitly listed as the China Industrial Enterprise Database and China Customs Database covering 2000–2014.
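The DID logic described above can be illustrated with a minimal two-by-two estimator. This is a hedged sketch: the firm outcomes below are hypothetical placeholder numbers, not values from the China Industrial Enterprise or Customs databases.

```python
# Minimal two-by-two difference-in-differences sketch (illustrative only;
# the outcome values below are hypothetical, not the paper's data).

def did_estimate(treat_pre, treat_post, control_pre, control_post):
    """DID effect = (treated post - pre change) - (control post - pre change)."""
    mean = lambda xs: sum(xs) / len(xs)
    return (mean(treat_post) - mean(treat_pre)) - (mean(control_post) - mean(control_pre))

# Hypothetical firm-level outcomes (e.g., log exports) before/after the shock
treated_pre   = [1.0, 1.2, 0.9]
treated_post  = [1.6, 1.8, 1.5]
control_pre   = [1.1, 1.0, 1.2]
control_post  = [1.3, 1.2, 1.4]

effect = did_estimate(treated_pre, treated_post, control_pre, control_post)
```

In practice the paper's design would add firm and year fixed effects and controls in a regression; the two-by-two difference shown here is only the core identifying contrast.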
Highly Autonomous Cyber-Capable Agents (HACCAs) are AI systems able to plan and execute multi-stage cyber campaigns across the full attack lifecycle with minimal or no human direction.
Conceptual definition provided in the report; constructed via literature review and threat-framework formulation (no empirical sample; definitional/analytic).
Practical recommendations for firms and policymakers include investing in training for AI curation/evaluation/coordination, experimenting with decentralised decision rights and governance safeguards, and monitoring competitive dynamics related to model/platform providers.
Policy and practitioner takeaways explicitly presented in the discussion/implications sections, deriving from the conceptual framework and mapped literature.
The paper recommends a research agenda for AI economists: causal microeconometric studies (DiD, IVs, RCTs), structural models with hybrid human–AI agents, measurement work on GenAI use, distributional analysis and policy evaluation.
Explicit recommendations listed in the implications and research agenda sections; logical follow‑on from bibliometric findings about gaps in causal and measurement evidence.
Bibliometric mapping profiles the intellectual structure and evolution of the field but does not establish causal effects of GenAI on organisational outcomes.
Methodological limitation explicitly stated in the paper; bibliometric approach (co‑word, citation, thematic mapping) is descriptive and historical in scope.
Co‑word and thematic analyses reveal six coherent conceptual clusters that bridge technical AI topics (e.g., LLMs, GANs) with managerial themes (e.g., autonomy, coordination, decision‑making).
Thematic mapping and co‑word network analysis performed on the 212‑paper corpus; identification of six clusters reported in results.
Bibliometric and conceptual tools (VOSviewer, Bibliometrix) were used to identify performance trends, co‑word structures, thematic maps, and conceptual evolution in the GenAI–organisation literature.
Methods section: use of VOSviewer for network visualization and Bibliometrix for bibliometric statistics, co‑word analysis, thematic mapping and Sankey thematic evolution.
The study analysed a corpus of 212 Scopus‑indexed publications covering 2018–2025 to map emergent literature on Generative AI and organisational change.
Bibliometric dataset constructed from Scopus; sample size = 212 peer‑reviewed articles; time window 2018–2025; analyses performed with Bibliometrix and VOSviewer.
Research agenda: causal studies (panel data, quasi-experiments) are needed to estimate effects of AI exposure on employment outcomes and to evaluate retraining/income-support interventions for pre-retirement populations.
Authors’ stated recommendation based on limits of cross-sectional regression results from the n=889 survey and the identified need to move from association to causation.
Study limitations: cross-sectional design, self-reported intentions, potential unobserved confounders, and limited generalizability to only three cities (Beijing, Guangzhou, Lanzhou).
Explicit methodological statements in the paper describing data and design: cross-sectional survey of 889 respondents from three cities and reliance on self-reported employment intentions.
The paper identifies future research directions, including empirical causal studies on how DPP+AI interventions change recycling rates, second‑hand market prices, and firm investment in circular processes; and modeling firm strategy around proprietary vs shared DPP data.
Stated research agenda and gaps in the paper informed by the study's findings and limitations; these are recommendations rather than empirical claims.
The study used a mixed-methods design focused on the Italian fashion and cosmetics industries, employing two online surveys, k‑means clustering (consumer segmentation), principal component analysis (to identify underlying dimensions of DPP functionalities and sustainability practices), and logistic regression (to identify adoption drivers).
Methods section summary provided in the paper; explicit statement of methods and industry context. Note: sample sizes and survey instrument details are not provided in the summary.
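The k-means segmentation step can be sketched as follows. This is a toy one-dimensional version with hypothetical "sustainability interest" scores; the paper's actual survey variables, sample, and feature space are not reproduced here.

```python
# Toy one-dimensional k-means sketch of consumer segmentation (k=2).
# Scores and segment labels are hypothetical placeholders.

def kmeans_1d(points, centers, iters=20):
    for _ in range(iters):
        # Assignment step: attach each point to its nearest center
        clusters = [[], []]
        for p in points:
            idx = 0 if abs(p - centers[0]) <= abs(p - centers[1]) else 1
            clusters[idx].append(p)
        # Update step: move each center to its cluster mean
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

# Hypothetical "sustainability interest" scores (0-10 scale)
scores = [1.0, 1.5, 2.0, 2.5, 7.5, 8.0, 8.5, 9.0]
centers, clusters = kmeans_1d(scores, centers=[0.0, 10.0])
# Low-scoring cluster would map to an 'unaware'-type segment,
# high-scoring to an 'aware'-type segment.
```

A real analysis would use multivariate features and a library implementation (e.g., scikit-learn's KMeans) with multiple initializations.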
Two consumer segments were identified: 'aware' consumers (environmentally attuned and receptive to digital innovation and sustainability information) and 'unaware' consumers (prioritize immediate, tangible benefits like price and convenience over sustainability information).
K‑means cluster analysis applied to consumer responses from one of the online surveys in the Italian fashion and cosmetics context; summary identifies two clusters; sample sizes not reported.
This work is a conceptual/policy analysis rather than an original empirical study.
Explicit statement in the paper's Data & Methods section.
Study limitations include single-country (China) listed‑firm sample and reliance on secondary/administrative proxies for digitalization and innovation, which may miss internal qualitative aspects and introduce measurement error.
Authors’ stated limitations: sample restricted to Chinese A-share listed firms (2012–2022) and measures of digitalization/innovation derived from administrative/secondary data rather than direct observation/survey of internal practices.
No new primary empirical tests were performed in this paper; conclusions are based on secondary analysis and are broad and diagnostic rather than demonstrating causal mechanisms.
Explicit methodological statement in the Data & Methods and Limitations sections of the paper describing it as a qualitative literature review and synthesis.
Research should prioritize causal identification (IV, difference‑in‑differences, regression discontinuity) to disentangle whether ESG causes better financial outcomes or instead proxies for unobserved firm quality.
Methodological recommendation based on limitations in the reviewed literature (many observational/correlational studies); the paper argues for stronger causal designs going forward.
The authors propose research priorities for economists: quantify productivity gains from closing the actionability gap; estimate firm-level heterogeneity in evaluation capability and its effect on adoption; and model investment trade-offs between building evaluation-to-action pipelines versus accepting reduced LLM performance.
Paper's concluding recommendations for future research directions (explicitly listed by the authors).
The paper produces as primary outcomes a taxonomy of ten evaluation practices, the articulation of the results-actionability gap, and recommended strategies observed among successful teams.
Authors report these as the main outcomes of their thematic analysis and syntheses from the 19 interviews.
The study method consisted of semi-structured qualitative interviews with 19 practitioners across multiple industries and roles, analyzed via thematic coding.
Explicit methods section of the paper stating sample size (n=19), participant diversity, interview approach, and coding/analysis procedure.
AI-economics research should treat quantum capability as a distinct, gradually diffusing factor of production with sectoral specificity, and should model complementarities and policy counterfactuals endogenously.
Modeling recommendations grounded in sensitivity of macro outcomes to diffusion patterns, complementarities, and policy choices observed in the scenario and counterfactual analyses.
Model parameters are calibrated using historical diffusion of enabling technologies (cloud computing, GPUs, AI toolchains), industry case studies, and expert elicitation where hard data are lacking.
Empirical grounding section describing calibration sources: historical diffusion, case studies (materials discovery, optimization), and expert elicitation.
Uncertainty quantification is performed by running Monte Carlo or scenario ensembles and conducting sensitivity and robustness checks.
Methodological claim in the uncertainty quantification section describing Monte Carlo/scenario ensemble approach.
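A Monte Carlo ensemble of this kind can be sketched minimally as below. The shock distribution, elasticity, and growth mapping are illustrative assumptions, not the paper's calibrated model.

```python
# Minimal Monte Carlo ensemble sketch for uncertainty quantification.
# All parameter values here are illustrative assumptions.
import random

def simulate_gdp_growth(tfp_shock, elasticity=0.6, baseline=0.02):
    # Stylized mapping: growth = baseline + elasticity * TFP shock
    return baseline + elasticity * tfp_shock

def ensemble(n_runs=10_000, seed=42):
    rng = random.Random(seed)
    runs = []
    for _ in range(n_runs):
        shock = rng.gauss(0.01, 0.005)  # assumed TFP-shock distribution
        runs.append(simulate_gdp_growth(shock))
    runs.sort()
    # Report the spread of outcomes across the ensemble
    return {"p05": runs[int(0.05 * n_runs)],
            "median": runs[n_runs // 2],
            "p95": runs[int(0.95 * n_runs)]}

summary = ensemble()
```

Scenario ensembles work the same way, except that discrete scenario parameters (e.g., fast vs. slow diffusion) replace or condition the random draws.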
Sectoral TFP shocks are integrated into computational general equilibrium (CGE) or multi-sector growth models (and optionally DSGE variants) to simulate GDP, sector output, trade impacts, and labor reallocation.
Method section stating integration of sectoral TFP shocks into CGE/multi-sector growth models with optional DSGE short-run dynamics.
Sectoral adoption is translated into total factor productivity (TFP) shocks or sector-specific Hicks-neutral productivity improvements based on micro evidence of quantum advantages.
Methodological description of productivity mapping linking adoption to TFP shocks using micro evidence and case studies.
The paper uses empirical diffusion functions (logistic/S-curve, Bass model) calibrated to analogous technologies to project uptake over time.
Methodological description: diffusion modeling section explicitly states use of logistic/S-curve and Bass models and calibration to past technologies (cloud, GPUs).
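The diffusion-to-productivity pipeline described above can be sketched with a Bass model feeding a stylized Hicks-neutral TFP multiplier. The parameter values (p, q, theta) below are illustrative placeholders, not calibrated estimates from the paper.

```python
# Bass diffusion sketch plus a stylized adoption-to-TFP mapping.
# p, q, and theta are illustrative placeholders, not calibrated values.
import math

def bass_adoption(t, p=0.03, q=0.38):
    """Cumulative adoption share F(t) under the Bass model."""
    e = math.exp(-(p + q) * t)
    return (1 - e) / (1 + (q / p) * e)

def tfp_multiplier(t, theta=0.05, p=0.03, q=0.38):
    """Sector TFP multiplier: full adoption lifts TFP by theta (assumed 5%)."""
    return 1 + theta * bass_adoption(t, p, q)

for year in (0, 5, 10, 20):
    print(year, round(bass_adoption(year), 3), round(tfp_multiplier(year), 4))
```

In the paper's setup, p and q would be calibrated to the historical diffusion of analogous technologies (cloud, GPUs), and the resulting TFP paths fed into the CGE/multi-sector growth model as sectoral shocks.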
The analysis used sentence‑transformer models to produce dense vector representations of article text and UMAP to project those embeddings into a low‑dimensional thematic map for cluster identification and gap detection.
Methods section specifying use of sentence‑transformer embeddings and UMAP for dimensionality reduction/visualization of article text.
The study followed a PRISMA protocol for literature selection and included peer‑reviewed journal articles published between 2014 and 2024, with a final sample size of n = 109.
Explicit methodological statement in the paper describing the literature search, inclusion/exclusion criteria, and final sample.
Twenty‑seven papers study marketing in banking without using NLP methods.
PRISMA systematic review; categorization of the 109 selected articles into the three coverage groups (8, 74, 27).
Seventy‑four papers study NLP in marketing more broadly (not specifically banking).
Same PRISMA‑based systematic review and manual categorization of the final sample n = 109 into topical buckets (NLP in marketing vs. NLP in bank marketing vs. marketing in banking without NLP).
Only 8 peer‑reviewed papers directly examine NLP in bank marketing (out of a final sample of 109 articles published 2014–2024).
Systematic review following PRISMA protocol; final sample n = 109 peer‑reviewed journal articles published 2014–2024; manual screening and categorization yielding counts by topic.
The study's findings are qualitative and case-driven (Xiaomi and Deloitte); generalizability is limited by case selection and the absence of standardized quantitative metrics.
Methods section explicitly states case analysis and literature review as primary methods and notes lack of large-scale quantitative measurement.
The methodology is normative-philosophical argumentation supplemented by interdisciplinary synthesis (phenomenology, deconstruction, object-oriented ontology (OOO), STS/material turn); this is not an empirical causal study and contains no quantitative datasets.
Author-declared methods and limits: statement that the intervention is theory-driven and qualitative; absence of quantitative analysis reported.
The paper’s empirical grounding consists of illustrative case studies and vignettes from healthcare robotics, autonomous vehicles, and algorithmic governance used to demonstrate distributed agency and responsibility.
Author-stated methodology: qualitative vignettes/case illustrations across three domains; no reported sample sizes or systematic data collection.
The analysis in the paper is primarily qualitative and descriptive; it does not empirically quantify AI’s effects on trade flows or welfare.
Explicit statement in the methods/data description noting a mixed qualitative approach (theoretical analysis, comparative legal analysis, case studies, scenario reasoning) and absence of empirical quantification.
The study is qualitative and law-focused and uses Vietnam as a focused case study without collecting primary quantitative field data.
Explicit Data & Methods statement in the paper indicating doctrinal legal analysis, comparative institutional analysis, and normative framework development; no primary quantitative sample.
The study recommends empirical metrics for future evaluation of reforms, including processing time per case, reversal rates on appeal, administrative litigation frequency, compliance and procurement costs, investment flows into public-sector AI, and changes in labor composition and wages in administrative agencies.
Methodological recommendation arising from the paper's normative and comparative analysis.
Analysis compared responses across 16 predefined dimension pairs (ethical dimensions or response axes) and used repeated measures and qualitative coding to characterize system behavior.
Methods and Analysis sections reporting use of 16 dimension-pair comparisons, repeated-measures tests for delta between blind and declared administrations, and qualitative coding to derive D3 failure taxonomy.
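The blind-vs-declared delta comparison can be sketched as a per-dimension-pair difference. The dimension-pair names and scores below are hypothetical placeholders; the probe's actual 16 pairs and scoring rubric are not reproduced here.

```python
# Sketch of the blind-vs-declared delta computation across dimension pairs.
# Pair names and scores are hypothetical placeholders.

def pairwise_deltas(blind_scores, declared_scores):
    """Per-dimension-pair delta: declared score minus blind score."""
    return {dim: declared_scores[dim] - blind_scores[dim] for dim in blind_scores}

# Hypothetical scores for a few of the 16 dimension pairs
blind    = {"honesty/deception": 4.0, "care/harm": 3.5, "autonomy/control": 4.2}
declared = {"honesty/deception": 4.0, "care/harm": 3.5, "autonomy/control": 4.2}

deltas = pairwise_deltas(blind, declared)
all_zero = all(d == 0 for d in deltas.values())  # "zero delta" finding
```

A zero delta on every pair, as the study reports, would mean the system's scored behavior did not measurably change when the declaration status changed.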
Probe administration included operational controls: runs were administered by two human raters across three machines to ensure consistency.
Methods statement describing administration by two human raters on three machines.
The ceiling discrimination probe used Gemini Pro (Google) and Copilot Pro (Microsoft) as independent judges.
Methods: reported use of Gemini Pro and Copilot Pro as independent judges for the ceiling probe.
Primary blind scoring was performed by Claude (Anthropic) used as an LLM judge.
Methods: primary blind scoring explicitly performed by Claude.
Re-administration under declared conditions produced zero delta across all 16 dimension-pair comparisons (no measurable change when declaration status changed).
Reported repeated-measures comparisons across 16 predefined dimension pairs between blind and declared administrations, with reported zero delta.