Evidence (4137 claims)

- Adoption: 5267 claims
- Productivity: 4560 claims
- Governance: 4137 claims
- Human-AI Collaboration: 3103 claims
- Labor Markets: 2506 claims
- Innovation: 2354 claims
- Org Design: 2340 claims
- Skills & Training: 1945 claims
- Inequality: 1322 claims
Evidence Matrix
Claim counts by outcome category and direction of finding.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 378 | 106 | 59 | 455 | 1007 |
| Governance & Regulation | 379 | 176 | 116 | 58 | 739 |
| Research Productivity | 240 | 96 | 34 | 294 | 668 |
| Organizational Efficiency | 370 | 82 | 63 | 35 | 553 |
| Technology Adoption Rate | 296 | 118 | 66 | 29 | 513 |
| Firm Productivity | 277 | 34 | 68 | 10 | 394 |
| AI Safety & Ethics | 117 | 177 | 44 | 24 | 364 |
| Output Quality | 244 | 61 | 23 | 26 | 354 |
| Market Structure | 107 | 123 | 85 | 14 | 334 |
| Decision Quality | 168 | 74 | 37 | 19 | 301 |
| Fiscal & Macroeconomic | 75 | 52 | 32 | 21 | 187 |
| Employment Level | 70 | 32 | 74 | 8 | 186 |
| Skill Acquisition | 89 | 32 | 39 | 9 | 169 |
| Firm Revenue | 96 | 34 | 22 | — | 152 |
| Innovation Output | 106 | 12 | 21 | 11 | 151 |
| Consumer Welfare | 70 | 30 | 37 | 7 | 144 |
| Regulatory Compliance | 52 | 61 | 13 | 3 | 129 |
| Inequality Measures | 24 | 68 | 31 | 4 | 127 |
| Task Allocation | 75 | 11 | 29 | 6 | 121 |
| Training Effectiveness | 55 | 12 | 12 | 16 | 96 |
| Error Rate | 42 | 48 | 6 | — | 96 |
| Worker Satisfaction | 45 | 32 | 11 | 6 | 94 |
| Task Completion Time | 78 | 5 | 4 | 2 | 89 |
| Wages & Compensation | 46 | 13 | 19 | 5 | 83 |
| Team Performance | 44 | 9 | 15 | 7 | 76 |
| Hiring & Recruitment | 39 | 4 | 6 | 3 | 52 |
| Automation Exposure | 18 | 17 | 9 | 5 | 50 |
| Job Displacement | 5 | 31 | 12 | — | 48 |
| Social Protection | 21 | 10 | 6 | 2 | 39 |
| Developer Productivity | 29 | 3 | 3 | 1 | 36 |
| Worker Turnover | 10 | 12 | — | 3 | 25 |
| Skill Obsolescence | 3 | 19 | 2 | — | 24 |
| Creative Output | 15 | 5 | 3 | 1 | 24 |
| Labor Share of Income | 10 | 4 | 9 | — | 23 |
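The matrix above is a tally of (outcome category, direction of finding) pairs with row totals. A minimal sketch of that aggregation, using hypothetical claim records (the function name and record format are illustrative, not the dashboard's actual pipeline):

```python
from collections import defaultdict

# Hypothetical claim records: (outcome category, direction of finding).
claims = [
    ("Firm Productivity", "Positive"),
    ("Firm Productivity", "Positive"),
    ("Firm Productivity", "Mixed"),
    ("Error Rate", "Negative"),
]

DIRECTIONS = ("Positive", "Negative", "Mixed", "Null")

def evidence_matrix(records):
    """Tally claim counts by outcome category and direction, adding a row total."""
    counts = defaultdict(lambda: dict.fromkeys(DIRECTIONS, 0))
    for outcome, direction in records:
        counts[outcome][direction] += 1
    return {
        outcome: {**row, "Total": sum(row.values())}
        for outcome, row in counts.items()
    }

# Rows sorted by descending total, as in the matrix above.
for outcome, row in sorted(evidence_matrix(claims).items(),
                           key=lambda kv: -kv[1]["Total"]):
    print(outcome, row)
```

Directions absent from a category stay at zero, which corresponds to the "—" cells in the table.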
Governance
AI applications span the full drug development pipeline, including target discovery, in silico screening and de novo design, preclinical safety models, clinical trial design and patient selection/monitoring, and post-marketing surveillance.
Comprehensive literature synthesis across preclinical, clinical, and post-marketing sources in the narrative review summarizing documented uses across these stages.
Suggested metrics for researchers and investors to monitor include R&D cycle time, cost per IND/NDA, proportion of projects using AI, success rates at development stages, market concentration measures, and investment flows into AI-enabled biotech vs incumbents.
Recommendations made in the Implications section as metrics to watch; no empirical tracking or baseline measures provided.
Limitations of the analysis include limited empirical validation of archetypes or impacts and potential selection bias toward prominent firms and technologies.
Explicit limitations stated in the Data & Methods section of the paper.
The paper is an editorial/conceptual synthesis rather than a primary empirical study: it uses qualitative analysis and illustrative examples, and reports no new quantitative estimates.
Explicit statement in the Data & Methods section of the paper describing document type, approach, evidence base, and limitations.
Ethical oversight and governance (addressing bias, consent, downstream risks) are critical constraints that must be addressed for AI to generate sustained benefits.
Normative synthesis referencing common ethical concerns; no empirical evaluation of oversight mechanisms in the paper.
Transparency and auditability for model behavior, provenance, and decisions are essential for trustworthy deployment and regulatory acceptance.
Policy and governance synthesis drawing on regulatory dynamics; no empirical study of regulatory outcomes included.
Rigorous model validation and reproducibility across datasets and settings are necessary constraints for successful AI deployment.
Normative claim in the editorial based on reproducibility concerns in ML and biomedical research; no reported validation trials within the paper.
Operators and regulators should prioritize independent model audits, disclosure of data use, fairness/error rates, and field experiments to quantify causal impacts and heterogeneous effects.
Policy recommendations and research priorities summarized in the review based on identified methodological and governance gaps.
Research gaps include the need for robust causal evaluations (RCTs, field experiments), standardized metrics, transparency/interpretability, fairness analysis, and cross‑jurisdictional studies.
Review's recommendations and identified gaps, noting scarcity of RCTs/longitudinal work and calls for standardized outcomes and fairness checks.
Heterogeneous study designs, outcomes, and measures across the literature hinder quantitative meta‑analysis and synthesis of effectiveness.
Review states heterogeneity of designs and outcome measures as a limitation preventing meta‑analysis.
Typical data used in studies are platform behavioural logs (bets, stakes, timestamps, session durations), account metadata, and in some cases limited self‑report measures.
Review summary of data sources across included studies listing platform logs and metadata as primary inputs to algorithms.
Evaluation approaches in the reviewed literature varied widely, with many studies using retrospective accuracy metrics (AUC, precision/recall) rather than causal impact measures on harm reduction.
Methods synthesis in review: prevalence of supervised/unsupervised ML with retrospective performance reporting; few RCTs or field experiments reported.
Four primary application areas were identified: (1) behavioural monitoring and feedback, (2) predictive risk modelling, (3) decision support and AI classifiers, and (4) limit‑setting and self‑exclusion tools.
Thematic synthesis of included studies categorizing described applications into four main areas (review taxonomy).
Searches were performed in Web of Science, PubMed, Scopus, EBSCO and IEEE, plus manual searches, following PRISMA guidelines.
Methods section of the review specifying databases searched and PRISMA-guided review process.
The review included 68 empirical and methodological studies on deep technologies in online gambling.
Systematic review following PRISMA; searches of Web of Science, PubMed, Scopus, EBSCO, IEEE and manual searching produced 68 included studies (count reported in paper).
The study uses a game-theoretic model involving a foundation model provider and two competing downstream firms to analyze how policy interventions affect consumer surplus in the AI supply chain.
Methodological description in the paper: a formal game-theoretic model with one upstream provider and two downstream competing firms; equilibrium analysis and comparative statics are performed on model outcomes (prices, qualities, profits, consumer surplus).
An SCF-oriented organizational ethnography was conducted, with an interview guide and triangulation of evidence.
Qualitative method disclosed in the abstract: organizational ethnography with an interview guide and triangulation; the abstract does not report the number of organizations, the duration, or the sampling strategy.
A psychometric instrument (the SCF-30 scale) was constructed and validated and a 0–100 index computed, with Structural Equation Modeling (SEM) and reliability/validity testing.
Explicit methodological description in the abstract: construction and validation of the SCF-30 scale, use of SEM, and reliability and validity tests. The abstract does not detail statistics, the sample, or numerical results.
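The abstract reports only that a 0–100 index was computed from the validated scale, not how. One plausible construction is a min-max rescaling of the mean item response; `scf_index` and the 1–5 item range below are assumptions for illustration, not the paper's scoring rule:

```python
def scf_index(item_scores, item_min=1, item_max=5):
    """Rescale the mean of Likert-type item responses onto a 0-100 index."""
    mean = sum(item_scores) / len(item_scores)
    return 100.0 * (mean - item_min) / (item_max - item_min)

# A respondent answering all 30 items at the scale midpoint lands at 50.
print(scf_index([3] * 30))  # 50.0
```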
The SCF is operationalized through three core vectors: Perceived Complexity (PC), Institutional Risk Aversion (AR), and Cultural Inertia (IC).
Conceptual and operational structure presented in the article; explicit specification of the three vectors as components of the SCF construct.
This research conducts a critical analysis of the ethical implications of artificial intelligence in terms of job displacement during the fifth industrial revolution.
Author-declared methodology: a literature-based critical analysis drawing on novel studies and the existing body of literature; no further methodological details (e.g., inclusion criteria, databases searched) provided in the excerpt.
This study analyzes comments and statements from party members in OECD countries from 2016 to 2025 through content analysis, examining media interviews, speeches, and debates.
Description of the study's data and method: content analysis of party member comments and statements drawn from media interviews, speeches, and debates across OECD countries over the 2016–2025 period (sample size and selection details not reported in the excerpt).
The study uses topic modeling on a corpus of over 4,600 academic papers to identify the dominant themes in the economics of AI literature.
Unsupervised topic modeling applied to a compiled corpus of >4,600 papers (authors' described methodology and sample size).
The paper explores risk frameworks, ethical constraints, and policy imperatives related to AI.
Descriptive claim about the paper's analytic content (thematic/policy analysis); no empirical details or measurement approach are given in the abstract.
This paper investigates societal applications of AI across domains such as healthcare, education, accessibility, environmental management, emergency response, and civic administration.
Descriptive statement of the paper's scope and methods (literature review / cross-domain analysis implied); the abstract lists the domains but does not specify empirical procedures or sample sizes.
Chatbot suggestions were artificially varied in aggregate accuracy across treatment conditions from low (53%) to high (100%).
Paper describes experimental manipulation of chatbot suggestion accuracy with aggregate accuracies ranging from 53% to 100%; manipulation method (how suggestions were generated or sampled) described in methods (not fully detailed in excerpt).
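The excerpt does not detail how the accuracy manipulation was implemented. One simple way such a condition could be constructed — a sketch, not the paper's documented procedure — is to fix the exact number of correct suggestions per condition and fill the remainder with distractors; `assign_suggestions` and the question format are illustrative:

```python
import random

def assign_suggestions(questions, target_accuracy, rng):
    """Return one suggested answer per question such that the share of
    correct suggestions matches the target aggregate accuracy."""
    n = len(questions)
    n_correct = round(target_accuracy * n)
    flags = [True] * n_correct + [False] * (n - n_correct)
    rng.shuffle(flags)  # spread correct suggestions randomly across questions
    out = []
    for q, correct in zip(questions, flags):
        if correct:
            out.append(q["answer"])
        else:
            out.append(rng.choice([c for c in q["choices"] if c != q["answer"]]))
    return out

# 100 toy questions; in the 53%-accuracy condition, exactly 53 suggestions
# are correct.
rng = random.Random(0)
qs = [{"choices": ["A", "B", "C", "D"], "answer": "A"} for _ in range(100)]
suggestions = assign_suggestions(qs, 0.53, rng)
print(sum(s == "A" for s in suggestions))  # 53
```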
Caseworkers in the control condition (no chatbot suggestions) had a mean accuracy of 49%.
Reported experimental outcome: mean accuracy for control group = 49%; based on the randomized experiment using the 770-question benchmark.
We conducted a randomized experiment with caseworkers recruited from nonprofit outreach organizations in Los Angeles.
Paper describes a randomized experiment recruiting caseworkers from nonprofit outreach organizations in Los Angeles; sample size and recruitment details not given in the excerpt.
The benchmark questions have corresponding expert-verified answers.
Paper states benchmark questions have expert-verified answers; verification method and number/credentials of experts not specified in the excerpt.
We created a 770-question multiple-choice benchmark dataset of difficult, but realistic questions that a caseworker might receive.
Paper reports creation of a benchmark dataset containing 770 multiple-choice questions described as difficult and realistic; questions and dataset construction described in methods (no sample-of-questions or external validation details provided in the excerpt).
The study's conclusions draw on three complementary evidence bases: (a) task-level evidence on what generative AI can already do in practice; (b) occupational exposure and complementarity analysis using Philippine labor force data; and (c) firm- and worker-level evidence on AI adoption.
Description of methods and data sources in the paper: task-level capability testing/assessment, analysis of national labor force/occupation data for exposure/complementarity, and firm/worker surveys or qualitative adoption evidence.
There is a need for more longitudinal and cross-country studies to better understand the long-term value creation of ERM in MSMEs.
Authors' conclusion and identified research gaps based on the scope and limitations of the existing literature reviewed (i.e., predominance of cross-sectional or single-country studies).
The paper explains the main legal frameworks that currently regulate AI in India, as well as proposals for future legislation.
Author's legal and policy analysis / document review of existing statutes and proposed laws (qualitative review). No quantitative sample size; based on review of legal texts and policy proposals cited in the article.
A “macro approach” that (1) directly models equilibrium behavior of large employers, (2) combines macro data with empirical estimates of employers’ responses (from the micro approach) to estimate the model, and (3) uses the model to compute aggregate costs of monopsony and optimal policies, is the appropriate methodological response.
Methodological proposal set out by the paper; this is a description of the authors' recommended empirical/theoretical strategy rather than an empirical finding. The excerpt contains no implementation details, datasets, or estimation results.
The traditional theoretical and empirical “micro approach” to studying labor market power requires that firms are small and atomistic.
Conceptual/theoretical characterization of the micro approach stated by the paper; no empirical sample, dataset, or formal model provided in the excerpt.
The review focuses on AI applications within small‑scale business environments, with a special focus on women‑owned micro firms in Jaipur, India.
Scope and aim articulated in the paper; geographic and demographic focus explicitly stated by the authors.
The systematic review follows PRISMA 2020 guidelines.
Methodological statement in the paper indicating adherence to PRISMA 2020 for the review process.
After screening and eligibility filtering, 55 open‑access journal articles were included for in‑depth analysis.
PRISMA‑guided screening and eligibility process reported in the review; final included sample explicitly stated as 55 open‑access journal articles.
A Scopus search identified 265 records using keywords related to women’s entrepreneurship and AI.
Systematic literature search reported in the paper following PRISMA 2020; search executed in Scopus with specified keywords; initial yield stated as 265 records.
This research examined three countries (China, the United States, and Germany) using panel vector autoregressive (panel VAR) and difference-in-differences (DID) methods to assess how technology and public policy interventions affect emissions reductions.
Study design reported in the paper: sample of three countries (China, US, Germany) and application of panel VAR and DID methods; specific time period and sample size not provided in the summary.
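The DID logic underlying such a design reduces, in its canonical 2×2 form, to the change in the treated group minus the change in the control group. A minimal sketch with hypothetical emissions means (the numbers and function are illustrative; the paper's actual estimation uses panel VAR and regression-based DID):

```python
def did_estimate(pre_treated, post_treated, pre_control, post_control):
    """Canonical 2x2 difference-in-differences: the treated group's change
    net of the control group's change over the same period."""
    return (post_treated - pre_treated) - (post_control - pre_control)

# Hypothetical emissions means (arbitrary units): the policy country falls
# from 10 to 6 while the comparison country drifts from 9 to 8.
print(did_estimate(10.0, 6.0, 9.0, 8.0))  # -3.0
```

The control group's drift (−1) is subtracted from the treated group's change (−4), attributing only the excess reduction (−3) to the intervention, under the parallel-trends assumption.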
Social assistance (SA) is defined here as noncontributory social transfers (including cash, vouchers, or in-kind transfers to families or individuals, including the elderly), public works programs, fee waivers, and subsidies.
Explicit definitional statement in the introduction (authors' operational definition for the chapter).
This chapter focuses on low- and middle-income countries (LMICs) and uses a 'review of reviews' approach to summarize the policy discourse and evidence on social protection and gender in adulthood, concentrating on social assistance, social care, and social insurance.
Methodological and scope statement explicitly given in the introduction (author-declared approach and focus).
This study draws on a critical AI media literacy framework to analyze user-generated discussions in the two largest higher education subreddits on Reddit.com.
Author-reported study design: application of a critical AI media literacy theoretical framework to a qualitative dataset consisting of user-generated discussions from the two largest higher-education subreddits. (Sample size/number of posts/threads not specified in the provided excerpt.)
The study used a mixed-methods design incorporating surveys from 150 LEP immigrants, interviews with 50 employers, and interviews with 20 translation service providers in various linguistically diverse U.S. cities, with quantitative analysis performed in SPSS Version 28 and qualitative thematic coding in NVivo 14.
Reported study design and sample: survey n=150 LEP immigrants; employer interviews n=50; translation provider interviews n=20; analytic software specified as SPSS v28 (quantitative) and NVivo 14 (qualitative).
The essay reviews seven books from the past dozen years by social scientists examining the economic impact of artificial intelligence (AI).
Qualitative book-review performed by the author; sample size explicitly stated as seven books published within the last ~12 years; method = synthesis/assessment of those seven books.
This systematic review follows PRISMA guidelines to examine the evolution, advancements, and state-of-the-art AI applications for GS-BESS optimization.
Methodological statement in the paper indicating the use of PRISMA guidelines for the review process. The excerpt does not include the PRISMA flow diagram or the exact article selection numbers.
Definitions and scopes of Material Passports vary among authors.
Content analysis of the 46 included studies showing differing definitions and scope treatments for MPs reported by the authors.
Among the included studies, 65% focused primarily on Material Passports (MPs), while 35% addressed MPs within the broader context of a circular economy (CE).
Quantitative categorization of the 46 included studies reported in the paper (percentages attributed to focus areas).
A total of 54 peer-reviewed articles and book chapters were screened from the Scopus database, of which 46 were included for in-depth analysis in April 2025.
Reported screening and inclusion counts from the Scopus search (54 screened, 46 included); date of in-depth analysis given as April 2025.
This article presents a Systematic Literature Review (SLR) following the PRISMA methodology.
Stated methodology in the paper: SLR using PRISMA; literature search performed in Scopus; review process and inclusion/exclusion described (screening and inclusion counts reported).
The study tracked participants in a three-wave panel totaling over 1,500 workers.
Abstract reporting a three-wave panel design and a sample size of over 1,500 workers.