Evidence (5267 claims)

Claims by topic:

- Adoption: 5267 claims
- Productivity: 4560 claims
- Governance: 4137 claims
- Human-AI Collaboration: 3103 claims
- Labor Markets: 2506 claims
- Innovation: 2354 claims
- Org Design: 2340 claims
- Skills & Training: 1945 claims
- Inequality: 1322 claims
Evidence Matrix
Claim counts by outcome category and direction of finding.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 378 | 106 | 59 | 455 | 1007 |
| Governance & Regulation | 379 | 176 | 116 | 58 | 739 |
| Research Productivity | 240 | 96 | 34 | 294 | 668 |
| Organizational Efficiency | 370 | 82 | 63 | 35 | 553 |
| Technology Adoption Rate | 296 | 118 | 66 | 29 | 513 |
| Firm Productivity | 277 | 34 | 68 | 10 | 394 |
| AI Safety & Ethics | 117 | 177 | 44 | 24 | 364 |
| Output Quality | 244 | 61 | 23 | 26 | 354 |
| Market Structure | 107 | 123 | 85 | 14 | 334 |
| Decision Quality | 168 | 74 | 37 | 19 | 301 |
| Fiscal & Macroeconomic | 75 | 52 | 32 | 21 | 187 |
| Employment Level | 70 | 32 | 74 | 8 | 186 |
| Skill Acquisition | 89 | 32 | 39 | 9 | 169 |
| Firm Revenue | 96 | 34 | 22 | — | 152 |
| Innovation Output | 106 | 12 | 21 | 11 | 151 |
| Consumer Welfare | 70 | 30 | 37 | 7 | 144 |
| Regulatory Compliance | 52 | 61 | 13 | 3 | 129 |
| Inequality Measures | 24 | 68 | 31 | 4 | 127 |
| Task Allocation | 75 | 11 | 29 | 6 | 121 |
| Training Effectiveness | 55 | 12 | 12 | 16 | 96 |
| Error Rate | 42 | 48 | 6 | — | 96 |
| Worker Satisfaction | 45 | 32 | 11 | 6 | 94 |
| Task Completion Time | 78 | 5 | 4 | 2 | 89 |
| Wages & Compensation | 46 | 13 | 19 | 5 | 83 |
| Team Performance | 44 | 9 | 15 | 7 | 76 |
| Hiring & Recruitment | 39 | 4 | 6 | 3 | 52 |
| Automation Exposure | 18 | 17 | 9 | 5 | 50 |
| Job Displacement | 5 | 31 | 12 | — | 48 |
| Social Protection | 21 | 10 | 6 | 2 | 39 |
| Developer Productivity | 29 | 3 | 3 | 1 | 36 |
| Worker Turnover | 10 | 12 | — | 3 | 25 |
| Skill Obsolescence | 3 | 19 | 2 | — | 24 |
| Creative Output | 15 | 5 | 3 | 1 | 24 |
| Labor Share of Income | 10 | 4 | 9 | — | 23 |
Adoption
The available evidence consists mainly of promising empirical studies and case studies, but there are few long-run, generalized ROI or productivity estimates; results are heterogeneous across therapeutic areas.
Self-described limitation of the narrative review: heterogeneity of study designs and outcomes precluded pooled quantitative estimates and long-run ROI assessment.
AI applications span the full drug development pipeline, including target discovery, in silico screening and de novo design, preclinical safety models, clinical trial design and patient selection/monitoring, and post-marketing surveillance.
Comprehensive literature synthesis across preclinical, clinical, and post-marketing sources in the narrative review summarizing documented uses across these stages.
Current evidence is illustrative rather than systematic; there is a lack of long-run, quantitative measures of AI’s effect on late-stage clinical outcomes in the literature reviewed.
Explicit methodological statement in the paper: study is an expert/opinion synthesis and narrative review with no new causal econometric estimates or primary experimental data.
Suggested metrics for researchers and investors to monitor include R&D cycle time, cost per IND/NDA, proportion of projects using AI, success rates at development stages, market concentration measures, and investment flows into AI-enabled biotech vs incumbents.
Recommendations made in the Implications section as metrics to watch; no empirical tracking or baseline measures provided.
Limitations of the analysis include limited empirical validation of archetypes or impacts and potential selection bias toward prominent firms and technologies.
Explicit limitations stated in the Data & Methods section of the paper.
The paper is an editorial/conceptual synthesis rather than a primary empirical study: it uses qualitative analysis and illustrative examples, and reports no new quantitative estimates.
Explicit statement in the Data & Methods section of the paper describing document type, approach, evidence base, and limitations.
Ethical oversight and governance (addressing bias, consent, downstream risks) are critical constraints that must be addressed for AI to generate sustained benefits.
Normative synthesis referencing common ethical concerns; no empirical evaluation of oversight mechanisms in the paper.
Transparency and auditability for model behavior, provenance, and decisions are essential for trustworthy deployment and regulatory acceptance.
Policy and governance synthesis drawing on regulatory dynamics; no empirical study of regulatory outcomes included.
Rigorous model validation and reproducibility across datasets and settings are necessary constraints for successful AI deployment.
Normative claim in the editorial based on reproducibility concerns in ML and biomedical research; no reported validation trials within the paper.
The paper is primarily discursive and invitational: it opens a dialogue and proposes a research agenda rather than providing definitive empirical answers.
Stated methodological stance and limits: conceptual/philosophical analysis, interdisciplinary literature synthesis, qualitative/illustrative examples, and explicit note of no systematic empirical evaluation.
Operators and regulators should prioritize independent model audits, disclosure of data use, fairness/error rates, and field experiments to quantify causal impacts and heterogeneous effects.
Policy recommendations and research priorities summarized in the review based on identified methodological and governance gaps.
Research gaps include the need for robust causal evaluations (RCTs, field experiments), standardized metrics, transparency/interpretability, fairness analysis, and cross‑jurisdictional studies.
Review's recommendations and identified gaps, noting scarcity of RCTs/longitudinal work and calls for standardized outcomes and fairness checks.
Heterogeneous study designs, outcomes, and measures across the literature hinder quantitative meta‑analysis and synthesis of effectiveness.
Review states heterogeneity of designs and outcome measures as a limitation preventing meta‑analysis.
Typical data used in studies are platform behavioural logs (bets, stakes, timestamps, session durations), account metadata, and in some cases limited self‑report measures.
Review summary of data sources across included studies listing platform logs and metadata as primary inputs to algorithms.
Evaluation approaches in the reviewed literature varied widely, with many studies using retrospective accuracy metrics (AUC, precision/recall) rather than causal impact measures on harm reduction.
Methods synthesis in review: prevalence of supervised/unsupervised ML with retrospective performance reporting; few RCTs or field experiments reported.
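Retrospective metrics of this kind can be computed directly from predicted risk scores and outcome labels. The sketch below hand-rolls AUC (via the rank/Mann-Whitney formulation) and precision/recall; the account labels and risk scores are invented for illustration:

```python
import numpy as np

def auc_score(y_true, y_score):
    """AUC via the Mann-Whitney pairwise-comparison formulation."""
    y_true, y_score = np.asarray(y_true), np.asarray(y_score)
    pos = y_score[y_true == 1]
    neg = y_score[y_true == 0]
    # Count pairs where a positive case outranks a negative (ties count 0.5).
    wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

def precision_recall(y_true, y_score, threshold=0.5):
    y_pred = np.asarray(y_score) >= threshold
    y_true = np.asarray(y_true).astype(bool)
    tp = np.sum(y_pred & y_true)
    return tp / max(y_pred.sum(), 1), tp / max(y_true.sum(), 1)

# Hypothetical risk scores for 8 accounts (1 = later flagged as harmful play).
y_true  = [1, 1, 1, 0, 0, 0, 0, 1]
y_score = [0.9, 0.8, 0.4, 0.3, 0.2, 0.6, 0.1, 0.7]
print(auc_score(y_true, y_score))          # 0.9375
p, r = precision_recall(y_true, y_score)
print(p, r)                                # 0.75 0.75
```

Note that both metrics only grade retrospective discrimination; they say nothing about whether acting on the scores reduces harm, which is the causal gap the review highlights.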
Four primary application areas were identified: (1) behavioural monitoring and feedback, (2) predictive risk modelling, (3) decision support and AI classifiers, and (4) limit‑setting and self‑exclusion tools.
Thematic synthesis of included studies categorizing described applications into four main areas (review taxonomy).
Searches were performed in Web of Science, PubMed, Scopus, EBSCO and IEEE, plus manual searches, following PRISMA guidelines.
Methods section of the review specifying databases searched and PRISMA-guided review process.
The review included 68 empirical and methodological studies on deep technologies in online gambling.
Systematic review following PRISMA; searches of Web of Science, PubMed, Scopus, EBSCO, IEEE and manual searching produced 68 included studies (count reported in paper).
Recommendation (research): Future research should link AI adoption to objective performance metrics (profitability, default rates, processing times) and use longitudinal or quasi-experimental designs to identify causal effects.
Authors' suggested research directions noted in the summary, motivated by limitations of cross-sectional, self-reported data.
The summary omits important reporting details: p-values, standard errors, model control variables, and exact variable operationalizations are not provided.
Explicit reporting gap noted in the paper summary (absence of p-values, SEs, controls, and operationalization details).
Because the data are cross-sectional and self-reported, the design limits causal inference about AI adoption causing the observed outcomes.
Study design (cross-sectional survey, self-reported measures) and explicit limitation noted in the paper summary.
Key measures are self-reported Likert scales for AI adoption/usage and the dependent outcomes (financial decision-making efficiency, operational efficiency, financial resilience, and AI-based analytics effectiveness).
Measurement description in Methods: independent and dependent variables reported as self-reported Likert measures collected in the cross-sectional survey.
The study is a cross-sectional quantitative survey of 312 professionals in banks, fintechs, and financial service firms.
Study design and sample description reported in Data & Methods; sample size explicitly given as N = 312 and composition described as professionals across financial institutions, fintech organizations, and financial service companies.
The SKILL.md used in the with-skill condition encodes workflow logic, API patterns, and business rules as portable domain guidance for agents.
Paper description of the with-skill intervention specifying the content and intended role of SKILL.md.
We evaluated open-weight models under two conditions: baseline (generic agent with tool access but no domain guidance) and with-skill (agent augmented with a portable SKILL.md document encoding workflow logic, API patterns, and business rules).
Experimental design in paper describing the two agent conditions; SKILL.md described as the injected domain guidance artifact.
Each scenario is grounded in live mock API servers with seeded production-representative data, MCP tool interfaces, and deterministic evaluation rubrics combining response content checks, tool-call verification, and database state assertions.
Methods/benchmark design described in paper specifying environment: live mock APIs, seeded data, MCP tool interfaces, and deterministic evaluation combining content checks, tool-call verification, and DB assertions.
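A deterministic rubric of this shape might look like the minimal sketch below. Every concrete name (the tool name, order id, and database fields) is hypothetical, not taken from the paper:

```python
# Illustrative sketch of a deterministic rubric combining the three check types.
# All identifiers (tool name, order id, DB layout) are hypothetical.

def evaluate(transcript, tool_calls, db):
    checks = {
        # Response content check: the final answer must mention the order id.
        "content": "ORD-1001" in transcript[-1]["content"],
        # Tool-call verification: the agent must have invoked the expected API.
        "tool_call": any(c["name"] == "TMF622_createOrder" for c in tool_calls),
        # Database state assertion: the seeded store must hold the new order.
        "db_state": db.get("orders", {}).get("ORD-1001", {}).get("state") == "acknowledged",
    }
    return all(checks.values()), checks

# Toy run against a mock environment.
transcript = [{"role": "assistant", "content": "Created order ORD-1001."}]
tool_calls = [{"name": "TMF622_createOrder", "args": {"id": "ORD-1001"}}]
db = {"orders": {"ORD-1001": {"state": "acknowledged"}}}
ok, detail = evaluate(transcript, tool_calls, db)
print(ok)  # True
```

Because every check is a pure function of the transcript, the tool-call log, and the final DB state, repeated runs of the same scenario grade identically.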
SKILLS comprises 37 telecom operations scenarios spanning 8 TM Forum Open API domains (TMF620, TMF621, TMF622, TMF628, TMF629, TMF637, TMF639, TMF724).
Framework specification in the paper; explicit statement of scenario count (37) and list of 8 TMF Open API domains.
We introduce SKILLS (Structured Knowledge Injection for LLM-driven Service Lifecycle operations), a benchmark framework for telecom operations.
Paper describes the design and release of the SKILLS benchmark framework as the contribution; methods section outlines framework components and usage.
The paper identifies three core mechanisms underlying calibrated trust and complementarity: (1) calibrated trust balancing reliance and oversight, (2) complementarity–trust interaction for optimal performance, and (3) dynamic feedback loops producing reinforcing learning cycles.
Explicit identification of mechanisms claimed in the paper's synthesis; this is a descriptive claim about the paper's content rather than an empirical finding—no sample or empirical test reported in the abstract.
AI-adopting firms do not increase capital expenditures following adoption.
Firm-level capex analysis showing no significant change in capital expenditures for adopters versus nonadopters post-adoption in the paper's empirical framework.
The Planner is trained via Supervised Fine-Tuning (SFT) to internalize diagnostic capabilities and then aligned with business outcomes (conversion rate) via Reinforcement Learning (RL).
Method description in the paper specifying SFT initialization followed by RL alignment targeting conversion rate (UCVR) as reward signal.
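The paper's pipeline is not specified beyond this summary, but the two-stage idea can be illustrated with a bandit-style toy (not the paper's implementation): a softmax policy over candidate plans is first pushed toward a teacher-labelled plan (SFT as log-likelihood ascent), then updated with REINFORCE against a simulated conversion reward. All rates and sizes are invented:

```python
import numpy as np

rng = np.random.default_rng(0)
K = 4                  # candidate search plans (toy action space)
logits = np.zeros(K)   # "Planner" parameters: a softmax policy over plans

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Stage 1: SFT — push probability mass toward the teacher-labelled plan
# (gradient ascent on the log-likelihood of the demonstrated action).
teacher_plan = 2
for _ in range(200):
    p = softmax(logits)
    grad = -p
    grad[teacher_plan] += 1.0
    logits += 0.1 * grad

# Stage 2: RL — REINFORCE against a (simulated) conversion-rate reward.
true_cvr = np.array([0.05, 0.10, 0.20, 0.30])  # hypothetical per-plan rates
baseline = 0.0
for _ in range(2000):
    p = softmax(logits)
    a = rng.choice(K, p=p)
    reward = rng.random() < true_cvr[a]          # Bernoulli conversion event
    baseline = 0.99 * baseline + 0.01 * reward   # running baseline
    grad = -(reward - baseline) * p              # d/dz log pi(a) = 1{k=a} - p_k
    grad[a] += (reward - baseline)
    logits += 0.05 * grad

print(np.round(softmax(logits), 3))
```

The SFT stage gives the policy a sensible starting point; the RL stage then shifts mass toward plans with higher realized conversion, mirroring the summary's SFT-then-RL ordering.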
EASP's Offline Data Synthesis stage: a Teacher Agent synthesizes diverse, execution-validated plans by diagnosing the probed environment.
Method description in the paper detailing the Teacher Agent's role in synthesizing execution-validated plans during offline data synthesis.
The Probe-then-Plan mechanism uses a lightweight Retrieval Probe to expose the retrieval snapshot, enabling the Planner to diagnose execution gaps and generate grounded search plans.
Methodological description in the paper: design and implementation of Retrieval Probe and Planner; validated through synthesized data and downstream evaluations (offline and online).
Descriptive statistics, reliability tests, regression analysis, and structural equation modelling (SEM) were employed to analyse the relationships between AI adoption and entrepreneurial outcomes.
Methods section reporting use of descriptive statistics, reliability tests, regression analysis, and SEM to evaluate relationships between AI adoption and measured outcomes.
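SEM is beyond a short example, but the reliability and regression steps can be sketched on simulated Likert-style data (all values invented, not the study's data): Cronbach's alpha for a multi-item scale, then an OLS slope of an outcome on the scale mean:

```python
import numpy as np

def cronbach_alpha(items):
    """Reliability of a k-item scale: items is an (n_respondents, k) array."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(1)
n = 350
latent = rng.normal(size=n)                                    # simulated trait
items = latent[:, None] + rng.normal(scale=0.8, size=(n, 4))   # 4 Likert-like items
adoption = items.mean(axis=1)                                  # scale score

# Simulated outcome and a simple OLS regression (intercept + slope).
outcome = 0.5 * adoption + rng.normal(scale=1.0, size=n)
X = np.column_stack([np.ones(n), adoption])
beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
print(round(cronbach_alpha(items), 2))
print(round(beta[1], 2))
```

In a real SEM workflow these steps would precede fitting the measurement and structural models; here they only illustrate the reliability-then-regression sequence named in the methods.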
The study used a quantitative research design and collected data from 350 entrepreneurs and managers of small and medium-sized enterprises (SMEs) who had adopted AI in their business operations.
Methods section of the paper specifying a quantitative design and a sample size of 350 AI-adopting SME entrepreneurs/managers.
The study used portfolio-level analysis to compare the financial outcomes of portfolios constructed using AI-driven ESG indicators with those based on conventional ESG ratings.
Methodological statement in the paper: portfolio-level analysis and comparative design. The summary does not specify the number of portfolios, asset universes, time frame, or construction rules.
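Since the summary gives no construction rules, the following is purely an invented illustration of a portfolio-level comparison: two simulated monthly return series scored on annualized return, volatility, and Sharpe ratio (risk-free rate assumed zero):

```python
import numpy as np

def perf(returns, periods=12):
    """Annualized mean return, volatility, and Sharpe ratio (risk-free = 0)."""
    r = np.asarray(returns, dtype=float)
    ann_ret = r.mean() * periods
    ann_vol = r.std(ddof=1) * np.sqrt(periods)
    return ann_ret, ann_vol, ann_ret / ann_vol

rng = np.random.default_rng(7)
# Hypothetical monthly returns: AI-driven ESG vs conventional-ESG portfolio.
ai_esg   = rng.normal(0.008, 0.04, 60)
conv_esg = rng.normal(0.006, 0.04, 60)

for name, r in [("AI-driven ESG", ai_esg), ("Conventional ESG", conv_esg)]:
    ret, vol, sharpe = perf(r)
    print(f"{name}: return={ret:.1%} vol={vol:.1%} Sharpe={sharpe:.2f}")
```

An actual study would also fix the asset universe, rebalancing rule, and evaluation window, none of which the summary reports.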
A quantitative methodology was employed, utilizing a structured questionnaire administered to 400 small business owners.
Explicit methodological statement in the paper: structured questionnaire survey with sample size N=400 small business owners.
Organizational ethnography oriented to the SCF was conducted, with an interview guide and triangulation of evidence.
Qualitative method disclosed in the abstract: organizational ethnography with an interview guide and triangulation; the abstract does not report the number of organizations, the duration, or the sampling strategy.
A psychometric instrument (the SCF-30 scale) was built and validated and a 0–100 index computed, with Structural Equation Modeling (SEM) and reliability/validity testing.
Explicit methodological description in the abstract: construction and validation of the SCF-30 scale, use of SEM, and reliability and validity tests. The abstract does not detail statistics, the sample, or numerical results.
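The abstract gives no formula for the 0–100 index; one common construction (assumed here, not taken from the paper) is min-max rescaling of an equal-weight mean of the three subscale scores:

```python
import numpy as np

def scf_index(pc, ar, ic, likert_min=1, likert_max=5):
    """Hypothetical 0-100 composite: equal-weight mean of the three subscale
    scores (PC, AR, IC), min-max rescaled from the Likert range."""
    raw = np.mean([pc, ar, ic], axis=0)
    return 100 * (raw - likert_min) / (likert_max - likert_min)

# One respondent's subscale means on a 1-5 scale.
print(scf_index(3.0, 4.0, 2.0))  # 50.0
```

Differential weights estimated from the SEM loadings would be an equally plausible construction; the equal-weight version above is only the simplest baseline.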
The SCF is operationalized through three core vectors: Perception of Complexity (PC), Institutional Risk Aversion (AR), and Cultural Inertia (IC).
Conceptual and operational structure presented in the paper; explicit specification of the three vectors as components of the SCF construct.
This research conducts a critical analysis of the ethical implications of AI-driven job displacement during the fifth industrial revolution.
Author-declared methodology: a literature-based critical analysis drawing on novel studies and the existing body of literature; no further methodological details (e.g., inclusion criteria, databases searched) provided in the excerpt.
This study uses panel data on agricultural firms listed on the Shanghai and Shenzhen A-share markets from 2007 to 2023 and applies a multidimensional fixed-effects model to estimate the impact of AI on firms’ total factor productivity (TFP).
Methodological statement in the paper: dataset = panel of listed agricultural firms (Shanghai and Shenzhen A-share markets), time period 2007–2023; empirical approach = multidimensional fixed-effects model.
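The mechanics behind such an estimator can be sketched with the two-way within transformation on simulated data (not the paper's dataset; firm counts, effect size, and noise are invented): demeaning by firm and by year removes additive firm and year fixed effects in a balanced panel, after which the slope is plain least squares:

```python
import numpy as np

rng = np.random.default_rng(3)
n_firms, n_years = 50, 17            # toy panel shaped like 2007-2023
firm_fe = rng.normal(size=n_firms)[:, None]
year_fe = rng.normal(size=n_years)[None, :]
ai = rng.normal(size=(n_firms, n_years)) + 0.5 * firm_fe  # AI correlated with firm FE
beta_true = 0.3
tfp = beta_true * ai + firm_fe + year_fe + rng.normal(scale=0.5, size=(n_firms, n_years))

def within(x):
    """Demean by firm and year (two-way within transformation)."""
    return x - x.mean(axis=1, keepdims=True) - x.mean(axis=0, keepdims=True) + x.mean()

x, y = within(ai).ravel(), within(tfp).ravel()
beta_hat = (x @ y) / (x @ x)
print(round(beta_hat, 2))
```

Even though AI adoption is correlated with the firm effects here, the within estimator recovers the true slope, which is exactly why fixed effects are the workhorse for firm panels of this kind.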
Degree, betweenness, and eigenvector centrality metrics were used to identify structural vulnerabilities and leverage points in the construction supply chain network.
Paper reports calculation of degree, betweenness, and eigenvector centrality to outline vulnerabilities; specific metrics and interpretations are reported (e.g., degree centrality value for brokers).
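These three centrality measures can be computed with networkx; the toy supply chain graph below is invented for illustration and is not the paper's network:

```python
import networkx as nx

# Hypothetical construction supply chain (toy graph, not the study's data).
edges = [
    ("contractor", "broker"), ("broker", "supplier_a"), ("broker", "supplier_b"),
    ("contractor", "designer"), ("supplier_a", "manufacturer"),
    ("supplier_b", "manufacturer"),
]
G = nx.Graph(edges)

degree = nx.degree_centrality(G)
betweenness = nx.betweenness_centrality(G)
eigenvector = nx.eigenvector_centrality(G, max_iter=1000)

# A node scoring high on betweenness (here, the broker) is a structural
# bottleneck: many shortest paths between other actors pass through it.
for node in sorted(G, key=betweenness.get, reverse=True):
    print(f"{node}: deg={degree[node]:.2f} "
          f"btw={betweenness[node]:.2f} eig={eigenvector[node]:.2f}")
```

In this toy network the broker dominates betweenness, matching the paper's observation that broker-type actors concentrate structural vulnerability.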
Thematic coding translated reported interactions into nodes and edges of a complex network and grouped challenges into thematic categories.
Methods described: thematic coding applied to interview data to create network structure and to generate challenge categories (six main categories, 16 open codes reported).
This study combines empirical, semi-structured interviews with social network analytics to map construction supply chain relationships and vulnerabilities.
Methods reported in the paper: use of semi-structured interviews plus social network analysis (thematic coding to create nodes/edges, calculation of network metrics). Sample size not specified in the abstract.
Distinguishing between base models and fine-tuned systems is important for researchers using LLMs to study cultural patterns, because fine-tuning and alignment can change the behaviors relevant to behavioral research.
Analytical distinction and methodological guidance in the paper; claim grounded in conceptual reasoning about model development workflows rather than a specific experimental demonstration in the excerpt.
Contemporary artificial intelligence research has been organized around two dominant ambitions: productivity (treating AI systems as tools for accelerating work and economic output) and alignment (ensuring increasingly capable systems behave safely and in accordance with human values).
Literature synthesis and conceptual framing within the paper (review of prevailing research agendas and priorities in AI literature). No original empirical sample or experiment reported for this claim in the provided text.
The study contributes to the literature by integrating evidence across higher education, vocational training, and lifelong learning to emphasize the need for balanced policy approaches to skill formation.
Stated contribution in the paper: cross-pathway synthesis of existing empirical evidence and secondary data (methods described as comparative synthesis; no primary empirical contribution reported in the summary).
The study uses secondary data and comparative evidence from prior empirical studies to analyze relationships between higher education, vocational education, and lifelong learning.
Stated methodology in the paper: analysis of secondary data and synthesis of prior empirical/comparative studies (no primary data collection; no sample sizes reported).
The paper explores risk frameworks, ethical constraints, and policy imperatives related to AI.
Descriptive claim about the paper's analytic content (thematic/policy analysis); no empirical details or measurement approach are given in the abstract.