Evidence (5267 claims)
Adoption: 5267 claims
Productivity: 4560 claims
Governance: 4137 claims
Human-AI Collaboration: 3103 claims
Labor Markets: 2506 claims
Innovation: 2354 claims
Org Design: 2340 claims
Skills & Training: 1945 claims
Inequality: 1322 claims
Evidence Matrix
Claim counts by outcome category and direction of finding.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 378 | 106 | 59 | 455 | 1007 |
| Governance & Regulation | 379 | 176 | 116 | 58 | 739 |
| Research Productivity | 240 | 96 | 34 | 294 | 668 |
| Organizational Efficiency | 370 | 82 | 63 | 35 | 553 |
| Technology Adoption Rate | 296 | 118 | 66 | 29 | 513 |
| Firm Productivity | 277 | 34 | 68 | 10 | 394 |
| AI Safety & Ethics | 117 | 177 | 44 | 24 | 364 |
| Output Quality | 244 | 61 | 23 | 26 | 354 |
| Market Structure | 107 | 123 | 85 | 14 | 334 |
| Decision Quality | 168 | 74 | 37 | 19 | 301 |
| Fiscal & Macroeconomic | 75 | 52 | 32 | 21 | 187 |
| Employment Level | 70 | 32 | 74 | 8 | 186 |
| Skill Acquisition | 89 | 32 | 39 | 9 | 169 |
| Firm Revenue | 96 | 34 | 22 | — | 152 |
| Innovation Output | 106 | 12 | 21 | 11 | 151 |
| Consumer Welfare | 70 | 30 | 37 | 7 | 144 |
| Regulatory Compliance | 52 | 61 | 13 | 3 | 129 |
| Inequality Measures | 24 | 68 | 31 | 4 | 127 |
| Task Allocation | 75 | 11 | 29 | 6 | 121 |
| Training Effectiveness | 55 | 12 | 12 | 16 | 96 |
| Error Rate | 42 | 48 | 6 | — | 96 |
| Worker Satisfaction | 45 | 32 | 11 | 6 | 94 |
| Task Completion Time | 78 | 5 | 4 | 2 | 89 |
| Wages & Compensation | 46 | 13 | 19 | 5 | 83 |
| Team Performance | 44 | 9 | 15 | 7 | 76 |
| Hiring & Recruitment | 39 | 4 | 6 | 3 | 52 |
| Automation Exposure | 18 | 17 | 9 | 5 | 50 |
| Job Displacement | 5 | 31 | 12 | — | 48 |
| Social Protection | 21 | 10 | 6 | 2 | 39 |
| Developer Productivity | 29 | 3 | 3 | 1 | 36 |
| Worker Turnover | 10 | 12 | — | 3 | 25 |
| Skill Obsolescence | 3 | 19 | 2 | — | 24 |
| Creative Output | 15 | 5 | 3 | 1 | 24 |
| Labor Share of Income | 10 | 4 | 9 | — | 23 |
Adoption
This study identifies three types of AI triggers that target routines, cognitive frameworks, and resource allocation.
Proposed taxonomy / typology presented in the paper (theoretical classification). The claim is descriptive of the paper's contribution rather than empirically validated.
Battery and motor performance were evaluated in laboratory tests.
Laboratory tests assessing battery and motor performance are reported in the methods/results; no quantitative battery/motor metrics provided in the summary.
A composite index capturing concerns about mental health, privacy, climate impact, and labor market disruption was constructed to measure societal risk perceptions of AI.
Author-constructed composite index derived from survey items on mental health, privacy, climate, and labor market disruption concerns in the 2023–2024 UK survey.
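A common way to build such an author-style composite index is to standardize each survey item and average the z-scores per respondent. A minimal sketch under that assumption; the item names and ratings below are hypothetical, not taken from the UK survey:

```python
# Sketch of a composite risk-perception index: z-score each survey
# item across respondents, then average the z-scores per respondent.
# Items and 1-5 concern ratings are hypothetical illustrations.
from statistics import mean, pstdev

def composite_index(responses):
    """responses: list of dicts item -> score; returns one index per respondent."""
    items = list(responses[0].keys())
    stats = {i: (mean(r[i] for r in responses), pstdev(r[i] for r in responses))
             for i in items}
    out = []
    for r in responses:
        zs = [(r[i] - stats[i][0]) / stats[i][1] for i in items]
        out.append(mean(zs))
    return out

data = [  # hypothetical 1-5 concern ratings
    {"mental_health": 4, "privacy": 5, "climate": 3, "labor": 4},
    {"mental_health": 2, "privacy": 3, "climate": 2, "labor": 1},
    {"mental_health": 5, "privacy": 4, "climate": 4, "labor": 5},
]
print([round(x, 2) for x in composite_index(data)])
```

By construction the z-scored index averages to zero across respondents, so values are interpretable as above- or below-average concern.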
The analysis is framed through the integrated lens of the Technology-Organization-Environment (TOE) framework and Institutional Theory to provide a multi-faceted understanding of adoption dynamics.
Stated theoretical framing and analytical approach in the study (methodological claim).
The research synthesizes evidence from a wide array of sources, including recent academic literature by Nigerian scholars, NPA official performance reports, policy documents, and international trade facilitation reports (e.g., UNCTAD).
Explicit description of data sources in the study methodology; method: secondary data synthesis (no sample size applicable).
This study investigates the current state of adoption, the prevailing barriers, and the resultant performance outcomes of digital and AI-driven logistics within Nigeria’s maritime supply chain.
Stated study aim and scope; method: rigorous secondary data analysis drawing on multiple documentary sources (Nigerian academic literature, NPA reports, policy documents, UNCTAD).
This study uses a conceptual and analytical approach to examine the impact of AI and automation on work.
Stated methodology in the paper's abstract/introduction: methodological description that the study is conceptual and analytical; no empirical sample or quantitative data reported.
The study integrates Fuzzy Best Worst Method (BWM), PROMETHEE II, and DEMATEL (Fuzzy BWM-PROMETHEE II-DEMATEL) as a three-stage MCDM framework for prioritization and causal analysis of barriers.
Methodology explicitly described in paper: literature survey + expert knowledge feeding into integrated Fuzzy BWM, PROMETHEE II, and Fuzzy DEMATEL analyses.
This study investigates the barriers to the adoption of Industry 4.0 (I4.0) in the Thai automotive industry to inform firms and policymakers.
Stated research aim in paper; approach based on literature survey and expert knowledge; three-stage multi-criteria decision-making (MCDM) model used. (Sample size of experts / respondents not specified in the provided text.)
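The full Fuzzy BWM-PROMETHEE II-DEMATEL pipeline is involved, but the PROMETHEE II stage reduces to net outranking flows. A minimal sketch of that one stage with crisp (non-fuzzy) weights and the usual preference criterion; the barrier names, severity scores, and weights are hypothetical:

```python
# Minimal PROMETHEE II net-flow ranking (crisp weights, usual
# criterion), one stage of the paper's three-stage MCDM framework.
# Barriers, criterion scores, and weights are hypothetical.

def promethee_ii(scores, weights):
    """scores: {alternative: [criterion values]}; higher = more severe.
    Returns the net outranking flow phi for each alternative."""
    alts = list(scores)
    n = len(alts)
    def pref(a, b):  # usual criterion: full preference weight when a > b
        return sum(w for w, x, y in zip(weights, scores[a], scores[b]) if x > y)
    phi = {}
    for a in alts:
        plus = sum(pref(a, b) for b in alts if b != a) / (n - 1)   # phi+
        minus = sum(pref(b, a) for b in alts if b != a) / (n - 1)  # phi-
        phi[a] = plus - minus
    return phi

barriers = {"lack_of_skills": [4, 5, 3], "high_cost": [5, 3, 4],
            "legacy_systems": [2, 2, 3]}
weights = [0.5, 0.3, 0.2]  # hypothetical criterion weights
phi = promethee_ii(barriers, weights)
print(sorted(phi, key=phi.get, reverse=True))  # most critical barrier first
```

Net flows sum to zero across alternatives, which is a quick sanity check on any PROMETHEE II implementation.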
The paper's findings are based on a combination of literature review, data analysis, and an empirical study involving HR professionals.
Methodological description given in the paper's summary (no further methodological details, sample size, instruments, or statistical methods provided in the summary).
The adoption and implementation of AI in entrepreneurial firms is an under-studied area of research.
Paper's literature review and motivation statement asserting limited empirical research on AI adoption in entrepreneurial contexts.
The study collected data from 207 entrepreneurial businesses (including SMEs, startups, and knowledge-based businesses) using a structured questionnaire and analyzed the data using Partial Least Squares Structural Equation Modeling (PLS-SEM) with SmartPLS 3.
Structured questionnaire administered to a sample of 207 entrepreneurial businesses; analysis conducted with PLS-SEM (SmartPLS 3) as reported in the paper.
Data were collected using a structured questionnaire and analyzed using Structural Equation Modeling (SEM).
Explicit methodological statement in the paper's summary.
The study draws extensively on contemporary literature in sustainable supply chain management, healthcare procurement, and ESG governance.
Methodological claim about the paper's research approach: literature review/synthesis across the cited domains (bibliographic evidence within the paper).
A complete evaluation methodology is specified, including baselines and an ablation design.
Paper claims to specify evaluation methodology with baselines and ablation; details presumably in the methods section.
The paper formalizes two testable hypotheses on security coverage and latency overhead.
Explicit statement in the paper that two testable hypotheses are formalized (security coverage and latency overhead); no experimental results shown in the abstract.
The paper empirically analyzes the algorithm-automated versus human decision-making debate using the AST and STS theoretical lenses.
Theoretical analysis and empirical synthesis across the reviewed studies (n=85), explicitly stated use of AST and STS frameworks to interpret findings.
To address the duality of benefits and harms, the paper proposes a dynamic Human-in-the-Loop (HITL) model that reconciles algorithmic determinism with normative HRM demands.
Conceptual/theoretical contribution presented in the paper (proposed HITL model based on synthesis of findings and theory).
There is substantial heterogeneity in effects (I^2 = 74%), indicating variability across studies.
Meta-analytic heterogeneity statistic reported in the paper (I^2 = 74%).
This study analyzes 28 papers (secondary studies and research agendas) published since 2023.
Systematic literature review conducted by the authors of secondary studies and research agendas; sample size explicitly reported as 28 papers; timeframe specified as 'since 2023'.
Three contributions are presented: the Agentic AI Framework (AAF 3.0); a cross-domain synthesis formalising the inverse evidence–complexity relationship; and a phased sociotechnical roadmap integrating governance sequencing, reimbursement reform, and equity safeguards.
Descriptive claim about the paper's outputs. These contributions are stated in the abstract as the study's deliverables based on the narrative review and synthesis of 81 sources.
Agentic AI is defined as autonomous, goal-directed systems capable of multi-step workflow coordination.
Definition provided by the authors within the paper (conceptual framing used for the review).
This structured narrative review of 81 sources (2020–2025) evaluates whether Agentic AI ... can support structural adaptation in ageing health systems.
Methodological statement in the paper: the study is a structured narrative review of 81 sources from 2020–2025.
The framework is depicted across organization areas with primary focus on strategic management and workforce decision-making and secondary focus on finance, operations, and marketing.
Descriptive claim based on the conceptual framework and its mapping to organizational domains within the paper. No empirical application or case studies reported.
This paper outlines a Human–AI Collaborative Decision Analytics Framework integrating five overlapping layers: data, AI analytics, business analytics interpretation, human judgment, and feedback learning.
Presentation of a conceptual framework developed by the authors (conceptual/modeling contribution). No empirical validation reported.
The results presented in the paper are based on a literature search, an analysis of individual tasks across different occupations (conducted within Erasmus+ projects), and discussions with trainers/educators.
Methodological statement from the paper; indicates the types of evidence used. The abstract does not provide numbers for analyzed tasks, the number of occupations, details of Erasmus+ projects, or counts of trainers/educators consulted.
Research has insufficiently modeled joint distributional outcomes and environmental performance, and lacks integrated evaluation of AI-enabled sustainable finance under heterogeneous disclosure regimes.
Review-level identification of methodological gaps across the surveyed literature (authors' synthesis of existing studies and their limitations).
There is a shortage of long-horizon causal evidence on non-linear coupling between digitalization and decarbonization, limiting robust policy inference.
Meta-level assessment in the review noting gaps in existing empirical literature (review authors' synthesis of the field; claim about research availability rather than primary data).
Competency mapping involves identifying and aligning the critical skills, knowledge, and abilities required for specific job roles.
Definition provided in the paper (conceptual).
A stratified random sampling method was employed to select a representative sample of 500 IT employees, informed by a pilot study constituting 0.50 percent of the total population.
Sampling description provided in the methods section: stratified random sampling, sample size = 500, pilot study size referenced as 0.50% of population.
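Proportional stratified sampling of the kind described can be sketched as follows: per-stratum quotas proportional to stratum size, drawn at random. The strata and population sizes are hypothetical; only the total of 500 matches the study:

```python
# Sketch of proportional stratified random sampling: draw a fixed
# total (500, matching the study) with per-stratum quotas proportional
# to stratum size. Strata and unit ids are hypothetical.
import random

def stratified_sample(strata, total):
    """strata: {name: list of unit ids}; returns {name: sampled ids}.
    Note: rounding can leave a remainder; a real design would adjust quotas."""
    pop = sum(len(v) for v in strata.values())
    rng = random.Random(42)  # fixed seed for reproducibility
    out = {}
    for name, units in strata.items():
        quota = round(total * len(units) / pop)
        out[name] = rng.sample(units, quota)
    return out

strata = {"developers": list(range(6000)),
          "testers": list(range(6000, 8000)),
          "admins": list(range(8000, 10000))}
sample = stratified_sample(strata, 500)
print({k: len(v) for k, v in sample.items()})
# {'developers': 300, 'testers': 100, 'admins': 100}
```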
The study analyzes data from the period 2021 to 2023 using Multiple Regression Analysis as the principal analytical technique.
Methods statement provided in the paper (timeframe and analytical method).
The primary objective of this research is to examine the impact of AI adoption on competency mapping practices in the IT sector.
Explicitly stated research objective in the paper.
This study uses panel data from 30 Chinese provinces (2011–2022) and estimates a spatial simultaneous equations model using the Generalized Spatial Three-Stage Least Squares (GS3SLS) approach.
Described methodology in the paper: panel dataset covering 30 provinces over 2011–2022 (12 years), spatial simultaneous equations estimated by GS3SLS.
Deterministic automated verifiers provide objective pass/fail checks for task success.
Methods section: verifiers are deterministic and automated, enabling objective evaluation of whether an agent's trajectory accomplished the task.
Scale of experiments: seven agent–model configurations and 7,308 execution trajectories were used to compute pass rates and deltas.
Reported experimental scale in Methods: 7 agent–model configurations and a total of 7,308 agent execution traces collected and analyzed across tasks/conditions.
Each task was evaluated under three conditions: (1) no Skills, (2) curated (human-authored) Skills, and (3) self-authored (model-generated) Skills.
Experimental protocol described in Methods: three-arm evaluation per task across the SkillsBench benchmark.
SkillsBench benchmark: evaluates 86 tasks spanning 11 domains with deterministic, automated verifiers.
Dataset and benchmark description in the paper: SkillsBench contains 86 tasks across 11 domains and uses deterministic pass/fail verifiers for objective evaluation.
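The evaluation logic described across these entries (deterministic pass/fail verifiers, pass rates per condition, deltas against the no-Skills baseline) can be sketched as below. The trajectory structure and verifier are toy assumptions, not SkillsBench internals:

```python
# Sketch of the three-condition evaluation: a deterministic verifier
# maps each trajectory to pass/fail; pass rates per condition are
# compared against the no-Skills baseline. Toy data and verifier.

def pass_rate(trajectories, verifier):
    results = [verifier(t) for t in trajectories]
    return sum(results) / len(results)

# Toy deterministic verifier: a trajectory "passes" when its final
# state equals the task's expected output (hypothetical structure).
def verifier(traj):
    return traj["final_state"] == traj["expected"]

runs = {
    "no_skills":     [{"final_state": i % 2, "expected": 0} for i in range(10)],
    "curated":       [{"final_state": 0, "expected": 0} for _ in range(10)],
    "self_authored": [{"final_state": i % 5 == 0, "expected": True} for i in range(10)],
}
rates = {cond: pass_rate(t, verifier) for cond, t in runs.items()}
deltas = {cond: rates[cond] - rates["no_skills"] for cond in rates}
print(rates, deltas)
```

Because the verifier is deterministic, re-running it on stored trajectories reproduces the same pass rates exactly, which is what makes the reported deltas auditable.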
Research should prioritize dynamic, task-based models that include transitional frictions, heterogeneous agents, and sectoral structure to better measure AI exposure and impacts.
Methodological recommendation grounded in the paper's theoretical critique of static occupation-level automation metrics and noted empirical gaps.
Timing uncertainty and measurement challenges make forecasting the pace and scale of AI-induced employment change inherently uncertain.
Methodological limitations section noting uncertainty in AI adoption speed and difficulties mapping capabilities to tasks and predicting new occupation emergence.
Research agenda: there is a need for causal studies on AI’s impact on accounting labor demand and firm performance, analyses of distributional effects across firm sizes and industries, and evaluation of regulatory frameworks for reliable, interpretable AI in financial reporting.
Author-stated research priorities drawn from gaps identified in the literature review; not an empirical finding.
Policy implications include workforce retraining, standards for AI auditability and transparency, and regulation balancing innovation and controls (privacy, fraud prevention).
Policy recommendations based on identified risks and barriers discussed in the paper rather than empirical policy evaluation.
For stronger causal evidence, recommended empirical methods include difference-in-differences on adopting firms vs. controls, matched samples, and randomized pilots for particular tools, supplemented by qualitative interviews.
Methodological recommendations stated in the paper (not an empirical finding); no implementation/sample reported in the abstract.
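In the canonical 2x2 case, the recommended difference-in-differences design reduces to a double difference of group-period means. A minimal sketch with hypothetical adopter/control outcomes:

```python
# Canonical 2x2 difference-in-differences: the treatment effect is
# the treated group's pre/post change minus the control group's.
# Outcome values below are hypothetical.
from statistics import mean

def did_estimate(treat_pre, treat_post, ctrl_pre, ctrl_post):
    """DiD = (treated post-pre change) - (control post-pre change)."""
    return (mean(treat_post) - mean(treat_pre)) - (mean(ctrl_post) - mean(ctrl_pre))

# Hypothetical productivity outcomes before/after tool adoption
treat_pre, treat_post = [10, 12, 11], [15, 16, 17]
ctrl_pre, ctrl_post = [10, 11, 12], [12, 12, 13]
print(did_estimate(treat_pre, treat_post, ctrl_pre, ctrl_post))
```

The control group's change nets out common time trends, which is the identifying assumption (parallel trends) that matched samples are meant to make more plausible.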
Actionable research priorities include running larger-scale field trials linking game use to observed land-use and economic outcomes, developing validation protocols for game-backed models against empirical on-farm data, studying heterogeneity of impacts, and designing incentive mechanisms that leverage game-demonstrated profitability co-benefits.
Synthesis-driven recommendations based on identified evidence gaps—specifically the predominance of small-scale/qualitative studies and lack of long-term/causal evidence.
Rigorous economic evaluation (RCTs, quasi-experiments) is needed to quantify how game-enhanced DSTs affect investment, land-use choices, emissions outcomes, and farm incomes.
Chapter recommendation grounded in observed gaps: the literature lacks sufficiently rigorous causal impact evaluations; current evidence is largely qualitative or observational.
Empirical approach measured and compared expectation formation, innovation responses, and pipeline outcomes across local exposure to closures and across distinct entrepreneurial identity groups.
Methodological description: survey-based, cross-country quantitative approach using measures of local exposure (nearby closures), identity classification (family/purpose-driven vs. wealth-driven), and outcomes (expectations, perceived impediments, self-reported innovation, pipeline transitions) in a sample >27,000.
The study analyzes a cross-country sample of more than 27,000 entrepreneurs across 43 countries (survey-based, comparative).
Descriptive claim about the dataset used in the paper: survey-based sample size >27,000 spanning 43 countries as reported in Data & Methods.
The paper's evidence is policy-oriented, qualitative, and analytical; it does not report causal estimates from new field data, instead producing testable propositions and an empirical agenda.
Explicit methods statement in the paper: structured desk review, corridor process mapping, governance gap analysis; absence of field experiments or causal quantitative analysis.
The empirical strategy uses baseline panel regressions with standard controls (e.g., firm size, performance, leverage) and fixed effects to estimate the AI → pay relationship.
Methods section describing regression specifications including firm controls and fixed effects applied to the A-share firm panel.
Data consist of a panel of Chinese A-share listed companies covering 2007–2023.
Data description in the paper specifying the sample period and population (A-share listed firms, 2007–2023).
The firm-level AI application indicator is constructed via textual analysis of corporate disclosures (e.g., filings/annual reports) to capture AI application intensity.
Methodological description in the paper describing text-based construction of an AI application indicator from corporate disclosures for listed firms in the 2007–2023 sample.
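Text-based indicators of this kind are typically built by counting dictionary terms in disclosures and normalizing by document length. A minimal sketch; the keyword list and sample text are hypothetical, not the paper's dictionary:

```python
# Sketch of a text-based AI application indicator: count AI-related
# keyword mentions in a disclosure, normalized by word count.
# Keyword list and sample text are hypothetical, not the paper's.
import re

AI_TERMS = {"artificial intelligence", "machine learning", "deep learning",
            "intelligent algorithm", "neural network"}

def ai_intensity(text):
    text = text.lower()
    hits = sum(len(re.findall(re.escape(term), text)) for term in AI_TERMS)
    words = len(text.split())
    return hits / words if words else 0.0

report = ("The company applies machine learning and deep learning to risk "
          "control, and continues to invest in artificial intelligence.")
print(round(ai_intensity(report), 4))
```

In firm-panel applications the resulting intensity is usually logged or standardized before entering the regressions described above.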