Evidence (1902 claims shown; filter: Skills & Training)

Claims by topic: Adoption (5126) · Productivity (4409) · Governance (4049) · Human-AI Collaboration (2954) · Labor Markets (2432) · Org Design (2273) · Innovation (2215) · Skills & Training (1902) · Inequality (1286)
Evidence Matrix
Claim counts by outcome category and direction of finding ("—" indicates no claims in that cell).
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 369 | 105 | 58 | 432 | 972 |
| Governance & Regulation | 365 | 171 | 113 | 54 | 713 |
| Research Productivity | 229 | 95 | 33 | 294 | 655 |
| Organizational Efficiency | 354 | 82 | 58 | 34 | 531 |
| Technology Adoption Rate | 277 | 115 | 63 | 27 | 486 |
| Firm Productivity | 273 | 33 | 68 | 10 | 389 |
| AI Safety & Ethics | 112 | 177 | 43 | 24 | 358 |
| Output Quality | 228 | 61 | 23 | 25 | 337 |
| Market Structure | 105 | 118 | 81 | 14 | 323 |
| Decision Quality | 154 | 68 | 33 | 17 | 275 |
| Employment Level | 68 | 32 | 74 | 8 | 184 |
| Fiscal & Macroeconomic | 74 | 52 | 32 | 21 | 183 |
| Skill Acquisition | 85 | 31 | 38 | 9 | 163 |
| Firm Revenue | 96 | 30 | 22 | — | 148 |
| Innovation Output | 100 | 11 | 20 | 11 | 143 |
| Consumer Welfare | 66 | 29 | 35 | 7 | 137 |
| Regulatory Compliance | 51 | 61 | 13 | 3 | 128 |
| Inequality Measures | 24 | 66 | 31 | 4 | 125 |
| Task Allocation | 64 | 6 | 28 | 6 | 104 |
| Error Rate | 42 | 47 | 6 | — | 95 |
| Training Effectiveness | 55 | 12 | 10 | 16 | 93 |
| Worker Satisfaction | 42 | 32 | 11 | 6 | 91 |
| Task Completion Time | 71 | 5 | 3 | 1 | 80 |
| Wages & Compensation | 38 | 13 | 19 | 4 | 74 |
| Team Performance | 41 | 8 | 15 | 7 | 72 |
| Hiring & Recruitment | 39 | 4 | 6 | 3 | 52 |
| Automation Exposure | 17 | 15 | 9 | 5 | 46 |
| Job Displacement | 5 | 28 | 12 | — | 45 |
| Social Protection | 18 | 8 | 6 | 1 | 33 |
| Developer Productivity | 25 | 1 | 2 | 1 | 29 |
| Worker Turnover | 10 | 12 | — | 3 | 25 |
| Creative Output | 15 | 5 | 3 | 1 | 24 |
| Skill Obsolescence | 3 | 18 | 2 | — | 23 |
| Labor Share of Income | 7 | 4 | 9 | — | 20 |
Skills & Training (filtered claims)
The study employs an input–output (I–O) modeling framework using IMPLAN 2022 data to estimate direct, indirect, and induced impacts of investments in greenhouse and robotics sectors for Northwest Indiana as part of Project TRAVERSE.
Explicit methodological statement in the paper: use of IMPLAN 2022 I–O model; geographic scope NWI; linkage to EDA Project TRAVERSE.
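IMPLAN itself is proprietary, but the underlying I–O logic can be illustrated with a minimal Leontief sketch in Python. The two-sector coefficient matrix and investment shock below are invented for illustration, not taken from the study or from IMPLAN 2022 data.

```python
import numpy as np

# Minimal Leontief input-output sketch; all values are hypothetical.
# A[i, j] = dollars of input from sector i per dollar of sector j output.
A = np.array([
    [0.10, 0.05],   # greenhouse sector
    [0.15, 0.20],   # robotics sector
])

# Hypothetical final-demand shock from new investment ($M).
f = np.array([10.0, 25.0])

# Total output requirement x = (I - A)^{-1} f captures direct plus
# indirect (supply-chain) effects; IMPLAN's induced effects would
# additionally require endogenizing household spending.
x = np.linalg.solve(np.eye(2) - A, f)
print("Total output by sector ($M):", x.round(2))
print("Indirect multiplier effect ($M):", round(x.sum() - f.sum(), 2))
```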
We extract the Big 5 personality traits from facial images of 96,000 MBA graduates using AI-based image analysis linked to LinkedIn microdata.
Methodological claim reported in the paper: AI-based model applied to facial images linked to LinkedIn microdata for a sample of 96,000 MBA graduates; extraction yields 'Photo Big 5' trait scores.
The study is limited by the scope of available industry data and the generalisability of case study findings.
Explicit limitation reported in the paper summary stating constraints related to industry data availability and generalisability of case studies.
The research adopts a mixed-methods approach, combining theoretical analysis with empirical insights, using data gathered from a Scopus search on 'AI-driven transformation'.
Explicit methodological statement in the paper summary: mixed-methods design with Scopus as the data source. (No further methodological details or sample counts are provided in the summary.)
Future research could strengthen causal identification by exploiting exogenous policy shocks rather than relying solely on matching methods like PSM.
Authors' methodological suggestion for future work, based on limitations of current causal inference strategy (PSM and observational panel regression).
Propensity Score Matching (PSM) and other robustness checks were used to mitigate selection bias and support the causal interpretation of AI's effects.
Paper reports use of Propensity Score Matching in robustness analyses on the panel of A-share-listed design firms (2014–2023).
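To make the PSM step concrete, here is a minimal nearest-neighbor matching sketch on synthetic data; the covariates, selection process, and outcome model are hypothetical, not the paper's specification.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic firm-level data: covariates X, AI-adoption indicator, outcome y.
n = 500
X = rng.normal(size=(n, 3))                             # e.g., size, age, leverage
treated = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))   # selection on observables
y = 0.5 * treated + X[:, 0] + rng.normal(size=n)

# Step 1: estimate propensity scores P(treated | X).
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# Step 2: nearest-neighbor matching on the propensity score (with replacement).
t_idx, c_idx = np.where(treated == 1)[0], np.where(treated == 0)[0]
matches = c_idx[np.abs(ps[c_idx][None, :] - ps[t_idx][:, None]).argmin(axis=1)]

# Step 3: average treatment effect on the treated (ATT).
att = (y[t_idx] - y[matches]).mean()
print(f"ATT estimate: {att:.3f}")
```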
The paper operationalizes firm-level AI exposure by constructing an AI lexicon via natural language processing and applying text analysis to annual reports and patents to generate enterprise-level AI indicators.
Described methodology: NLP to generate an AI lexicon and text-analysis of annual reports and patents to build AI measures for each listed design enterprise in the 2014–2023 panel.
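A minimal sketch of the text-analysis step: counting lexicon hits per 1,000 words of report text. The lexicon terms below are placeholders; the paper derives its own lexicon via NLP.

```python
import re

# Hypothetical AI lexicon (the paper constructs its own via NLP).
AI_LEXICON = {"artificial intelligence", "machine learning", "deep learning",
              "neural network", "computer vision", "natural language processing"}

def ai_exposure(report_text: str) -> float:
    """AI-term hits per 1,000 words in an annual report or patent text."""
    words = re.findall(r"[a-z]+", report_text.lower())
    text = " ".join(words)
    hits = sum(text.count(term) for term in AI_LEXICON)
    return 1000 * hits / max(len(words), 1)

print(ai_exposure("We invest in machine learning and computer vision tools."))
```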
A composite index capturing concerns about mental health, privacy, climate impact, and labor market disruption was constructed to measure societal risk perceptions of AI.
Author-constructed composite index derived from survey items on mental health, privacy, climate, and labor market disruption concerns in the 2023–2024 UK survey.
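One common way to build such a composite, sketched below under the assumption of equally weighted items: standardize each survey item and average the z-scores. The item scales and values are hypothetical, not the survey's.

```python
import numpy as np

# Hypothetical survey items (0-10 concern scales); columns:
# mental health, privacy, climate impact, labor market disruption.
responses = np.array([
    [7, 8, 3, 6],
    [2, 5, 4, 3],
    [9, 9, 6, 8],
], dtype=float)

# Standardize each item across respondents, then average across items.
z = (responses - responses.mean(axis=0)) / responses.std(axis=0)
societal_risk_index = z.mean(axis=1)
print(societal_risk_index.round(2))
```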
The analysis is framed through the integrated lens of the Technology-Organization-Environment (TOE) framework and Institutional Theory to provide a multi-faceted understanding of adoption dynamics.
Stated theoretical framing and analytical approach in the study (methodological claim).
The research synthesizes evidence from a wide array of sources, including recent academic literature by Nigerian scholars, NPA official performance reports, policy documents, and international trade facilitation reports (e.g., UNCTAD).
Explicit description of data sources in the study methodology; method: secondary data synthesis (no sample size applicable).
This study investigates the current state of adoption, the prevailing barriers, and the resultant performance outcomes of digital and AI-driven logistics within Nigeria’s maritime supply chain.
Stated study aim and scope; method: rigorous secondary data analysis drawing on multiple documentary sources (Nigerian academic literature, NPA reports, policy documents, UNCTAD).
This study uses a conceptual and analytical approach to examine the impact of AI and automation on work.
Stated methodology in the paper's abstract/introduction: methodological description that the study is conceptual and analytical; no empirical sample or quantitative data reported.
The study integrates the Fuzzy Best Worst Method (BWM), PROMETHEE II, and Fuzzy DEMATEL into a three-stage MCDM framework (Fuzzy BWM-PROMETHEE II-DEMATEL) for prioritizing barriers and analyzing causal relationships among them.
Methodology explicitly described in paper: literature survey + expert knowledge feeding into integrated Fuzzy BWM, PROMETHEE II, and Fuzzy DEMATEL analyses.
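The full three-stage pipeline is too long to reproduce, but the PROMETHEE II stage can be sketched compactly: aggregate pairwise preferences with criterion weights (here assumed to come from the BWM stage), then rank barriers by net outranking flow. All scores and weights below are invented.

```python
import numpy as np

# Barrier scores on criteria (rows: barriers, cols: criteria), hypothetical.
F = np.array([
    [0.8, 0.3, 0.6],
    [0.5, 0.9, 0.4],
    [0.2, 0.6, 0.7],
])
w = np.array([0.5, 0.3, 0.2])   # criterion weights (from the BWM stage)

n = len(F)
# Usual preference function: P_j(a, b) = 1 if f_j(a) > f_j(b), else 0.
pref = (F[:, None, :] > F[None, :, :]).astype(float)   # shape (n, n, m)
pi = (pref * w).sum(axis=2)                            # aggregated pi(a, b)

phi_plus = pi.sum(axis=1) / (n - 1)    # leaving flow
phi_minus = pi.sum(axis=0) / (n - 1)   # entering flow
net_flow = phi_plus - phi_minus        # PROMETHEE II complete ranking
print("Net flows:", net_flow.round(3), "-> ranking:", np.argsort(-net_flow))
```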
This study investigates the barriers to the adoption of Industry 4.0 (I4.0) in the Thai automotive industry to inform firms and policymakers.
Stated research aim in paper; approach based on literature survey and expert knowledge; three-stage multi-criteria decision-making (MCDM) model used. (Sample size of experts / respondents not specified in the provided text.)
The paper's findings are based on a combination of literature review, data analysis, and an empirical study involving HR professionals.
Methodological description given in the paper's summary (no further methodological details, sample size, instruments, or statistical methods provided in the summary).
We conducted preregistered experiments in two tasks (a sentiment-analysis task and a geography-guessing task) to study whether user characteristics influence the effectiveness of AI explanations.
Preregistered experimental studies described in the paper; two distinct tasks (sentiment-analysis and geography-guessing). (Sample sizes and additional procedural details are not provided in the excerpt.)
The framework is mapped across organizational areas, with a primary focus on strategic management and workforce decision-making and a secondary focus on finance, operations, and marketing.
Descriptive claim based on the conceptual framework and its mapping to organizational domains within the paper. No empirical application or case studies reported.
This paper outlines a Human–AI Collaborative Decision Analytics Framework integrating five overlapping layers: data, AI analytics, business analytics interpretation, human judgment, and feedback learning.
Presentation of a conceptual framework developed by the authors (conceptual/modeling contribution). No empirical validation reported.
The results presented in the paper are based on a literature review, an analysis of individual tasks across different occupations (conducted within Erasmus+ projects), and discussions with trainers/educators.
Methodological statement from the paper; indicates the types of evidence used. The abstract does not provide numbers for analyzed tasks, the number of occupations, details of Erasmus+ projects, or counts of trainers/educators consulted.
The paper identifies key research gaps and proposes a future research agenda focused on human–AI interaction, organizational governance, and ethical accountability.
Conclusions/recommendations from the conceptual meta-analysis (paper-generated research agenda; no empirical testing reported in abstract).
This study presents a conceptual meta-analysis of interdisciplinary literature on AI-augmented decision-making in organizations.
Methodological statement of the paper (the paper itself is a conceptual meta-analysis); no primary empirical sample reported in the abstract.
Research has insufficiently modeled joint distributional outcomes and environmental performance, and lacks integrated evaluation of AI-enabled sustainable finance under heterogeneous disclosure regimes.
Review-level identification of methodological gaps across the surveyed literature (authors' synthesis of existing studies and their limitations).
There is a shortage of long-horizon causal evidence on non-linear coupling between digitalization and decarbonization, limiting robust policy inference.
Meta-level assessment in the review noting gaps in existing empirical literature (review authors' synthesis of the field; claim about research availability rather than primary data).
Competency mapping involves identifying and aligning the critical skills, knowledge, and abilities required for specific job roles.
Definition provided in the paper (conceptual).
A stratified random sampling method was employed to select a representative sample of 500 IT employees, based on a pilot study constituting 0.50 percent of the total population.
Sampling description provided in the methods section: stratified random sampling, sample size = 500, pilot study size referenced as 0.50% of population.
The study analyzes data from the period 2021 to 2023 using Multiple Regression Analysis as the principal analytical technique.
Methods statement provided in the paper (timeframe and analytical method).
The primary objective of this research is to examine the impact of AI adoption on competency mapping practices in the IT sector.
Explicitly stated research objective in the paper.
A Job Digital Intensity Index (JDII) was constructed to capture how digitally intensive jobs are overall, based on the range of digital tasks performed.
Methodological construction described in the report using ESJS digital task items to form a composite JDII.
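A minimal sketch of one plausible composite of this kind: the share of surveyed digital tasks a worker performs, scaled to 0-1. The task items below are placeholders rather than actual ESJS items; the real JDII follows the report's own construction.

```python
# Hypothetical digital task indicators (the actual ESJS items differ).
DIGITAL_TASKS = ["word_processing", "spreadsheets", "programming",
                 "data_analysis", "online_collaboration", "machine_operation"]

def jdii(task_responses: dict[str, bool]) -> float:
    """Return a 0-1 digital-intensity score from binary task indicators."""
    done = sum(task_responses.get(t, False) for t in DIGITAL_TASKS)
    return done / len(DIGITAL_TASKS)

print(jdii({"word_processing": True, "spreadsheets": True, "programming": False}))
```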
The 2024 University of Phoenix Career Optimism Index® is a nationally representative survey of 5,000 U.S. workers and 501 employers.
Descriptive/methodological statement in the paper: a nationally representative cross-sectional survey (University of Phoenix Career Optimism Index®) with sample sizes of 5,000 U.S. workers and 501 employers.
Deterministic automated verifiers provide objective pass/fail checks for task success.
Methods section: verifiers are deterministic and automated, enabling objective evaluation of whether an agent's trajectory accomplished the task.
Scale of experiments: seven agent–model configurations and 7,308 execution trajectories were used to compute pass rates and deltas.
Reported experimental scale in Methods: 7 agent–model configurations and a total of 7,308 agent execution traces collected and analyzed across tasks/conditions.
Each task was evaluated under three conditions: (1) no Skills, (2) curated (human-authored) Skills, and (3) self-authored (model-generated) Skills.
Experimental protocol described in Methods: three-arm evaluation per task across the SkillsBench benchmark.
SkillsBench benchmark: evaluates 86 tasks spanning 11 domains with deterministic, automated verifiers.
Dataset and benchmark description in the paper: SkillsBench contains 86 tasks across 11 domains and uses deterministic pass/fail verifiers for objective evaluation.
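Putting the pieces above together, here is a sketch of computing per-configuration pass rates from deterministic verifier outputs and deltas against the no-Skills baseline. The record layout and entries are hypothetical, not SkillsBench's actual data format.

```python
from collections import defaultdict

# Each record: (agent_model_config, condition, task_id, passed), where
# `passed` is a deterministic verifier's pass/fail check (entries invented).
trajectories = [
    ("agent-a", "no_skills", "task-01", False),
    ("agent-a", "curated_skills", "task-01", True),
    ("agent-a", "self_authored_skills", "task-01", True),
    # ... one record per execution trajectory (7,308 in the paper)
]

totals, passes = defaultdict(int), defaultdict(int)
for config, condition, _task, passed in trajectories:
    key = (config, condition)
    totals[key] += 1
    passes[key] += passed

pass_rate = {k: passes[k] / totals[k] for k in totals}
# Delta of each Skills condition vs. the no-Skills baseline per configuration.
for (config, condition), rate in pass_rate.items():
    if condition != "no_skills":
        base = pass_rate.get((config, "no_skills"), 0.0)
        print(config, condition, f"delta = {rate - base:+.2f}")
```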
Research should prioritize dynamic, task-based models that include transitional frictions, heterogeneous agents, and sectoral structure to better measure AI exposure and impacts.
Methodological recommendation grounded in the paper's theoretical critique of static occupation-level automation metrics and noted empirical gaps.
Timing uncertainty and measurement challenges make forecasting the pace and scale of AI-induced employment change inherently uncertain.
Methodological limitations section noting uncertainty in AI adoption speed and difficulties mapping capabilities to tasks and predicting new occupation emergence.
Research agenda: there is a need for causal studies on AI’s impact on accounting labor demand and firm performance, analyses of distributional effects across firm sizes and industries, and evaluation of regulatory frameworks for reliable, interpretable AI in financial reporting.
Author-stated research priorities drawn from gaps identified in the literature review; not an empirical finding.
Policy implications include workforce retraining, standards for AI auditability and transparency, and regulation balancing innovation and controls (privacy, fraud prevention).
Policy recommendations based on identified risks and barriers discussed in the paper rather than empirical policy evaluation.
For stronger causal evidence, recommended empirical methods include difference-in-differences on adopting firms vs. controls, matched samples, and randomized pilots for particular tools, supplemented by qualitative interviews.
Methodological recommendations stated in the paper (not an empirical finding); no implementation/sample reported in the abstract.
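Of the recommended designs, difference-in-differences is the most direct to sketch: regress the outcome on treatment, period, and their interaction, with the interaction coefficient as the DiD estimate. The synthetic data below (a +2 effect for adopters post-adoption) is invented for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Synthetic two-period firm panel: adopters gain +2 after adoption (post=1).
n = 400
df = pd.DataFrame({
    "adopter": np.repeat([0, 1], n // 2),
    "post": np.tile([0, 1], n // 2),
})
df["outcome"] = (1.0 * df["adopter"] + 0.5 * df["post"]
                 + 2.0 * df["adopter"] * df["post"]
                 + rng.normal(size=n))

# The DiD estimate is the coefficient on the interaction term.
model = smf.ols("outcome ~ adopter * post", data=df).fit()
print(f"DiD estimate: {model.params['adopter:post']:.2f}")  # ~= 2.0
```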
The empirical approach measured and compared expectation formation, innovation responses, and pipeline outcomes across local exposure to closures and across distinct entrepreneurial identity groups.
Methodological description: survey-based, cross-country quantitative approach using measures of local exposure (nearby closures), identity classification (family/purpose-driven vs. wealth-driven), and outcomes (expectations, perceived impediments, self-reported innovation, pipeline transitions) in a sample >27,000.
The study analyzes a cross-country sample of more than 27,000 entrepreneurs across 43 countries (survey-based, comparative).
Descriptive claim about the dataset used in the paper: survey-based sample size >27,000 spanning 43 countries as reported in Data & Methods.
The paper's evidence is policy-oriented, qualitative, and analytical; it does not report causal estimates from new field data, and instead produces testable propositions and an empirical agenda.
Explicit methods statement in the paper: structured desk review, corridor process mapping, governance gap analysis; absence of field experiments or causal quantitative analysis.
Calibration via Method of Simulated Moments (MSM) matches six empirical moments to discipline mechanism magnitudes.
Model calibration procedure reported in the paper: MSM matching six chosen empirical moments that summarize key pre/post-AI patterns (paper states six moments were used).
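MSM calibration minimizes a quadratic form in the gap between empirical and simulated moments, J(θ) = (m(θ) − m̂)ᵀ W (m(θ) − m̂). The sketch below uses six placeholder moments and a toy simulator, since the paper's model and moment definitions are not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize

# Six empirical target moments (placeholders, not the paper's moments).
m_hat = np.array([0.12, 0.35, 1.80, 0.05, 0.60, 0.25])
W = np.eye(6)   # weighting matrix (identity for simplicity)

def simulated_moments(theta: np.ndarray) -> np.ndarray:
    """Stand-in for simulating the model and computing the same six moments."""
    # Toy mapping from parameters to moments, purely for illustration.
    return np.array([theta[0], theta[1], theta[0] + theta[1],
                     theta[0] ** 2, 2 * theta[1], theta[0] * theta[1]])

def msm_objective(theta: np.ndarray) -> float:
    g = simulated_moments(theta) - m_hat      # moment discrepancies
    return float(g @ W @ g)                   # quadratic form J(theta)

result = minimize(msm_objective, x0=np.array([0.1, 0.5]), method="Nelder-Mead")
print("Calibrated parameters:", result.x.round(3))
```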
The paper highlights governance risks requiring transparency about LLM-derived mappings, mitigation of model biases, privacy-preserving data practices, and careful communication of uncertainty to avoid overconfident policy recommendations.
Explicit discussion of risks and governance considerations in the paper; this is an acknowledgment rather than an empirical claim. No implementation or audit evidence is provided.
Backtesting the architecture on historical automation waves and recent AI introductions is proposed as a way to validate model design and calibration.
Paper explicitly proposes backtesting and holdout validation using historical automation episodes and recent AI adoption events; does not report completed backtests or empirical sample sizes.
Evaluations reporting outcomes predominantly relied on learner surveys, knowledge/skill tests, or self‑reported behavior change measures.
Methods of evaluation extracted from the included studies: most used surveys, tests, or self-report measures to assess Kirkpatrick‑Barr levels 1–3.
The study used a cross-sectional quantitative survey (purposive sampling) of pharmaceutical-sector employees in Karnataka, India (N = 350) and analyzed relationships using PLS-SEM (SmartPLS 4.0).
Study design and methods as reported in the paper summary: cross-sectional survey, purposive sampling, N = 350, analysis via Partial Least Squares Structural Equation Modeling (SmartPLS 4.0).
Policy recommendations include: invest in open metadata standards; fund pilot programs to evaluate ROI (earnings, placement, employer satisfaction); require model governance and periodic external audits for AI-assisted curriculum tools; and support smaller providers via shared infrastructure or accreditation hubs.
Explicit policy recommendations in paper (prescriptive).
Careful attention is needed to the validity and reliability of assessments and to selection bias in measuring employment outcomes.
Paper's methodological caveat (prescriptive); no empirical bias analysis provided.
Suggested evaluation metrics include placement rates, wage premiums, competency attainment, compliance scores, cost per qualification, and update latency.
Paper's recommended evaluation metrics (prescriptive).
Implementation requires integration with information systems for documentation, versioning, metadata, and audit trails, and benefits from continuous monitoring dashboards.
Paper's technical implementation recommendations (prescriptive).