Evidence (1902 claims)
Claim counts by topic category:

- Adoption: 5126 claims
- Productivity: 4409 claims
- Governance: 4049 claims
- Human-AI Collaboration: 2954 claims
- Labor Markets: 2432 claims
- Org Design: 2273 claims
- Innovation: 2215 claims
- Skills & Training: 1902 claims
- Inequality: 1286 claims
Evidence Matrix
Claim counts by outcome category and direction of finding.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 369 | 105 | 58 | 432 | 972 |
| Governance & Regulation | 365 | 171 | 113 | 54 | 713 |
| Research Productivity | 229 | 95 | 33 | 294 | 655 |
| Organizational Efficiency | 354 | 82 | 58 | 34 | 531 |
| Technology Adoption Rate | 277 | 115 | 63 | 27 | 486 |
| Firm Productivity | 273 | 33 | 68 | 10 | 389 |
| AI Safety & Ethics | 112 | 177 | 43 | 24 | 358 |
| Output Quality | 228 | 61 | 23 | 25 | 337 |
| Market Structure | 105 | 118 | 81 | 14 | 323 |
| Decision Quality | 154 | 68 | 33 | 17 | 275 |
| Employment Level | 68 | 32 | 74 | 8 | 184 |
| Fiscal & Macroeconomic | 74 | 52 | 32 | 21 | 183 |
| Skill Acquisition | 85 | 31 | 38 | 9 | 163 |
| Firm Revenue | 96 | 30 | 22 | — | 148 |
| Innovation Output | 100 | 11 | 20 | 11 | 143 |
| Consumer Welfare | 66 | 29 | 35 | 7 | 137 |
| Regulatory Compliance | 51 | 61 | 13 | 3 | 128 |
| Inequality Measures | 24 | 66 | 31 | 4 | 125 |
| Task Allocation | 64 | 6 | 28 | 6 | 104 |
| Error Rate | 42 | 47 | 6 | — | 95 |
| Training Effectiveness | 55 | 12 | 10 | 16 | 93 |
| Worker Satisfaction | 42 | 32 | 11 | 6 | 91 |
| Task Completion Time | 71 | 5 | 3 | 1 | 80 |
| Wages & Compensation | 38 | 13 | 19 | 4 | 74 |
| Team Performance | 41 | 8 | 15 | 7 | 72 |
| Hiring & Recruitment | 39 | 4 | 6 | 3 | 52 |
| Automation Exposure | 17 | 15 | 9 | 5 | 46 |
| Job Displacement | 5 | 28 | 12 | — | 45 |
| Social Protection | 18 | 8 | 6 | 1 | 33 |
| Developer Productivity | 25 | 1 | 2 | 1 | 29 |
| Worker Turnover | 10 | 12 | — | 3 | 25 |
| Creative Output | 15 | 5 | 3 | 1 | 24 |
| Skill Obsolescence | 3 | 18 | 2 | — | 23 |
| Labor Share of Income | 7 | 4 | 9 | — | 20 |
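
As a reading aid, the short Python sketch below computes the share of positive findings among the four tabulated directions for a few rows of the matrix; the row values are copied from the table above, but the helper itself is illustrative and not part of the source material.

```python
# Illustrative only: share of positive findings among the four tabulated
# directions, using a few rows copied from the evidence matrix above.
matrix = {
    # outcome: (positive, negative, mixed, null)
    "Firm Productivity":    (273, 33, 68, 10),
    "AI Safety & Ethics":   (112, 177, 43, 24),
    "Inequality Measures":  (24, 66, 31, 4),
    "Task Completion Time": (71, 5, 3, 1),
}

for outcome, (pos, neg, mixed, null) in matrix.items():
    tabulated = pos + neg + mixed + null
    print(f"{outcome}: {pos / tabulated:.0%} positive of {tabulated} directional claims")
```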
Skills & Training
Recommended analysis methods are qualitative (semi-structured interviews, focus groups, document review) and quantitative (surveys, competency mapping, statistical analysis of outcomes), plus systematic audit methods including traceability checks.
Paper's methods section (methodological specification).
Data inputs for the framework should include competency taxonomies, labor-market signals, regulatory requirements, learner assessment results, and stakeholder interviews.
Paper's data-input specification (descriptive).
Management principles emphasised are transparency, traceability of outcomes, IT integration for documentation, and continuous monitoring/evaluation.
Explicit management principles in paper (prescriptive).
Research and audit should emphasise validity, reliability, and compliance using mixed methods (qualitative interviews/focus groups; quantitative surveys/statistics) and systematic curriculum audits.
Recommended research & audit approach in paper (methodological guidance).
Tools recommended include logigrams (visual decision/compliance flows) and algorigrams (algorithmic step-flows for planning, assessment, and audit).
Tool definitions and recommendations in paper (descriptive).
Core components of the framework are inputs (learner needs, industry requirements, regulatory standards), processes (curriculum mapping, competency alignment, career assessment), and outputs (structured lesson plans, compliance-ready frameworks, career-path documentation).
Framework component list provided in paper (descriptive).
Scope of the program includes curriculum design, organisational management, career-alignment, and audit/compliance processes.
Explicit scope statement in paper (descriptive).
The framework foregrounds logical modelling (logigrams, algorigrams) and mixed-methods data analysis to support design, auditability, and alignment with industry and regulatory standards.
Paper's methodological design and tool recommendations (conceptual). No empirical implementation data reported.
The program offers a comprehensive curriculum-engineering framework linking organizational orientation, management systems, lesson planning, and career assessment into traceable, compliance-ready curriculum products.
Paper's program description and framework specification (conceptual); no empirical evaluation or sample size reported.
The paper calls for subsequent quantitative validation (using task-based, matched employer-employee, and provider-level panel data) to estimate causal impacts on productivity, health outcomes, wages, and employment composition across the three interaction levels.
Stated research agenda and measurement recommendations in the paper's discussion section.
The study is qualitative and small-sample (four case) and therefore interpretive and illustrative rather than statistically generalizable.
Explicit methodological statement in the paper: design = qualitative multiple case study, sample = four AI healthcare applications.
The study identifies a three-level taxonomy of human–AI interaction in healthcare: AI-assisted, AI-augmented, and AI-automated.
Conceptual taxonomy derived from multiple qualitative case studies (n=4) using cross-case comparison and the three-dimensional service-innovation framework of Bolton et al. (2018).
Few longitudinal or randomized studies were found, which limits the evidence base for causal claims about digital transformation's effect on productivity.
Review recorded a limited number of longitudinal analyses and quasi-experimental designs among the 145 studies; randomized studies were scarce or absent.
Measurement heterogeneity across studies includes self-reported productivity, output-per-worker metrics, and process efficiency indicators.
Extraction of productivity indicators from included studies (detailed in Methods/Extraction fields) showed multiple distinct measurement approaches.
There is a lack of standardized instruments and inconsistent controls for confounding factors across studies, limiting causal inference about the effect of digital transformation on productivity.
Review extraction documented varied instruments/measures and inconsistent adjustment for confounders across the included studies; few randomized or robust longitudinal designs were found.
Heterogeneous definitions of 'digital transformation' and a variety of productivity measurement approaches prevented a formal quantitative meta-analysis.
Extraction found wide variation in how digital transformation and productivity were defined and measured across the 145 studies (self-reported productivity, output per worker, process efficiency metrics, etc.), leading authors to forgo meta-analysis.
535 records were identified across Scopus, Web of Science, ScienceDirect, IEEE Xplore, and Google Scholar, of which 145 met PRISMA 2020 inclusion criteria.
Search and screening procedure documented in the review: initial database searches yielded 535 records → duplicates removed → screening → full-text evaluation → 145 included studies.
Non-probability sampling and self-reported measures limit claims about prevalence and causality; cross-sectional design cannot capture dynamics of skill acquisition over time.
Study limitations explicitly reported by authors: non-probability sampling, self-reported measures, and cross-sectional design.
The study is primarily diagnostic and prescriptive rather than empirical: no explicit empirical dataset, causal identification strategy, or statistical estimation is reported.
Methods section of the paper explicitly characterizes the work as conceptual, systems-oriented, and not reporting empirical evaluation data.
Research recommendation: invest in longer-run, rigorous impact evaluations (RCTs, panel studies) and system-level assessments to capture spillovers and sustainability outcomes.
Authors' stated research agenda based on identified methodological gaps (limited long-term and system-level evidence) in the review.
There is variation in study design and quality in the evidence base (RCTs, quasi-experimental studies, observational case studies, pilots).
Methodological caveats noted by the authors summarizing the diversity of designs reported across reviewed studies.
The review used a structured literature review with thematic synthesis and a comparative effect-size analysis to quantify ranges for yield, cost, and efficiency outcomes.
Authors' description of review approach and analytical methods in the Data & Methods section.
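
For illustration only, the sketch below shows one way such a comparative range summary can be computed across studies; the effect values are placeholders, not figures from the review.

```python
# Illustrative comparative effect-size range summary (placeholder values,
# not figures from the review).
from statistics import median

reported_effects = {
    # outcome: per-study reported effects (e.g., % change)
    "yield":      [4.0, 12.5, 20.0, 7.5],
    "cost":       [-3.0, -15.0, -8.0],
    "efficiency": [5.0, 18.0, 11.0, 25.0],
}

for outcome, effects in reported_effects.items():
    print(f"{outcome}: min={min(effects):+.1f}%  "
          f"median={median(effects):+.1f}%  max={max(effects):+.1f}%")
```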
The evidence base reviewed comprises more than 60 peer-reviewed articles and institutional reports from 2020–2025, primarily focusing on Sub-Saharan Africa.
Statement in the paper's Data & Methods section describing the scope and composition of the review sample.
Effect sizes and impacts vary substantially across contexts—by crop, farm size, and institutional setting.
Comparative synthesis across studies showing heterogeneity in reported outcomes and authors' methodological caveats highlighting context dependence.
Technologies assessed in the review include predictive analytics, digital advisory systems, smart irrigation, pest/disease detection, and precision fertilization.
Descriptive synthesis of the types of AI and digital technologies evaluated across the >60 reviewed articles and reports (2020–2025).
These quantitative performance figures come from case‑level, high‑performer pilots and should not be treated as typical industry benchmarks.
Authors' caveat based on the composition of evidence in the review (skew towards pilots and selected advanced implementations; limited longitudinal/multi‑project empirical studies).
Inter‑rater reliability for the study selection/encoding was Cohen’s κ = 0.83 (substantial agreement).
Reported inter‑rater reliability statistic from the review's quality control step (Cohen's kappa = 0.83).
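
Cohen's kappa corrects observed agreement for the agreement expected by chance: kappa = (p_o - p_e) / (1 - p_e). A minimal sketch for two raters' include/exclude decisions follows; the 2x2 counts are hypothetical, not the review's data.

```python
# Minimal Cohen's kappa for two raters' include/exclude decisions.
# The 2x2 counts below are hypothetical, not the review's data.
def cohens_kappa(both_include, both_exclude, only_a, only_b):
    n = both_include + both_exclude + only_a + only_b
    p_o = (both_include + both_exclude) / n          # observed agreement
    p_a = (both_include + only_a) / n                # rater A "include" rate
    p_b = (both_include + only_b) / n                # rater B "include" rate
    p_e = p_a * p_b + (1 - p_a) * (1 - p_b)          # chance agreement
    return (p_o - p_e) / (1 - p_e)

print(round(cohens_kappa(both_include=72, both_exclude=22, only_a=3, only_b=3), 2))
```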
The review screened 463 Scopus records (2018–2026) and selected 160 peer‑reviewed studies using a PRISMA‑guided process.
Systematic literature review described in paper: Scopus search (2018–2026), PRISMA screening and eligibility filtering; initial n=463, final n=160.
The study has potential selection and ecological-validity constraints because it was conducted at two institutions across six courses, limiting generalizability.
Authors note limitations regarding sample scope (two institutions, six courses) and the ecological validity of the experimental tasks/settings.
The study employed a multi-method approach combining experimental quantitative analysis (descriptives, GLM, non-parametric robustness checks) with qualitative topic-based coding of open-ended survey responses.
Methods description: randomized/experimental assignment; quantitative analyses using GLM and non-parametric tests; qualitative topic-based coding of student responses; sample N = 254 across six courses at two institutions.
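
A minimal sketch of that analysis pattern on synthetic data is below (not the study's code or variables): a Gaussian GLM of a task score on experimental condition, followed by a Mann-Whitney U test as a non-parametric robustness check.

```python
# Illustrative analysis pattern on synthetic data (not the study's code):
# Gaussian GLM of task score on condition, plus a non-parametric check.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "condition": rng.choice(["control", "ai_supported"], size=254),
    "score": rng.normal(70, 10, size=254),
})

glm = smf.glm("score ~ C(condition)", data=df, family=sm.families.Gaussian()).fit()
print(glm.summary())

# Robustness check: Mann-Whitney U on the same contrast.
a = df.loc[df.condition == "control", "score"]
b = df.loc[df.condition == "ai_supported", "score"]
print(mannwhitneyu(a, b))
```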
The study did not directly measure accessibility or impacts on students with disabilities, though qualitative results suggest possible intersections with inclusive and multimodal learning design.
Limitation stated by authors: no direct measurement of accessibility outcomes; qualitative responses hinted at potential relevance to inclusive design but no empirical measurement of disability-related impacts.
The study focused on short-term, knowledge-based tasks and did not measure long-term learning or retention.
Authors explicitly note as a limitation that the experimental tasks were short-term and knowledge-based and that long-term retention was not measured.
The paper does not provide quantitative estimates of time saved per report, cost reductions, or effects on employment/wages; such economic impacts remain to be quantified.
Caveats noted in the paper: absence of quantitative estimates for time/cost/employment effects and a call for field trials and economic modeling. This is explicitly stated in the summary.
The paper used a clinically grounded, multi-level evaluation framework that separately assessed raw AI drafts (automatic metrics + clinician review) and radiologist-AI collaborative final reports (how radiologists edit and downstream clinical effects), including comparisons across radiologist experience levels.
Methodology section summarized in the paper: multi-level assessment covering AI drafts and radiologist-edited collaborative reports; combination of automatic metrics and radiologist-/clinician-centered evaluations; experience-level stratified analyses (novice/intermediate/senior).
CBCTRepD is a report-generation system trained on the curated paired CBCT–report dataset described below to produce bilingual CBCT radiology draft reports intended for radiologist-in-the-loop (co-authoring) workflows.
System description in the paper: CBCTRepD built using the curated dataset; authors state purpose is to generate clinically usable drafts for radiologist editing. (Model architecture and training hyperparameters are not specified in the provided text.)
The authors curated a paired CBCT–report dataset of approximately 7,408 CBCT studies covering 55 oral and maxillofacial disease entities that is bilingual and includes diverse acquisition settings.
Data curation described in the paper: stated dataset size (~7,408 studies), coverage of 55 disease entities, bilingual reports, and inclusion of a range of acquisition settings to increase heterogeneity and clinical realism. (Exact languages, provenance of studies, and dataset split details are not specified in the provided text.)
Evaluation was performed on five different material setups.
Experimental evaluation described in the summary: performance reported as averaged across five material setups. The summary does not list per-setup names or trial counts.
The simulation models samples as collections of spheres with per-sphere procedurally generated dislodgement-force thresholds derived from Perlin noise to introduce spatial heterogeneity and diversity.
Simulation/modeling description in the paper: discrete-sphere representation of sample; each sphere assigned a dislodgement threshold; spatial variation produced via Perlin noise. This is a concrete modeling choice reported in the methods.
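
A minimal sketch of that modeling choice is below, assuming the third-party `noise` package for Perlin noise; the sphere count, threshold range, and noise scale are hypothetical, not values from the paper.

```python
# Illustrative sketch of the modeling idea (not the authors' implementation):
# each sphere gets a dislodgement-force threshold from 3D Perlin noise so that
# thresholds vary smoothly in space. Requires the third-party `noise` package.
import numpy as np
from noise import pnoise3

rng = np.random.default_rng(42)
centers = rng.uniform(0.0, 1.0, size=(500, 3))  # sphere centers in a unit cube

F_MIN, F_MAX = 2.0, 10.0   # hypothetical threshold range (arbitrary force units)
SCALE = 4.0                # noise frequency: higher -> finer spatial variation

def dislodgement_threshold(x, y, z):
    n = pnoise3(x * SCALE, y * SCALE, z * SCALE, octaves=3)  # roughly in [-1, 1]
    return F_MIN + (n + 1.0) / 2.0 * (F_MAX - F_MIN)

thresholds = np.array([dislodgement_threshold(*c) for c in centers])
print(thresholds.min(), thresholds.mean(), thresholds.max())
```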
The paper uses a mixed-methods approach combining a systematic literature review with an empirical practitioner survey to assess perceptions, adoption, and impact of AI-driven tools.
Methodological statement in the paper; survey design covers tool usage, perceived benefits, challenges, and expectations.
The authors recommend specific measurement metrics and empirical research priorities (e.g., MAPE, stockout frequency, inventory turns, lead times, fill rates, total supply chain cost, service-level volatility, resilience measures; causal studies like diff-in-diff or randomized interventions).
Explicit recommendations in the paper's measurement and research agenda sections.
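
As a worked illustration of two of the recommended metrics, the sketch below computes MAPE (forecast accuracy) and fill rate (service level) on made-up numbers; the data are hypothetical.

```python
# Illustrative definitions of two recommended metrics on made-up numbers:
# MAPE (forecast accuracy) and fill rate (service level).
import numpy as np

actual   = np.array([120.0, 95.0, 140.0, 80.0])
forecast = np.array([110.0, 100.0, 150.0, 70.0])
mape = np.mean(np.abs((actual - forecast) / actual)) * 100   # mean absolute % error

demand  = np.array([50.0, 60.0, 40.0])
shipped = np.array([50.0, 55.0, 40.0])
fill_rate = shipped.sum() / demand.sum() * 100               # % of demand served

print(f"MAPE: {mape:.1f}%   fill rate: {fill_rate:.1f}%")
```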
The study's small sample size and qualitative design limit external generalizability and prevent causal effect size estimation; potential selection and reporting biases exist due to purposive sampling and interview-based data.
Authors explicitly state these limitations in the paper's limitations section.
The study is a qualitative multi-case study of five medium-to-large organizations, using semi-structured interviews across procurement, production planning, inventory management, and distribution, analyzed via cross-case comparison.
Methods section description provided by the authors (sample size n = 5, sectors, interview-based primary data, cross-case analysis).
There is limited empirical causal evidence linking specific explanation types to long-term outcomes (safety, fairness, economic performance) in real-world deployments.
Meta-level finding of the review: authors report gaps in the literature—few causal or longitudinal studies of explanation interventions in deployed, high-stakes settings.
The literature groups explainability impacts along three linked dimensions — user trust, ethical governance, and organizational accountability.
Analytical result of the review's thematic coding and synthesis across interdisciplinary literature (categorization derived from the reviewed corpus).
The paper is primarily theoretical and prescriptive: it synthesizes literature and proposes a framework and design guidelines rather than reporting large-scale empirical datasets or causal identification of economic outcomes.
Meta-claim about the paper's methods explicitly stated in the Data & Methods summary; based on the paper's methodological description.
Key measurable outcomes to assess Human–AI teams include accuracy/efficiency, robustness to novel cases, decision consistency, trust/misuse rates, training costs, and inequity indicators.
Prescriptive list of metrics offered by the authors as part of the research agenda and evaluation guidance; not empirically derived from a dataset in the paper.
Empirical evaluation strategies for Human–AI teams should include randomized interventions, field trials, lab experiments, phased rollouts (difference-in-differences), and structural models that allow interaction terms between human skill and AI quality.
Methodological recommendation in the paper; suggested study designs rather than implemented analyses.
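
A minimal sketch of the difference-in-differences idea with a human-skill interaction is below, on synthetic data; variable names and effect sizes are hypothetical, and a real phased-rollout analysis would add unit and period fixed effects.

```python
# Illustrative difference-in-differences specification for a phased AI rollout
# on synthetic data (not an analysis from the paper). `treated:post` is the DiD
# effect; the three-way term lets the effect vary with a human-skill measure.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "treated": rng.integers(0, 2, n),   # unit received the AI tool
    "post": rng.integers(0, 2, n),      # observation after the rollout
    "skill": rng.normal(0, 1, n),       # standardized human-skill measure
})
df["output"] = (1.0 + 0.3 * df.treated * df.post
                + 0.2 * df.skill
                + 0.1 * df.treated * df.post * df.skill
                + rng.normal(0, 1, n))

model = smf.ols("output ~ treated * post * skill", data=df).fit()
print(model.params[["treated:post", "treated:post:skill"]])
```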
Research priorities include empirical measurement of task‑level automation rates, firm and industry productivity effects, wage impacts across occupations, and diffusion patterns.
Paper's stated research agenda and identification of measurement gaps; based on methodological critique of current evidence base.
Measuring these productivity gains will be challenging because quality improvements, faster iteration, and creative outputs are harder to price/observe than lines of code.
Methodological argument about measurement difficulty; based on conceptual considerations, not empirical validation.
The study uses a quantitative, cross-sectional survey-based research design of managers and educational administrators and employs descriptive statistics, correlation, and regression analyses.
Methods described in the summary explicitly state research design and analytical techniques; this is a methodological claim rather than an empirical substantive finding. (Sample size not provided in summary.)