Evidence (5267 claims)

Claim counts by topic:

| Topic | Claims |
|---|---|
| Adoption | 5267 |
| Productivity | 4560 |
| Governance | 4137 |
| Human-AI Collaboration | 3103 |
| Labor Markets | 2506 |
| Innovation | 2354 |
| Org Design | 2340 |
| Skills & Training | 1945 |
| Inequality | 1322 |
Evidence Matrix
Claim counts by outcome category and direction of finding.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 378 | 106 | 59 | 455 | 1007 |
| Governance & Regulation | 379 | 176 | 116 | 58 | 739 |
| Research Productivity | 240 | 96 | 34 | 294 | 668 |
| Organizational Efficiency | 370 | 82 | 63 | 35 | 553 |
| Technology Adoption Rate | 296 | 118 | 66 | 29 | 513 |
| Firm Productivity | 277 | 34 | 68 | 10 | 394 |
| AI Safety & Ethics | 117 | 177 | 44 | 24 | 364 |
| Output Quality | 244 | 61 | 23 | 26 | 354 |
| Market Structure | 107 | 123 | 85 | 14 | 334 |
| Decision Quality | 168 | 74 | 37 | 19 | 301 |
| Fiscal & Macroeconomic | 75 | 52 | 32 | 21 | 187 |
| Employment Level | 70 | 32 | 74 | 8 | 186 |
| Skill Acquisition | 89 | 32 | 39 | 9 | 169 |
| Firm Revenue | 96 | 34 | 22 | — | 152 |
| Innovation Output | 106 | 12 | 21 | 11 | 151 |
| Consumer Welfare | 70 | 30 | 37 | 7 | 144 |
| Regulatory Compliance | 52 | 61 | 13 | 3 | 129 |
| Inequality Measures | 24 | 68 | 31 | 4 | 127 |
| Task Allocation | 75 | 11 | 29 | 6 | 121 |
| Training Effectiveness | 55 | 12 | 12 | 16 | 96 |
| Error Rate | 42 | 48 | 6 | — | 96 |
| Worker Satisfaction | 45 | 32 | 11 | 6 | 94 |
| Task Completion Time | 78 | 5 | 4 | 2 | 89 |
| Wages & Compensation | 46 | 13 | 19 | 5 | 83 |
| Team Performance | 44 | 9 | 15 | 7 | 76 |
| Hiring & Recruitment | 39 | 4 | 6 | 3 | 52 |
| Automation Exposure | 18 | 17 | 9 | 5 | 50 |
| Job Displacement | 5 | 31 | 12 | — | 48 |
| Social Protection | 21 | 10 | 6 | 2 | 39 |
| Developer Productivity | 29 | 3 | 3 | 1 | 36 |
| Worker Turnover | 10 | 12 | — | 3 | 25 |
| Skill Obsolescence | 3 | 19 | 2 | — | 24 |
| Creative Output | 15 | 5 | 3 | 1 | 24 |
| Labor Share of Income | 10 | 4 | 9 | — | 23 |
Adoption
The empirical approach tests for common long-run relationships across patenting series and identifies structural breaks concentrated after 2010.
Description of empirical strategy: time-series econometric analysis of patent filing series (1980–2019) including tests for common long-run relationships (cointegration) and structural break detection. The paper reports results of these tests (presence/absence of common trends and timing of breaks).
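For illustration only, a minimal sketch of this style of analysis in Python, assuming two annual patent-filing series; the simulated data and the naive break scan are assumptions for demonstration, not the paper's actual procedure:

```python
# Sketch: Engle-Granger cointegration test plus a naive structural-break
# scan on two hypothetical annual patent-filing series (1980-2019).
import numpy as np
from statsmodels.tsa.stattools import coint

rng = np.random.default_rng(0)
years = np.arange(1980, 2020)

# Hypothetical series sharing a common stochastic trend.
trend = rng.normal(0, 1, len(years)).cumsum()
series_a = trend + rng.normal(0, 0.5, len(years))
series_b = 0.8 * trend + rng.normal(0, 0.5, len(years))

# Test for a common long-run relationship (cointegration).
t_stat, p_value, _ = coint(series_a, series_b)
print(f"cointegration p-value: {p_value:.3f}")

# Naive break scan: year with the largest shift in mean growth.
diffs = np.diff(series_a)
gaps = [abs(diffs[:k].mean() - diffs[k:].mean())
        for k in range(5, len(diffs) - 5)]   # keep segments non-trivial
k_star = 5 + int(np.argmax(gaps))
print(f"largest mean-shift in growth around {years[1 + k_star]}")
```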
The paper highlights governance risks requiring transparency about LLM-derived mappings, mitigation of model biases, privacy-preserving data practices, and careful communication of uncertainty to avoid overconfident policy recommendations.
Explicit discussion of risks and governance considerations in the paper; this is an acknowledgment rather than an empirical claim. No implementation or audit evidence is provided.
Backtesting the architecture on historical automation waves and recent AI introductions would validate model design and calibration.
Paper explicitly proposes backtesting and holdout validation using historical automation episodes and recent AI adoption events; does not report completed backtests or empirical sample sizes.
Empirical validation of the integrated Kondratieff–Schumpeter–Mandel framework requires firm-level adoption and profitability data, sectoral investment series, and cross-country comparisons using panel methods and identification strategies (e.g., diff-in-diff, IV).
Methods/limitations section recommendation (explicitly states no single micro-econometric identification strategy was reported and outlines required data/methods).
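As a sketch of the kind of identification strategy recommended here (not code from the paper), a difference-in-differences estimate on hypothetical firm-level panel data might look like:

```python
# Sketch: diff-in-diff on a hypothetical firm-level panel, estimating
# the effect of AI adoption on log profitability.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
rows = []
for firm in range(200):
    treated = int(firm < 100)            # first 100 firms adopt AI in 2019
    for year in range(2015, 2023):
        post = int(year >= 2019)
        effect = 0.05 * treated * post   # assumed true adoption effect
        rows.append({"firm": firm, "treated": treated, "post": post,
                     "log_profit": 1.0 + effect + rng.normal(0, 0.1)})
df = pd.DataFrame(rows)

# 'treated:post' is the diff-in-diff estimate of the adoption effect;
# standard errors are clustered at the firm level.
fit = smf.ols("log_profit ~ treated * post", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["firm"]})
print(fit.params["treated:post"], fit.bse["treated:post"])
```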
The three frameworks (Kondratieff, Schumpeter, Mandel) are complementary: Kondratieff frames periodicity, Schumpeter provides micro-mechanisms of innovation-driven change, and Mandel foregrounds socio-political constraints and distributional outcomes.
Conceptual integration and comparative theoretical analysis (qualitative synthesis).
Kondratieff's framework is useful for identifying broad periodicities (recurring phases of expansion and stagnation) in capitalist development but is less specific about microeconomic mechanisms.
Theoretical review of Kondratieff literature and conceptual assessment (qualitative).
No new laboratory measurements or datasets are reported in the paper; the approach is methodological and conceptual rather than empirical.
Methods section and explicit statements within the paper noting absence of new data; verifiable by reading the paper.
These operators are presented as conceptual/theoretical bridges rather than immediately quantifiable laboratory units.
Explicit methodological statement in the paper emphasizing interpretive/theoretical intent; no empirical operationalization reported.
Paired-game design (baseline and matched decoy-enabled game per interaction) enables direct, causal measurement of deception impact.
Methodological design described in the paper: each interaction modeled as a paired-game enabling direct comparison of equilibrium outcomes (theoretical/method section).
Equilibrium outcomes are linked to an information-theoretic uncertainty construct (entropy-like) that captures residual attacker uncertainty after observation.
Theoretical construction and formal connection drawn in the paper between equilibrium utilities and an entropy-style measure (analytical derivation).
Defender-optimal deception allocations are characterized analytically (closed-form/structural characterization of optimal resource allocation under constraints).
Analytical derivation/proofs in the paper producing defender-optimal strategy characterizations under resource/budget constraints.
The paper introduces two operational metrics: (1) value of deception (change in defender equilibrium utility attributable to deception relative to baseline) and (2) price of transparency (marginal loss in deception value induced by increased observability).
Formal definitions and mathematical expressions in the theoretical model section of the paper (analytical definitions/proofs).
The paper provides a principled, game-theoretic framework to measure and compare the operational value of cyber deception relative to a matched non-deceptive baseline.
Analytical/modeling contribution: paired strategic-game construction (baseline vs deception) with formal definitions and equilibrium analysis presented in the paper (theoretical derivation/proofs).
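A minimal numeric sketch of the value-of-deception metric, using hypothetical 2x2 payoff matrices and the nashpy library for equilibrium computation; the matrices, numbers, and the entropy proxy below are illustrative assumptions, not the paper's model:

```python
# Sketch: value of deception = defender equilibrium utility in the
# decoy-enabled game minus utility in the matched baseline game.
# Payoff matrices are hypothetical; rows = defender, cols = attacker.
import numpy as np
import nashpy as nash

baseline = np.array([[-2.0, -1.0],
                     [-1.0, -3.0]])       # no decoys deployed
with_decoys = np.array([[-1.0, 0.5],
                        [0.0, -1.5]])     # attacker may hit a decoy

def defender_value(payoffs):
    game = nash.Game(payoffs)             # zero-sum game
    sigma_d, sigma_a = next(game.support_enumeration())
    return sigma_d @ payoffs @ sigma_a

value_of_deception = defender_value(with_decoys) - defender_value(baseline)
print(f"value of deception: {value_of_deception:.3f}")

# Residual uncertainty: Shannon entropy of the defender's equilibrium
# mixture, an entropy-style proxy in the spirit of the paper's construct.
sigma_d, _ = next(nash.Game(with_decoys).support_enumeration())
entropy = -sum(p * np.log2(p) for p in sigma_d if p > 0)
print(f"defender-mixture entropy: {entropy:.3f} bits")
```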
Policy recommendations include: invest in open metadata standards; fund pilot programs to evaluate ROI (earnings, placement, employer satisfaction); require model governance and periodic external audits for AI-assisted curriculum tools; and support smaller providers via shared infrastructure or accreditation hubs.
Explicit policy recommendations in paper (prescriptive).
Careful attention is needed to validity/reliability of assessments and to selection bias in employment outcome measurement.
Paper's methodological caveat (prescriptive); no empirical bias analysis provided.
Suggested evaluation metrics include placement rates, wage premiums, competency attainment, compliance scores, cost per qualification, and update latency.
Paper's recommended evaluation metrics (prescriptive).
Implementation requires integration with information systems for documentation, versioning, metadata, and audit trails, and benefits from continuous monitoring dashboards.
Paper's technical implementation recommendations (prescriptive).
Recommended analysis methods are qualitative (semi-structured interviews, focus groups, document review) and quantitative (surveys, competency mapping, statistical analysis of outcomes), plus systematic audit methods including traceability checks.
Paper's methods section (methodological specification).
Data inputs for the framework should include competency taxonomies, labor-market signals, regulatory requirements, learner assessment results, and stakeholder interviews.
Paper's data-input specification (descriptive).
Management principles emphasized are transparency, traceability of outcomes, IT integration for documentation, and continuous monitoring/evaluation.
Explicit management principles in paper (prescriptive).
Research and audit should emphasize validity, reliability, and compliance using mixed methods (qualitative interviews/focus groups; quantitative surveys/statistics) and systematic curriculum audits.
Recommended research & audit approach in paper (methodological guidance).
Tools recommended include logigrams (visual decision/compliance flows) and algorigrams (algorithmic step-flows for planning, assessment, and audit).
Tool definitions and recommendations in paper (descriptive).
Core components of the framework are inputs (learner needs, industry requirements, regulatory standards), processes (curriculum mapping, competency alignment, career assessment), and outputs (structured lesson plans, compliance-ready frameworks, career-path documentation).
Framework component list provided in paper (descriptive).
Scope of the program includes curriculum design, organizational management, career alignment, and audit/compliance processes.
Explicit scope statement in paper (descriptive).
The framework foregrounds logical modeling (logigrams, algorigrams) and mixed-methods data analysis to support design, auditability, and alignment with industry and regulatory standards.
Paper's methodological design and tool recommendations (conceptual). No empirical implementation data reported.
The program offers a comprehensive curriculum-engineering framework linking organizational orientation, management systems, lesson planning, and career assessment into traceable, compliance-ready curriculum products.
Paper's program description and framework specification (conceptual); no empirical evaluation or sample size reported.
Few longitudinal or randomized studies were found, which limits the evidence base for causal claims about digital transformation's effect on productivity.
Review recorded a limited number of longitudinal analyses and quasi-experimental designs among the 145 studies; randomized studies were scarce or absent.
Measurement heterogeneity across studies includes self-reported productivity, output-per-worker metrics, and process efficiency indicators.
Extraction of productivity indicators from included studies (detailed in Methods/Extraction fields) showed multiple distinct measurement approaches.
There is a lack of standardized instruments and inconsistent controls for confounding factors across studies, limiting causal inference about the effect of digital transformation on productivity.
Review extraction documented varied instruments/measures and inconsistent adjustment for confounders across the included studies; few randomized or robust longitudinal designs were found.
Heterogeneous definitions of 'digital transformation' and a variety of productivity measurement approaches prevented a formal quantitative meta-analysis.
Extraction found wide variation in how digital transformation and productivity were defined and measured across the 145 studies (self-reported productivity, output per worker, process efficiency metrics, etc.), leading authors to forgo meta-analysis.
535 records were identified across Scopus, Web of Science, ScienceDirect, IEEE Xplore, and Google Scholar, of which 145 met PRISMA 2020 inclusion criteria.
Search and screening procedure documented in the review: initial database searches yielded 535 records → duplicates removed → screening → full-text evaluation → 145 included studies.
Non-probability sampling and self-reported measures limit claims about prevalence and causality; cross-sectional design cannot capture dynamics of skill acquisition over time.
Study limitations explicitly reported by authors: non-probability sampling, self-reported measures, and cross-sectional design.
There are few large-scale randomized controlled trials (RCTs) showing direct patient outcome improvements from GenAI CDS; high-quality real-world and longitudinal studies are limited but essential.
Evidence-maturity statement in the paper summarizing the literature; the paper explicitly notes scarcity of large RCTs and longitudinal evaluations.
The paper is primarily conceptual/theoretical and literature-based rather than an empirical case study or large-scale data experiment; it emphasizes the need for future empirical validation.
Explicit methodological description within the paper stating reliance on literature review and conceptual development; absence of empirical sample or case study.
Randomized or quasi-experimental evaluations of digital-ID rollouts, subsidy programs for fintech adoption, or sandboxed regulatory innovations can identify causal impacts on inclusion and growth.
Methodological recommendation proposing experimental and quasi-experimental designs to obtain causal inference; no implementation results reported in the paper summary.
AI economists should prioritize measuring how AI-driven services affect access, default rates, transaction costs, and market structure, disaggregated across income groups and regions.
Methodological recommendation in the 'Implications for AI Economics' section; suggested measurement priorities rather than an empirical finding.
There is a need for economic analysis of data governance regimes, model transparency requirements, algorithmic auditability, and incentives for responsible AI adoption in finance.
Methodological and policy recommendation based on identified gaps in the literature and regulatory practice; this is a stated research/policy need in the paper rather than an empirical claim requiring sample evidence.
Typical evaluation metrics reported are accuracy, precision, recall, F1-score, AUC, detection rate, false positive rate, latency, and computational cost.
Survey of evaluation practices in reviewed papers listing the metrics authors commonly report.
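As a brief illustration of computing these standard metrics with scikit-learn on hypothetical IDS predictions (not any surveyed paper's pipeline):

```python
# Sketch: standard IDS evaluation metrics on hypothetical predictions
# (1 = attack, 0 = benign).
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score, confusion_matrix)

y_true   = [1, 0, 1, 1, 0, 0, 1, 0, 0, 1]
y_pred   = [1, 0, 1, 0, 0, 1, 1, 0, 0, 1]
y_scores = [0.9, 0.2, 0.8, 0.4, 0.1, 0.6, 0.7, 0.3, 0.2, 0.95]

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("accuracy:  ", accuracy_score(y_true, y_pred))
print("precision: ", precision_score(y_true, y_pred))
print("recall:    ", recall_score(y_true, y_pred))   # detection rate
print("F1:        ", f1_score(y_true, y_pred))
print("AUC:       ", roc_auc_score(y_true, y_scores))
print("FPR:       ", fp / (fp + tn))                 # false positive rate
```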
Emerging approaches in the literature include federated learning, online/streaming learning, and transfer learning for cross-device generalization.
Trend analysis across recent papers indicating adoption of federated and continual learning paradigms and transfer-learning techniques.
Unsupervised and semi-supervised methods (clustering, one-class classifiers, autoencoder-based anomaly detectors) are commonly employed to handle unlabeled/anomalous IoT traffic.
Synthesis of studies using anomaly-detection paradigms and unsupervised techniques reported in the reviewed papers.
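A minimal sketch of the unsupervised anomaly-detection pattern on synthetic traffic features; IsolationForest stands in here for the various one-class and autoencoder detectors the studies use, and the feature set is assumed:

```python
# Sketch: unsupervised anomaly detection on synthetic IoT traffic
# features (packet rate, mean packet size), no labels required.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(2)
normal = rng.normal([50, 500], [5, 50], size=(500, 2))   # benign traffic
attack = rng.normal([200, 80], [20, 10], size=(10, 2))   # e.g., a flood
traffic = np.vstack([normal, attack])

detector = IsolationForest(contamination=0.02, random_state=0).fit(traffic)
flags = detector.predict(traffic)                        # -1 = anomaly
print(f"flagged {np.sum(flags == -1)} of {len(traffic)} flows as anomalous")
```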
Deep learning approaches used include CNNs, RNNs/LSTMs for sequence/traffic analysis, and autoencoders for anomaly detection.
Surveyed literature and taxonomy noting multiple studies that apply convolutional and recurrent architectures and autoencoders to network/traffic data.
Common ML approaches reported for IoT IDS include supervised models (random forest, SVM, gradient boosting, neural networks).
Taxonomy and literature synthesis showing frequent use of classical supervised classifiers in surveyed papers and experiments.
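A corresponding supervised sketch, training a random forest on the same kind of labeled synthetic features (again illustrative, not a surveyed experiment):

```python
# Sketch: supervised IoT intrusion detection with a random forest
# on labeled synthetic features (1 = attack, 0 = benign).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(3)
X = np.vstack([rng.normal([50, 500], [5, 50], size=(500, 2)),
               rng.normal([200, 80], [20, 10], size=(100, 2))])
y = np.array([0] * 500 + [1] * 100)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```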
Empirical research suggestion: recommended outcome variables for future empirical work include productivity (TFP), profitability, exports, employment composition, and process innovation rates; explanatory variables include AI adoption intensity, strategic alignment indices, leadership commitment surveys, sensing activities, and institutional support measures.
Explicit research agenda and measurement suggestions provided in the paper based on the framework and gaps identified in the 72-article review.
Scope & limits: the paper is a literature synthesis (no new primary empirical data), has a geographical emphasis on Ibero-America, and covers literature up to 2024 (may omit post-2024 developments).
Explicit limitations and scope noted in the paper (no primary data; regional emphasis; time window).
Methodological approach: the paper uses a structured narrative literature review following Torraco (2016) and Juntunen & Lehenkari (2021), analyzing a corpus of 72 articles from 2015–2024 via thematic synthesis and systematic coding.
Explicit methodological statement in the paper specifying approach, corpus size (72 articles), time window (2015–2024), and analytic techniques (thematic synthesis and coding).
The framework yields eight empirically testable propositions linking capability development to firm outcomes (the paper explicitly lists eight propositions including P1–P3 and five additional linked propositions).
Explicit claim in the reviewed paper: framework includes eight testable propositions; propositions are theoretical and untested empirically within the paper.
This work is a conceptual framework and design proposal synthesizing methods from recommender systems and HRI rather than a report of novel empirical experiments.
Explicit statement in the Data & Methods section of the paper.
The review followed PRISMA guidelines and included 30 scholarly articles retrieved from Scopus, published between 2020 and 2025, selected using pre-specified inclusion criteria.
Methods section of the paper reporting the SLR protocol, database, time window, and number of included studies.
The study is primarily diagnostic and prescriptive rather than empirical: no explicit empirical dataset, causal identification strategy, or statistical estimation is reported.
Methods section of the paper explicitly characterizes the work as conceptual, systems-oriented, and not reporting empirical evaluation data.
The urban AI index is constructed via text-mining techniques to capture city-level AI capability/intensity.
Methodological description: authors report using text-mining to build a city-level AI capability/intensity index (details of sources and text-mining procedure not provided in the summary).
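For illustration only, a minimal keyword-frequency sketch of how such a text-mined index might be constructed; the term list, documents, and per-word normalization are assumptions, since the paper's actual procedure is not described in the summary:

```python
# Sketch: toy city-level AI-intensity index from document text,
# using term frequencies over an assumed AI keyword list.
from sklearn.feature_extraction.text import CountVectorizer

ai_terms = ["artificial intelligence", "machine learning",
            "neural network", "deep learning", "ai"]

# Hypothetical corpora: one concatenated document per city
# (e.g., policy reports, firm descriptions, local news).
city_docs = {
    "City A": "The city funds artificial intelligence labs and "
              "machine learning startups building neural network tools.",
    "City B": "The city invests mainly in port logistics and tourism.",
}

vec = CountVectorizer(vocabulary=ai_terms, ngram_range=(1, 2))
counts = vec.fit_transform(list(city_docs.values()))

for city, row in zip(city_docs, counts.toarray()):
    n_words = len(city_docs[city].split())
    print(f"{city}: AI index = {row.sum() / n_words:.3f}")  # per-word rate
```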