Evidence (7395 claims)
| Topic | Claims |
|---|---|
| Adoption | 7395 |
| Productivity | 6507 |
| Governance | 5921 |
| Human-AI Collaboration | 5192 |
| Org Design | 3497 |
| Innovation | 3492 |
| Labor Markets | 3231 |
| Skills & Training | 2608 |
| Inequality | 1842 |
Evidence Matrix
Claim counts by outcome category and direction of finding. For some rows the Total exceeds the sum of the four listed directions, indicating claims with other or unclassified directions.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 609 | 159 | 77 | 738 | 1617 |
| Governance & Regulation | 671 | 334 | 160 | 99 | 1285 |
| Organizational Efficiency | 626 | 147 | 105 | 70 | 955 |
| Technology Adoption Rate | 502 | 176 | 98 | 78 | 861 |
| Research Productivity | 349 | 109 | 48 | 322 | 838 |
| Output Quality | 391 | 121 | 45 | 40 | 597 |
| Firm Productivity | 385 | 46 | 85 | 17 | 539 |
| Decision Quality | 277 | 145 | 63 | 34 | 526 |
| AI Safety & Ethics | 189 | 244 | 59 | 30 | 526 |
| Market Structure | 152 | 154 | 109 | 20 | 440 |
| Task Allocation | 158 | 50 | 56 | 26 | 295 |
| Innovation Output | 178 | 23 | 38 | 17 | 257 |
| Skill Acquisition | 137 | 52 | 50 | 13 | 252 |
| Fiscal & Macroeconomic | 120 | 64 | 38 | 23 | 252 |
| Employment Level | 93 | 46 | 96 | 12 | 249 |
| Firm Revenue | 130 | 43 | 26 | 3 | 202 |
| Consumer Welfare | 99 | 51 | 40 | 11 | 201 |
| Inequality Measures | 36 | 106 | 40 | 6 | 188 |
| Task Completion Time | 134 | 18 | 6 | 5 | 163 |
| Worker Satisfaction | 79 | 54 | 16 | 11 | 160 |
| Error Rate | 64 | 79 | 8 | 1 | 152 |
| Regulatory Compliance | 69 | 66 | 14 | 3 | 152 |
| Training Effectiveness | 82 | 16 | 13 | 18 | 131 |
| Wages & Compensation | 70 | 25 | 22 | 6 | 123 |
| Team Performance | 74 | 16 | 21 | 9 | 121 |
| Automation Exposure | 41 | 48 | 19 | 9 | 120 |
| Job Displacement | 11 | 71 | 16 | 1 | 99 |
| Developer Productivity | 71 | 14 | 9 | 3 | 98 |
| Hiring & Recruitment | 49 | 7 | 8 | 3 | 67 |
| Social Protection | 26 | 14 | 8 | 2 | 50 |
| Creative Output | 26 | 14 | 6 | 2 | 49 |
| Skill Obsolescence | 5 | 37 | 5 | 1 | 48 |
| Labor Share of Income | 12 | 13 | 12 | — | 37 |
| Worker Turnover | 11 | 12 | — | 3 | 26 |
| Industry | — | — | — | 1 | 1 |
Adoption
AI adoption alleviates financing constraints, and this channel contributes to higher executive compensation.
Mediation/mechanism tests in the paper show that AI adoption is associated with reduced financing constraints, and that reduced financing constraints are associated with higher executive pay (mediation analysis on the A-share firm panel).
The rapid rise of AI-enhanced robotics since the 2010s signals a shift toward increased embedding of AI into hardware systems, accelerating cross-sector spillovers.
Interpretation based on observed acceleration in AI-enhanced robotics patents (patent filings 1980–2019) and the convergence patterns reported in the paper. This is an inference drawn from patenting trends rather than a direct measure of cross-sector spillovers.
Nonlinear adoption/diffusion models that allow for thresholds, complementarities, and endogenous firm investment responses will better capture tipping points and adoption dynamics than linear models.
Modeling proposal arguing theoretical need for nonlinear specifications and endogenous adoption; no empirical fit comparisons or simulated sample evidence are presented in the paper.
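A minimal sketch (all parameter values hypothetical) of the kind of nonlinear specification the proposal argues for: peer effects plus a complementarity threshold generate an S-curve with a tipping point that a linear trend cannot reproduce.

```python
import numpy as np

def diffuse(T=60, beta=0.5, threshold=0.15, boost=3.0, seed_share=0.01):
    """Logistic-style diffusion with a complementarity threshold.

    The per-period adoption hazard scales with the current adopter share
    (peer effects); once the share crosses `threshold`, complementarities
    multiply the hazard by `boost`, producing a visible tipping point.
    All parameter values are illustrative, not estimated.
    """
    share = np.empty(T)
    share[0] = seed_share
    for t in range(1, T):
        s = share[t - 1]
        hazard = beta * s * (boost if s > threshold else 1.0)
        share[t] = min(1.0, s + hazard * (1.0 - s))  # clamp at full adoption
    return share

path = diffuse()
```

In this toy path, adoption crawls while the share is below the threshold, then accelerates sharply once complementarities kick in; a linear model fit to the early segment would badly mispredict the later dynamics.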
Estimating micro-level gross flows at occupation × industry × geography × demographic granularity (and at higher frequency) will better capture transitions such as reemployment paths, upskilling, and churn.
Proposal to use CPS, LEHD/LODES, JOLTS, administrative unemployment records and firm panels to estimate high-resolution flows. No empirical estimates or sample-size specifics provided.
Nowcasting and real-time analytics (including LLM re-scoring and streaming signals like job postings/platform activity) can update OAIES and short-term projections to improve monitoring.
Proposal to ingest real-time/near-real-time inputs (job-posting APIs, platform data, administrative records) and re-score tasks via LLM embeddings. No implemented nowcast results or sample-based evaluation are presented.
Incorporating causal identification methods (DiD, event-study, synthetic controls, IV) with task-based exposure will yield more credible causal estimates of AI’s effects on employment, wages, and mobility than correlational risk scores.
Methodological claim supported by standard econometric approaches proposed for use with the OAIES and staggered adoption/panel data. No empirical demonstration is provided; evidence is methodological rationale.
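A toy illustration of the proposed design, on simulated data (all numbers hypothetical): units with high task-based exposure are compared to low-exposure units before and after an adoption event, and a simple difference-in-differences recovers the planted effect.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical panel: 200 units x 10 periods; half the units are highly
# exposed, adoption occurs at period 5, true effect on the outcome = -0.10.
n_units, n_periods, event, true_effect = 200, 10, 5, -0.10
exposed = np.repeat(np.arange(n_units) < n_units // 2, n_periods).reshape(n_units, n_periods)
post = np.tile(np.arange(n_periods) >= event, (n_units, 1))
unit_fe = rng.normal(0, 0.5, (n_units, 1))   # unit fixed effects
time_fe = rng.normal(0, 0.2, (1, n_periods)) # common time shocks
y = unit_fe + time_fe + true_effect * (exposed & post) + rng.normal(0, 0.05, (n_units, n_periods))

def did(y, exposed, post):
    """Difference-in-differences: pre/post change for exposed minus unexposed."""
    g1 = y[exposed & post].mean() - y[exposed & ~post].mean()
    g0 = y[~exposed & post].mean() - y[~exposed & ~post].mean()
    return g1 - g0

estimate = did(y, exposed, post)
```

The group-mean differencing nets out the unit and time fixed effects, which is why the estimate lands near the planted -0.10; a raw correlation between exposure scores and outcomes would not.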
Crises (pandemics, supply shocks) tend to accelerate digital and AI adoption, potentially shortening adjustment time to new technological regimes.
Interpretation of recent historical episodes (e.g., COVID-19) and diffusion literature; qualitative assertion without presented microeconometric identification.
AI and the green transformation function as modern long-wave drivers by improving operational efficiency, enabling new products and services, and reorganizing competitive hierarchies.
Conceptual argument linking general-purpose technology literature to observed/anticipated capabilities of AI and green tech; literature synthesis without original empirical tests.
Schumpeterian cycles are driven by clusters of technological innovations and entrepreneurial activity; AI and green technologies represent contemporary innovation clusters with strong potential for productive disruption.
Application of Schumpeterian theory to contemporary technology trends via literature synthesis and conceptual argument (no empirical quantification provided).
Integrating lived temporality into design and evaluation is necessary to preserve and enhance the qualitative aspects of human life in transhumanist transformation.
Normative/philosophical argument supported by literature synthesis and conceptual reasoning; no empirical demonstration (N/A).
AI/ML methods can reduce reliance on animal models by simulating biology, optimizing experiments, and prioritizing candidate drugs—supporting the 3Rs (Replacement, Reduction, Refinement)—but this is contingent on rigorous validation and ethical oversight.
Conceptual and methodological arguments (Manju V et al.) and cited examples of validated in silico alternatives and experiment‑optimization workflows; no single trial or sample size—recommendation based on synthesis of studies and caveats about validation and regulation.
CDRG‑RSF identified five prognostic genes including UBASH3B, which is associated with reduced NK activation and may mediate drug resistance—making it a candidate therapeutic target.
Feature selection within the CDRG‑RSF model yielded five prognostic genes; UBASH3B shown to correlate with immune suppression (reduced NK activation) and inferred links to drug resistance (associational analyses; functional validation not specified in summary).
PIGRS prognostic model (LASSO + Gradient Boosting Machine ensemble using 15 programmed‑cell‑death immune genes) outperformed most published LUAD prognostic models.
Prognostic modeling using LASSO feature selection followed by GBM ensemble on a 15‑gene panel; comparative benchmarking against published LUAD prognostic models reported superior performance (metrics and external cohort testing referenced).
Multi‑omics integration and consensus clustering (10 methods) in lung adenocarcinoma (LUAD) identified three molecular subtypes (CS1–CS3) with distinct prognoses.
PIGRS study integrated transcriptome, DNA methylation, and somatic mutation data and applied ten clustering algorithms to define molecular subtypes; reported three subtypes with differing survival outcomes (external validation cohorts used).
Data augmentation with Gaussian noise improved DNN performance for small sample cross‑omics training sets.
Cross‑omics study applied Gaussian noise augmentation during DNN training on small paired viral datasets and observed improved model performance and DEA recovery relative to non‑augmented training.
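A minimal sketch of the augmentation step (noise scale and copy count are illustrative, not the study's settings): each training sample is replicated with per-feature-scaled Gaussian perturbations while labels stay fixed.

```python
import numpy as np

def augment_gaussian(X, y, copies=5, sigma=0.1, seed=0):
    """Expand a small training set by adding zero-mean Gaussian noise.

    Each sample is replicated `copies` times with perturbed features;
    labels are unchanged. Noise is scaled per-feature by the empirical
    standard deviation so it is comparable across omics features.
    """
    rng = np.random.default_rng(seed)
    scale = sigma * X.std(axis=0, keepdims=True)
    X_aug = np.concatenate([X] + [X + rng.normal(0, 1, X.shape) * scale
                                  for _ in range(copies)])
    y_aug = np.concatenate([y] * (copies + 1))
    return X_aug, y_aug
```

The original samples are kept unperturbed at the front of the augmented arrays, so the clean data is never discarded.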
Dynamic Ensemble Selection‑Performance (DES‑P) produced parsimonious, high‑accuracy classifiers within the EPheClass pipeline.
Use of DES‑P for model selection in EPheClass reportedly yielded small, high‑performing ensembles (example: periodontal disease AUC = 0.973 with 13 features).
Applying centred log‑ratio (CLR) transformation and RFE to compositional microbiome data improves model parsimony and supports reproducibility in diagnostic classifiers.
EPheClass preprocessing: CLR to handle compositional 16S data and RFE to reduce feature sets; resulted in small feature panels (e.g., 13 features) with high performance and emphasis on rigorous validation to avoid prior overfitting issues.
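A minimal numpy sketch of the CLR step (pseudocount value illustrative); the RFE step would then iteratively prune columns of the transformed matrix.

```python
import numpy as np

def clr(counts, pseudocount=0.5):
    """Centred log-ratio transform for compositional count data.

    Adds a pseudocount to avoid log(0), converts each sample to relative
    abundances, then subtracts the per-sample mean log (log of the
    geometric mean), putting features on a symmetric scale suitable for
    standard classifiers.
    """
    x = counts + pseudocount
    x = x / x.sum(axis=1, keepdims=True)
    logx = np.log(x)
    return logx - logx.mean(axis=1, keepdims=True)
```

Because each row is centred on its own geometric mean, every CLR-transformed sample sums to zero, which removes the spurious correlations that raw relative abundances induce.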
The same EPheClass approach produced successful parsimonious classifiers for IBD (26 features) and antibiotic exposure (22 features).
EPheClass applied to additional microbiome outcomes (IBD and antibiotic exposure) with RFE selecting 26 and 22 features respectively; performance described as 'successful' (exact AUCs not provided in summary).
The value-of-deception metric can be used to monetize the benefit of deception technologies relative to non-deceptive alternatives, supporting investment and cost–benefit comparisons.
Conceptual/analytical proposal in the implications section: metric defined in utility units and argued to be interpretable for economic valuation (no empirical monetary valuation provided).
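A toy sketch of how such a metric could be computed, with all payoffs and probabilities hypothetical: the value of deception is the gap in expected defender utility between a deceptive defense and the best non-deceptive alternative.

```python
def expected_defender_utility(p_attack_succeeds, loss, defense_cost):
    """Expected utility = -(probability-weighted loss) - cost of the defense."""
    return -(p_attack_succeeds * loss) - defense_cost

def value_of_deception(u_deceptive, u_baseline):
    """Utility gap between a deceptive defense and the best non-deceptive
    alternative (positive => deception pays for itself)."""
    return u_deceptive - u_baseline

# Hypothetical numbers: decoys cut breach probability 0.30 -> 0.12 at an
# extra operating cost of 20 utility units, against a loss of 1000.
u_deceptive = expected_defender_utility(0.12, 1000, 50)
u_baseline = expected_defender_utility(0.30, 1000, 30)
vod = value_of_deception(u_deceptive, u_baseline)
```

Because utilities are in monetary-equivalent units, a positive `vod` translates directly into a cost-benefit argument for the deception investment.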
There exist parameter regimes where simple allocation heuristics nearly match optimal allocations (heuristics are practically sufficient in some regimes).
Combination of analytical approximation guarantees and simulation results comparing heuristic performance to computed optima (analytical proofs plus simulated evaluations; sizes/instances not specified).
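A toy illustration of the claim (instance sizes and distributions hypothetical): on a small random knapsack-style allocation problem, a greedy value-density heuristic can be compared directly against the brute-force optimum.

```python
import itertools
import numpy as np

def greedy(values, costs, budget):
    """Allocate by descending value/cost density until the budget binds."""
    order = np.argsort(-values / costs)
    total, spent = 0.0, 0.0
    for i in order:
        if spent + costs[i] <= budget:
            total += values[i]
            spent += costs[i]
    return total

def optimal(values, costs, budget):
    """Brute-force best feasible subset (exponential; fine for tiny instances)."""
    best = 0.0
    for mask in itertools.product([0, 1], repeat=len(values)):
        m = np.array(mask, dtype=bool)
        if costs[m].sum() <= budget:
            best = max(best, values[m].sum())
    return best

rng = np.random.default_rng(2)
values = rng.uniform(1, 10, 10)
costs = rng.uniform(1, 5, 10)
budget = 12.0
ratio = greedy(values, costs, budget) / optimal(values, costs, budget)
```

Sweeping such instances over different budgets and value/cost distributions is the kind of simulated evaluation that can map out the regimes where the heuristic is practically sufficient.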
Computational experiments across heterogeneous simulated scenarios produce consistent cross-setting comparability of the proposed metrics (value of deception and price of transparency).
Simulated computational experiments sweeping parameters (decoy realism, budget, attacker rationality, observability); results reported showing comparability across scenarios (simulations; sample sizes/number of scenarios not specified).
The framework’s emphasis on traceability and IT integration creates rich datasets suitable for econometric evaluation (causal impact on earnings, placement) and for training ML models (curriculum recommendation, skill-gap prediction).
Argument in paper about secondary uses of integrated data (conceptual); no datasets or empirical model training described.
Modelling artefacts (flowcharts/logigrams and algorigrams) can encode repeatable lesson-planning, assessment and audit algorithms.
Paper's modelling artefacts description (conceptual/tools).
Policy guidance should target pairing AI diffusion with training, management practices, and organizational reforms to maximize social returns, and evaluations should assess both short-run costs and longer-run productivity trajectories.
Synthesis of evidence that complementarities and contextual factors matter, combined with identified gaps in causal and longitudinal evidence, led to this policy recommendation in the review.
Empirical evidence highlights strong complementarities between AI technologies and human capital (digital skills), organizational practices, and management—models should incorporate these complementarities.
Multiple included studies reported interaction/moderation effects showing higher productivity when AI adoption co-occurs with higher digital skills or supportive management practices; synthesized recommendation follows from findings.
Many digital transformation studies implicate AI and automation as key drivers of observed productivity gains, conditional on complementary factors.
Synthesis of included studies where AI/automation was identified as a contributing technological component correlated with productivity improvements; review notes these effects are conditional on complements like skills and management.
Digital transformation components most consistently tied to productivity gains are technological integration (including automation/AI), process digitization, employee digital skills/training, and analytics/data-driven decision-making.
Synthesis of components extracted from included studies where reported associations between specific digital transformation elements and productivity outcomes were noted across multiple studies.
GenAI models enable personalization (tailored care pathways and risk predictions) by integrating multimodal data (notes, imaging, labs).
Technical capability demonstrated in model development literature and small-scale studies using multimodal inputs; the paper notes limited real-world longitudinal evidence of clinical outcome improvements from such personalization.
GenAI CDS can extend access to expertise in low-resource settings by supporting non-specialists or overburdened clinicians.
The paper cites the potential based on the capability of decision-support systems and early pilot evaluations; empirical real-world evidence and large-scale trials in low-resource settings are limited or not cited.
GenAI CDS can save clinician time (faster charting, literature summarization, guideline retrieval), potentially increasing capacity and access.
Reported process findings from early studies and human-AI interaction evaluations (qualitative and quantitative) and retrospective workflow analyses; specific sample sizes and effect magnitudes are not provided in the paper.
Generative AI clinical decision support (GenAI CDS) can improve diagnostic and treatment suggestions through synthesis of patient data and medical knowledge, reducing missed diagnoses and standardizing care where evidence is clear.
Early evaluations reported in the paper: controlled tasks, simulated patient vignettes, retrospective validation comparing model outputs to historical chart-verified diagnoses or guideline-concordant actions; no large-scale RCTs cited and sample sizes for cited studies are not specified in the paper.
Researchers should develop benchmark datasets and validated simulation testbeds (industry‑anonymized) to enable reproducible economic analysis.
Explicit research recommendation in the paper's implications and research agenda section.
Simulations that incorporate government policy constraints can inform industrial policy, subsidies, regulation aimed at supply‑chain resilience, and quantify environmental externalities relevant to circular economy measures.
Policy‑relevance arguments and recommendations in the paper; conceptual claim without empirical policy evaluation.
Digital twins and real‑time analytics can make simulations dynamic, enabling economic evaluation of shock scenarios and policy interventions.
Conceptual argument and forward‑looking recommendations in the paper; no empirical test of digital twin implementations provided.
AI/ML methods (including reinforcement learning, optimization, and causal methods) can be used to calibrate and validate simulation models against firm‑level and operational data.
Recommendations and discussion in the paper's implications section; conceptual suggestion rather than demonstrated implementation.
Integration should start from the outsourcing decision: outsourcing choices are treated as a primary lever for supply‑chain integration and closed‑loop operations.
Argument and framing in the paper's conceptual framework and roadmap; based on literature synthesis rather than empirical estimation.
Policy emphasis should include digital literacy, interoperable standards, data protection, and mechanisms to prevent new forms of exclusion or systemic concentration.
Policy prescription drawn from cross-case lessons and literature review; suggested best-practices rather than empirically tested interventions in the abstract.
Realizing the benefits of digital financial ecosystems at scale requires coordinated action by governments, financial institutions, fintechs, and regulators, plus stronger digital infrastructure, adaptive regulation, and innovation-driven strategies.
Prescriptive synthesis based on cross-case lessons and policy reviews; recommends multi-stakeholder coordination as necessary but provides no experimental evidence in the abstract to prove causality.
Digital financial ecosystems lower transaction costs and expand reach to underserved populations by delivering comprehensive services within unified digital environments.
Descriptive analysis of indicators (digital payment volumes, account ownership) and literature synthesis; illustrative case studies referenced but no detailed numerical sample or causal estimates in the abstract.
AI-enabled components (automated credit scoring, fraud detection, personalization) are central to efficiency gains and expanded inclusion within digital financial ecosystems.
Literature and policy review highlighting firm/platform examples and mechanisms (automated scoring, fraud detection); likely draws on firm-level examples though no microdata or sample sizes are specified in the abstract.
Digital financial ecosystems materially improve operational efficiency for financial service providers.
Descriptive analysis and literature synthesis referencing efficiency gains (lower transaction costs, automation); comparative case examples of platform integration. No explicit econometric estimates or sample sizes reported in the abstract.
Digital financial ecosystems materially improve financial accessibility in emerging and developing economies.
Synthesis of contemporary literature and policy reviews; comparative case studies of national/regional digital ecosystem implementations; descriptive indicators cited (e.g., increased account ownership, digital payment volumes). The paper does not provide a detailed empirical sample or causal identification strategy.
To capture economic value, companies must close the research-to-product gap by investing in end-to-end pipelines (data ops, monitoring, compressed models, privacy-preserving architectures).
Survey synthesis of technical and operational gaps indicating that end-to-end engineering is required for commercial success; recommendations for investors and firms.
Incorporating adversarial robustness testing, continual learning for concept drift, and explainability will improve incident response and model longevity.
Survey recommendations grounded in identified threats (adversarial attacks, drift) and operational needs (explainability for incident response) discussed in the literature.
Adopting hybrid detection (signature + anomaly) and multi-stage pipelines can reduce false positives and improve practical detection performance.
Survey recommendation based on examples and comparative analyses where multi-stage/hybrid pipelines improved some operational metrics in reported studies.
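A minimal sketch of the two-stage idea (signature set, scoring function, and threshold are all hypothetical): exact signature matching catches known attacks with high precision, and a high-threshold anomaly stage handles the rest.

```python
def hybrid_detect(event, signatures, anomaly_score, threshold=0.9):
    """Stage 1: exact signature match (high precision on known attacks).
    Stage 2: anomaly scoring only for events that pass stage 1, with a
    deliberately high threshold to keep false positives down while still
    flagging unknown attack patterns."""
    if event in signatures:
        return "known-attack"
    if anomaly_score(event) > threshold:
        return "anomalous"
    return "benign"
```

The staging matters: routing only signature-clean traffic to the anomaly model reduces the volume it must score and lets its threshold be tuned purely for novel-attack recall versus false positives.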
Using lightweight models or model-compression techniques (quantization, pruning, knowledge distillation) is recommended to enable edge deployment.
Recommendation in the survey informed by resource-constraint findings and by papers that evaluate compressed/lightweight models for edge inference.
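A minimal, framework-agnostic sketch of the simplest of these techniques, symmetric post-training int8 quantization (illustrative only; production toolchains add calibration and per-channel scales).

```python
import numpy as np

def quantize_int8(w):
    """Symmetric post-training quantization of a weight tensor to int8.

    Stores weights as int8 plus one float scale (4x smaller than
    float32); dequantization multiplies the scale back in. Max absolute
    error is bounded by half the scale."""
    max_abs = np.abs(w).max()
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale
```

The 4x size reduction (plus faster int8 arithmetic on supporting hardware) is what makes this attractive for the edge deployments the survey targets.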
Privacy concerns around sensitive telemetry motivate privacy-preserving approaches (e.g., federated learning, differential privacy) for training IDS without centralizing raw data.
Discussion across papers and recommendations in the survey advocating for federated/privacy-preserving methods due to data sensitivity and regulation.
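A minimal sketch of the server-side aggregation step in federated averaging (FedAvg), the canonical privacy-preserving setup these recommendations point to; the local-training step is omitted and the weight vectors are illustrative.

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """One aggregation round of federated averaging.

    Each client trains locally on its own telemetry and uploads only its
    model weight vector (never raw data); the server returns the average
    weighted by local dataset size."""
    sizes = np.asarray(client_sizes, dtype=float)
    stacked = np.stack(client_weights)          # shape: (n_clients, n_params)
    frac = (sizes / sizes.sum())[:, None]       # per-client weighting
    return (stacked * frac).sum(axis=0)
```

Raw telemetry stays on-device throughout; only parameter updates cross the network, which is the property that addresses the data-sensitivity and regulatory concerns raised above (differential privacy can additionally be layered on the updates themselves).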
Machine-learning–based intrusion detection systems (ML-IDS) are a promising solution for IoT because they can detect complex, evolving attacks that signature-based systems miss.
Synthesis of recent ML-based IoT IDS literature reviewed in the survey noting ML methods' ability to learn patterns and adapt to new threats; comparative analyses of reported detection capability across studies.
Practical SME guidance: low‑cost tactics (start with high‑value small pilots, build leadership buy‑in, form partnerships to build sensing, and use intermediaries to bridge institutional gaps) increase the chance of successful AI adoption for resource‑constrained SMEs.
Actionable guidance distilled from recurring recommendations across the literature corpus and the proposed framework; presented as practitioner implications rather than empirically validated recipes.
Policy implication: reducing coordination costs (via institutional bridging), subsidizing sensing and pilot projects, and providing leadership/managerial training can raise AI adoption and the returns to AI among SMEs.
Policy recommendations derived from the conceptual framework and literature synthesis across the 72‑article corpus; presented as implications rather than empirically tested interventions.