Evidence (2215 claims)

Claim counts by category: Adoption (5126), Productivity (4409), Governance (4049), Human-AI Collaboration (2954), Labor Markets (2432), Org Design (2273), Innovation (2215), Skills & Training (1902), Inequality (1286).
Evidence Matrix
Claim counts by outcome category and direction of finding.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 369 | 105 | 58 | 432 | 972 |
| Governance & Regulation | 365 | 171 | 113 | 54 | 713 |
| Research Productivity | 229 | 95 | 33 | 294 | 655 |
| Organizational Efficiency | 354 | 82 | 58 | 34 | 531 |
| Technology Adoption Rate | 277 | 115 | 63 | 27 | 486 |
| Firm Productivity | 273 | 33 | 68 | 10 | 389 |
| AI Safety & Ethics | 112 | 177 | 43 | 24 | 358 |
| Output Quality | 228 | 61 | 23 | 25 | 337 |
| Market Structure | 105 | 118 | 81 | 14 | 323 |
| Decision Quality | 154 | 68 | 33 | 17 | 275 |
| Employment Level | 68 | 32 | 74 | 8 | 184 |
| Fiscal & Macroeconomic | 74 | 52 | 32 | 21 | 183 |
| Skill Acquisition | 85 | 31 | 38 | 9 | 163 |
| Firm Revenue | 96 | 30 | 22 | — | 148 |
| Innovation Output | 100 | 11 | 20 | 11 | 143 |
| Consumer Welfare | 66 | 29 | 35 | 7 | 137 |
| Regulatory Compliance | 51 | 61 | 13 | 3 | 128 |
| Inequality Measures | 24 | 66 | 31 | 4 | 125 |
| Task Allocation | 64 | 6 | 28 | 6 | 104 |
| Error Rate | 42 | 47 | 6 | — | 95 |
| Training Effectiveness | 55 | 12 | 10 | 16 | 93 |
| Worker Satisfaction | 42 | 32 | 11 | 6 | 91 |
| Task Completion Time | 71 | 5 | 3 | 1 | 80 |
| Wages & Compensation | 38 | 13 | 19 | 4 | 74 |
| Team Performance | 41 | 8 | 15 | 7 | 72 |
| Hiring & Recruitment | 39 | 4 | 6 | 3 | 52 |
| Automation Exposure | 17 | 15 | 9 | 5 | 46 |
| Job Displacement | 5 | 28 | 12 | — | 45 |
| Social Protection | 18 | 8 | 6 | 1 | 33 |
| Developer Productivity | 25 | 1 | 2 | 1 | 29 |
| Worker Turnover | 10 | 12 | — | 3 | 25 |
| Creative Output | 15 | 5 | 3 | 1 | 24 |
| Skill Obsolescence | 3 | 18 | 2 | — | 23 |
| Labor Share of Income | 7 | 4 | 9 | — | 20 |
Innovation
Regression-based and other supervised learning approaches remain dominant.
Aggregated reporting from the 42-study review showing a prevalence of regression and supervised ML methods in the literature sample.
The reviewed studies rely on feature-engineered sentiment indices derived from lexicons or sentence-level classification.
Review synthesis noting frequent use of lexicon-based sentiment scoring and sentence-level classification to produce engineered sentiment features across the sampled studies.
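As an illustration of what this feature engineering looks like in practice, here is a minimal Python sketch of lexicon-based, sentence-level sentiment scoring; the word lists, tokenization, and averaging scheme are illustrative placeholders, not taken from any of the reviewed studies.

```python
# Minimal sketch of lexicon-based sentiment feature engineering of the kind
# the reviewed studies describe. Word lists and weighting are illustrative
# placeholders, not those of any specific paper.
POSITIVE = {"gain", "beat", "bullish", "upgrade", "strong"}
NEGATIVE = {"loss", "miss", "bearish", "downgrade", "weak"}

def sentiment_index(sentences: list[str]) -> float:
    """Score each sentence by lexicon hits, then average to a document index."""
    scores = []
    for sent in sentences:
        tokens = [t.strip(".,!?") for t in sent.lower().split()]
        pos = sum(t in POSITIVE for t in tokens)
        neg = sum(t in NEGATIVE for t in tokens)
        if pos + neg:
            scores.append((pos - neg) / (pos + neg))
    return sum(scores) / len(scores) if scores else 0.0

print(sentiment_index(["Strong earnings beat estimates.", "Guidance was weak."]))
```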
Most studies focus on the U.S. stock market.
Findings from the review of 42 studies indicating a majority of the reviewed works concentrate on U.S. markets (geographic coding/synthesis across studies reported by the authors).
Machine learning methods have been widely used to predict stock prices using technical indicators and sentiment features, mostly extracted from social media and news.
Systematic review of the literature summarized in the paper (corpus of 42 studies published 2014–2025) reporting that many reviewed studies use ML to predict stock prices and that sentiment inputs commonly come from social media and news sources.
Visa recapture would reclaim approximately 339,000 unused visas from prior years, delivering immediate backlog relief under existing statutory authority.
Authors' calculation/estimate of cumulative unused employment-based visas available for recapture (presumably based on historical visa usage statistics from the Department of State); the excerpt does not show the year-by-year accounting or the assumptions used to reach 339,000.
Dependent exemption (excluding spouses and minor children from counting toward the annual cap) would ensure that all 140,000 visas are allocated to independently qualified principal workers rather than divided among family members.
Policy design claim; premise depends on current family-derivative usage of the cap and would require counting statistics (number of visas currently used by dependents) to quantify effect—those counts are not provided in the excerpt.
Increasing the annual employment-based visa ceiling would alleviate the overall shortage that persists regardless of allocation methods.
Logical/policy claim that raising the statutory cap increases supply; the excerpt does not include a quantitative elasticity, model, or simulation showing the required increase or magnitude of backlog reduction.
Phasing out the seven-percent per-country cap would gradually transition visa allocation from nationality-based limits to a demand-driven system, allowing applicants from high-demand countries to advance in the backlog without causing abrupt increases in wait times for those from low-demand countries.
Policy proposal with implied simulation/modeling rationale (demand-driven allocation); the excerpt does not provide a formal model, simulation parameters, or empirical test showing the gradual, non-disruptive transition.
This study extends the technology–organisation–environment (TOE) theory by providing comprehensive empirical evidence of internal and external factors affecting blockchain technology (BT) adoption.
Use of the TOE framework to structure empirical analysis on 27,400 firm-year observations (2013–2021) linking technology (AI), organisation (corporate culture), and environment (market competition, government support, digital financial development) variables to BT adoption outcomes.
Environmental factors—market competition, government support, and the level of digital financial development across provinces—positively affect BT adoption.
Empirical tests using the 27,400 firm-year sample (2013–2021) incorporating provincial- and market-level environmental variables (market competition, measures of government support, and provincial digital financial development indices) alongside firm-level data and BT adoption coding from annual reports.
Externally oriented corporate cultures, specifically competition-oriented and creation-oriented cultures, positively affect BT adoption.
Same sample of 27,400 firm-year observations (2013–2021). Corporate culture indicators (competition- and creation-orientation) collected via Python web crawler from the management discussion & analysis (MD&A) sections of annual reports; BT adoption measured by manual annual report keyword search and content validation.
AI technology positively affects blockchain technology (BT) adoption.
Empirical analysis of 27,400 firm-year observations of Chinese A-share listed firms (2013–2021). AI technology measured using AI patent data collected via a Python web crawler from annual report MD&A sections and China National Knowledge Infrastructure (CNKI). BT adoption identified by manual search of annual reports for the keyword 'blockchain technology' and content assessment to confirm adoption status.
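To make the study design concrete, the following is a minimal sketch of the kind of pooled firm-year adoption regression such an analysis implies, run on synthetic data; the column names, functional form, and estimator are our assumptions for illustration, not the paper's actual specification.

```python
# Illustrative sketch (not the paper's code): a pooled logit of blockchain
# adoption on an AI-intensity measure plus environment controls, estimated on
# a synthetic firm-year panel. All column names are hypothetical placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000  # stand-in for the paper's 27,400 firm-year observations
df = pd.DataFrame({
    "ai_patents": rng.poisson(2, n).astype(float),  # AI technology measure
    "competition": rng.normal(0, 1, n),             # market competition proxy
    "gov_support": rng.normal(0, 1, n),             # government support proxy
})
# Synthetic adoption outcome with a built-in positive AI effect.
logit_p = -2 + 0.4 * df["ai_patents"] + 0.3 * df["competition"]
df["bt_adopt"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

X = sm.add_constant(df[["ai_patents", "competition", "gov_support"]])
print(sm.Logit(df["bt_adopt"], X).fit(disp=0).summary())
```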
To alleviate adverse spatial spillovers, it is necessary to strengthen interactive development between digital–real integration and New Quality Productive Forces, foster interregional cooperation, and optimize resource allocation.
Policy recommendations derived from the paper's empirical findings (bidirectional positive relationship and negative spatial spillovers) — normative conclusion based on observed results.
The promotional effect of digital–real integration on New Quality Productive Forces is slightly stronger than the reverse effect (New Quality Productive Forces on digital–real integration).
Comparison of estimated coefficients from the GS3SLS spatial simultaneous-equations model; the paper reports that the coefficient for the integration→productive-forces direction is marginally larger than the coefficient for the reverse direction.
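For orientation, a generic two-equation spatial simultaneous system of the kind GS3SLS estimates can be written as follows; the notation (DRI for digital–real integration, NQPF for New Quality Productive Forces, W for the spatial weights matrix) is an illustrative reconstruction, not the paper's own specification.

```latex
% Generic spatial simultaneous-equations system (illustrative notation):
\begin{aligned}
\mathrm{DRI}_{i}  &= \alpha_1\,\mathrm{NQPF}_{i} + \lambda_1 (W\,\mathrm{DRI})_{i}  + X_{1i}\beta_1 + \varepsilon_{1i},\\
\mathrm{NQPF}_{i} &= \alpha_2\,\mathrm{DRI}_{i}  + \lambda_2 (W\,\mathrm{NQPF})_{i} + X_{2i}\beta_2 + \varepsilon_{2i}.
\end{aligned}
```

Under this notation, the reported finding corresponds to the estimate of $\alpha_2$ (integration→productive forces) being marginally larger than that of $\alpha_1$ (the reverse direction), with spatial spillovers entering through the $W$ terms.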
Cost–benefit analyses in AI economics should internalize long-term, hard-to-quantify harms (autonomy loss, social trust erosion) rather than rely solely on market price signals.
Normative critique of standard welfare analysis with literature support from ethics and political philosophy; no empirical recalculation of cost–benefit models provided.
Investing in privacy-preserving AI methods (differential privacy, federated learning, synthetic data) and governance institutions is warranted as an alternative to atomized data markets.
Policy and technical recommendation based on literature on privacy-preserving techniques and governance models; paper does not present original technical evaluations or cost–benefit analyses.
Economists modeling AI markets should incorporate non-pecuniary harms, externalities, and moral constraints when assessing welfare, innovation trade-offs, and optimal policy.
Normative recommendation grounded in philosophical argument and critique of standard welfare frameworks; not supported by empirical methodological comparison in the paper.
The paper's conceptual contribution challenges macro-centric crisis narratives by centering social mechanisms (support systems, peer benchmarking, institutional trust) as critical determinants of small-firm adaptation.
Theoretical framing (novel socially embedded analytical lens) combined with empirical results showing the importance of networks, identities, and normative motivations in explaining adaptation outcomes relative to macro-structural explanations.
The rapid rise of AI-enhanced robotics since the 2010s signals a shift toward increased embedding of AI into hardware systems, accelerating cross-sector spillovers.
Interpretation based on the observed acceleration in AI-enhanced robotics patents (filings 1980–2019) and the convergence patterns reported in the paper. This is an inference drawn from patenting trends rather than a direct measurement of cross-sector spillovers.
Crises (pandemics, supply shocks) tend to accelerate digital and AI adoption, potentially shortening adjustment time to new technological regimes.
Interpretation of recent historical episodes (e.g., COVID-19) and diffusion literature; qualitative assertion without presented microeconometric identification.
AI and the green transformation function as modern long-wave drivers by improving operational efficiency, enabling new products and services, and reorganizing competitive hierarchies.
Conceptual argument linking general-purpose technology literature to observed/anticipated capabilities of AI and green tech; literature synthesis without original empirical tests.
Schumpeterian cycles are driven by clusters of technological innovations and entrepreneurial activity; AI and green technologies represent contemporary innovation clusters with strong potential for productive disruption.
Application of Schumpeterian theory to contemporary technology trends via literature synthesis and conceptual argument (no empirical quantification provided).
Integrating lived temporality into design and evaluation is necessary to preserve and enhance the qualitative aspects of human life in transhumanist transformation.
Normative/philosophical argument supported by literature synthesis and conceptual reasoning; no empirical demonstration (N/A).
AI/ML methods can reduce reliance on animal models by simulating biology, optimizing experiments, and prioritizing candidate drugs—supporting the 3Rs (Replacement, Reduction, Refinement)—but this is contingent on rigorous validation and ethical oversight.
Conceptual and methodological arguments (Manju V et al.) and cited examples of validated in silico alternatives and experiment‑optimization workflows; no single trial or sample size—recommendation based on synthesis of studies and caveats about validation and regulation.
CDRG‑RSF identified five prognostic genes including UBASH3B, which is associated with reduced NK activation and may mediate drug resistance—making it a candidate therapeutic target.
Feature selection within the CDRG‑RSF model yielded five prognostic genes; UBASH3B was shown to correlate with immune suppression (reduced NK activation), with links to drug resistance inferred from associational analyses (functional validation not specified in the summary).
PIGRS prognostic model (LASSO + Gradient Boosting Machine ensemble using 15 programmed‑cell‑death immune genes) outperformed most published LUAD prognostic models.
Prognostic modeling using LASSO feature selection followed by GBM ensemble on a 15‑gene panel; comparative benchmarking against published LUAD prognostic models reported superior performance (metrics and external cohort testing referenced).
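A simplified sketch of the two-stage LASSO→GBM pipeline is shown below on synthetic data; note that PIGRS targets survival outcomes (Cox-type losses), whereas this illustration substitutes a continuous risk score for brevity, so it is a sketch of the pattern rather than the published model.

```python
# Simplified sketch of a LASSO -> GBM two-stage pipeline like the one PIGRS
# describes (the actual model targets survival outcomes; a continuous risk
# score stands in here). Data are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 200))            # expression of 200 candidate genes
risk = X[:, :15] @ rng.normal(size=15) + rng.normal(scale=0.5, size=300)

# Stage 1: LASSO screens the candidate genes down to a sparse panel.
lasso = LassoCV(cv=5).fit(X, risk)
panel = np.flatnonzero(lasso.coef_)        # PIGRS reports a 15-gene panel
print(f"genes retained: {panel.size}")

# Stage 2: a gradient-boosting model fit on the selected panel only.
gbm = GradientBoostingRegressor(random_state=0).fit(X[:, panel], risk)
print(f"train R^2 on selected panel: {gbm.score(X[:, panel], risk):.3f}")
```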
Multi‑omics integration and consensus clustering (10 methods) in lung adenocarcinoma (LUAD) identified three molecular subtypes (CS1–CS3) with distinct prognoses.
PIGRS study integrated transcriptome, DNA methylation, and somatic mutation data and applied ten clustering algorithms to define molecular subtypes; reported three subtypes with differing survival outcomes (external validation cohorts used).
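The consensus step can be illustrated with a co-association sketch: run several base clusterers (three here stand in for the study's ten), count how often each pair of samples co-clusters, and cut the resulting similarity matrix into final subtypes. The data, algorithms, and cluster counts below are placeholders.

```python
# Sketch of co-association consensus clustering. Synthetic 2-D data stand in
# for multi-omics features; three base clusterers stand in for ten.
import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering, SpectralClustering
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=120, centers=3, random_state=0)
base = [
    KMeans(n_clusters=3, n_init=10, random_state=0),
    AgglomerativeClustering(n_clusters=3),
    SpectralClustering(n_clusters=3, random_state=0),
]

# Co-association: fraction of base clusterings placing each pair together.
n = len(X)
coassoc = np.zeros((n, n))
for model in base:
    labels = model.fit_predict(X)
    coassoc += labels[:, None] == labels[None, :]
coassoc /= len(base)

# Final consensus subtypes cut from the co-association similarity.
final = AgglomerativeClustering(n_clusters=3, metric="precomputed",
                                linkage="average").fit_predict(1 - coassoc)
print(np.bincount(final))
```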
Data augmentation with Gaussian noise improved DNN performance for small sample cross‑omics training sets.
Cross‑omics study applied Gaussian noise augmentation during DNN training on small paired viral datasets and observed improved model performance and better recovery of differential expression analysis (DEA) results relative to non‑augmented training.
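A minimal sketch of the augmentation step, assuming independent additive noise; the noise scale and replication factor are tunable assumptions, not the study's reported values.

```python
# Minimal sketch of Gaussian-noise augmentation for a small training set:
# each original sample is replicated with small perturbations before DNN
# training. Noise scale and copy count are illustrative assumptions.
import numpy as np

def augment_gaussian(X: np.ndarray, copies: int = 5, sigma: float = 0.05,
                     seed: int = 0) -> np.ndarray:
    rng = np.random.default_rng(seed)
    noisy = [X + rng.normal(scale=sigma, size=X.shape) for _ in range(copies)]
    return np.vstack([X, *noisy])

X_small = np.random.rand(12, 100)          # e.g., 12 paired omics samples
X_train = augment_gaussian(X_small)
print(X_small.shape, "->", X_train.shape)  # (12, 100) -> (72, 100)
```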
Dynamic Ensemble Selection‑Performance (DES‑P) produced parsimonious, high‑accuracy classifiers within the EPheClass pipeline.
Use of DES‑P for model selection in EPheClass reportedly yielded small, high‑performing ensembles (example: periodontal disease AUC = 0.973 with 13 features).
Applying centred log‑ratio (CLR) transformation and RFE to compositional microbiome data improves model parsimony and supports reproducibility in diagnostic classifiers.
EPheClass preprocessing: CLR to handle compositional 16S data and RFE to reduce feature sets; this yielded small feature panels (e.g., 13 features) with high performance, with an emphasis on rigorous validation to avoid the overfitting issues reported in prior work.
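A compact sketch of this preprocessing on synthetic counts; the pseudocount, base classifier, and 13-feature target are illustrative choices echoing the reported panel size, not the pipeline's exact settings.

```python
# Sketch of the preprocessing EPheClass describes: centred log-ratio (CLR)
# transform for compositional 16S counts, then recursive feature elimination
# (RFE). Counts, pseudocount, and panel size are illustrative.
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression

def clr(counts: np.ndarray, pseudo: float = 0.5) -> np.ndarray:
    """log(x_i / geometric_mean(x)) per sample, with a pseudocount for zeros."""
    logx = np.log(counts + pseudo)
    return logx - logx.mean(axis=1, keepdims=True)

rng = np.random.default_rng(0)
counts = rng.poisson(5, size=(100, 60))    # 100 samples x 60 taxa
y = rng.integers(0, 2, size=100)           # disease vs. control labels

X = clr(counts)
rfe = RFE(LogisticRegression(max_iter=1000), n_features_to_select=13).fit(X, y)
print("taxa retained:", np.flatnonzero(rfe.support_))
```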
The same EPheClass approach produced successful parsimonious classifiers for IBD (26 features) and antibiotic exposure (22 features).
EPheClass applied to additional microbiome outcomes (IBD and antibiotic exposure) with RFE selecting 26 and 22 features respectively; performance described as 'successful' (exact AUCs not provided in summary).
Firms and hospitals need differentiated investment and governance strategies by interaction level: integration and workflow redesign for AI-assisted; training and decision-support protocols for AI-augmented; process redesign, liability allocation, and oversight for AI-automated systems.
Prescriptive recommendations derived from cross-case findings (n=4) and the conceptual mapping to innovation management implications.
Different interaction levels produce heterogeneous productivity gains (throughput increases, faster/safer decisions, process cost reductions); economic evaluation should be level-specific.
Theoretical/generalization drawn from observed effects across the four qualitative cases and conceptual analysis linking interaction level to types of productivity gains.
Adoption of healthcare AI is better framed as an evolution toward 'Human+' professionals (complementarity) rather than wholesale replacement of clinicians.
Cross-case interpretive analysis of the four qualitative case studies and theoretical framing with Bolton et al. (2018); presented as the paper's core insight.
AI-automated solutions streamline end-to-end processes (e.g., automated reporting pipelines) while keeping humans in supervisory/exception roles, producing process reconfiguration and efficiency gains and shifting roles toward exception management and governance.
Observed characteristics of the AI-automated case(s) in the qualitative multiple case study (n=4) and synthesized in cross-case comparison.
AI-assisted applications automate highly repetitive tasks (e.g., triage routing, routine image preprocessing), producing increased service availability and throughput while freeing clinician time but requiring oversight and workflow integration.
Empirical observations from one or more of the four qualitative case studies illustrating AI-assisted use-cases; interpreted via the Bolton et al. framework and cross-case comparison.
Researchers should develop benchmark datasets and validated simulation testbeds (industry‑anonymized) to enable reproducible economic analysis.
Explicit research recommendation in the paper's implications and research agenda section.
Simulations that incorporate government policy constraints can inform industrial policy, subsidies, and regulation aimed at supply‑chain resilience, and can quantify environmental externalities relevant to circular-economy measures.
Policy‑relevance arguments and recommendations in the paper; conceptual claim without empirical policy evaluation.
Digital twins and real‑time analytics can make simulations dynamic, enabling economic evaluation of shock scenarios and policy interventions.
Conceptual argument and forward‑looking recommendations in the paper; no empirical test of digital twin implementations provided.
AI/ML methods (including reinforcement learning, optimization, and causal methods) can be used to calibrate and validate simulation models against firm‑level and operational data.
Recommendations and discussion in the paper's implications section; conceptual suggestion rather than demonstrated implementation.
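As a toy example of what calibration means here: choose simulator parameters that minimize the distance between simulated and observed operational metrics. The simulator, target moments, and optimizer below are stand-ins, not anything implemented in the paper.

```python
# Toy sketch of simulation calibration against operational data: fit
# simulator parameters by minimizing squared error on observed metrics.
import numpy as np
from scipy.optimize import minimize

observed = np.array([12.0, 0.85])          # e.g., lead time, fill rate

def simulate(params: np.ndarray) -> np.ndarray:
    """Stand-in simulator mapping (capacity, reorder level) to metrics."""
    capacity, reorder = params
    lead_time = 20.0 / capacity + 0.5 * reorder
    fill_rate = 1.0 - np.exp(-capacity * reorder)
    return np.array([lead_time, fill_rate])

def loss(params: np.ndarray) -> float:
    return float(np.sum((simulate(params) - observed) ** 2))

fit = minimize(loss, x0=np.array([1.0, 1.0]), method="Nelder-Mead")
print("calibrated params:", fit.x.round(3), "loss:", round(loss(fit.x), 6))
```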
Integration should start from the outsourcing decision: outsourcing choices are treated as a primary lever for supply‑chain integration and closed‑loop operations.
Argument and framing in the paper's conceptual framework and roadmap; based on literature synthesis rather than empirical estimation.
To capture economic value, companies must close the research-to-product gap by investing in end-to-end pipelines (data ops, monitoring, compressed models, privacy-preserving architectures).
Survey synthesis of technical and operational gaps indicating that end-to-end engineering is required for commercial success; recommendations for investors and firms.
Incorporating adversarial robustness testing, continual learning for concept drift, and explainability will improve incident response and model longevity.
Survey recommendations grounded in identified threats (adversarial attacks, drift) and operational needs (explainability for incident response) discussed in the literature.
Adopting hybrid detection (signature + anomaly) and multi-stage pipelines can reduce false positives and improve practical detection performance.
Survey recommendation based on examples and comparative analyses where multi-stage/hybrid pipelines improved some operational metrics in reported studies.
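A minimal sketch of such a two-stage hybrid pipeline: a cheap signature rule set fires first on known patterns, and an anomaly detector screens the remainder. The rules, features, and model choice are illustrative placeholders.

```python
# Two-stage hybrid IDS sketch: signature rules first, anomaly model second.
import numpy as np
from sklearn.ensemble import IsolationForest

SIGNATURES = {(23, "telnet-bruteforce"), (2323, "mirai-scan")}  # port, label

def match_signature(flow: dict) -> str | None:
    for port, label in SIGNATURES:
        if flow["dst_port"] == port:
            return label
    return None

# Stage 2: anomaly model trained on (assumed) benign traffic features.
rng = np.random.default_rng(0)
benign = rng.normal(0, 1, size=(500, 4))
anomaly_model = IsolationForest(random_state=0).fit(benign)

def classify(flow: dict, features: np.ndarray) -> str:
    sig = match_signature(flow)                 # cheap, low false positives
    if sig:
        return f"alert:{sig}"
    if anomaly_model.predict(features.reshape(1, -1))[0] == -1:
        return "alert:anomaly"                  # flag for analyst review
    return "benign"

print(classify({"dst_port": 23}, np.zeros(4)))
print(classify({"dst_port": 443}, np.full(4, 8.0)))
```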
Using lightweight models or model-compression techniques (quantization, pruning, knowledge distillation) is recommended to enable edge deployment.
Recommendation in the survey informed by resource-constraint findings and by papers that evaluate compressed/lightweight models for edge inference.
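For intuition, a back-of-envelope sketch of post-training int8 weight quantization, one of the compression techniques recommended; production pipelines also calibrate activations and often quantize per-channel, which this omits.

```python
# Core affine mapping for symmetric int8 post-training weight quantization.
import numpy as np

def quantize_int8(w: np.ndarray):
    scale = np.abs(w).max() / 127.0            # symmetric per-tensor scale
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.random.randn(256, 256).astype(np.float32)
q, s = quantize_int8(w)
err = np.abs(w - dequantize(q, s)).max()
print(f"memory: {w.nbytes} -> {q.nbytes} bytes; max abs error {err:.4f}")
```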
Privacy concerns around sensitive telemetry motivate privacy-preserving approaches (e.g., federated learning, differential privacy) for training IDS without centralizing raw data.
Discussion across papers and recommendations in the survey advocating for federated/privacy-preserving methods due to data sensitivity and regulation.
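A minimal sketch of federated averaging (FedAvg), the canonical pattern behind this recommendation: clients train on private telemetry and only model weights are aggregated centrally, so raw data never leaves the device. The least-squares objective and shapes are illustrative.

```python
# FedAvg sketch: local gradient steps on private data, server-side weighted
# averaging of the resulting weights only.
import numpy as np

def local_update(w: np.ndarray, X: np.ndarray, y: np.ndarray,
                 lr: float = 0.1) -> np.ndarray:
    """One gradient step of least-squares on this client's private data."""
    grad = X.T @ (X @ w - y) / len(y)
    return w - lr * grad

rng = np.random.default_rng(0)
w_global = np.zeros(8)
clients = [(rng.normal(size=(50, 8)), rng.normal(size=50)) for _ in range(5)]

for _ in range(10):
    local = [local_update(w_global, X, y) for X, y in clients]
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    # Server aggregates weights only, weighted by client data size.
    w_global = np.average(local, axis=0, weights=sizes)
print(w_global.round(3))
```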
Machine-learning–based intrusion detection systems (ML-IDS) are a promising solution for IoT because they can detect complex, evolving attacks that signature-based systems miss.
Synthesis of recent ML-based IoT IDS literature reviewed in the survey noting ML methods' ability to learn patterns and adapt to new threats; comparative analyses of reported detection capability across studies.
Policy levers such as privacy-preserving markets for personalization data (data trusts, opt-in marketplaces) and regulation of algorithmic constraints (fairness mandates, right-to-explanation) are viable approaches to manage risks from robots equipped with recommender systems (RS).
Policy recommendations drawing on regulatory and market-design literature; conceptual proposals not empirically evaluated in this work.
RS-enabled personalization creates opportunities for platformization of social-robot services, producing data network effects, lock-in, and cross-selling possibilities for firms.
Market-structure analysis and economic theory applied to RS-enabled services; no empirical market data provided.
Ethical constraints can and should be treated as first-class inputs to the ranking/selection process (e.g., safety filters, fairness constraints) to ensure value alignment in robots.
Conceptual design recommendation grounded in constrained optimization literature; no empirical demonstrations provided.
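One way to read "first-class inputs" is the constrained-ranking pattern sketched below: hard safety predicates filter the candidate set before scoring, and a soft fairness penalty enters the objective. All names, predicates, and weights are hypothetical.

```python
# Constrained ranking sketch: hard safety filter, then penalized scoring.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    reward: float         # task utility score from the recommender
    unsafe: bool          # hard constraint, e.g., violates proximity limits
    fairness_cost: float  # soft constraint, e.g., exposure imbalance

def rank(actions: list[Action], fairness_weight: float = 0.5) -> list[Action]:
    feasible = [a for a in actions if not a.unsafe]          # hard filter
    return sorted(feasible,
                  key=lambda a: a.reward - fairness_weight * a.fairness_cost,
                  reverse=True)

candidates = [
    Action("suggest_break", 0.7, False, 0.1),
    Action("upsell_feature", 0.9, False, 0.6),
    Action("override_user", 1.0, True, 0.0),  # excluded regardless of reward
]
print([a.name for a in rank(candidates)])
```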