Evidence (2432 claims)
- Adoption: 5126 claims
- Productivity: 4409 claims
- Governance: 4049 claims
- Human-AI Collaboration: 2954 claims
- Labor Markets: 2432 claims
- Org Design: 2273 claims
- Innovation: 2215 claims
- Skills & Training: 1902 claims
- Inequality: 1286 claims
Evidence Matrix
Claim counts by outcome category and direction of finding.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 369 | 105 | 58 | 432 | 972 |
| Governance & Regulation | 365 | 171 | 113 | 54 | 713 |
| Research Productivity | 229 | 95 | 33 | 294 | 655 |
| Organizational Efficiency | 354 | 82 | 58 | 34 | 531 |
| Technology Adoption Rate | 277 | 115 | 63 | 27 | 486 |
| Firm Productivity | 273 | 33 | 68 | 10 | 389 |
| AI Safety & Ethics | 112 | 177 | 43 | 24 | 358 |
| Output Quality | 228 | 61 | 23 | 25 | 337 |
| Market Structure | 105 | 118 | 81 | 14 | 323 |
| Decision Quality | 154 | 68 | 33 | 17 | 275 |
| Employment Level | 68 | 32 | 74 | 8 | 184 |
| Fiscal & Macroeconomic | 74 | 52 | 32 | 21 | 183 |
| Skill Acquisition | 85 | 31 | 38 | 9 | 163 |
| Firm Revenue | 96 | 30 | 22 | — | 148 |
| Innovation Output | 100 | 11 | 20 | 11 | 143 |
| Consumer Welfare | 66 | 29 | 35 | 7 | 137 |
| Regulatory Compliance | 51 | 61 | 13 | 3 | 128 |
| Inequality Measures | 24 | 66 | 31 | 4 | 125 |
| Task Allocation | 64 | 6 | 28 | 6 | 104 |
| Error Rate | 42 | 47 | 6 | — | 95 |
| Training Effectiveness | 55 | 12 | 10 | 16 | 93 |
| Worker Satisfaction | 42 | 32 | 11 | 6 | 91 |
| Task Completion Time | 71 | 5 | 3 | 1 | 80 |
| Wages & Compensation | 38 | 13 | 19 | 4 | 74 |
| Team Performance | 41 | 8 | 15 | 7 | 72 |
| Hiring & Recruitment | 39 | 4 | 6 | 3 | 52 |
| Automation Exposure | 17 | 15 | 9 | 5 | 46 |
| Job Displacement | 5 | 28 | 12 | — | 45 |
| Social Protection | 18 | 8 | 6 | 1 | 33 |
| Developer Productivity | 25 | 1 | 2 | 1 | 29 |
| Worker Turnover | 10 | 12 | — | 3 | 25 |
| Creative Output | 15 | 5 | 3 | 1 | 24 |
| Skill Obsolescence | 3 | 18 | 2 | — | 23 |
| Labor Share of Income | 7 | 4 | 9 | — | 20 |
Labor Markets
The study builds and calibrates an integrated system dynamics model that connects demographics, labor supply, economic output, and public finance.
Method: development and calibration of a system dynamics model using official statistics for demographics, labor, output, and fiscal variables (model structure and calibration described in paper).
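The stock-flow logic of such a model can be sketched in a few lines; the parameter values below are invented for illustration and are not the authors' calibration:

```python
# Minimal system-dynamics sketch (hypothetical parameters, not the paper's
# calibration): stocks for working-age population and public debt, flows
# linking demographics -> labor supply -> output -> fiscal balance.

def simulate(years=30, pop=10.0, debt=5.0,
             pop_growth=-0.005,      # shrinking working-age population
             participation=0.78,     # labor-force participation rate
             productivity=8.0,       # output per worker
             tax_rate=0.35, spending=22.0):
    path = []
    for _ in range(years):
        labor = participation * pop
        output = productivity * labor
        balance = tax_rate * output - spending   # surplus (+) / deficit (-)
        debt -= balance                          # deficits accumulate as debt
        pop *= (1.0 + pop_growth)
        path.append({"pop": pop, "output": output, "debt": debt})
    return path

path = simulate()
print(f"final output {path[-1]['output']:.1f}, final debt {path[-1]['debt']:.1f}")
```

With these toy numbers, a shrinking population erodes output and turns small deficits into growing debt, which is the kind of feedback the calibrated model traces.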
The paper ends with policy implications and recommends periodic evaluation and the integration of AI-related risks into financial governance.
Policy recommendations section in the paper advocating for periodic evaluation and AI-risk integration into financial governance (method: prescriptive/policy analysis based on review findings).
Specialized services in the shadow digital economy (SDE) that require further study are grouped and highlighted.
Section of the paper grouping and highlighting specialized services for future research (method: expert-driven identification from review; no quantitative prioritization stated).
We introduce a concise conceptual model of a 'shadow' project for designing SDE products or services, detailing participant roles and project composition.
Presentation of a conceptual model within the paper (method: model construction and descriptive exposition; no empirical testing/sample).
The paper proposes a clear classification of criminally oriented products and services in the SDE.
Taxonomy/classification produced in the paper (method: conceptual taxonomy from literature and analysis; no quantitative validation reported).
We identify a structured set of labor‑market roles within the SDE model.
Analytical identification and description of roles within the paper (method: conceptual modeling and qualitative role-mapping; sample size N/A).
We propose an integrated definition of the shadow digital economy that synthesizes technical and economic definitions.
Conceptual analysis and literature synthesis in the paper that combines technical and economic definitions into a single integrated definition (method: review/synthesis; no numeric sample).
General US employment for prime age workers (age 25–54) is currently high (~80%).
Paper cites a current employment rate of 80% for prime-age workers; likely based on national labor statistics though the exact data source and year are not specified in the excerpt.
The growth effect of AI exhibits industry heterogeneity: high‑tech manufacturing industries benefit more significantly.
Heterogeneity/subgroup regressions on the 2003–2017 Chinese industry panel showing larger estimated AI effects in high‑tech manufacturing sectors.
The positive effect of AI on industry growth increases over time.
Dynamic/DID analysis across the 2003–2017 panel showing that the estimated treatment effect grows larger in later periods.
The industry growth rate of the treatment group (industries with intensive AI application or high AI patent concentration) is significantly higher than that of the control group.
DID comparison between treatment and control industry groups in the China 2003–2017 panel, where treatment is defined by intensive AI application or AI patent concentration.
AI technology innovation has a significant positive impact on economic growth.
Panel data for Chinese industries from 2003 to 2017 analyzed using a difference-in-differences (DID) approach; main specification estimates the effect of AI-related innovation on economic growth.
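The DID design described above can be sketched on synthetic data (the panel dimensions, treatment share, shock year, and true effect below are invented, not the paper's estimates):

```python
import numpy as np

# Synthetic industry-by-year panel mimicking the 2003-2017 design:
# treated industries receive an extra growth boost after an "AI shock".
rng = np.random.default_rng(0)
n, years = 40, 15                          # industries x years
treated = np.repeat(rng.random(n) < 0.5, years)
post = np.tile(np.arange(years) >= 8, n)   # shock in later years
growth = (2.0 + 1.5 * treated * post       # true DID effect = 1.5
          + 0.3 * post + rng.normal(0, 0.5, n * years))

# OLS: growth ~ 1 + treated + post + treated*post
X = np.column_stack([np.ones(n * years), treated, post, treated & post])
beta = np.linalg.lstsq(X, growth, rcond=None)[0]
print(f"estimated DID effect: {beta[3]:.2f}")
```

The coefficient on the interaction term recovers the built-in treatment effect; the paper's actual specification additionally conditions on industry covariates.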
The weeder was equipped with a Raspberry Pi single-board computer and a camera module to detect crops and weeds in real time, enabling autonomous operation.
Design description in the paper: hardware integration of Raspberry Pi and camera module for real-time detection (method: system design and implementation). No sample size or quantitative test data reported for detection accuracy in the provided summary.
Platform work accounts for 12.8% of labor income for participants in the studied sample.
Earnings and income calculations using platform transaction records combined with labor force survey and administrative income data for the 24-country sample (2015–2025).
Platform-mediated gig work has grown to represent 4.2% of total employment across 24 OECD countries (2015–2025).
Aggregate analysis of administrative data, national labor force surveys, and platform transaction records covering 24 OECD countries over the 2015–2025 period.
The study reframes AI as an augmentation mechanism rather than a substitute for managerial judgment and extends organizational decision theory to account for socio-technical decision systems.
Theoretical contribution asserted by the paper based on its literature synthesis and conceptual development (claim about extension of theory rather than empirical test).
The paper develops an integrative conceptual framework that explains how human judgment, algorithmic intelligence, and organizational context interact to shape decision quality and organizational outcomes.
Author-constructed conceptual framework based on synthesized literature across decision sciences, management, and information systems (framework described as output of the meta-analysis; no empirical validation reported in abstract).
The model was prompted to suggest jobs for 24 simulated candidate profiles balanced in terms of gender, age, experience, and professional field.
Methods reported in the paper: experimental prompting of GPT-5 with N=24 simulated profiles, balanced across specified attributes.
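A balanced factorial design like the one described can be generated directly; the attribute levels below are illustrative (2 × 2 × 2 × 3 = 24 profiles), not the paper's exact categories:

```python
from itertools import product

# Hypothetical attribute levels chosen so the full crossing yields 24
# balanced profiles, as in the study's design.
genders = ["female", "male"]
ages = ["25-29", "30-34"]
experience = ["entry", "senior"]
fields = ["STEM", "humanities", "economics"]

profiles = [
    {"gender": g, "age": a, "experience": e, "field": f}
    for g, a, e, f in product(genders, ages, experience, fields)
]
print(len(profiles))  # 24

# Each profile would then be embedded in a prompt, e.g.:
prompt = ("Suggest three suitable occupations for a {gender} graduate, "
          "aged {age}, with {experience}-level experience in {field}."
          ).format(**profiles[0])
```

Full crossing guarantees each attribute level appears equally often, so any systematic difference in suggested occupations can be attributed to the varied attribute.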
This study evaluates how a state-of-the-art generative model (GPT-5) suggests occupations based on gender and work experience background for under-35-year-old Italian graduates.
Study design described in the paper: targeted population (under-35 Italian graduates), model used (GPT-5) and evaluation focus (occupation suggestions).
Common AI applications in accounting include transaction automation, invoice processing, reconciliations, fraud detection, anomaly detection, automated financial reporting, and predictive forecasting.
Descriptive listing drawn from academic and industry sources/case studies summarized in the paper.
Two regimes emerge; in the inequality-increasing regime, AI is proprietary (concentrated control), rent-sharing is low (low ξ) so firms capture most of the gains, and complementary assets are concentrated.
Model regime characterization and calibrated simulations showing rising firm profits and aggregate inequality under proprietary-AI assumptions and low rent-sharing elasticity.
Generative AI shifts economic value toward concentrated complementary assets (firm-level capital, proprietary data/algorithms), increasing firm profits and rents captured by asset owners.
Model results from a task-based framework with heterogeneous firms and complementary assets; calibration via MSM to six empirical moments; counterfactuals show increased profit shares when AI confers advantages to firms owning complementary assets.
Structural breaks in patenting dynamics are concentrated after 2010, consistent with an inflection in AI diffusion and commercialization.
Application of structural-break detection methods to patent filing time series (1980–2019) across domains; reported concentration of detected breakpoints after 2010. (Paper reports timing and clustering of breaks; exact statistical tests not enumerated in the summary.)
Patenting in AI-enhanced robotics experienced a sharp acceleration beginning in the early 2010s.
Observed marked upturn in the AI-enhanced robotics patent time series from the early 2010s onward (patent filings 1980–2019). Structural break tests applied to the time series identify an acceleration concentrated after 2010.
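The generic least-squares logic of a single-breakpoint search can be illustrated on a synthetic patent series; the exact tests the paper used are not enumerated in the summary, so this is only the textbook idea (pick the split year minimizing total within-segment squared error around segment-specific trends):

```python
import numpy as np

# Synthetic patent-count series, 1980-2019, with a slope kink at 2011
# standing in for the post-2010 acceleration the paper reports.
years = np.arange(1980, 2020)
counts = np.where(years < 2011,
                  5 + 0.5 * (years - 1980),      # slow pre-2011 growth
                  20 + 6.0 * (years - 2011))     # sharp acceleration

def segment_sse(y, x):
    """Sum of squared residuals around a fitted linear trend."""
    coef = np.polyfit(x, y, 1)
    return float(np.sum((y - np.polyval(coef, x)) ** 2))

# Search candidate split points (trimming 5 years at each edge).
best = min(range(5, len(years) - 5),
           key=lambda k: segment_sse(counts[:k], years[:k])
                       + segment_sse(counts[k:], years[k:]))
print("detected break year:", years[best])
```

On this noiseless toy series the search recovers the built-in kink exactly; real break tests (e.g., Chow or Bai-Perron style procedures) add inference on whether the detected break is significant.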
A dynamic Occupational AI Exposure Score (OAIES) that uses LLMs plus occupational task data can estimate time-varying, task-level AI exposure for occupations and workers.
Paper describes a concrete construction algorithm (task decomposition from O*NET/task inventories, LLM-based capability mapping, augmentation vs automation weighting, diffusion/adoption dynamics, and calibration to observed employment/wage/gross-flow changes). This is a proposed method rather than an applied/validated implementation.
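The aggregation step of such a score can be sketched as a weighted sum over tasks; the task inventory, capability values, mode weights, and adoption factor below are all invented for illustration, not the paper's construction details:

```python
# Hypothetical OAIES-style aggregation: importance-weighted, LLM-judged
# task capability, with automation-mode tasks counting more toward
# exposure than augmentation-mode tasks, scaled by a diffusion factor.

def oaies(tasks, adoption):
    total = sum(t["importance"] for t in tasks)
    exposure = sum(
        t["importance"] / total
        * t["capability"]                       # AI capability on task, 0-1
        * (1.0 if t["mode"] == "automation" else 0.5)
        for t in tasks
    )
    return adoption * exposure                  # adoption scales raw exposure

tasks = [  # toy O*NET-style task inventory for one occupation
    {"importance": 3, "capability": 0.9, "mode": "automation"},
    {"importance": 2, "capability": 0.6, "mode": "augmentation"},
    {"importance": 1, "capability": 0.2, "mode": "augmentation"},
]
print(round(oaies(tasks, adoption=0.4), 3))  # -> 0.227
```

Because both capability and adoption enter multiplicatively, re-scoring with updated LLM capability maps or diffusion estimates makes the index time-varying, which is the property the claim emphasizes.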
From interview-based evidence the authors constructed a conceptual framework that integrates empirical insights with existing theories to explain how human–AI interaction alters design cognition.
Synthesis of qualitative interview findings with literature on creative cognition and design thinking; framework presented as an output of the study (framework construction described in paper).
A Random Survival Forest built on curated cancer‑death‑related genes (CDRG‑RSF) achieved the best long‑term prognostic performance among 14 tested ML algorithms for pancreatic cancer, with 3‑ and 5‑year AUCs > 0.7.
Comparison of 14 ML survival algorithms on curated prognostic genes; Random Survival Forest (CDRG‑RSF) reported superior 3‑ and 5‑year AUCs exceeding 0.7 (exact sample sizes/cohort details not provided in summary).
Experimental knockdown of PSME3 reduced proliferation and invasion and increased apoptosis in LUAD cells, implicating the PI3K/AKT/Bcl‑2 pathway as a mediator.
Functional assays (gene knockdown experiments) reported in the PIGRS study showing decreased proliferation/invasion and increased apoptosis after PSME3 knockdown, with pathway analyses implicating PI3K/AKT/Bcl‑2.
Deep neural networks (DNNs) better captured cross‑study differential expression (DEA) signals when predicting miRNA from mRNA than sparse linear models (LASSO); for HIV the cross‑study log2 fold‑change (log2FC) correlation was approximately R ≈ 0.59 for the DNN approach.
Analysis on seven paired viral infection datasets (including WNV and HIV); compared DNNs vs. LASSO for mRNA→miRNA prediction; reported cross‑study log2FC correlation R ≈ 0.59 for HIV for the DNNs. Methods included differential expression signal recovery across studies.
An AI‑powered pipeline (EPheClass) produced a parsimonious saliva microbiome classifier for periodontal disease with AUC = 0.973 using 13 features.
EPheClass pipeline using ensemble ML (kNN, RF, SVM, XGBoost, MLP), centred log‑ratio (CLR) transform and Recursive Feature Elimination (RFE); reported performance AUC = 0.973 for periodontal disease model with 13 features (sample size not specified in summary).
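The centred log-ratio transform named in the pipeline has a standard one-line definition; the pseudocount handling for zero counts below is a common choice but an assumption, since the summary does not specify it:

```python
import numpy as np

# CLR transform for compositional (e.g., microbiome abundance) data:
# log of each component minus the mean log, i.e., log relative to the
# geometric mean of the composition.

def clr(counts, pseudocount=0.5):
    x = np.asarray(counts, dtype=float) + pseudocount  # avoid log(0)
    logx = np.log(x)
    return logx - logx.mean(axis=-1, keepdims=True)

sample = clr([120, 30, 0, 5])
print(np.round(sample, 3))
```

CLR-transformed features sum to zero within each sample, removing the unit-sum constraint that otherwise distorts distance-based learners like kNN and SVM.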
Higher job performance is positively associated with greater employee retention.
PLS-SEM analysis, N = 350. Reported direct path: Performance → Retention, β = 0.348, p < 0.001.
The paper identifies gaps and recommends that economists conduct randomized evaluations and quasi-experimental studies to estimate causal effects of interventions (hands-on labs, instructor training, compute subsidies) on competencies and earnings.
Policy and research agenda section of the paper arguing for randomized/quasi-experimental methods; no such causal interventions were implemented in this study.
The study conducted a cross-sectional online survey of more than 600 higher-education students and educators from multiple world regions.
Cross-sectional online survey; sample size reported as >600 participants; recruitment targeted a mix of disciplines and institution types; survey mapped to UNESCO 2024 AI competency frameworks.
Breakthroughs in structure prediction arise from end‑to‑end deep models that combine evolutionary information (MSAs, coevolutionary signals), geometric constraints and equivariant architectures, and large‑scale pretraining on sequence databases.
Paper describes methodological components: end‑to‑end architectures using MSAs, SE(3)/E(3)-equivariant layers, transformer‑based pretraining on UniRef/UniProt/metagenomic catalogs; no quantitative ablation studies are provided in the text.
Canada emphasizes teacher-led assessment, cautious regulation, and a focus on equity and professional development in responding to AI-related assessment issues.
Country case study based on Canadian policy documents and secondary sources highlighting teacher-led approaches and regulatory caution; illustrative description.
Creators explicitly name advertising, direct sales, affiliate marketing, and revenue-sharing models as common monetization channels for GenAI-enabled content.
Explicit references to these monetization channels appeared repeatedly across the 377 videos and were extracted during thematic coding.
Integrating AI (notably ML and NLP) meaningfully automates routine software engineering tasks across requirements management, code generation, testing, and maintenance.
Systematic literature review of prior AI-for-SE work combined with an empirical survey of software engineering professionals reporting usage and examples of tool-supported automation; sample size for the survey not specified in the summary.
Coordination-Risk Cues—task-conditioned priors on disagreement/tie rates—capture coordination difficulty across tasks.
Method description: disagreement/tie rates computed per cluster from pairwise preference comparisons to form priors indicating coordination risk. Data source: Chatbot Arena pairwise comparisons; tie/disagreement rate computation described but numeric values not provided here.
Capability Profiles—task-conditioned win-rate maps—can be computed per cluster to summarize agent strengths.
Method description: win-rate maps derived by computing agent win rates conditional on task clusters from the Chatbot Arena pairwise comparisons. Implementation reported in paper; no numeric summary of win-rate differences provided here.
Semantic clustering on Chatbot Arena pairwise comparisons induces an interpretable task taxonomy (taxonomy induction).
Methodological claim: authors applied semantic clustering to tasks/queries from Chatbot Arena pairwise preference data to produce clusters described as interpretable. Data source: Chatbot Arena pairwise comparisons; specific clustering algorithm and hyperparameters not specified here.
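Both per-cluster statistics above (win-rate capability profiles and tie/disagreement-based coordination cues) reduce to simple counting over pairwise records; the record shape and values below are toy illustrations in the style of Chatbot Arena battles:

```python
from collections import defaultdict

# Toy pairwise-comparison records, already assigned to task clusters.
battles = [
    {"cluster": "coding", "model_a": "A", "model_b": "B", "winner": "model_a"},
    {"cluster": "coding", "model_a": "A", "model_b": "B", "winner": "tie"},
    {"cluster": "coding", "model_a": "B", "model_b": "A", "winner": "model_b"},
    {"cluster": "poetry", "model_a": "A", "model_b": "B", "winner": "tie"},
    {"cluster": "poetry", "model_a": "A", "model_b": "B", "winner": "model_b"},
]

wins = defaultdict(lambda: defaultdict(int))   # cluster -> model -> wins
games = defaultdict(lambda: defaultdict(int))  # cluster -> model -> battles
ties = defaultdict(int)                        # cluster -> tie count
totals = defaultdict(int)                      # cluster -> battle count

for b in battles:
    c = b["cluster"]
    totals[c] += 1
    for side in ("model_a", "model_b"):
        games[c][b[side]] += 1
        if b["winner"] == side:
            wins[c][b[side]] += 1
    if b["winner"] == "tie":
        ties[c] += 1

# Capability profile: per-cluster win rates; coordination-risk cue: tie rate.
win_rate = {c: {m: wins[c][m] / games[c][m] for m in games[c]} for c in games}
tie_rate = {c: ties[c] / totals[c] for c in totals}
print(win_rate["coding"]["A"], tie_rate["poetry"])
```

Conditioning both statistics on the induced task clusters is what turns raw Arena preferences into the task-level priors the paper describes.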
A speculative WikiRAT instantiation on Wikipedia illustrates RATs' design and potential uses.
The paper presents WikiRAT as a speculative prototype/illustration; no large-scale deployment or user study of WikiRAT is reported.
RATs record sequences of interaction: traversal (what is read and in what order), association (links and connections the reader forms), and reflection (annotations, notes, time spent), producing inspectable, shareable trajectories.
Design specification within the paper and description of data types RATs would collect (ordered page/navigation logs, hyperlinks followed, time-on-page, annotations, saved excerpts, tags, notes). This is a definitional claim about the proposed system rather than empirical measurement.
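The three event types a RAT would record can be captured in a minimal data model; the class and field names below are invented for illustration and are not taken from the paper:

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical minimal schema for a reader's trajectory: ordered events
# of kind "traversal" (what was read), "association" (links formed), or
# "reflection" (notes/annotations, time spent).

@dataclass
class Event:
    kind: str            # "traversal" | "association" | "reflection"
    page: str
    detail: str = ""     # link target or note text
    seconds: float = 0.0 # time spent, for traversal events

@dataclass
class Trajectory:
    reader: str
    events: List[Event] = field(default_factory=list)

    def visit(self, page, seconds):
        self.events.append(Event("traversal", page, seconds=seconds))

    def link(self, page, target):
        self.events.append(Event("association", page, detail=target))

    def annotate(self, page, note):
        self.events.append(Event("reflection", page, detail=note))

t = Trajectory("alice")
t.visit("Turing machine", 40.0)
t.link("Turing machine", "Lambda calculus")
t.annotate("Turing machine", "compare with register machines")
print([e.kind for e in t.events])
```

Because the trajectory is just an ordered event list, it is trivially inspectable and shareable, which is the property the claim highlights.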
Dataset and code (CFD, CFM, CFR) are publicly released.
Repository link provided in the summary (https://github.com/ZhengyaoFang/CFM) and paper states public release of dataset and code.
The Color Fidelity Dataset (CFD) is a large-scale dataset of over 1.3 million images containing both real photographs and synthetic T2I outputs, organized with ordered levels of color realism to support objective evaluation.
Dataset construction described in paper and repository: size stated as >1.3M images; contains a mixture of real photos and synthetic images annotated/organized with ordered realism labels enabling relative judgments of color fidelity.
Standards and governance frameworks (for model auditability, security, and alignment) will become economic infrastructure influencing adoption costs and market trust.
Conceptual argument linking governance to adoption and trust, drawing on normative risk analysis; no empirical governance impact studies included.
Increasing AI autonomy magnifies ethical, safety, and value‑alignment concerns; robust human oversight and institutional governance are required.
Normative and risk analysis based on projected increases in system autonomy and illustrative failure modes; no formal safety audits included.
Models and systems must include robust governance: transparency, explainability, provenance logging, versioning, and compliance checks to maintain trust and satisfy auditors/regulators.
Normative claim supported by recommended governance and evaluation practices described in the paper; no regulatory testing or audit case studies reported.
Cloud and distributed compute (data lakes, distributed training, streaming pipelines) provide the scalability needed to handle growing data and model complexity in financial analytics.
Technical claim supported by proposed infrastructure components in the paper; no benchmarking or capacity measurements provided.
Such frameworks—designed to be modular, scalable, and interoperable—enable pluggable AI modules (scenario analysis, cash‑flow forecasting, dynamic pricing) and easier integration with ERP/BI systems.
Architectural claim supported by system design principles listed in the paper (modular model repositories, model-serving layers, feature stores, API integration); presented as design best-practices rather than empirical validation.
A systematic RM process—risk identification → analysis/assessment → evaluation/response → control implementation → monitoring and reporting—is a core component of effective practice.
Convergence of process descriptions across ISO 31000, COSO ERM, and multiple reviewed publications identified via thematic analysis.