Evidence (5267 claims)

Claims by topic:
- Adoption: 5267
- Productivity: 4560
- Governance: 4137
- Human-AI Collaboration: 3103
- Labor Markets: 2506
- Innovation: 2354
- Org Design: 2340
- Skills & Training: 1945
- Inequality: 1322
Evidence Matrix
Claim counts by outcome category and direction of finding; a dash (—) marks cells with no claims.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 378 | 106 | 59 | 455 | 1007 |
| Governance & Regulation | 379 | 176 | 116 | 58 | 739 |
| Research Productivity | 240 | 96 | 34 | 294 | 668 |
| Organizational Efficiency | 370 | 82 | 63 | 35 | 553 |
| Technology Adoption Rate | 296 | 118 | 66 | 29 | 513 |
| Firm Productivity | 277 | 34 | 68 | 10 | 394 |
| AI Safety & Ethics | 117 | 177 | 44 | 24 | 364 |
| Output Quality | 244 | 61 | 23 | 26 | 354 |
| Market Structure | 107 | 123 | 85 | 14 | 334 |
| Decision Quality | 168 | 74 | 37 | 19 | 301 |
| Fiscal & Macroeconomic | 75 | 52 | 32 | 21 | 187 |
| Employment Level | 70 | 32 | 74 | 8 | 186 |
| Skill Acquisition | 89 | 32 | 39 | 9 | 169 |
| Firm Revenue | 96 | 34 | 22 | — | 152 |
| Innovation Output | 106 | 12 | 21 | 11 | 151 |
| Consumer Welfare | 70 | 30 | 37 | 7 | 144 |
| Regulatory Compliance | 52 | 61 | 13 | 3 | 129 |
| Inequality Measures | 24 | 68 | 31 | 4 | 127 |
| Task Allocation | 75 | 11 | 29 | 6 | 121 |
| Training Effectiveness | 55 | 12 | 12 | 16 | 96 |
| Error Rate | 42 | 48 | 6 | — | 96 |
| Worker Satisfaction | 45 | 32 | 11 | 6 | 94 |
| Task Completion Time | 78 | 5 | 4 | 2 | 89 |
| Wages & Compensation | 46 | 13 | 19 | 5 | 83 |
| Team Performance | 44 | 9 | 15 | 7 | 76 |
| Hiring & Recruitment | 39 | 4 | 6 | 3 | 52 |
| Automation Exposure | 18 | 17 | 9 | 5 | 50 |
| Job Displacement | 5 | 31 | 12 | — | 48 |
| Social Protection | 21 | 10 | 6 | 2 | 39 |
| Developer Productivity | 29 | 3 | 3 | 1 | 36 |
| Worker Turnover | 10 | 12 | — | 3 | 25 |
| Skill Obsolescence | 3 | 19 | 2 | — | 24 |
| Creative Output | 15 | 5 | 3 | 1 | 24 |
| Labor Share of Income | 10 | 4 | 9 | — | 23 |
Adoption

The claims below are filtered to the Adoption topic.
Recommended research designs to estimate impacts include RCTs, quasi-experimental methods (difference-in-differences, regression discontinuity, matching), and longitudinal cohort tracking.
Paper explicitly lists these evaluation designs as appropriate methods for causal inference and for measuring long-term outcomes. This is a methodological recommendation rather than an empirical claim.
There is a need for causal, longitudinal studies on how AI‑enabled fintech affects women's portfolio outcomes and on algorithmic interventions designed to reduce gender gaps.
Explicit statement in the paper noting limitations of existing literature (heterogeneity, limited longitudinal causal evidence, possible platform sample selection).
There is a need for empirical research to quantify net economic impact (productivity gains vs governance costs), effects on employment composition and wages, and market outcomes from alternative governance architectures.
Explicit research gaps listed in the paper; recommendation for future empirical strategies (difference-in-differences, event studies, randomized pilots, instrumental variables) and suggested data sources.
The article’s evidence is predominantly practitioner-driven and illustrative, relying on qualitative case evidence rather than systematic quantitative causal estimates.
Explicit statement in the paper’s Data & Methods section describing nature of evidence and limitations; methods listed include synthesis, comparative analysis, illustrative architectures, and anecdotal cases.
Key technical components of the pattern include low-code platforms for rapid governed app development, RPA for deterministic process automation and legacy integration, and generative AI (deployed behind guardrails) for document understanding, conversational interfaces, and decision support.
Paper’s component list and rationale based on practitioner experience and multi-sector examples; presented as recommended components in the reference architecture; no experimental validation of component selection given.
The proposed layered deployment pattern integrates organizational governance (roles, policies, decision rights), technical architecture (platforms, APIs, data flows), and AI risk management (controls, monitoring, human-in-the-loop).
Design and architectural proposal within the paper; described via illustrative deployment patterns and reference architectures. This is a descriptive claim about the proposed pattern rather than an empirical effect.
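To make the human-in-the-loop risk controls concrete, below is a minimal sketch of a guardrailed decision wrapper; the function names, confidence threshold, and policy check are hypothetical illustrations, not components specified in the paper.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Decision:
    action: str        # action proposed by the AI component
    confidence: float  # model-reported confidence in [0, 1]
    rationale: str     # text justification kept for audit logging

def guarded_execute(
    decision: Decision,
    policy_check: Callable[[Decision], bool],
    human_review: Callable[[Decision], bool],
    confidence_floor: float = 0.85,
) -> str:
    """Hypothetical guardrail: block policy violations outright and
    escalate low-confidence decisions to a human reviewer."""
    if not policy_check(decision):
        return "blocked: policy violation"
    if decision.confidence < confidence_floor:
        approved = human_review(decision)  # human-in-the-loop step
        return "executed after review" if approved else "rejected by reviewer"
    return "executed automatically"

# Stub checks stand in for real governance hooks.
d = Decision(action="approve_invoice", confidence=0.62, rationale="matched PO")
print(guarded_execute(d, policy_check=lambda d: True, human_review=lambda d: True))
```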
Recommended next steps for validation include controlled pilots, before-after studies on operational metrics, and cross-firm panel analyses to estimate economic impacts and risk reductions.
Authors' explicit recommendations for empirical validation in the Data & Methods and Implications sections.
The paper reports no large-scale quantitative evaluation (e.g., productivity gains, cost-benefit metrics, or causal impact estimates) in support of the framework.
Explicit limitation noted by the authors stating absence of large-scale quantitative evaluation.
The evidence base for the paper is qualitative: a synthesis of industry best practices and lessons from multi-sector enterprise implementations; methods used include conceptual framework development, architecture design, and case-based illustration.
Explicit methodological statement in the Data & Methods section of the paper.
The article is largely qualitative and prescriptive rather than empirical; it does not provide systematic incidence estimates or large-scale measured losses from prompt fraud and identifies empirical validation as needed.
Authors' stated methods and limitations: conceptual analysis, threat modeling, literature review, illustrative vignettes; explicit note of absent systematic empirical data.
SECaaS offerings commonly include threat intelligence, managed detection & response (MDR), endpoint protection, IAM, CASB, security orchestration/automation, and compliance-as-a-service.
Survey of SECaaS product categories in industry reports and vendor catalogs; technical benchmarks describing typical feature sets.
Achieving CIA in the cloud requires technical controls (encryption, access controls, IAM, MFA, zero-trust), resilience measures (backups, redundancy, DR/BCP), and continuous monitoring (logging, SIEM, EDR/XDR).
Synthesis of technical best practices and vendor/industry guidance; supported by technical evaluations and case studies in the literature.
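As a concrete illustration of one confidentiality control from this list, here is a minimal sketch of encryption at rest using the `cryptography` library's Fernet interface; in practice the key would come from a key-management service rather than being generated inline.

```python
# pip install cryptography
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # illustration only; use a KMS in production
fernet = Fernet(key)

record = b"customer_id=1234; balance=5000"
token = fernet.encrypt(record)    # ciphertext as stored at rest
restored = fernet.decrypt(token)  # requires access to the key

assert restored == record
```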
Core cloud security goals remain confidentiality, integrity, and availability (CIA).
Canonical security literature and standards cited in the chapter; general consensus across technical controls and industry best-practice frameworks (e.g., NIST, ISO).
Evaluation methods reported commonly include visual inspection by researchers/clinicians, correlation with known biomarkers/frequency bands, and ablation/perturbation faithfulness tests; few studies report standardized quantitative metrics for robustness, stability, or neuroscientific fidelity.
Survey of evaluation practices across the literature compiled in the review.
Modeling approaches in the literature include end-to-end deep models operating on raw or time–frequency representations, recurrent architectures for temporal dynamics, attention mechanisms, and hybrid feature-based classifiers.
Summary of modeling choices described across reviewed studies.
Typical datasets used in EEG XAI research include public collections such as the TUH EEG Corpus, BCI Competition datasets, PhysioNet sleep databases, CHB-MIT for pediatric seizures, as well as many small/clinical cohorts.
Listing of commonly referenced datasets across the surveyed literature.
A common taxonomy emphasized in EEG XAI work distinguishes local vs global explanations, model-specific vs model-agnostic methods, and post-hoc vs intrinsically interpretable models.
Conceptual organization presented in the review synthesizing common taxonomic distinctions used by authors in the field.
XAI methods applied to EEG in the literature include gradient-based saliency methods, Integrated Gradients, layer-wise relevance propagation (LRP), CAM/Grad-CAM, occlusion/perturbation analyses, LIME, SHAP, TCAV, and counterfactual explanations.
Cataloging of explanation techniques reported across surveyed EEG papers.
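As an illustration of the model-agnostic occlusion/perturbation family, the sketch below scores EEG channel importance by zeroing one channel at a time and measuring the drop in a classifier's predicted probability; the toy classifier is a stand-in for any `predict(x) -> probability` callable, and nothing here reproduces a specific surveyed method.

```python
import numpy as np

def occlusion_channel_importance(predict, x):
    """Score each EEG channel by the drop in predicted probability
    when that channel is zeroed out (occluded).

    predict: callable mapping an (n_channels, n_samples) array to a
             scalar probability for the class of interest.
    x:       one EEG epoch, shape (n_channels, n_samples).
    """
    baseline = predict(x)
    scores = np.zeros(x.shape[0])
    for ch in range(x.shape[0]):
        occluded = x.copy()
        occluded[ch, :] = 0.0  # occlude a single channel
        scores[ch] = baseline - predict(occluded)
    return scores  # larger drop = more important channel

# Toy stand-in classifier whose "probability" tracks channel-0 power.
rng = np.random.default_rng(0)
epoch = rng.standard_normal((4, 256))
toy_predict = lambda x: 1.0 / (1.0 + np.exp(-x[0].var()))
print(occlusion_channel_importance(toy_predict, epoch).round(3))
```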
Models used in EEG XAI work include deep learning architectures (CNNs, RNNs, attention/transformers), classical machine learning, and hybrid pipelines combining feature extraction with classifiers.
Summary of modeling approaches reported across reviewed studies.
The literature on EEG XAI covers tasks including seizure detection, sleep staging, brain–computer interfaces (BCI), cognitive/emotional state recognition, and diagnostic/supportive tools.
Descriptive review of topical coverage across surveyed papers; specific task categories enumerated in the review.
Limitation: the study analyzes national‑level formal policy texts only and does not measure enforcement, implementation outcomes, or public reactions.
Author‑stated limitations in the paper specifying scope restricted to formal policy documents and absence of empirical enforcement/compliance data.
The paper uses qualitative content analysis, coding documents against the four analytical dimensions to generate a comparative typology of coordination approaches.
Method description: manual qualitative coding of the 36 documents into the specified dimensions, producing the typology distinguishing Chinese and U.S. approaches.
The study's empirical basis comprises 36 national‑level policy documents (18 from China; 18 from the United States) focused on scientific data governance.
Author‑reported dataset and sampling description in the Data & Methods section.
The comparative analysis is organized across four dimensions: coordination objectives, institutional actors, governance mechanisms, and stakeholder legitimacy.
Methodological design reported in the paper; documents were coded against these four analytic categories.
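A minimal sketch of how coded documents can be tallied into a comparative matrix is shown below; the code labels and counts are invented for illustration and do not reproduce the paper's coding scheme.

```python
import pandas as pd

# Hypothetical coding output: one row per (document, dimension) code.
codes = pd.DataFrame({
    "country":   ["China", "China", "US", "US", "China", "US"],
    "dimension": ["coordination objectives", "institutional actors",
                  "coordination objectives", "governance mechanisms",
                  "governance mechanisms", "stakeholder legitimacy"],
})

# Comparative matrix: how often each dimension is coded per country.
print(pd.crosstab(codes["dimension"], codes["country"]))
```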
The legal arguments create some uncertainty about scope and enforcement timelines; economic actors will respond to expected enforcement probabilities and expected sanctions, so clarity from regulators or courts will shape the ultimate economic effects.
Doctrinal acknowledgement of legal uncertainty combined with standard economic modeling of regulatory expectations; no empirical modeling in the Article.
The paper is primarily legal/policy scholarship rather than an empirical assessment of the prevalence or magnitude of discrimination in EdTech; it does not provide econometric estimates of harm.
Explicit limitation noted in the Article (self‑reported).
The Article's evidence consists of illustrative case law and statutory text rather than empirical datasets; it builds doctrinal chains, hypotheticals, and applications of statutory language to modern procurement and EdTech deployment models.
Explicit description of evidence and limits in the Article (self‑reported).
Methodologically, the paper uses doctrinal legal analysis and policy argumentation — close reading of federal civil‑rights statutes, administrative guidance, and judicial decisions interpreting 'recipient' and 'federal financial assistance.'
Explicit methodological statement in the Article (self‑reported).
The legal argument is grounded in statutory interpretation and precedent about the scope of 'recipient' and how federal financial assistance flows and influence should be understood.
Doctrinal analysis of statutes, administrative guidance, and judicial decisions cited and discussed in the Article.
The authors recommend empirical approaches for future work including randomized controlled trials in labs, before-after adoption studies, and collection of microdata on instrument usage, model versions, and provenance to measure impacts.
Explicit methodological recommendations in the Measurement and empirical research agenda section; these are proposals rather than executed studies.
There is a need for rigorous evaluation metrics and benchmarks for safety and reproducibility, and for empirical studies quantifying the productivity or scientific impact of LLM-driven instrument control.
Identified research gaps and recommended empirical research agenda described by the authors; these are recommendations rather than empirical findings.
The evidence presented consists mainly of qualitative arguments drawn from documented advances and discussion of prototypes; no controlled experimental evaluation is presented.
Authors' own description in the Data & Methods section about the nature of evidence supporting their perspective.
This paper is a conceptual perspective/review rather than an original empirical study.
Explicit statement in the Data & Methods section that the contribution is a perspective synthesizing literature and illustrative examples with no controlled experimental evaluation.
Modern microscopes are increasingly software-driven and data-intensive, while existing ML tools for microscopy are task-specific and fragmented.
Synthesis of recent literature on optical microscopes, detectors, and task-specific ML for image analysis referenced in the perspective (descriptive claim; no new empirical data collected).
Techno‑economic assessment (TEA) and life‑cycle analysis (LCA) are necessary research tools for comparing bio‑routes with incumbent chemical synthesis on cost and emissions, and the current literature is incomplete in this regard.
Review notes the presence of some TEA/LCA studies but highlights gaps and heterogeneity in methods and results across case studies; many processes lack published TEA/LCA at commercial scales.
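To illustrate the kind of head-to-head comparison a TEA/LCA enables, the sketch below computes unit cost and CO2 intensity for a bio-route versus an incumbent route; every input number is a hypothetical placeholder, not a value from the review.

```python
# Hypothetical TEA/LCA-style comparison; all figures are illustrative.
routes = {
    # (annualized capex $/t, opex $/t, feedstock $/t, kg CO2e per t)
    "incumbent_chemical": (120.0, 300.0, 450.0, 1800.0),
    "bio_route":          (260.0, 340.0, 380.0,  600.0),
}

for name, (capex, opex, feed, co2) in routes.items():
    unit_cost = capex + opex + feed  # $ per tonne of product
    print(f"{name}: ${unit_cost:.0f}/t, {co2:.0f} kg CO2e/t")
```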
Empirical grounding for behavioral-genetic claims and the Four Shell Model comes from the Agora-12 program dataset consisting of 720 agents producing 24,923 decision points.
Reported dataset and experimental sample: Agora-12 program (n = 720 agents; 24,923 decisions) used in analyses and validations.
Robustness checks include city and year fixed effects and heterogeneous-effect examinations by digital infrastructure level.
Reported robustness analyses in the paper: models controlling for city and time fixed effects and tests of heterogeneity by digital infrastructure purported to support the main findings (sample: 280 cities, 2008–2021).
The study's identification strategy treats the Demonstration Zone designation as a quasi-natural experiment using a staggered, multi-period DID across 280 prefecture-level cities (2008–2021).
Stated research design: multi-period difference-in-differences exploiting variation in timing of designation; sample comprises 280 prefecture-level cities over 2008–2021; results include city and time fixed effects.
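A minimal sketch of a two-way fixed-effects DiD in this spirit is shown below, using statsmodels on a small synthetic city-year panel; column names and the data-generating process are invented, and the paper's exact specification may differ.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the 280-city, 2008-2021 panel (smaller here).
rng = np.random.default_rng(0)
rows = []
for city in range(30):
    adopt = rng.choice([2012, 2015, 2018, 9999])  # staggered designation
    for year in range(2008, 2022):
        treated = int(year >= adopt)
        outcome = (0.5 * treated + 0.1 * city
                   + 0.05 * (year - 2008) + rng.normal())
        rows.append((city, year, treated, outcome))
df = pd.DataFrame(rows, columns=["city", "year", "treated", "outcome"])

# City and year dummies absorb level differences; SEs clustered by city.
fit = smf.ols("outcome ~ treated + C(city) + C(year)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["city"]})
print(fit.params["treated"], fit.bse["treated"])
```

One caveat worth noting: with staggered adoption and heterogeneous effects, the simple two-way fixed-effects estimator can be biased, which is one reason robustness and heterogeneity checks such as those reported above matter.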
The employment increase occurred without a corresponding increase in the number of formal cultural enterprises.
Secondary outcome analysis in the same DID framework on formal enterprise counts in the cultural sector using the 280-city panel (2008–2021); reported null effect on number of formal cultural enterprises.
Dataset composition: 261 publicly traded U.S. financial firms matched to CFPB complaint records, monthly observations covering 2018–2023.
Data description in the paper: CFPB complaint records matched to 261 firms with monthly panel from 2018 through 2023 used in all reported analyses.
The paper does not make strong causal claims; causal interpretation is limited and future work should address endogeneity and reverse causality (e.g., with event studies or instrumental variables).
Authors explicitly note limitations on causal interpretation and recommend methods (event studies, IVs, natural experiments) for future causal identification.
Fixed-effects panel path models are used to control for firm-level heterogeneity and to estimate direct and mediated relationships between complaint features and abnormal returns.
Econometric approach described: panel path models with firm fixed effects (monthly firm–level data for 261 firms, 2018–2023) to parse direct/mediated associations between complaint measures and returns.
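A minimal sketch of a direct/mediated decomposition under firm fixed effects follows, using a standard two-equation mediation setup on synthetic data; the variable names are hypothetical and the paper's path model may be estimated differently.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic firm-month panel standing in for the 261-firm data.
rng = np.random.default_rng(0)
firm = np.repeat(np.arange(40), 24)           # 40 firms x 24 months
x = rng.normal(size=firm.size)                # complaint feature (X)
m = 0.5 * x + rng.normal(size=firm.size)      # mediator (M)
y = -0.2 * x - 0.3 * m + 0.1 * firm + rng.normal(size=firm.size)
df = pd.DataFrame({"firm": firm, "complaints": x,
                   "attention": m, "abn_ret": y})

# Firm dummies absorb firm-level heterogeneity in both equations.
m_eq = smf.ols("attention ~ complaints + C(firm)", df).fit()
y_eq = smf.ols("abn_ret ~ complaints + attention + C(firm)", df).fit()

a, b = m_eq.params["complaints"], y_eq.params["attention"]
direct = y_eq.params["complaints"]
print(f"indirect (a*b) = {a * b:.3f}, direct = {direct:.3f}")
```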
The econometric approach relies on cross-country panel regressions with interaction terms to assess direct effects and complementarities; identification is associative (panel variation plus covariate controls) rather than causal, with no instruments or natural experiments.
Paper describes use of panel regressions with interaction terms and emphasizes that identification comes from panel variation and covariate controls, without detailing stronger causal identification strategies.
Models control for key macroeconomic covariates (e.g., GDP per capita, trade openness, human capital, institutional quality) to isolate technology effects.
Paper documents inclusion of macro controls in regression models to reduce omitted-variable bias.
Dependent variable is a composite national Sustainable Development Goal (SDG) performance index (aggregate/summary measure).
Paper specifies the dependent variable as an aggregate SDG performance measure used in the panel regressions.
Unit of analysis is country-year observations for G20 members covering 2015–2023.
Paper states sample and scope as a cross-country panel of G20 economies from 2015–2023 (at most 20 members × 9 years = 180 country-year observations, depending on coverage).
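A minimal sketch of one plausible version of this associative specification (interaction term plus macro controls and year dummies) is shown below on synthetic data; all variable names are hypothetical, and the paper's own model may differ.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic G20-style country-year panel, 20 members x 2015-2023.
rng = np.random.default_rng(0)
rows = []
for c in range(20):
    inst = rng.normal()  # institutional quality (time-invariant here)
    for year in range(2015, 2024):
        tech = rng.normal()
        sdg = 0.3 * tech + 0.2 * inst + 0.25 * tech * inst + rng.normal()
        rows.append((c, year, sdg, tech, inst, rng.normal(), rng.normal()))
df = pd.DataFrame(rows, columns=["country", "year", "sdg_index", "tech",
                                 "institutions", "gdp_pc", "trade_open"])

# Interaction term captures technology-institutions complementarity;
# identification is associative, as the paper emphasizes.
fit = smf.ols("sdg_index ~ tech * institutions + gdp_pc + trade_open"
              " + C(year)", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["country"]})
print(fit.params[["tech", "tech:institutions"]])
```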
Analyses were conducted as intent-to-treat comparisons across arms, with hypothesis tests reported (including p-values) and principal stratification used for mechanism decomposition.
Methods statement: intent-to-treat comparisons, reported p-values for score differences, and use of principal stratification for separating total effect into adoption and effectiveness channels in the randomized trial (n = 164).
The primary outcomes analyzed were LLM adoption (use), exam score (grade points), and answer length.
Study's stated primary outcomes in methods: an adoption indicator, exam score on an issue-spotting exam, and measured answer length. Sample size n = 164.
The study used a randomized controlled design with three arms: no LLM access, optional LLM access, and optional LLM access plus brief training.
Study methods description: randomized assignment of 164 law students to three experimental conditions as listed.
The intervention consisted of a roughly ten-minute training session focused on how to use the LLM effectively.
Study description of the intervention in the randomized experiment (three-arm design with one arm receiving ~10-minute targeted training).
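A minimal sketch of the intent-to-treat comparison is shown below, regressing exam score on randomized-arm indicators; the data are simulated stand-ins (arm labels mirror the design, but all effect sizes are invented), and the principal-stratification step is not reproduced.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 164  # trial size reported in the study
df = pd.DataFrame({
    "arm": rng.choice(["control", "llm", "llm_trained"], size=n),
})
# Simulated scores; effect sizes are illustrative only.
effects = {"control": 0.0, "llm": 0.3, "llm_trained": 0.6}
df["score"] = df["arm"].map(effects) + rng.normal(70, 8, size=n)

# ITT: compare by assigned arm regardless of actual LLM use.
itt = smf.ols("score ~ C(arm, Treatment('control'))", data=df).fit()
print(itt.summary().tables[1])  # arm coefficients with p-values
```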