The Commonplace

Evidence (4137 claims)

Adoption: 5267 claims
Productivity: 4560 claims
Governance: 4137 claims
Human-AI Collaboration: 3103 claims
Labor Markets: 2506 claims
Innovation: 2354 claims
Org Design: 2340 claims
Skills & Training: 1945 claims
Inequality: 1322 claims

Evidence Matrix

Claim counts by outcome category and direction of finding.

Outcome Positive Negative Mixed Null Total
Other 378 106 59 455 1007
Governance & Regulation 379 176 116 58 739
Research Productivity 240 96 34 294 668
Organizational Efficiency 370 82 63 35 553
Technology Adoption Rate 296 118 66 29 513
Firm Productivity 277 34 68 10 394
AI Safety & Ethics 117 177 44 24 364
Output Quality 244 61 23 26 354
Market Structure 107 123 85 14 334
Decision Quality 168 74 37 19 301
Fiscal & Macroeconomic 75 52 32 21 187
Employment Level 70 32 74 8 186
Skill Acquisition 89 32 39 9 169
Firm Revenue 96 34 22 152
Innovation Output 106 12 21 11 151
Consumer Welfare 70 30 37 7 144
Regulatory Compliance 52 61 13 3 129
Inequality Measures 24 68 31 4 127
Task Allocation 75 11 29 6 121
Training Effectiveness 55 12 12 16 96
Error Rate 42 48 6 96
Worker Satisfaction 45 32 11 6 94
Task Completion Time 78 5 4 2 89
Wages & Compensation 46 13 19 5 83
Team Performance 44 9 15 7 76
Hiring & Recruitment 39 4 6 3 52
Automation Exposure 18 17 9 5 50
Job Displacement 5 31 12 48
Social Protection 21 10 6 2 39
Developer Productivity 29 3 3 1 36
Worker Turnover 10 12 3 25
Skill Obsolescence 3 19 2 24
Creative Output 15 5 3 1 24
Labor Share of Income 10 4 9 23
Active filter: Governance
Long-term effects of adaptive marketing (habit formation, churn, lifetime value) are important for welfare and valuation but are harder to measure and require longitudinal or structural economic models.
Conceptual claim in measurement challenges; argues that short-horizon A/B tests may miss long-run harms or benefits, recommending longitudinal studies and structural models; no empirical long-term study presented.
Confidence: high · Finding: null result · Paper: Personalized Content Selection in Marketing Using BERT and G... · Outcome: long-term churn rates, habit formation indicators, lifetime value (LTV)
Offline evaluation metrics (intent/sentiment classification accuracy, human-rated generation quality and factuality, simulated policy evaluation) are useful for pipeline development but do not fully capture online performance.
Paper contrasts offline metrics with online A/B testing and notes the need for online experiments; this is a methodological claim supported by the described evaluation pipeline rather than a presented empirical study.
Confidence: high · Finding: null result · Paper: Personalized Content Selection in Marketing Using BERT and G... · Outcome: offline classification accuracy, human-rated generation quality vs online CTR/en...
Another important gap is quantifying complementarities between AI and different skill types (evaluative vs. generative tasks).
Review observation that existing empirical work has not systematically quantified how AI productivity gains vary with worker skill composition and complementary roles.
Confidence: high · Finding: null result · Paper: ChatGPT as an Innovative Tool for Idea Generation and Proble... · Outcome: magnitude of complementarities between AI assistance and various human skill typ...
Key research gaps include a lack of long-run causal evidence on the effects of LLMs on firm-level innovation rates, business formation, and industry structure.
Explicit identification of gaps in the literature within the nano-review; the review states that most studies are short-term, task-level, or descriptive.
Confidence: high · Finding: null result · Paper: ChatGPT as an Innovative Tool for Idea Generation and Proble... · Outcome: long-run causal impacts of LLM adoption on firm innovation, business formation, ...
High-priority research includes randomized controlled trials on hybrid vs. automated routing, long-run studies on labor markets in service sectors, and models quantifying trust externalities and governance costs.
Paper's stated research agenda based on identified evidence gaps and limitations (lack of randomized long-run studies).
Confidence: high · Finding: null result · Paper: The Effectiveness of ChatGPT in Customer Service and Communi... · Outcome: research output (RCTs, long-run studies, models) addressing the specified gaps
Current evidence is promising but early: case studies, pilot deployments, and short-run experiments dominate; long-run causal evidence on labor and welfare effects is limited.
Explicit methodological assessment in the paper noting source types (deployments, pilots, vendor reports, short-run experiments) and limitations (heterogeneity, lack of randomized controls, short horizons).
Confidence: high · Finding: null result · Paper: The Effectiveness of ChatGPT in Customer Service and Communi... · Outcome: quality and duration of evidence (study types, presence of randomized controls)
The authors elicited additional insights via a survey of paper authors plus follow-up interviews to collect self-assessments of reproducibility and qualitative explanations for obstacles and motivations.
Methods section describing the mixed-methods approach: empirical reproduction attempts triangulated with surveys and interviews of original authors.
Confidence: high · Finding: null result · Paper: On the Computational Reproducibility of Human-Computer Inter... · Outcome: use of surveys and interviews as data sources for qualitative corroboration and ...
Reproducibility (as used in this study) is defined as producing the reported results from the shared data and analysis code, distinct from replicability, which involves independent re-collection of data.
Authors' definitional statement in the paper clarifying reproducibility vs. replicability.
Confidence: high · Finding: null result · Paper: On the Computational Reproducibility of Human-Computer Inter... · Outcome: operational definition of 'reproducibility' (ability to re-run provided data+cod...
Study limitations include reliance on perceptual measures (rather than solely objective performance), heterogeneity across institutional samples, and likely correlational rather than strictly causal identification.
Authors' own noted limitations in the paper's methods section: mixed-methods design using perceptions from questionnaires and interviews, sample heterogeneity across multinational institutions, and quantitative analyses that are associative rather than strictly causal.
Confidence: high · Finding: null result · Paper: Human-AI Synergy in Financial Decision-Making: Exploring Tru... · Outcome: validity/causal identification of study findings
Measurement and research gaps (data scarcity, informality) complicate robust economic assessment of AI impacts; improved metrics, granular labour and firm‑level data, and mixed‑methods evaluation are required.
Methodological critique based on reviewed literature and identified gaps; no new data collection in the paper.
Confidence: high · Finding: null result · Paper: Towards Responsible Artificial Intelligence Adoption: Emergi... · Outcome: availability and granularity of labour and firm-level datasets, prevalence of mi...
There is a lack of causal evidence on the long-run impacts of AI-driven HRM on employment, wages, and firm survival—this is a key research gap identified by the review.
Explicitly stated research gap in the review based on assessment of methodologies and findings across the 47 included studies.
Confidence: high · Finding: null result · Paper: Data-Driven Strategies in Human Resource Management: The Rol... · Outcome: availability of causal studies on long-run employment, wage, and firm survival i...
A systematic review following PRISMA identified 47 peer-reviewed studies (2012–2024) on data-driven HRM and workforce resilience from Scopus, Web of Science, and Google Scholar.
Explicit review protocol and search/screening results reported by the paper (PRISMA-based), final sample size = 47 studies.
Confidence: high · Finding: null result · Paper: Data-Driven Strategies in Human Resource Management: The Rol... · Outcome: number of studies included in the review
Recommended research designs to estimate impacts include RCTs, quasi-experimental methods (difference-in-differences, regression discontinuity, matching), and longitudinal cohort tracking.
Paper explicitly lists these evaluation designs as appropriate methods for causal inference and long-term outcomes measurement. This is a methodological recommendation rather than an empirical claim.
Confidence: high · Finding: null result · Paper: Curriculum engineering: organisation, orientation, and manag... · Outcome: employment probabilities, earnings, long-term career outcomes (as targeted by th...
There is a need for causal, longitudinal studies on how AI‑enabled fintech affects women's portfolio outcomes and on algorithmic interventions designed to reduce gender gaps.
Explicit statement in the paper noting limitations of existing literature (heterogeneity, limited longitudinal causal evidence, possible platform sample selection).
Confidence: high · Finding: null result · Paper: Women's Investment Behaviour and Technology: Exploring the I... · Outcome: existence/absence of causal longitudinal evidence on fintech impacts by gender
There is a need for empirical research to quantify net economic impact (productivity gains vs governance costs), effects on employment composition and wages, and market outcomes from alternative governance architectures.
Explicit research gaps listed in the paper; recommendation for future empirical strategies (difference-in-differences, event studies, randomized pilots, instrumental variables) and suggested data sources.
Confidence: high · Finding: null result · Paper: Governed Hyperautomation for CRM and ERP: A Reference Patter... · Outcome: N/A (research agenda statement)
The article’s evidence is predominantly practitioner-driven and illustrative, relying on qualitative case evidence rather than systematic quantitative causal estimates.
Explicit statement in the paper’s Data & Methods section describing nature of evidence and limitations; methods listed include synthesis, comparative analysis, illustrative architectures, and anecdotal cases.
Confidence: high · Finding: null result · Paper: Governed Hyperautomation for CRM and ERP: A Reference Patter... · Outcome: N/A (methodological statement)
Key technical components of the pattern include low-code platforms for rapid governed app development, RPA for deterministic process automation and legacy integration, and generative AI for document understanding, conversational interfaces, and decision support — with guardrails.
Paper’s component list and rationale based on practitioner experience and multi-sector examples; presented as recommended components in the reference architecture; no experimental validation of component selection given.
Confidence: high · Finding: null result · Paper: Governed Hyperautomation for CRM and ERP: A Reference Patter... · Outcome: N/A (component inclusion/design)
The proposed layered deployment pattern integrates organizational governance (roles, policies, decision rights), technical architecture (platforms, APIs, data flows), and AI risk management (controls, monitoring, human-in-the-loop).
Design and architectural proposal within the paper; described via illustrative deployment patterns and reference architectures. This is a descriptive claim about the proposed pattern rather than an empirical effect.
Confidence: high · Finding: null result · Paper: Governed Hyperautomation for CRM and ERP: A Reference Patter... · Outcome: N/A (architectural/design composition)
There is a need for empirical research: studies quantifying prompt-fraud incidents and losses, field experiments comparing control portfolios, and economic models of optimal investment in AI controls.
Explicit research agenda and limitations acknowledged by the authors noting lack of empirical prevalence data and need for operational validation.
Confidence: high · Finding: null result · Paper: Prompt Engineering or Prompt Fraud? Governance Challenges fo... · Outcome: existence of empirical knowledge gaps and research priorities
Recommended next steps for validation include controlled pilots, before-after studies on operational metrics, and cross-firm panel analyses to estimate economic impacts and risk reductions.
Authors' explicit recommendations for empirical validation in the Data & Methods and Implications sections.
Confidence: high · Finding: null result · Paper: Governed Hyperautomation for CRM and ERP: A Reference Patter... · Outcome: feasibility of empirical validation designs and future measurement (research des...
There is no reported large-scale quantitative evaluation (e.g., productivity gains, cost-benefit metrics, or causal impact estimates) supporting the framework in the paper.
Explicit limitation noted by the authors stating absence of large-scale quantitative evaluation.
Confidence: high · Finding: null result · Paper: Governed Hyperautomation for CRM and ERP: A Reference Patter... · Outcome: existence/absence of large-scale quantitative evaluation
The evidence base for the paper is qualitative: a synthesis of industry best practices and lessons from multi-sector enterprise implementations; methods used include conceptual framework development, architecture design, and case-based illustration.
Explicit methodological statement in the Data & Methods section of the paper.
Confidence: high · Finding: null result · Paper: Governed Hyperautomation for CRM and ERP: A Reference Patter... · Outcome: type of evidence and methods used (qualitative, case-based, conceptual)
The article is largely qualitative and prescriptive rather than empirical; it does not provide systematic incidence estimates or large-scale measured losses from prompt fraud and identifies empirical validation as needed.
Authors' stated methods and limitations: conceptual analysis, threat modeling, literature review, illustrative vignettes; explicit note of absent systematic empirical data.
Confidence: high · Finding: null result · Paper: Prompt Engineering or Prompt Fraud? Governance Challenges fo... · Outcome: presence (or absence) of systematic empirical incidence estimates and measured l...
SECaaS offerings commonly include threat intelligence, managed detection & response (MDR), endpoint protection, IAM, CASB, security orchestration/automation, and compliance-as-a-service.
Survey of SECaaS product categories in industry reports and vendor catalogs; technical benchmarks describing typical feature sets.
Confidence: high · Finding: null result · Paper: Security-as-a-service: enhancing cloud security through m... · Outcome: catalog of SECaaS services offered
Achieving CIA in the cloud requires technical controls (encryption, access controls, IAM, MFA, zero-trust), resilience measures (backups, redundancy, DR/BCP), and continuous monitoring (logging, SIEM, EDR/XDR).
Synthesis of technical best practices and vendor/industry guidance; supported by technical evaluations and case studies in the literature.
Confidence: high · Finding: null result · Paper: Security-as-a-service: enhancing cloud security through m... · Outcome: effectiveness of security posture (ability to maintain CIA)
Core cloud security goals remain confidentiality, integrity, and availability (CIA).
Canonical security literature and standards cited in the chapter; general consensus across technical controls and industry best-practice frameworks (e.g., NIST, ISO).
Confidence: high · Finding: null result · Paper: Security-as-a-service: enhancing cloud security through m... · Outcome: security objectives (confidentiality, integrity, availability)
Evaluation methods reported commonly include visual inspection by researchers/clinicians, correlation with known biomarkers/frequency bands, and ablation/perturbation faithfulness tests; few studies report standardized quantitative metrics for robustness, stability, or neuroscientific fidelity.
Survey of evaluation practices across the literature compiled in the review.
Confidence: high · Finding: null result · Paper: Explainable Artificial Intelligence (XAI) for EEG Analysis: ... · Outcome: types of evaluation methods used to assess explanations
Modeling approaches in the literature include end-to-end deep models operating on raw or time–frequency representations, recurrent architectures for temporal dynamics, attention mechanisms, and hybrid feature-based classifiers.
Summary of modeling choices described across reviewed studies.
Confidence: high · Finding: null result · Paper: Explainable Artificial Intelligence (XAI) for EEG Analysis: ... · Outcome: specific modeling strategies applied to EEG
Typical datasets used in EEG XAI research include public collections such as the TUH EEG Corpus, BCI Competition datasets, PhysioNet sleep databases, CHB-MIT for pediatric seizures, as well as many small/clinical cohorts.
Listing of commonly referenced datasets across the surveyed literature.
Confidence: high · Finding: null result · Paper: Explainable Artificial Intelligence (XAI) for EEG Analysis: ... · Outcome: datasets employed in EEG XAI studies
A common taxonomy emphasized in EEG XAI work distinguishes local vs global explanations, model-specific vs model-agnostic methods, and post-hoc vs intrinsically interpretable models.
Conceptual organization presented in the review synthesizing common taxonomic distinctions used by authors in the field.
Confidence: high · Finding: null result · Paper: Explainable Artificial Intelligence (XAI) for EEG Analysis: ... · Outcome: taxonomic classification of explanation types
XAI methods applied to EEG in the literature include gradient-based saliency methods, Integrated Gradients, layer-wise relevance propagation (LRP), CAM/Grad-CAM, occlusion/perturbation analyses, LIME, SHAP, TCAV, and counterfactual explanations.
Cataloging of explanation techniques reported across surveyed EEG papers.
Confidence: high · Finding: null result · Paper: Explainable Artificial Intelligence (XAI) for EEG Analysis: ... · Outcome: types of XAI techniques used
Models used in EEG XAI work include deep learning architectures (CNNs, RNNs, attention/transformers), classical machine learning, and hybrid pipelines combining feature extraction with classifiers.
Summary of modeling approaches reported across reviewed studies.
Confidence: high · Finding: null result · Paper: Explainable Artificial Intelligence (XAI) for EEG Analysis: ... · Outcome: model architectures applied to EEG tasks
The literature on EEG XAI covers tasks including seizure detection, sleep staging, brain–computer interfaces (BCI), cognitive/emotional state recognition, and diagnostic/supportive tools.
Descriptive review of topical coverage across surveyed papers; specific task categories enumerated in the review.
Confidence: high · Finding: null result · Paper: Explainable Artificial Intelligence (XAI) for EEG Analysis: ... · Outcome: task domains addressed by EEG XAI studies
Limitation: the study analyzes national‑level formal policy texts only and does not measure enforcement, implementation outcomes, or public reactions.
Author‑stated limitations in the paper specifying scope restricted to formal policy documents and absence of empirical enforcement/compliance data.
Confidence: high · Finding: null result · Paper: Balancing openness and security in scientific data governanc... · Outcome: study scope and limitations (no enforcement/implementation measurement)
The paper uses qualitative content analysis, coding documents against the four analytical dimensions to generate a comparative typology of coordination approaches.
Method description: manual qualitative coding of the 36 documents into the specified dimensions, producing the typology distinguishing Chinese and U.S. approaches.
Confidence: high · Finding: null result · Paper: Balancing openness and security in scientific data governanc... · Outcome: methodological approach (qualitative content analysis / coding)
The study's empirical basis comprises 36 national‑level policy documents (18 from China; 18 from the United States) focused on scientific data governance.
Author‑reported dataset and sampling description in the Data & Methods section.
Confidence: high · Finding: null result · Paper: Balancing openness and security in scientific data governanc... · Outcome: dataset size and composition (number of documents by country)
The comparative analysis is organized across four dimensions: coordination objectives, institutional actors, governance mechanisms, and stakeholder legitimacy.
Methodological design reported in the paper; documents were coded against these four analytic categories.
Confidence: high · Finding: null result · Paper: Balancing openness and security in scientific data governanc... · Outcome: analytic framework / coding schema
Child-specific surveillance across human, animal, and environmental domains is sparse, limiting understanding of pediatric One Health risks.
Authors' methodological assessment based on literature search and review; explicit limitation stated that standardized child-focused surveillance data are lacking and heterogeneous across sectors.
Confidence: high · Finding: null result · Paper: Safeguarding future generations: a One Health perspective on... · Outcome: coverage and granularity of child-specific surveillance data in One Health domai...
The legal arguments create some uncertainty about scope and enforcement timelines; economic actors will respond to expected enforcement probabilities and expected sanctions, so clarity from regulators or courts will shape the ultimate economic effects.
Doctrinal acknowledgement of legal uncertainty combined with standard economic modeling of regulatory expectations; no empirical modeling in the Article.
Confidence: high · Finding: null result · Paper: Civil Rights and the EdTech Revolution · Outcome: degree of enforcement uncertainty and its effect on economic actor behavior
The paper is primarily legal/policy scholarship rather than an empirical assessment of the prevalence or magnitude of discrimination in EdTech; it does not provide econometric estimates of harm.
Explicit limitation noted in the Article (self‑reported).
Confidence: high · Finding: null result · Paper: Civil Rights and the EdTech Revolution · Outcome: whether the Article provides empirical prevalence/magnitude estimates
The Article's evidence consists of illustrative case law and statutory text rather than empirical datasets; it builds doctrinal chains, hypotheticals, and applications of statutory language to modern procurement and EdTech deployment models.
Explicit description of evidence and limits in the Article (self‑reported).
Confidence: high · Finding: null result · Paper: Civil Rights and the EdTech Revolution · Outcome: type of evidence used (doctrinal/case law vs. empirical data)
Methodologically, the paper uses doctrinal legal analysis and policy argumentation — close reading of federal civil‑rights statutes, administrative guidance, and judicial decisions interpreting 'recipient' and 'federal financial assistance.'
Explicit methodological statement in the Article (self‑reported).
Confidence: high · Finding: null result · Paper: Civil Rights and the EdTech Revolution · Outcome: research method used in the Article
The legal argument is grounded in statutory interpretation and precedent about the scope of 'recipient' and how federal financial assistance flows and influence should be understood.
Doctrinal analysis of statutes, administrative guidance, and judicial decisions cited and discussed in the Article.
Confidence: high · Finding: null result · Paper: Civil Rights and the EdTech Revolution · Outcome: basis of the Article's legal theory (statutory and precedent grounding)
The authors recommend empirical approaches for future work including randomized controlled trials in labs, before-after adoption studies, and collection of microdata on instrument usage, model versions, and provenance to measure impacts.
Explicit methodological recommendations in the Measurement and empirical research agenda section; these are proposals rather than executed studies.
Confidence: high · Finding: null result · Paper: ChatMicroscopy: A Perspective Review of Large Language Model... · Outcome: recommended empirical metrics: throughput, cost, error rates, time-to-discovery,...
There is a need for rigorous evaluation metrics and benchmarks for safety, reproducibility, and empirical studies quantifying productivity or scientific impact of LLM-driven instrument control.
Identified research gaps and recommended empirical research agenda described by the authors; these are recommendations rather than empirical findings.
Confidence: high · Finding: null result · Paper: ChatMicroscopy: A Perspective Review of Large Language Model... · Outcome: gap in evaluation infrastructure and lack of benchmarks for LLM-driven instrumen...
The evidence presented consists mainly of qualitative arguments drawn from documented advances and discussion of prototypes; no controlled experimental evaluation is presented.
Authors' own description in the Data & Methods section about the nature of evidence supporting their perspective.
Confidence: high · Finding: null result · Paper: ChatMicroscopy: A Perspective Review of Large Language Model... · Outcome: availability and type of empirical evidence for claims (qualitative/prototype vs...
This paper is a conceptual perspective/review rather than an original empirical study.
Explicit statement in the Data & Methods section that the contribution is a perspective synthesizing literature and illustrative examples with no controlled experimental evaluation.
Confidence: high · Finding: null result · Paper: ChatMicroscopy: A Perspective Review of Large Language Model... · Outcome: type of scholarly contribution (conceptual review)
Modern microscopes are increasingly software-driven and data-intensive, while existing ML tools for microscopy are task-specific and fragmented.
Synthesis of recent literature on optical microscopes, detectors, and task-specific ML for image analysis referenced in the perspective (descriptive claim; no new empirical data collected).
Confidence: high · Finding: null result · Paper: ChatMicroscopy: A Perspective Review of Large Language Model... · Outcome: degree of software control and data volume/intensity in modern microscopy system...
Techno‑economic assessments (TEA) and life‑cycle analyses (LCA) are necessary research tools to compare bio‑routes to incumbent chemical synthesis on cost and emissions, and current literature is incomplete in this regard.
Review notes the presence of some TEA/LCA studies but highlights gaps and heterogeneity in methods and results across case studies; many processes lack published TEA/LCA at commercial scales.
Confidence: high · Finding: null result · Paper: Harnessing Microbial Factories: Biotechnology at the Edge of... · Outcome: existence and comprehensiveness of TEA/LCA studies for documented bio-processes;...
Empirical grounding for behavioral-genetic claims and the Four Shell Model comes from the Agora-12 program dataset consisting of 720 agents producing 24,923 decision points.
Reported dataset and experimental sample: Agora-12 program (n = 720 agents; 24,923 decisions) used in analyses and validations.
Confidence: high · Finding: null result · Paper: Model Medicine: A Clinical Framework for Understanding, Diag... · Outcome: sample size and decision-point count used to support empirical claims (720 agent...