The Commonplace

Evidence (5126 claims)

Adoption: 5126 claims
Productivity: 4409 claims
Governance: 4049 claims
Human-AI Collaboration: 2954 claims
Labor Markets: 2432 claims
Org Design: 2273 claims
Innovation: 2215 claims
Skills & Training: 1902 claims
Inequality: 1286 claims

Evidence Matrix

Claim counts by outcome category and direction of finding.

Outcome Positive Negative Mixed Null Total
Other 369 105 58 432 972
Governance & Regulation 365 171 113 54 713
Research Productivity 229 95 33 294 655
Organizational Efficiency 354 82 58 34 531
Technology Adoption Rate 277 115 63 27 486
Firm Productivity 273 33 68 10 389
AI Safety & Ethics 112 177 43 24 358
Output Quality 228 61 23 25 337
Market Structure 105 118 81 14 323
Decision Quality 154 68 33 17 275
Employment Level 68 32 74 8 184
Fiscal & Macroeconomic 74 52 32 21 183
Skill Acquisition 85 31 38 9 163
Firm Revenue 96 30 22 148
Innovation Output 100 11 20 11 143
Consumer Welfare 66 29 35 7 137
Regulatory Compliance 51 61 13 3 128
Inequality Measures 24 66 31 4 125
Task Allocation 64 6 28 6 104
Error Rate 42 47 6 95
Training Effectiveness 55 12 10 16 93
Worker Satisfaction 42 32 11 6 91
Task Completion Time 71 5 3 1 80
Wages & Compensation 38 13 19 4 74
Team Performance 41 8 15 7 72
Hiring & Recruitment 39 4 6 3 52
Automation Exposure 17 15 9 5 46
Job Displacement 5 28 12 45
Social Protection 18 8 6 1 33
Developer Productivity 25 1 2 1 29
Worker Turnover 10 12 3 25
Creative Output 15 5 3 1 24
Skill Obsolescence 3 18 2 23
Labor Share of Income 7 4 9 20
Filtered by topic: Adoption
Poor data quality, fragmentation, and limited accessibility reduce model reliability and generalizability.
Survey of data characteristics and limitations presented in the paper; examples of biased or sparse datasets and the paper's discussion of impacts on model performance and transferability.
high negative Has AI Reshaped Drug Discovery, or Is There Still a Long Way... model reliability/generalizability as a function of data quality, coverage, and ...
AI remains an augmenting technology rather than a standalone solution: no AI-only originated drug has yet achieved regulatory approval.
Review of drug-approval records and company disclosures summarized in the paper; explicit statement that to date no entirely AI-originated molecule has received full regulatory approval.
high negative Has AI Reshaped Drug Discovery, or Is There Still a Long Way... regulatory approval status of AI-originated drug candidates (number of approvals...
Ethical and legal issues—patient privacy, algorithmic bias, intellectual property, and equitable access—pose risks to AI deployment in drug development.
Ethics and legal analyses, policy reports, and documented case examples collated in the review that identify these recurring concerns.
high negative From Algorithm to Medicine: AI in the Discovery and Developm... ethical/legal risk incidence; privacy breaches; bias outcomes; access inequities
Regulatory uncertainty about validation standards and liability for AI tools raises investment risk and may slow deployment.
Regulatory and policy reports included in the narrative review describing evolving standards and open questions about validation, explainability, and liability for ML-based tools.
high negative From Algorithm to Medicine: AI in the Discovery and Developm... regulatory clarity; investment risk and deployment timelines
Adoption of AI in drug R&D requires high upfront investment in data curation, compute infrastructure, and specialized talent.
Industry reports and economic analyses summarized in the review reporting capital and operational needs for building AI capabilities; qualitative synthesis rather than quantitative costing across firms.
high negative From Algorithm to Medicine: AI in the Discovery and Developm... fixed upfront costs (data curation, compute, hiring/training)
Limited transparency and interpretability of many AI algorithms (black-box models) complicate clinical and regulatory trust and adoption.
Regulatory reports, methodological critiques, and case examples in the review highlighting interpretability concerns and their impact on clinical/regulatory acceptance.
high negative From Algorithm to Medicine: AI in the Discovery and Developm... clinical/regulatory acceptance, trust, and adoption rates; explainability metric...
Performance of AI models in drug R&D depends on large, high-quality, and representative biomedical datasets; dataset bias or gaps substantially undermine model performance and generalizability.
Methodological literature and case studies cited in the review documenting failures or limited generalization when training data are biased, sparse, or non-representative; thematic synthesis rather than pooled quantification.
high negative From Algorithm to Medicine: AI in the Discovery and Developm... model performance/generalizability across populations and contexts
Predictions from AI depend on data quality and coverage and still require experimental (wet-lab) validation.
Discussion of early failures and limits in case studies and expert observations within the narrative review; methodological argument about dependence of ML models on input data.
high negative Learning from the successes and failures of early artificial... predictive validity of computational models / need for experimental validation
High-quality, standardized, interoperable data (clean, annotated, connected across modalities) is a critical limiting factor for translating AI capability into sustained impact.
Conceptual emphasis and domain knowledge argument in the editorial; no empirical measurement of data quality's causal effect included.
high negative AI as the Catalyst for a New Paradigm in Biomedical Research ability to translate AI capability into sustained impact (dependent on data qual...
There is limited reporting on privacy safeguards, model interpretability, and external validity in the reviewed studies.
Review observed sparse reporting on privacy protections, interpretability analyses and external validation across included studies.
high negative Deep technologies and safer gambling: A systematic review. frequency/extent of reporting on privacy safeguards and interpretability (qualit...
Misclassification risks (false positives and false negatives) are a common limitation and can harm consumers by incorrectly restricting access or by failing to detect harm.
Review notes model error rates reported via precision/recall and AUC; discusses harms from false positives/negatives as a recurrent limitation in the literature.
high negative Deep technologies and safer gambling: A systematic review. model error rates and downstream consumer harm risk (false positive/negative imp...
Privacy and ethical concerns are substantial: continuous monitoring and sensitive behavioural inference raise privacy, surveillance, and misuse risks.
Multiple included studies and the review discussion explicitly identify privacy, ethical, and potential misuse concerns with continuous monitoring and behavioural inference.
high negative Deep technologies and safer gambling: A systematic review. privacy/ethical risk (qualitative concerns reported across studies)
The black-box nature of many deep learning models undermines scientific interpretability and experimental trust, limiting adoption in materials research.
Cited concerns and methodological papers advocating interpretable architectures and post hoc explanation methods reviewed in the paper; synthesis of community critique.
high negative Machine Learning-Driven R&D of Perovskites and Spinels: From... model interpretability and experimental adoption/trust
Insufficient attention to model reliability—particularly uncertainty miscalibration—reduces real-world utility because experimentalists need reliable confidence estimates, not only point predictions.
Survey of literature on uncertainty estimation and calibration (Bayesian NNs, ensembles, temperature scaling, conformal prediction) and papers reporting calibration issues; recommendations drawn from these sources.
high negative Machine Learning-Driven R&D of Perovskites and Spinels: From... calibration of predictive uncertainties (e.g., calibration error, coverage) and ...
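One of the calibration methods the survey names, conformal prediction, can be made concrete with a short sketch. The function below is a minimal split-conformal interval constructor for a generic regressor; the function name and data are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def split_conformal_interval(cal_pred, cal_true, test_pred, alpha=0.1):
    """Turn point predictions into intervals with marginal coverage
    of at least 1 - alpha, using a held-out calibration set."""
    # Nonconformity scores: absolute residuals on the calibration set
    scores = np.abs(cal_true - cal_pred)
    n = len(scores)
    # Finite-sample-corrected quantile of the scores
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(scores, level, method="higher")
    # Symmetric interval around each test-set point prediction
    return test_pred - q, test_pred + q
```

This is the kind of reliable confidence estimate the claim argues experimentalists need: instead of a single predicted property value, each candidate material gets a range that is calibrated against held-out data.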
Progress of DL-driven materials discovery is limited by scarcity of high-quality, diverse labeled datasets; small, noisy, or biased datasets limit model generalization.
Review and synthesis of empirical studies and methodological papers documenting dataset size/quality issues and their impact on model performance; no new dataset analysis in this paper.
high negative Machine Learning-Driven R&D of Perovskites and Spinels: From... model generalization / predictive performance on out-of-distribution materials o...
Traditional ESG ratings often suffered from data inconsistency, subjectivity and limited coverage of unstructured sustainability information.
Literature review and citations cited in the paper (e.g., Berg et al. 2022 and other ESG-rating divergence studies). This is presented as established background evidence rather than a new empirical finding in the study.
high negative Green Intelligence in Finance: Artificial Intelligence-Drive... Quality attributes of traditional ESG ratings: data consistency, subjectivity, c...
At the question level, incorrect chatbot suggestions substantially reduce caseworker accuracy, with a two-thirds reduction on easy questions where the control group performed best.
Question-level analysis from the randomized experiment comparing cases where chatbot suggestions were incorrect versus control; paper reports a ~66% reduction in accuracy on easy questions when chatbot suggestions were incorrect (exact denominators and statistics not provided in the excerpt).
high negative LLMs in social services: How does chatbot accuracy affect hu... caseworker accuracy on easy questions when presented with incorrect chatbot sugg...
Common barriers to ERM adoption in MSMEs include resource constraints and lack of expertise.
Findings from the literature review identifying determinants and barriers reported across studies (survey and qualitative studies commonly cited in such reviews); specific sample sizes/methods not provided in the summary.
high negative A Literature Review: Effect of Enterprise Risk Management (E... ERM adoption/implementation (barriers and determinants)
MSMEs are particularly vulnerable to external shocks because of limited financial resources, weak internal controls, and heavy dependence on owner-managers’ intuition.
Background literature summarized in the review describing common structural and governance characteristics of MSMEs; drawn from multiple sources in the literature (specific studies not cited in the summary).
high negative A Literature Review: Effect of Enterprise Risk Management (E... vulnerability to external shocks
The article identifies and lays out several concerns regarding the government's approach to regulating AI.
Analytical critique presented in the paper (legal/policy analysis summarizing potential regulatory shortcomings). Based on the author's review and argumentation rather than primary empirical data.
high negative Regulation and governance of artificial intelligence in Indi... adequacy and risks of the government's AI regulatory approach
Environmental regulations weaken the beneficial influence of generative AI on a company's ESG performance.
Moderation/interaction tests in the panel-data econometric model using measures of environmental regulation (on the same 2012–2024 Chinese A-share firm sample) showing a statistically significant negative interaction effect.
high negative How Can Generative AI Promote Corporate ESG Performance? Evi... corporate ESG performance (effect of generative AI moderated by environmental re...
Gaps in infrastructure readiness, digital awareness, and inclusive policy frameworks hinder equitable AI adoption among micro‑enterprises.
Cross‑study synthesis of barriers identified across the 55 included articles; infrastructural, awareness, and policy barriers are explicitly reported as recurring themes.
high negative Role of AI in Enhancing Work Efficiency and Opportunities fo... barriers to AI adoption (infrastructure readiness, digital awareness, policy inc...
Entrenched societal inequities imply that women and girls are often disproportionately held back from achieving their potential.
Broad claim referencing societal inequities and their effects on women and girls; stated in the introduction without specific empirical citations in the excerpt.
high negative Social Protection and Gender: Policy, Practice, and Research socioeconomic attainment of women and girls (e.g., income, education, empowermen...
Only 24.4% of at-risk workers have viable transition pathways, where 'viable' is defined as sharing at least 3 skills and achieving at least 50% skill transfer.
Analysis of job-to-job transitions on the validated knowledge graph using an operational definition of viable pathways (>=3 shared skills and >=50% skill transfer); proportion of at-risk workers meeting that criterion reported as 24.4% (underlying at-risk worker count not given in the excerpt).
high negative Graph-Based Analysis of AI-Driven Labor Market Transitions: ... percentage of at-risk workers with viable transition pathways (per defined thres...
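The viability criterion in this claim is concrete enough to sketch as a filter over a skills graph. The function and data structures below are hypothetical illustrations (jobs as skill sets), not the paper's knowledge-graph implementation:

```python
def viable_transitions(origin_skills, target_jobs, min_shared=3, min_transfer=0.5):
    """Filter job-to-job transitions by the stated criterion: a pathway is
    'viable' if origin and target share at least `min_shared` skills and at
    least `min_transfer` of the origin job's skills carry over."""
    viable = []
    for job, skills in target_jobs.items():
        shared = origin_skills & skills
        transfer = len(shared) / len(origin_skills) if origin_skills else 0.0
        if len(shared) >= min_shared and transfer >= min_transfer:
            viable.append(job)
    return viable
```

Applying a filter like this to every at-risk occupation and counting those with at least one surviving pathway yields the kind of proportion (24.4%) the paper reports.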
20.9% of jobs in the dataset face high automation risk.
Risk classification applied to the jobs represented in the knowledge graph (sample size: 9,978 job postings); proportion of jobs labeled as 'high automation risk' is reported as 20.9%.
high negative Graph-Based Analysis of AI-Driven Labor Market Transitions: ... proportion of jobs classified as high automation risk
AI notably reduces customer stability in sports enterprises (SE).
Empirical estimation using the DML model on the same panel dataset of 45 Chinese listed SEs (2012–2023); authors report a statistically significant negative effect of AI on customer stability.
high negative Can Artificial Intelligence Enhance the Stability of Supply ... customer stability (component of supply chain stability)
Significant challenges persist for AI-enhanced GS-BESS deployment, including limited data availability, poor model generalization, high computational requirements, scalability issues, and regulatory gaps.
Barriers and limitations identified across the literature as reported in this systematic review (PRISMA-based synthesis). The excerpt does not enumerate which studies reported each barrier or provide prevalence statistics.
high negative Grid-Scale Battery Energy Storage and AI-Driven Intelligent ... Barriers to effective AI application and large-scale GS-BESS deployment (data av...
The sample is limited to Chinese A-share-listed design enterprises (2014–2023), which may limit generalizability to small and medium-sized enterprises (SMEs) or firms in other countries/regions.
Study sample description: A-share-listed design-oriented enterprises in China between 2014 and 2023; authors explicitly note this as a limitation.
high negative AI-driven design management: enhancing organizational produc... External validity / generalizability of results
Using TFP as a proxy for project efficiency aggregates effects at the firm level and therefore lacks micro-level insight into specific project workflows or design iteration processes.
Methodological limitation acknowledged in the paper: TFP is used as a firm-level proxy and the dataset does not include micro-level project workflow or iteration logs.
high negative AI-driven design management: enhancing organizational produc... Granularity of project-efficiency measurement (limitation of TFP proxy)
AI adoption in Slovakia consistently remained below the EU27 average over the 2021–2024 period.
Gap analysis comparing Slovak enterprise AI adoption indicators to EU27 averages using harmonised Eurostat data for 2021–2024.
high negative Artificial Intelligence Adoption and Labour Productivity in ... AI adoption rate among enterprises (Slovakia vs EU27 average)
The environmental footprint of healthcare systems is growing and persistent inequities in access and outcomes have intensified calls for procurement reform.
Contemporary literature review and synthesis of sector reports and studies documenting healthcare emissions/footprint and health inequities (no original empirical data reported in this paper).
high negative Greening the Medicaid Supply Chain: An ESG-Integrated Framew... environmental footprint of healthcare systems; inequities in access and health o...
There exists a systemic governance vacuum around GenAI, including gaps in privacy, accountability, and intellectual property protections.
Authors' synthesis of governance-related gaps reported across the 28 secondary studies and research agendas in the review.
high negative The Landscape of Generative AI in Information Systems: A Syn... adequacy of governance mechanisms for privacy, accountability, and intellectual ...
Societal and ethical risks—such as bias, misuse, and skill erosion—constrain GenAI adoption.
Themes synthesized from the reviewed literature (28 papers) reporting societal and ethical concerns associated with GenAI deployment.
high negative The Landscape of Generative AI in Information Systems: A Syn... societal-ethical risk level associated with GenAI (bias incidence, misuse potent...
Technical unreliability—manifesting as hallucinations and performance drift—is a major constraint on GenAI adoption.
Recurring identification of technical reliability issues (hallucinations, performance drift) in the 28 reviewed papers and authors' aggregation of technical risks.
high negative The Landscape of Generative AI in Information Systems: A Syn... technical reliability of GenAI systems (frequency/severity of hallucinations and...
Adoption of GenAI is constrained by multiple interrelated challenges.
Cross-paper synthesis from the systematic review of 28 studies identifying recurring barriers and constraints reported in the literature.
high negative The Landscape of Generative AI in Information Systems: A Syn... level/extent of GenAI adoption (barriers to adoption)
Ongoing issues remain such as data access, model transparency, ethical concerns, and the varying relevance across Global North and Global South contexts.
Critical synthesis within the review drawing on discussions and critiques in the literature about barriers and ethical challenges; based on reported limitations and regional comparisons in reviewed studies (no numerical breakdown provided).
high negative Advancing Urban Analytics: GeoAI Applications in Spatial Dec... barriers to GeoAI adoption and trustworthy use: data accessibility, model interp...
There are significantly negative spatial spillover effects between digital–real integration and New Quality Productive Forces (i.e., each variable has negative spillover impacts on the other across regions).
Spatial spillover coefficients estimated in the GS3SLS spatial simultaneous equations model using panel data for 30 provinces (2011–2022) are reported as statistically significant and negative.
high negative Spatial Interplay Between Digital–Real Integration and New Q... Spatial spillover effects of Digital–Real Integration and New Quality Productive...
Key implementation challenges include data quality and integration, model interpretability, cybersecurity and privacy, regulatory/compliance uncertainty, skills gaps among accounting professionals, and implementation costs.
Identified by the paper through literature review and practitioner reports; these are presented as recurring barriers rather than quantified with a specific sample.
high negative Role of Artificial Intelligence in the Accounting Sector incidence/severity of implementation barriers (data quality scores, integration ...
Many studies on serious-game DSTs are small-scale or experimental, and long-term impact data on behavioral change and emissions outcomes are sparse, limiting generalizability.
Review of the literature summarized in the chapter showing predominance of case studies, prototypes, and short-term evaluations rather than longitudinal or large-sample studies.
high negative Serious games and decision support tools: Supporting farmer ... Study scale/sample size, duration of follow-up, evidence on long-term behavior c...
Ensuring scientific validity of game models, scaling co-design processes, measuring real-world behavioral change, and aligning incentives (policy/subsidies, markets) are remaining challenges to using serious games for DST uptake.
Chapter discussion of limitations and gaps identified in the reviewed literature; absence or sparsity of long-term validation studies and large-scale co-design implementations documented in existing research.
high negative Serious games and decision support tools: Supporting farmer ... Model validity (accuracy vs. empirical data), scalability of co-design processes...
Current uptake of DSTs for net zero remains limited because of issues of trust, usability, lack of evidence linking actions to farm profitability, and poor integration into farmer workflows.
Literature synthesis, qualitative interviews and surveys, case studies documenting low adoption and barriers; multiple practice reports and studies cited in the chapter. Many studies report limited or uneven adoption across contexts.
high negative Serious games and decision support tools: Supporting farmer ... DST adoption/use rates; reported barriers (trust, usability, integration)
Nearby business closures increased perceived impediments to growth, amplifying pessimism via local exposure (social contagion effect).
Empirical comparison of perceived impediments to growth across variation in local exposure to nearby business closures (survey measures of local closures correlated with respondents' perceived impediments), using the cross-country survey sample.
high negative Peer Influence and Individual Motivations in Global Small Bu... perceived impediments to growth
The information-theoretic uncertainty measure provides a mechanism-level explanation for why deception value falls as transparency increases (residual uncertainty explains utility changes).
Analytical linkage in the model connecting the entropy-like residual uncertainty metric to equilibrium utility changes; theoretical argument and derivation in the paper.
high negative Evaluating Synthetic Cyber Deception Strategies Under Uncert... relationship between residual attacker uncertainty (entropy-like) and change in ...
The value of deception degrades (falls) as the true system state becomes more observable; this degradation is quantifiable via the price-of-transparency metric.
Analytical definition of price of transparency as marginal change and supporting theoretical results; computational experiments that sweep observability/transparency levels (simulated experiments, parameter sweeps; number of scenarios not specified).
high negative Evaluating Synthetic Cyber Deception Strategies Under Uncert... value of deception as a function of observability; price of transparency (margin...
The paper derives closed-form bounds and break-even conditions that delineate when deception is ineffective due to cost or detectability.
Theoretical proofs and closed-form inequalities presented in the analytical section (derivations of bounds and break-even conditions).
high negative Evaluating Synthetic Cyber Deception Strategies Under Uncert... value of deception (conditions where value ≤ 0 or falls below cost thresholds)
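The break-even logic in these three claims can be written out as a worked sketch. The notation below is assumed for illustration and is not the paper's own: $V(\tau)$ for the equilibrium value of deception at transparency level $\tau$, and $c$ for the cost of mounting the deception.

```latex
% Assumed notation (illustrative, not the paper's symbols):
% V(\tau) = value of deception at transparency level \tau, c = deployment cost.
\[
\mathrm{PoT}(\tau) \;=\; -\frac{\partial V(\tau)}{\partial \tau},
\qquad
\text{deception pays off only while } V(\tau) > c .
\]
% The break-even transparency \tau^{*} solves V(\tau^{*}) = c: beyond it,
% residual attacker uncertainty is too low for deception to cover its cost.
```

Under this reading, the "price of transparency" is the marginal loss of deception value per unit of added observability, and the closed-form bounds delineate the region where $V(\tau) \le c$.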
If deployed without mitigation, GenAI CDS risks widening disparities by performing worse on underrepresented groups or being unequally distributed across resource-rich versus resource-poor settings.
Fairness literature, subgroup performance concerns, and distributional risk analysis cited in the paper; direct empirical demonstrations of widened disparities due to GenAI CDS are limited in the literature per the paper.
high negative GenAI and clinical decision making in general practice differences in performance/outcomes across demographic and socioeconomic groups;...
Limited public datasets and vendor lock-in constrain independent reproducible evaluations and audits of current generative models in healthcare.
Observation and policy analysis in the paper noting scarcity of public clinical datasets for state-of-the-art models and proprietary constraints; no dataset counts provided.
high negative GenAI and clinical decision making in general practice availability of public datasets; reproducibility of model evaluations; number of...
GenAI CDS creates data privacy and security risks because of high-value medical data and use of external cloud services.
Known cybersecurity risks and documented incidents in health IT; the paper cites the general risk context rather than specific breach sample counts tied to GenAI deployments.
high negative GenAI and clinical decision making in general practice data breaches; unauthorized access incidents; compliance violations
GenAI CDS can amplify bias and inequities if training data underrepresent groups or reflect historical disparities.
Fairness and robustness audit literature and subgroup performance analyses referenced in the paper; specific empirical demonstrations for contemporary GenAI CDS are limited and sample sizes not given.
high negative GenAI and clinical decision making in general practice performance disparities across demographic subgroups; differential error rates; ...
GenAI CDS systems hallucinate and can produce incorrect but plausible recommendations, which can cause patient harm if trusted unchecked.
Documented failure modes of generative models and examples from controlled evaluations; the paper references known hallucination behavior from model audits and case reports, though it does not quantify incidence rates or provide large-scale observational harm data.
high negative GenAI and clinical decision making in general practice adverse events; erroneous recommendations; clinician reliance/misuse leading to ...