The Commonplace

Evidence (4333 claims)

Adoption: 5539 claims
Productivity: 4793 claims
Governance: 4333 claims
Human-AI Collaboration: 3326 claims
Labor Markets: 2657 claims
Innovation: 2510 claims
Org Design: 2469 claims
Skills & Training: 2017 claims
Inequality: 1378 claims

Evidence Matrix

Claim counts by outcome category and direction of finding.

Outcome Positive Negative Mixed Null Total
Other 402 112 67 480 1076
Governance & Regulation 402 192 122 62 790
Research Productivity 249 98 34 311 697
Organizational Efficiency 395 95 70 40 603
Technology Adoption Rate 321 126 73 39 564
Firm Productivity 306 39 70 12 432
Output Quality 256 66 25 28 375
AI Safety & Ethics 116 177 44 24 363
Market Structure 107 128 85 14 339
Decision Quality 177 76 38 20 315
Fiscal & Macroeconomic 89 58 33 22 209
Employment Level 77 34 80 9 202
Skill Acquisition 92 33 40 9 174
Innovation Output 120 12 23 12 168
Firm Revenue 98 34 22 154
Consumer Welfare 73 31 37 7 148
Task Allocation 84 16 33 7 140
Inequality Measures 25 77 32 5 139
Regulatory Compliance 54 63 13 3 133
Error Rate 44 51 6 101
Task Completion Time 88 5 4 3 100
Training Effectiveness 58 12 12 16 99
Worker Satisfaction 47 32 11 7 97
Wages & Compensation 53 15 20 5 93
Team Performance 47 12 15 7 82
Automation Exposure 24 22 9 6 62
Job Displacement 6 38 13 57
Hiring & Recruitment 41 4 6 3 54
Developer Productivity 34 4 3 1 42
Social Protection 22 10 6 2 40
Creative Output 16 7 5 1 29
Labor Share of Income 12 5 9 26
Skill Obsolescence 3 20 2 25
Worker Turnover 10 12 3 25
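The matrix above can be spot-checked programmatically; in several rows the four direction columns sum to less than the stated Total, consistent with some claims carrying a direction not displayed here. A minimal sketch, with values copied from the rows above:

```python
# Spot-check rows of the evidence matrix above: compare each stated Total with
# the sum of its four direction columns. Values are copied from the table.
rows = [
    # (outcome, positive, negative, mixed, null, total)
    ("Other", 402, 112, 67, 480, 1076),
    ("Governance & Regulation", 402, 192, 122, 62, 790),
    ("Output Quality", 256, 66, 25, 28, 375),
    ("Skill Acquisition", 92, 33, 40, 9, 174),
]

for outcome, pos, neg, mixed, null, total in rows:
    gap = total - (pos + neg + mixed + null)
    # gap > 0 means some claims in that row carry a direction not shown.
    print(f"{outcome}: gap={gap}")
```

For the four rows above this prints gaps of 15, 12, 0, and 0 respectively.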
Active filter: Governance
Shadow AI — unsanctioned, decentralized use of GenAI tools — amplifies prompt-fraud risk by bypassing central controls and audit trails.
Conceptual analysis and organizational risk reasoning; references to common practices of unsanctioned tool use (no empirical prevalence data).
medium negative Prompt Engineering or Prompt Fraud? Governance Challenges fo... increase in unmonitored prompt activity and corresponding reduction in detectabi...
External actors can commit prompt fraud via customer-facing systems or social-engineering prompt chains.
Conceptual threat scenarios and mapping of attack surfaces (customer-facing interfaces, input channels); illustrative examples provided.
medium negative Prompt Engineering or Prompt Fraud? Governance Challenges fo... risk of prompt-fraud initiated through external-facing inputs or social-engineer...
Internal actors manipulating prompts within authorized AI workflows are a realistic and important threat vector for prompt fraud.
Threat modeling and scenario-based analysis highlighting insiders with authorized access who can craft prompts.
medium negative Prompt Engineering or Prompt Fraud? Governance Challenges fo... risk or incidence of prompt-fraud events originating from internal actors
Prompt fraud can defeat controls that rely on plausibility, standard formatting, or human review that trusts model-like language.
Threat mapping and literature on automation bias; illustrative vignettes showing how machine-like outputs mimic authoritative formats.
medium negative Prompt Engineering or Prompt Fraud? Governance Challenges fo... effectiveness of plausibility/format/human-review-based controls in identifying ...
Prompt fraud lowers the entry cost of producing convincing fraudulent artifacts, increasing the ease with which attackers can create plausible forgeries.
Economic reasoning and conceptual analysis based on GenAI behavior and illustrative scenarios (no empirical cost or frequency data).
medium negative Prompt Engineering or Prompt Fraud? Governance Challenges fo... marginal cost (effort/resources) required to produce convincing fraudulent artif...
Prompt fraud — the intentional manipulation of natural-language prompts to cause generative AI systems to produce misleading, fabricated, or deceptive artifacts that bypass internal controls — constitutes a novel, low-cost fraud vector that traditional IT- and process-focused controls are ill-equipped to detect or prevent.
Conceptual analysis and threat modeling grounded in literature/regulatory review and illustrative vignettes; no systematic empirical incidence data provided.
medium negative Prompt Engineering or Prompt Fraud? Governance Challenges fo... ability of existing IT/process controls to detect or prevent fraud produced via ...
Secure infrastructure (including SECaaS-provided tools) affects the availability and trustworthiness of AI training data and models; breaches reduce returns to AI R&D via direct losses and reduced trust.
Conceptual linkage supported by case studies of data/model theft and technical literature on secure enclaves, differential privacy, federated learning; no broad quantitative estimate provided.
medium negative Security-as-a-service: enhancing cloud security through m... incidence of data/model breaches, economic returns to AI R&D
Security externalities (one firm's breach raising ecosystem risk) complicate private incentives and may justify policy interventions such as standards or mandatory reporting.
Economic theory on externalities, case studies showing spillovers from breaches, and policy analyses recommending interventions.
medium negative Security-as-a-service: enhancing cloud security through m... spillover risk, incentive alignment, justification for regulation
Concentration among large cloud/SECaaS providers can create market power, platform dependency, and affect competition in AI markets.
Market-structure theory, observed concentration patterns in industry reports, and qualitative case studies; no causal estimates provided in the chapter.
medium negative Security-as-a-service: enhancing cloud security through m... market power indicators, competition measures in AI markets
Latency and integration frictions can limit the suitability of SECaaS for specialized workloads, including some AI pipelines.
Technical evaluations and benchmarks that measure latency/resource overhead; reports and case studies noting integration challenges for high-throughput or low-latency workloads.
medium negative Security-as-a-service: enhancing cloud security through m... latency, integration overhead, suitability for AI workloads
Reliance on a small set of major cloud/SECaaS providers creates vendor lock-in, concentration risk, and systemic vulnerability if a major provider is compromised.
Market-structure discussions, observed provider outages and incidents (case studies), and theoretical arguments about concentration; no single causally identified empirical estimate provided.
medium negative Security-as-a-service: enhancing cloud security through m... market concentration, systemic risk, dependency risk
Without improvements in robustness, consistency, and neuroscientific validity of explanations, clinical uptake will be constrained, slowing commercialization and reducing returns for developers focused only on performance.
Synthesis and forward-looking argument linking methodological deficits documented in the literature to likely reduced market adoption; no direct empirical market impact measurement provided.
medium negative Explainable Artificial Intelligence (XAI) for EEG Analysis: ... clinical uptake, commercialization pace, developer returns
Weak or inconsistent explanations increase regulatory and medico-legal risk; standardized, validated XAI can lower compliance costs and liability exposure.
Logical inference connecting explanation reliability to regulatory scrutiny and liability concerns, presented as an implication in the review (no direct empirical legal analysis provided).
medium negative Explainable Artificial Intelligence (XAI) for EEG Analysis: ... regulatory/compliance and legal risk
Preprocessing pipelines (filtering, artifact removal such as ICA, re-referencing, segmentation) materially affect XAI outputs.
Review cites multiple studies and methodological notes showing explanation maps vary with preprocessing choices; effect reported qualitatively across papers.
medium negative Explainable Artificial Intelligence (XAI) for EEG Analysis: ... sensitivity of explanation outputs to preprocessing steps
There is a scarcity of human/clinical validation studies testing whether explanations improve clinician decision-making or align with clinical reasoning.
Observation from literature survey: few reviewed works include clinician studies or longitudinal/clinical impact evaluations.
medium negative Explainable Artificial Intelligence (XAI) for EEG Analysis: ... presence/absence of human/clinical validation
Identified methodological limitations include sensitivity of explanations to hyperparameters and preprocessing choices, inconsistent explanations across similar inputs, and poor correlation with known neurophysiology.
Synthesis of reported failure modes and limitations from multiple EEG-XAI studies reviewed in the paper.
medium negative Explainable Artificial Intelligence (XAI) for EEG Analysis: ... stability/consistency of explanations and alignment with neurophysiological know...
Most studies focus on qualitative visualizations (e.g., heatmaps) rather than quantitative, reproducible metrics for explanation quality; few evaluate neuroscientific validity or clinical usefulness, and robustness to noise and preprocessing is often untested.
Review-level assessment of evaluation practices across papers, noting prevalence of visual inspection and scarcity of standardized quantitative metrics or clinical validation.
medium negative Explainable Artificial Intelligence (XAI) for EEG Analysis: ... evaluation rigor: qualitative vs quantitative; assessment of robustness and clin...
Current explainability methods for EEG frequently lack robustness, consistency, and alignment with neuroscientific knowledge, limiting their trustworthiness and practical utility.
Aggregate observations from reviewed EEG-XAI studies noting inconsistent attributions, sensitivity to analysis choices, and few studies that validate explanations against neuroscientific markers or clinical endpoints.
medium negative Explainable Artificial Intelligence (XAI) for EEG Analysis: ... robustness/consistency/neuroscientific validity of explanations (trustworthiness...
Divergent governance regimes increase the risk of data localization, interoperability frictions, and regulatory fragmentation — raising costs for multinational AI development and limiting global model generalizability.
Policy‑level comparative inference from contrasting national approaches identified in the document analysis and related literature on cross‑border data governance; no direct measurement of costs or model generalizability in the paper.
medium negative Balancing openness and security in scientific data governanc... data localization, interoperability frictions, regulatory fragmentation, costs t...
State‑led coordination can rapidly mobilize resources and scale national champions, altering competitive dynamics and potentially creating winner‑take‑most outcomes.
Theoretical inference from document evidence of state mobilization and developmentalist goals in Chinese texts, combined with literature on state coordination and industrial scaling (no empirical competition measures in the paper).
medium negative Balancing openness and security in scientific data governanc... market concentration / competitive dynamics (winner‑take‑most)
AI systems trained on incomplete, adult-centric, or high-income–biased data risk perpetuating inequities in prediction, resource allocation, and policy recommendations for children and LMICs.
Data-justice and algorithmic fairness literature cited conceptually in the review; applies generalizable concerns about biased training data to the One Health/child-health context without empirical bias audits in this paper.
medium negative Safeguarding future generations: a One Health perspective on... equity and fairness of AI-driven predictions and allocation decisions affecting ...
Data gaps, especially child-specific and cross-sectoral One Health data, reduce the reliability and fairness of AI-driven disease prediction and economic models.
Methodological argument grounded in the review of data availability; authors connect observed surveillance gaps to model limitations—no empirical model performance analyses presented.
medium negative Safeguarding future generations: a One Health perspective on... reliability and fairness metrics of AI-driven disease forecasting and economic m...
Fragmented governance and funding structures hinder cross-sectoral prevention and response for child-centered One Health challenges.
Policy analyses and governance literature synthesized in the review; narrative evidence of siloed funding and governance limiting cross-sector action (no quantitative governance metrics provided).
medium negative Safeguarding future generations: a One Health perspective on... effectiveness of cross-sector prevention and response mechanisms for child healt...
Integrated One Health research and policy implementation are limited—particularly in LMICs—creating gaps in prevention and response for children.
Policy, programmatic, and academic literature reviewed; authors note under-representation of LMIC contexts and limited cross-sectoral integration in the published literature and surveillance systems.
medium negative Safeguarding future generations: a One Health perspective on... degree of One Health research integration and policy implementation affecting ch...
Geographic ranges of many vectors and zoonoses are shifting (due to climate and land-use change), increasing children's exposure in new areas.
Ecological and epidemiological modeling studies and surveillance trends cited in the review indicating range shifts for some vectors/zoonoses; evidence is region- and agent-specific and heterogeneously reported.
medium negative Safeguarding future generations: a One Health perspective on... geographic incidence and exposure risk of vector-borne and zoonotic infections a...
Extreme weather events amplify children's exposure to pathogens and degrade health infrastructure and services.
Disaster and public-health case studies and surveillance reports summarized in the review documenting post-event increases in infectious disease exposure and disruptions to services; narrative evidence, context-dependent.
medium negative Safeguarding future generations: a One Health perspective on... post-disaster infectious disease incidence and health-service disruption metrics...
Climate change intensifies direct harms to children (heat injury, extreme weather injury) and indirect harms (food insecurity, mental health impacts, shifting disease ecologies).
Climate-health literature and sectoral reports synthesized; references to observational studies and modeling showing associations between climate events and the listed harms (no pooled effect sizes).
medium negative Safeguarding future generations: a One Health perspective on... incidence of heat-related illness, injury from extreme weather, food insecurity ...
Pediatric and neonatal AMR pose distinct clinical and surveillance challenges compared to adult AMR.
Clinical literature and surveillance reports synthesized in the review highlighting differences in pathogen spectra, dosing, diagnostics, and reporting for pediatric/neonatal populations; narrative description without quantitative synthesis.
medium negative Safeguarding future generations: a One Health perspective on... adequacy of clinical management and surveillance sensitivity for AMR in pediatri...
Children are disproportionately exposed to antimicrobial-resistant pathogens via clinical care, community transmission, food chains, and environmental contamination.
Synthesis of clinical studies, community surveillance reports, food-safety literature, and environmental microbiology studies; review notes pediatric and environmental sources but provides no pooled prevalence estimates.
medium negative Safeguarding future generations: a One Health perspective on... exposure/infection rates with antimicrobial-resistant pathogens in children
Children's dependence on caregivers and local ecosystems (for nutrition, shelter, sanitation) increases vulnerability to ecosystem-level shocks.
Social and public-health literature integrated in the review describing caregiver-mediated dependence and ecosystem service reliance; qualitative and observational evidence rather than quantitative pooled estimates.
medium negative Safeguarding future generations: a One Health perspective on... child health outcomes mediated by caregiver capacity and local ecosystem integri...
Children are uniquely vulnerable within the One Health nexus because physiological immaturity, developmental sensitivity, behavior-driven exposures, and ecosystem dependence make them disproportionately affected by AMR, climate change, and emerging zoonotic/vector-borne infections.
Narrative synthesis of interdisciplinary peer-reviewed studies, surveillance reports, and policy literature; biological and epidemiological reasoning rather than a pooled quantitative analysis; heterogeneous and cross-disciplinary evidence summarized by the authors.
medium negative Safeguarding future generations: a One Health perspective on... overall child vulnerability to AMR, climate change, and zoonotic/vector-borne in...
Holding schools liable under federal civil‑rights statutes is sometimes possible but often insufficient to prevent or remediate harms caused by EdTech products.
Policy argumentation and doctrinal analysis with hypotheticals and illustrative cases demonstrating enforcement limitations when only schools are targeted (no empirical prevalence data).
medium negative Civil Rights and the EdTech Revolution effectiveness of school‑only liability in preventing/remediating EdTech discrimi...
Resource-rich labs and firms are likely to adopt LLM orchestration faster, which could widen gaps in research capacity between institutions and countries unless mitigated by policy choices.
Equity and diffusion argument based on resource requirements (compute, data, validation); no adoption-rate data or cross-institution comparisons provided.
medium negative ChatMicroscopy: A Perspective Review of Large Language Model... adoption rates across institutions, disparities in research capacity
There is potential for 'winner-take-most' market outcomes if a few players combine superior models, instrument control software, and exclusive datasets.
Economics reasoning about network effects and data concentration; no empirical market concentration metrics specific to microscopy provided.
medium negative ChatMicroscopy: A Perspective Review of Large Language Model... market concentration and distribution of market share among firms
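The winner-take-most mechanism this claim invokes (network effects compounding early leads) can be sketched with a toy urn-style simulation. All parameters are hypothetical, and the superlinear feedback (adoption weight proportional to the square of current share) is an illustrative assumption, not a result from the source paper:

```python
import random

random.seed(0)  # deterministic toy run

# Hypothetical urn-style sketch of winner-take-most dynamics: each new user
# joins a platform with probability proportional to the SQUARE of its current
# user count (superlinear network effects, an illustrative assumption).
shares = [1, 1, 1]  # three platforms start with one user each
for _ in range(10_000):
    weights = [s * s for s in shares]
    total = sum(weights)
    r = random.uniform(0, total)
    cum = 0
    for i, w in enumerate(weights):
        cum += w
        if r <= cum:
            shares[i] += 1
            break

# Under superlinear feedback an early lead compounds: one platform ends up
# with the overwhelming majority of users.
print(max(shares) / sum(shares))
```

With linear (proportional) feedback the limit shares are random but persistent; squaring the weights is what pushes the system toward near-monopoly.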
Upfront investments required for compute, data labeling, validation, and safety testing may raise entry costs and favor incumbents.
Economic logic about fixed costs and scale advantages; no measured entry-cost or firm-dynamics data provided.
medium negative ChatMicroscopy: A Perspective Review of Large Language Model... entry costs and competitive dynamics (incumbent advantage)
There is a risk of deskilling in some technical roles, with implications for training and workforce development.
Theoretical reasoning about automation-induced deskilling; no empirical study or measured skill changes provided.
medium negative ChatMicroscopy: A Perspective Review of Large Language Model... level of technical skill required for routine roles and training needs
Regulatory frameworks often lack tools for algorithmic accountability, data portability, and cross-border enforcement for platformed services.
Policy and regulatory studies reviewed in the paper; assessment based on gap analysis rather than new regulatory audit data.
medium negative Financial Inclusion in the Age of FinTech Platforms: Opportu... availability of regulatory tools (algorithmic accountability, data portability);...
Algorithmic bias—stemming from training data, feature selection, or proxy variables—can produce systematic discrimination (for example, gendered access to credit).
Reviewed empirical and methodological studies on algorithmic fairness; paper cites documented instances and outlines mechanisms but does not present original audit data.
medium negative Financial Inclusion in the Age of FinTech Platforms: Opportu... disparate treatment/outcomes by demographic group (e.g., gender) in credit decis...
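The proxy-variable mechanism behind this claim can be made concrete with a minimal sketch. The applicant pool, feature, and threshold below are entirely hypothetical, chosen only to show how a rule that never reads a protected attribute can still yield disparate approval rates:

```python
# Hypothetical illustration of proxy-variable bias. The rule below never reads
# the protected attribute, yet approval rates differ by group because the
# feature it relies on (continuous full-time tenure) correlates with gender
# in this fabricated applicant pool.
applicants = [
    {"gender": "F", "tenure_years": 3},
    {"gender": "F", "tenure_years": 4},
    {"gender": "F", "tenure_years": 7},
    {"gender": "M", "tenure_years": 6},
    {"gender": "M", "tenure_years": 8},
    {"gender": "M", "tenure_years": 5},
]

def approve(applicant) -> bool:
    # "Neutral" rule: no protected attribute is consulted.
    return applicant["tenure_years"] >= 5

def approval_rate(group: str) -> float:
    members = [a for a in applicants if a["gender"] == group]
    return sum(approve(a) for a in members) / len(members)

print(approval_rate("F"), approval_rate("M"))  # disparate outcomes: ~0.33 vs 1.0
```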
Data asymmetry and differential digital footprints create information advantages for platforms and reinforce borrower segmentation.
Theoretical argument supported by literature on data externalities and platform information advantages; illustrated with case examples rather than new data analysis.
medium negative Financial Inclusion in the Age of FinTech Platforms: Opportu... information asymmetry metrics; borrower segmentation (heterogeneity in credit of...
Differential digital literacy, device/infrastructure access, and biased data-driven decision rules can exclude or disadvantage groups.
Conceptual synthesis and references to documented cases of digital divides and algorithmic bias in existing literature; no new empirical measurement provided.
medium negative Financial Inclusion in the Age of FinTech Platforms: Opportu... access disparities by digital literacy/device access; biased decision outcomes (...
Without deliberate governance, platformization can amplify exclusion through data asymmetries, algorithmic bias, gendered barriers, infrastructure gaps, and market concentration.
Literature synthesis and illustrative examples of platform dynamics and algorithmic decision rules; no systematic causal estimates in the paper.
medium negative Financial Inclusion in the Age of FinTech Platforms: Opportu... exclusion (access disparities by gender, connectivity, digital literacy); market...
FinTech simultaneously creates new structural inequalities and systemic risks.
Argumentative synthesis of theoretical and empirical work across development finance and regulatory studies; illustrative case examples referenced (e.g., platform market effects and algorithmic decision-making).
medium negative Financial Inclusion in the Age of FinTech Platforms: Opportu... inequality (distributional outcomes); systemic financial risk
Multipolar competition in AI increases risks of fragmented regulations, export control cascades, and inefficient duplication of standards, producing large economic coordination and collective‑action costs.
Theoretical argument and literature synthesis on international political economy of standards and controls; no novel quantitative cost estimates, though the paper recommends empirical research avenues to quantify these costs.
medium negative Smart Power and the Transformation of Contemporary Internati... regulatory fragmentation, standard duplication, and associated economic costs
AI‑driven information operations, recommendation systems, and content economies alter market incentives, advertising revenues, and the political economy of attention—creating externalities not priced in markets.
Interpretive synthesis of literature on digital platforms, misinformation, and attention economics; supported by cited secondary studies and policy examples rather than new empirical measurement.
medium negative Smart Power and the Transformation of Contemporary Internati... market incentives, advertising revenue distribution, and unpriced externalities ...
Competition over AI standards, data governance norms, and platform rules is an economic contest with long‑run market structure implications (network effects, winner‑take‑most outcomes).
Theoretical synthesis drawing on platform economics and standards literature; supported by qualitative examples of standard‑setting contests but without new quantitative market structure analysis.
medium negative Smart Power and the Transformation of Contemporary Internati... market concentration and distributional outcomes in platform/AI markets (network...
Export controls, sanctions, investment screening, and tech diplomacy function as economic levers of smart power and reshape global AI supply chains, FDI flows, and comparative advantage.
Policy‑focused evidence and examples cited in the literature review and case studies; proposed policy event‑study approaches are suggested but no original empirical event study is presented.
medium negative Smart Power and the Transformation of Contemporary Internati... structure of AI supply chains, cross‑border FDI flows, and comparative advantage
The digital/AI era changes both the tools (new technological instruments of influence) and the targets (information environments, data infrastructures), creating novel governance and collective‑action problems.
Conceptual analysis supported by literature synthesis on digital platforms, AI, surveillance, and information operations; illustrative examples from policy and secondary studies rather than original empirical measurement.
medium negative Smart Power and the Transformation of Contemporary Internati... emergence of new governance/collective‑action problems related to digital/AI too...
Framing policy as 'Digital Sovereignty' supports data‑localization and stronger cross‑border constraints, which will affect multinational fintechs and cross‑border credit/data services.
Policy-framing and international governance analysis in the compendium; inference about cross‑border regulatory impacts rather than measured effects.
medium negative Diego Saucedo Portillo Sauceport Research degree of data localization measures enacted, changes in cross‑border data flows...
Mandatory white‑box transparency and audit requirements are likely to favor firms that can afford compliance (larger incumbents and certified auditors), potentially raising barriers to entry for small fintechs unless mitigated by proportional rules or sandboxes.
Economic inference and market-structure analysis presented in the "Market structure & competition" section; no empirical panel or field data (theoretical reasoning).
medium negative Diego Saucedo Portillo Sauceport Research barriers to entry / market concentration / number of small fintech entrants
Poorly calibrated rules may unintentionally restrict product offerings or increase costs for low‑income borrowers if compliance expenses are passed through.
Risk analysis and economic reasoning in the compendium; projection based on standard pass‑through and market equilibrium logic (no empirical measurement provided).
medium negative Diego Saucedo Portillo Sauceport Research product availability, costs (interest rates/fees) for low‑income borrowers
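The pass-through logic behind this projection can be illustrated with a toy calculation; the cost figure and pass-through rate below are hypothetical:

```python
# Toy pass-through calculation; the cost figure and pass-through rate are
# hypothetical. If a lender's per-loan compliance cost rises by c and a share
# rho of marginal cost is passed through to prices, the borrower's all-in cost
# rises by rho * c.
def borrower_cost_increase(compliance_cost: float, pass_through: float) -> float:
    return pass_through * compliance_cost

# e.g. $40 of added compliance cost at a 50% pass-through rate raises the
# borrower's cost by $20
print(borrower_cost_increase(40.0, 0.5))  # 20.0
```

In equilibrium the pass-through rate itself depends on demand elasticity and market structure, which is why the claim hedges on whether expenses "are passed through."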