The Commonplace

Evidence (5539 claims)

Adoption: 5539 claims
Productivity: 4793 claims
Governance: 4333 claims
Human-AI Collaboration: 3326 claims
Labor Markets: 2657 claims
Innovation: 2510 claims
Org Design: 2469 claims
Skills & Training: 2017 claims
Inequality: 1378 claims

Evidence Matrix

Claim counts by outcome category and direction of finding.

Outcome                    Positive  Negative  Mixed  Null  Total
Other                           402       112     67   480   1076
Governance & Regulation         402       192    122    62    790
Research Productivity           249        98     34   311    697
Organizational Efficiency       395        95     70    40    603
Technology Adoption Rate        321       126     73    39    564
Firm Productivity               306        39     70    12    432
Output Quality                  256        66     25    28    375
AI Safety & Ethics              116       177     44    24    363
Market Structure                107       128     85    14    339
Decision Quality                177        76     38    20    315
Fiscal & Macroeconomic           89        58     33    22    209
Employment Level                 77        34     80     9    202
Skill Acquisition                92        33     40     9    174
Innovation Output               120        12     23    12    168
Firm Revenue                     98        34     22          154
Consumer Welfare                 73        31     37     7    148
Task Allocation                  84        16     33     7    140
Inequality Measures              25        77     32     5    139
Regulatory Compliance            54        63     13     3    133
Error Rate                       44        51      6          101
Task Completion Time             88         5      4     3    100
Training Effectiveness           58        12     12    16     99
Worker Satisfaction              47        32     11     7     97
Wages & Compensation             53        15     20     5     93
Team Performance                 47        12     15     7     82
Automation Exposure              24        22      9     6     62
Job Displacement                  6        38     13           57
Hiring & Recruitment             41         4      6     3     54
Developer Productivity           34         4      3     1     42
Social Protection                22        10      6     2     40
Creative Output                  16         7      5     1     29
Labor Share of Income            12         5      9           26
Skill Obsolescence                3        20      2           25
Worker Turnover                  10        12      3           25
Filtered by category: Adoption
The method is architecture-agnostic: uncertainty handling via parameter samples allows use of any deterministic neural-network architecture (e.g., quantile regressors, autoencoders) without specialized Bayesian layers.
Conceptual argument and demonstrations: authors implement a quantile emulator and an autoencoder-based ODE emulator as examples, showing the same uncertainty treatment applies to different network types.
Confidence: high · Direction: positive · Source: MCMC Informed Neural Emulators for Uncertainty Quantificatio... · Outcome: applicability across network architectures (demonstrated via example implementat...
By sampling training parameter vectors from a calibrated posterior (via MCMC), the surrogate avoids training on unphysical or implausible parameter configurations.
Design choice described in methods: MCMC sampling is used to draw parameter samples from the model-parameter distribution/posterior, thereby focusing training data on plausible regions; no experiments provided here quantify frequency of unphysical samples under alternative schemes.
Confidence: high · Direction: positive · Source: MCMC Informed Neural Emulators for Uncertainty Quantificatio... · Outcome: proportion of training samples that fall in implausible/unphysical parameter reg...
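The two claims above describe the paper's central mechanism: draw training parameters from a calibrated posterior so the surrogate is only trained on plausible configurations. A minimal sketch of that idea follows, with a hypothetical one-dimensional Gaussian posterior, a toy Metropolis sampler, and a polynomial fit standing in for the neural emulator (none of these specific choices come from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical log-posterior over a single physical parameter theta:
# a Gaussian centred on 2.0 stands in for a calibrated posterior.
def log_post(theta):
    return -0.5 * ((theta - 2.0) / 0.3) ** 2

# Minimal Metropolis sampler (illustrative, not the paper's implementation).
def metropolis(n_samples, theta0=0.0, step=0.5):
    samples, theta, lp = [], theta0, log_post(theta0)
    while len(samples) < n_samples:
        prop = theta + step * rng.normal()
        lp_prop = log_post(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        samples.append(theta)
    return np.array(samples)

thetas = metropolis(2000)[500:]        # discard burn-in
y = np.sin(thetas)                     # stand-in "simulator" output
coeffs = np.polyfit(thetas, y, deg=3)  # stand-in deterministic surrogate

# Training inputs concentrate in the plausible region of the posterior.
print(thetas.mean())
```

Because the training inputs are posterior draws rather than, say, uniform grid points, the surrogate never has to fit the simulator in regions the calibration has already ruled out.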
Dataset and code (CFD, CFM, CFR) are publicly released.
Repository link provided in the summary (https://github.com/ZhengyaoFang/CFM) and paper states public release of dataset and code.
Confidence: high · Direction: positive · Source: Too Vivid to Be Real? Benchmarking and Calibrating Generativ... · Outcome: public availability of dataset and code
The Color Fidelity Dataset (CFD) is a large-scale dataset of over 1.3 million images containing both real photographs and synthetic T2I outputs, organized with ordered levels of color realism to support objective evaluation.
Dataset construction described in paper and repository: size stated as >1.3M images; contains a mixture of real photos and synthetic images annotated/organized with ordered realism labels enabling relative judgments of color fidelity.
Confidence: high · Direction: positive · Source: Too Vivid to Be Real? Benchmarking and Calibrating Generativ... · Outcome: dataset size and composition; presence of ordered color-realism labels enabling ...
The set of loss functions for which classical evaluation is possible includes expectation-based losses, kernel/MMD-like objectives, and other standard generative-model criteria (a broad loss-function scope).
Theoretical coverage and examples in the paper enumerating loss families (expectations, MMD, certain divergences) and showing how the classical-approximation results apply to each. The claim is supported by derivations and examples provided in the text.
Confidence: high · Direction: positive · Source: Universality of Classically Trainable, Quantum-Deployed Boso... · Outcome: scope of loss functions for which classical evaluation/approximation is feasible
A wide class of loss functions (including expectation-based losses and kernel/MMD-style objectives) and their gradients can be evaluated or efficiently approximated on a classical computer for BSBMs using recent classical-approximation results for expectation values in linear optics.
Theoretical argument in the paper leveraging recent classical-approximation results for expectation values in linear optics; covers expectation-based losses and kernel/MMD-like divergences and provides constructions/complexity statements showing efficient classical evaluation/approximation of these losses and, in many cases, their gradients. (The claim is based on proofs/derivations rather than empirical data.)
Confidence: high · Direction: positive · Source: Universality of Classically Trainable, Quantum-Deployed Boso... · Outcome: classical computability/approximation of loss values and gradients (time/complex...
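As a concrete illustration of a kernel-style objective that can be evaluated classically from samples alone, here is a generic (biased) squared-MMD estimator with a Gaussian kernel. This is standard MMD machinery, not the paper's boson-sampling-specific construction:

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    # k(x, y) = exp(-||x - y||^2 / (2 sigma^2)), evaluated pairwise.
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

def mmd2_biased(x, y, sigma=1.0):
    # Biased (V-statistic) estimate of squared MMD between samples x and y.
    return (gaussian_kernel(x, x, sigma).mean()
            - 2 * gaussian_kernel(x, y, sigma).mean()
            + gaussian_kernel(y, y, sigma).mean())

rng = np.random.default_rng(1)
same = mmd2_biased(rng.normal(0, 1, (200, 2)), rng.normal(0, 1, (200, 2)))
diff = mmd2_biased(rng.normal(0, 1, (200, 2)), rng.normal(3, 1, (200, 2)))
print(same < diff)  # distributions that differ give a larger MMD
```

Losses of this form need only sample sets and kernel evaluations, which is why they fall inside the "classically evaluable" scope the paper describes.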
PRF design decomposes into two independent dimensions: feedback source (where feedback text comes from) and feedback model (how that feedback is used to refine the query).
Paper's conceptual framing and controlled experiments that isolate and vary these two factors independently.
Confidence: high · Direction: positive · Source: A Systematic Study of Pseudo-Relevance Feedback with LLMs · Outcome: PRF design components (feedback source vs. feedback model)
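The decomposition can be pictured as two interchangeable components. The sketch below only illustrates the framing; the function names and the naive term-appending expansion are invented for this example, not taken from the paper:

```python
def feedback_from_top_docs(query, retrieve):
    # Feedback *source*: text drawn from the top-ranked documents.
    return " ".join(retrieve(query)[:3])

def feedback_from_llm(query, generate):
    # Feedback *source*: text generated by an LLM for the query.
    return generate(f"Write a short passage answering: {query}")

def expand_query(query, feedback_text):
    # Feedback *model*: how the feedback text refines the query
    # (here, naive appending of up to five novel terms).
    seen = query.split()
    extra = [t for t in feedback_text.split() if t not in seen][:5]
    return " ".join(seen + extra)

# Any source pairs with any model: the two axes vary independently.
fake_retrieve = lambda q: ["solar panel efficiency report",
                           "grid storage costs", "unrelated text"]
expanded = expand_query("solar efficiency",
                        feedback_from_top_docs("solar efficiency", fake_retrieve))
print(expanded)
```

Swapping `feedback_from_top_docs` for `feedback_from_llm` changes the source while leaving the feedback model untouched, which is exactly the independence the paper's controlled experiments exploit.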
The paper proposes specific operational and market recommendations: firms should invest in middleware and co-design partnerships; policymakers should fund shared QCSC infrastructure and workforce programs; researchers should prioritize interoperable middleware, scheduling models, and economic experiments on access-pricing.
Explicit recommendations section synthesizing prior architectural and economic analysis; prescriptive assertions based on conceptual arguments rather than experimental validation.
Confidence: high · Direction: positive · Source: Reference Architecture of a Quantum-Centric Supercomputer · Outcome: adoption of recommended investments/policies and their effect on access, standar...
Middleware standardization and interoperable APIs reduce switching costs and foster competition; lack of standards risks vendor lock-in and higher long-run costs.
Economic and systems-design argument drawing on well-understood effects of standardization in software ecosystems; no empirical QCSC-standardization case studies provided.
Confidence: high · Direction: positive · Source: Reference Architecture of a Quantum-Centric Supercomputer · Outcome: switching costs, level of competition, interoperability across QCSC offerings
QCSC reference architecture elements — e.g., QPU integration patterns, low-latency interconnects, orchestration and scheduling middleware, unified programming environments, data staging strategies — are required components to address current friction.
System decomposition and interface requirements derived from use-case analysis; proposed architecture components listed and motivated; no experimental validation.
Confidence: high · Direction: positive · Source: Reference Architecture of a Quantum-Centric Supercomputer · Outcome: presence/absence of specific architecture components and their theorized effect ...
The GNN provides greater stability (robustness over time and across conditions) than the MLP, with marked gains at low elevation angles where propagation is most variable.
Evaluation metrics in the experiments included stability/robustness over time and across elevation-angle conditions; reported performance shows larger relative gains for the GNN at low elevation angles.
Confidence: high · Direction: positive · Source: Federated Learning-driven Beam Management in LEO 6G Non-Terr... · Outcome: stability/robustness of beam predictions across time and elevation angles (espec...
A Graph Neural Network (GNN) model significantly outperforms a Multi-Layer Perceptron (MLP) baseline in beam prediction accuracy.
Supervised comparison reported in the paper between an MLP baseline and a GNN on realistic channel and beamforming data, evaluated with beam prediction accuracy metrics.
A strictly non-reciprocal interaction bias (directional/asymmetric effects between competitors) is necessary to suppress local fluctuations and produce a robust absorbing (permanent monopoly) state.
Theoretical analysis of absorbing states and stability conditions in the model, with supporting numerical simulations comparing symmetric versus non-reciprocal interaction rules (simulation counts unspecified). Results are internal to the model framework.
Confidence: high · Direction: positive · Source: Macroscopic Dominance from Microscopic Extremes: Symmetry Br... · Outcome: existence/probability of an absorbing (stable monopoly) state under symmetric vs...
Early advantage in discovering resources (transient superiority) is governed by extreme-value statistics of first-passage times: rare, fast discoveries determine which population gets early footholds.
Analytic derivation applying extreme-value theory to first-passage times in the paper's stochastic, spatially-structured population model; supported by numerical simulations of stochastic realizations (simulation details unspecified). This is a theoretical/computational result (no empirical data).
Confidence: high · Direction: positive · Source: Macroscopic Dominance from Microscopic Extremes: Symmetry Br... · Outcome: probability distribution of earliest discovery / identity of population achievin...
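The extreme-value logic is easy to see in a toy simulation: a population's discovery time is the minimum over many searchers' first-passage times, and for N i.i.d. exponential times that minimum has mean 1/(N·λ). The exponential model here is an illustrative stand-in, not the paper's spatially structured process:

```python
import numpy as np

rng = np.random.default_rng(2)
n_searchers, lam, trials = 100, 0.1, 5000

# First-passage (discovery) time of each searcher, modelled here as
# i.i.d. exponential(1/lam) draws -- an illustrative stand-in for the
# paper's spatial first-passage process.
times = rng.exponential(1 / lam, size=(trials, n_searchers))

# The population's discovery time is the *minimum* over searchers:
# an extreme-value quantity. For exponentials, the min of N draws is
# exponential with rate N*lam, so its mean is 1/(N*lam).
first = times.min(axis=1)
print(first.mean())  # close to 1/(n_searchers * lam) = 0.1
```

The mean individual discovery time is 1/λ = 10, yet the population's first discovery arrives around 0.1: rare fast discoveries, not typical ones, decide who gets the early foothold.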
Policy recommendations include subsidizing complementary investments (data governance, training) rather than technology-only incentives; encouraging standards and interoperability; and funding evaluation studies to measure distributional effects and long-run productivity impacts.
Authors' policy section proposing these interventions based on case findings and broader policy implications.
Confidence: high · Direction: positive · Source: Optimizing integrated supply planning in logistics: Bridging... · Outcome: adoption of ISP, reduction in switching costs, quality of evaluation evidence, d...
The authors propose a conceptual optimisation framework emphasizing three pillars: digital integration (tech stack & data), collaboration (processes & governance), and continuous improvement (metrics, feedback loops).
Paper presents a conceptual framework derived from cross-case findings; theoretical/conceptual contribution rather than empirical estimation.
Confidence: high · Direction: positive · Source: Optimizing integrated supply planning in logistics: Bridging... · Outcome: framework components (no direct empirical outcome; intended to improve ISP imple...
Explanations must be tailored to stakeholders (clinicians, regulators, customers) and integrated into decision processes to be useful (human-centered design principle).
Thematic coding of design and HCI literature within the review; draws on empirical studies and design guidance recommending stakeholder-specific explanation formats and integration into decision workflows.
Confidence: high · Direction: positive · Source: Explainable AI in High-Stakes Domains: Improving Trust, Tran... · Outcome: usefulness / effectiveness of explanations for different stakeholder groups
The forecasting model was deployed with a human-in-the-loop mechanism that triggers on critical forecast deviations.
Pilot description in the paper documenting integration of human-in-the-loop rules for critical forecast deviations during deployment (single-case deployment evidence).
Confidence: high · Direction: positive · Source: ALGORITHM FOR IMPLEMENTING AI IN THE MANAGEMENT LOOP OF SMES... · Outcome: presence and functioning of human-in-the-loop triggers for forecast deviations
The framework explicitly targets SME-specific risks (data scarcity, limited skills/budgets, and change resistance) and proposes mitigations such as staged pilots, human-in-the-loop designs, and clear governance.
Design rationale and operational recommendations within the paper addressing SME constraints (conceptual; no large-N testing).
Confidence: high · Direction: positive · Source: ALGORITHM FOR IMPLEMENTING AI IN THE MANAGEMENT LOOP OF SMES... · Outcome: presence of SME-specific mitigation measures in the framework (staged pilots, H-...
An MLOps layer is included to provide continuous integration/deployment, monitoring, retraining, and governance for sustainable model maintenance.
Framework/component specification in the paper describing an MLOps layer and its responsibilities (conceptual design).
Confidence: high · Direction: positive · Source: ALGORITHM FOR IMPLEMENTING AI IN THE MANAGEMENT LOOP OF SMES... · Outcome: presence of MLOps capabilities (CI/CD, monitoring, retraining, governance) in th...
The approach operationalizes AI adoption into seven sequential stages, each with specified deliverables, assigned roles, and gate/exit criteria.
Framework description in the paper enumerating seven sequential stages and documenting deliverables, role allocation, and gate criteria (conceptual / design artifact).
Confidence: high · Direction: positive · Source: ALGORITHM FOR IMPLEMENTING AI IN THE MANAGEMENT LOOP OF SMES... · Outcome: number and specification of stages (operationalization of adoption process)
The paper proposes a practice-oriented, end-to-end algorithm for integrating AI into SME managerial decision loops grounded in CRISP-DM and extended with AI Canvas, an organizational digital-readiness assessment, and an MLOps layer.
Conceptual/framework development presented in the paper; synthesis of CRISP-DM, AI Canvas, a digital-readiness assessment, and an MLOps layer (no empirical sample required).
Confidence: high · Direction: positive · Source: ALGORITHM FOR IMPLEMENTING AI IN THE MANAGEMENT LOOP OF SMES... · Outcome: existence and content of the proposed AI adoption algorithm/framework (design el...
Standards and governance frameworks (for model auditability, security, and alignment) will become economic infrastructure influencing adoption costs and market trust.
Conceptual argument linking governance to adoption and trust, drawing on normative risk analysis; no empirical governance impact studies included.
Confidence: high · Direction: positive · Source: How AI Will Transform the Daily Life of a Techie within 5 Ye... · Outcome: existence and adoption of standards/governance frameworks and their effect on AI...
Increasing AI autonomy magnifies ethical, safety, and value‑alignment concerns; robust human oversight and institutional governance are required.
Normative and risk analysis based on projected increases in system autonomy and illustrative failure modes; no formal safety audits included.
Confidence: high · Direction: positive · Source: How AI Will Transform the Daily Life of a Techie within 5 Ye... · Outcome: need/extent of human oversight and governance mechanisms (existence and strength...
Models and systems must include robust governance: transparency, explainability, provenance logging, versioning, and compliance checks to maintain trust and satisfy auditors/regulators.
Normative claim supported by recommended governance and evaluation practices described in the paper; no regulatory testing or audit case studies reported.
Confidence: high · Direction: positive · Source: Next-Generation Financial Analytics Frameworks for AI-Enable... · Outcome: governance/compliance indicators (e.g., presence of explainability reports, audi...
Cloud and distributed compute (data lakes, distributed training, streaming pipelines) provide the scalability needed to handle growing data and model complexity in financial analytics.
Technical claim supported by proposed infrastructure components in the paper; no benchmarking or capacity measurements provided.
Confidence: high · Direction: positive · Source: Next-Generation Financial Analytics Frameworks for AI-Enable... · Outcome: scalability measures (e.g., throughput, latency under load, time to train models...
Such frameworks—designed to be modular, scalable, and interoperable—enable pluggable AI modules (scenario analysis, cash‑flow forecasting, dynamic pricing) and easier integration with ERP/BI systems.
Architectural claim supported by system design principles listed in the paper (modular model repositories, model-serving layers, feature stores, API integration); presented as design best-practices rather than empirical validation.
Confidence: high · Direction: positive · Source: Next-Generation Financial Analytics Frameworks for AI-Enable... · Outcome: system integration metrics (e.g., number of pluggable modules, integration time,...
A systematic RM process—risk identification → analysis/assessment → evaluation/response → control implementation → monitoring and reporting—is a core component of effective practice.
Convergence of process descriptions across ISO 31000, COSO ERM, and multiple reviewed publications identified via thematic analysis.
Confidence: high · Direction: positive · Source: The Role of Risk Management as an Organizational Management ... · Outcome: completeness/consistency of RM processes
Integration of risk management with strategy-setting and operational processes is essential to realize RM benefits.
Thematic findings from the literature review and recommendations in established frameworks (ISO 31000, COSO ERM); synthesized across peer-reviewed and practitioner literature.
Confidence: high · Direction: positive · Source: The Role of Risk Management as an Organizational Management ... · Outcome: alignment of RM with strategy and operations; realized RM benefits
An embedded risk culture and clear accountability across the organization are necessary enablers for effective risk management.
Repeatedly reported across reviewed literature and standards (e.g., ISO/COSO) in the thematic synthesis; supported by multiple secondary sources in the ten-year scope.
Confidence: high · Direction: positive · Source: The Role of Risk Management as an Organizational Management ... · Outcome: degree of RM cultural embedding; accountability; RM effectiveness
Leadership and governance commitment (board and senior management buy-in) is a core component required for effective risk management implementation.
Consistent identification of leadership/governance as an enabling factor across multiple peer-reviewed articles, books, and risk frameworks synthesized in the review; thematic analysis of literature over the last ten years.
Confidence: high · Direction: positive · Source: The Role of Risk Management as an Organizational Management ... · Outcome: effectiveness of risk management implementation / successful RM adoption
Actionable takeaway: organizations should measure inter-model similarity and response diversity as part of ROI and procurement analyses and factor in governance and role-redesign costs when estimating net returns to LLM deployment.
Explicit recommendation in the paper grounded in empirical analyses of output similarity and diversity metrics; presented as operational guidance rather than tested via field ROI studies.
Confidence: high · Direction: positive · Source: The Artificial Hivemind: Rethinking Work Design and Leadersh... · Outcome: inclusion of diversity metrics and governance cost estimates in ROI/procurement ...
The paper provides practical diagnostic tools and metrics (e.g., inter-model similarity, response entropy) for detecting and tracking AI homogenization in workflows.
Methodological section describing diagnostic framework and example metrics used in the empirical analyses (semantic similarity measures, entropy, distinct-n), intended for operational use.
Confidence: high · Direction: positive · Source: The Artificial Hivemind: Rethinking Work Design and Leadersh... · Outcome: operational diagnostic metrics (inter-model similarity, entropy, distinct-n)
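Two of the named metrics are simple to compute from raw model outputs. The sketch below implements distinct-n and pooled token entropy from scratch; the whitespace tokenization and toy responses are assumptions for illustration, and the paper's semantic-similarity measures would additionally require an embedding model:

```python
import math
from collections import Counter

def distinct_n(texts, n=2):
    # Fraction of n-grams (pooled across responses) that are unique:
    # lower values indicate more homogeneous outputs.
    grams = [tuple(t.split()[i:i + n])
             for t in texts for i in range(len(t.split()) - n + 1)]
    return len(set(grams)) / max(len(grams), 1)

def token_entropy(texts):
    # Shannon entropy (bits) of the pooled token distribution.
    counts = Counter(tok for t in texts for tok in t.split())
    total = sum(counts.values())
    return -sum(c / total * math.log2(c / total) for c in counts.values())

identical = ["the model said yes"] * 4
varied = ["the model said yes", "it declined politely",
          "answer: maybe later", "no comment was given"]
print(distinct_n(identical), distinct_n(varied))
print(token_entropy(identical), token_entropy(varied))
```

Tracking these numbers over time (per model, per workflow) gives the kind of operational homogenization signal the paper recommends.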
Organizational responses to homogenization include leadership communication strategies, work redesign (contrarian roles, ensemble workflows, mandated diversity checks), and governance frameworks (auditing, procurement policies avoiding monoculture).
Prescriptive recommendations in the paper synthesizing empirical results with organizational-design principles; proposed interventions are not evaluated empirically in the paper but are presented as actionable responses.
Confidence: high · Direction: positive · Source: The Artificial Hivemind: Rethinking Work Design and Leadersh... · Outcome: proposed organizational interventions to preserve cognitive and stylistic divers...
The analysis dataset comprises approximately 26,000 real-world user queries paired with outputs from over 70 distinct language models spanning different providers, architectures, and scales.
Explicit data description in the paper: ≈26,000 queries and outputs from 70+ models (paper lists model sets and sampling procedures in methods section).
Confidence: high · Direction: positive · Source: The Artificial Hivemind: Rethinking Work Design and Leadersh... · Outcome: dataset size and model count
A one standard-deviation increase in AI adoption causally increases employment in occupations requiring complex problem-solving and interpersonal skills by 1.8%.
Same panel (38 OECD countries, 2019–2025) and AI Adoption Index; IV estimation with occupational employment classified by task type (complex problem-solving & interpersonal); fixed effects and robustness checks reported.
Confidence: high · Direction: positive · Source: Artificial Intelligence and Labor Market Transformation: Emp... · Outcome: Employment in complex problem-solving and interpersonal occupations (percent cha...
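The instrumental-variables logic behind an estimate like this can be sketched with synthetic data: two-stage least squares recovers the causal coefficient even when adoption is confounded with the outcome. Every number below is invented for illustration and has no connection to the paper's OECD panel:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20000

# Synthetic illustration of the IV logic (not the paper's data):
# z is an instrument; adoption is endogenous via the confounder u.
z = rng.normal(size=n)
u = rng.normal(size=n)
adoption = 0.8 * z + u + rng.normal(size=n)
employment = 1.8 * adoption + 2.0 * u + rng.normal(size=n)

# Stage 1: project adoption on the instrument.
Z = np.column_stack([np.ones(n), z])
adoption_hat = Z @ np.linalg.lstsq(Z, adoption, rcond=None)[0]

# Stage 2: regress the outcome on the stage-1 fitted values.
X = np.column_stack([np.ones(n), adoption_hat])
beta = np.linalg.lstsq(X, employment, rcond=None)[0]
print(beta[1])  # approximately the true coefficient 1.8, despite confounding
```

A naive OLS of `employment` on `adoption` would be biased upward here by the shared confounder `u`; the instrument isolates only the variation in adoption that is unrelated to `u`.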
The paper proposes a research agenda prioritizing interoperable, ethical‑by‑design platforms; metrics to measure social equity impacts; and adaptation of global standards to local institutional capacities.
Explicit list of three prioritized research directions provided in the paper, derived from the systematic synthesis of the 103 items.
Confidence: high · Direction: positive · Source: Models, applications, and limitations of the responsible ado... · Outcome: research priorities and agenda items
High‑income examples (e.g., Estonia, Singapore) demonstrate mature integration of digital/AI systems in e‑government, urban mobility, and e‑health.
Empirical case examples drawn from the reviewed literature and institutional reports cited in the review; specific country examples (Estonia, Singapore) repeatedly referenced as mature adopters.
Confidence: high · Direction: positive · Source: Models, applications, and limitations of the responsible ado... · Outcome: integration maturity of AI/digital systems in e‑government, urban mobility, and ...
Research priorities include developing robust measures of AI adoption and using causal methods (difference-in-differences, synthetic controls, RDD, IV) to estimate effects of AI and regulation on productivity, employment, and inequality.
Methodological recommendations in the report based on identified evidence gaps and normative evaluation of empirical priorities.
Confidence: high · Direction: positive · Source: AI Governance and Data Privacy: Comparative Analysis of U.S.... · Outcome: quality of AI adoption measures and causal estimates for productivity, employmen...
The American Artificial Intelligence Initiative emphasizes R&D and innovation leadership, standards development, workforce readiness, and fostering 'trustworthy AI' (transparency, fairness, accountability).
Primary source policy documents from the U.S. American Artificial Intelligence Initiative reviewed in the report.
Confidence: high · Direction: positive · Source: AI Governance and Data Privacy: Comparative Analysis of U.S.... · Outcome: policy emphasis areas (R&D investment, standards, workforce readiness, trustwort...
Concrete legislative recommendations include amendments to the EU AI Act, Consumer Rights Directive, and Digital Services Act to operationalize model-level transparency and user choice rights.
Policy design: drafted candidate amendments tailored to existing EU instruments presented in the paper.
Confidence: high · Direction: positive · Source: The Global Landscape of Environmental AI Regulation: From th... · Outcome: proposed textual amendments to specified EU legislative instruments (existence o...
The paper introduces a Predictive Skill Gap Intelligence Hub — an AI-driven platform that combines macro- and micro-level indicators with probabilistic growth models and intelligent skill-synthesis to proactively forecast regional and sectoral labor demand–supply gaps.
Description of system architecture and modeling approach in the paper (methods section). No numerical evaluation metrics or datasets provided for this descriptive claim.
Confidence: high · Direction: positive · Source: AI-Based Predictive Skill Gap Analysis for Workforce Plannin... · Outcome: ability to forecast regional and sectoral labor demand–supply gaps (descriptive ...
Priority investments should target computational infrastructure, local model validation capacity, and training for clinicians and data scientists to increase adoption and trust in synthetic-data–supported AI.
Implementation and capacity-building analyses from the reviewed literature highlighting gaps in infrastructure, validation capability, and human capital; recommendation-based evidence rather than new empirical trials.
Confidence: high · Direction: positive · Source: On the use of synthetic data for healthcare AI in Africa: Te... · Outcome: availability of compute infrastructure, numbers/quality of local validation proj...
Vendor support, warranties, and service-level agreements (SLAs) are important for clinical adoption and liability management.
Policy and implementation literature, industry reports, and stakeholder feedback synthesized in the paper highlighting the role of vendor contractual commitments in adoption decisions.
Confidence: high · Direction: positive · Source: Framework for Government Policy on Agentic and Generative AI... · Outcome: clinical adoption / liability mitigation
Proprietary systems lead on reliability, maintenance, and validated integrations with clinical systems.
Literature synthesis including vendor case studies, deployment reports, and stakeholder surveys indicating more mature productization and validated integrations for proprietary offerings.
Confidence: high · Direction: positive · Source: Framework for Government Policy on Agentic and Generative AI... · Outcome: system reliability / maintenance burden / integration maturity
Open-source deployment options (e.g., on-premises) reduce data-sharing exposure and improve privacy.
Aggregated evidence from deployment reports and technical papers describing on-premises and local inference architectures; industry analyses of data governance tradeoffs.
Confidence: high · Direction: positive · Source: Framework for Government Policy on Agentic and Generative AI... · Outcome: data privacy / data-sharing exposure
Open-source models provide greater transparency and inspectability, enabling better auditability and explainability.
Systematic literature synthesis of peer-reviewed studies, industry reports, and case studies comparing open-source and proprietary systems; comparative analysis highlights inspectability of open-source code/models. No new primary experiments reported.
Confidence: high · Direction: positive · Source: Framework for Government Policy on Agentic and Generative AI... · Outcome: transparency / auditability / explainability
Coordinated policy reform, targeted infrastructure investment, workforce training, and equity-focused implementation are strategic priorities to realize AI’s potential in Indonesian healthcare.
Consensus recommendations drawn from the narrative synthesis, thematic analysis, and Delphi consensus studies included among the 42 supplementary documents and the broader 2020–2025 literature body.
Confidence: high · Direction: positive · Source: Artificial Intelligence in Healthcare in Indonesia: Are We R... · Outcome: policy adoption of coordinated reforms, level of infrastructure investment, work...
Recommended research priorities for economists include measuring how adoption changes task mixes and wages, quantifying verification/remediation costs, estimating productivity gains net of security/IP costs, and studying market dynamics from centralized model providers.
Author recommendations based on identified gaps in the empirical literature synthesized by the paper.
Confidence: high · Direction: positive · Source: ChatGPT as a Tool for Programming Assistance and Code Develo... · Outcome: generation of targeted empirical studies addressing task mix, wage impacts, veri...
Recommended policy levers include data-governance rules, provenance and watermarking standards, liability frameworks, copyright clarifications, competition policy, and taxes/subsidies to internalize externalities.
Policy recommendations synthesized from legal, regulatory, and economic literatures within the review; presented as qualitative guidance rather than tested policy interventions.
Confidence: high · Direction: positive · Source: Ethical and societal challenges to the adoption of generativ... · Outcome: effectiveness of specified regulatory instruments in mitigating harms from gener...