The Commonplace

Evidence (7156 claims)

Adoption (5126 claims)
Productivity (4409 claims)
Governance (4049 claims)
Human-AI Collaboration (2954 claims)
Labor Markets (2432 claims)
Org Design (2273 claims)
Innovation (2215 claims)
Skills & Training (1902 claims)
Inequality (1286 claims)

Evidence Matrix

Claim counts by outcome category and direction of finding. A dash (–) marks cells with no recorded claims.

Outcome                    Positive  Negative  Mixed  Null  Total
Other                           369       105     58   432    972
Governance & Regulation         365       171    113    54    713
Research Productivity           229        95     33   294    655
Organizational Efficiency       354        82     58    34    531
Technology Adoption Rate        277       115     63    27    486
Firm Productivity               273        33     68    10    389
AI Safety & Ethics              112       177     43    24    358
Output Quality                  228        61     23    25    337
Market Structure                105       118     81    14    323
Decision Quality                154        68     33    17    275
Employment Level                 68        32     74     8    184
Fiscal & Macroeconomic           74        52     32    21    183
Skill Acquisition                85        31     38     9    163
Firm Revenue                     96        30     22     –    148
Innovation Output               100        11     20    11    143
Consumer Welfare                 66        29     35     7    137
Regulatory Compliance            51        61     13     3    128
Inequality Measures              24        66     31     4    125
Task Allocation                  64         6     28     6    104
Error Rate                       42        47      6     –     95
Training Effectiveness           55        12     10    16     93
Worker Satisfaction              42        32     11     6     91
Task Completion Time             71         5      3     1     80
Wages & Compensation             38        13     19     4     74
Team Performance                 41         8     15     7     72
Hiring & Recruitment             39         4      6     3     52
Automation Exposure              17        15      9     5     46
Job Displacement                  5        28     12     –     45
Social Protection                18         8      6     1     33
Developer Productivity           25         1      2     1     29
Worker Turnover                  10        12      3     –     25
Creative Output                  15         5      3     1     24
Skill Obsolescence                3        18      2     –     23
Labor Share of Income             7         4      9     –     20

Claim: Faithful extraction (aligning LLM-extracted arguments with formal AF primitives and ensuring fidelity to source evidence) is a key technical challenge.
Evidence: The paper's explicit identification of failure modes and alignment issues, grounded in documented limitations of IE/LLMs (no empirical quantification here).
Confidence: high · Direction: negative · Paper: Argumentative Human-AI Decision-Making: Toward AI Agents Tha... · Outcome: fidelity/alignment error rate between extracted elements and source evidence

Claim: Computational argumentation approaches have required heavy feature engineering and domain-specific knowledge to be effective.
Evidence: A conceptual claim grounded in prior work and practical experience reported in the literature; no quantitative cost estimates are provided in the paper.
Confidence: high · Direction: negative · Paper: Argumentative Human-AI Decision-Making: Toward AI Agents Tha... · Outcome: engineering cost / domain modeling effort required for AF-based systems

Claim: Automation bias (the human tendency to defer to automated outputs) compounds the risk that GLAI errors become embedded in legal processes.
Evidence: A behavioral literature review on automation bias and trust in AI systems, applied to legal-context vignettes; no primary empirical test within the paper.
Confidence: high · Direction: negative · Paper: Why Avoid Generative Legal AI Systems? Hallucination, Overre... · Outcome: likelihood of human operators deferring to GLAI outputs (automation bias effect)

Claim: A key architectural risk is interoperability failure and fragmentation across vendors and protocols in agent ecosystems.
Evidence: Comparative analysis with IoT and other platform histories showing vendor/protocol fragmentation; the argument is conceptual and illustrative rather than empirically measured for future agent ecosystems.
Confidence: high · Direction: negative · Paper: The Internet of Physical AI Agents: Interoperability, Longev... · Outcome: degree of interoperability and fragmentation across vendors/protocols

Claim: Affected domains such as disaster response, healthcare, industrial automation, and mobility are safety-critical, so failures there carry high social and economic costs.
Evidence: Domain examples and policy reasoning, drawing on general knowledge about those sectors and potential harms; no new empirical damage quantification is provided in the paper.
Confidence: high · Direction: negative · Paper: The Internet of Physical AI Agents: Interoperability, Longev... · Outcome: social and economic costs of failures in safety-critical domains

Claim: IoT digitized perception at scale but exposed limitations such as fragmentation, weak security, limited autonomy, and poor sustainability.
Evidence: Historical and comparative analysis of IoT deployments and literature cited illustratively in the paper; qualitative evidence from prior IoT incidents and ecosystem studies rather than new empirical data.
Confidence: high · Direction: negative · Paper: The Internet of Physical AI Agents: Interoperability, Longev... · Outcome: levels of fragmentation, security robustness, autonomy, and sustainability in Io...

Claim: Cooperation with the AI plateaus and never reaches the near-complete cooperation levels observed in human–human interactions.
Evidence: Time-series/trajectory analysis of cooperation rates in the lab human–AI experiment (n = 126) compared to the human–human benchmark (n = 108); reported convergence/end-state cooperation levels show the AI condition asymptoting below the human–human condition.
Confidence: high · Direction: negative · Paper: Playing Against the Machine: Cooperation, Communication, and... · Outcome: cooperation rate over time and asymptotic/end-state cooperation level

Claim: A single malicious or compromised LLM agent with high stubbornness and persuasive power can trigger a persuasion cascade that steers the collective opinion of a multi-agent LLM system (MAS).
Evidence: Theoretical analysis using the Friedkin–Johnsen (FJ) opinion-formation model (analysis of fixed points and influence propagation), plus simulation experiments mapping LLM-MAS interactions to FJ dynamics across multiple network topologies and attacker profiles. (The paper reports simulation results but does not provide exact sample sizes in the provided summary.)
Confidence: high · Direction: negative · Paper: Don't Trust Stubborn Neighbors: A Security Framework for Age... · Outcome: extent of adversarial sway / shift in collective opinion (final consensus and op...

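The Friedkin–Johnsen update is standard, so the cascade mechanism can be shown in a few lines. A minimal simulation sketch, with hypothetical weights, stubbornness values, and network size rather than the paper's experimental setup:

```python
import numpy as np

# Friedkin-Johnsen dynamics: x(t+1) = (I - S) W x(t) + S x(0), where
# S = diag(stubbornness) and W is a row-stochastic influence matrix.
# Hypothetical 5-agent complete graph; agent 0 is the adversary.
n = 5
W = np.full((n, n), 1.0 / n)                # uniform influence weights
s = np.array([1.0, 0.1, 0.1, 0.1, 0.1])     # adversary is maximally stubborn
S = np.diag(s)
x0 = np.array([1.0, 0.0, 0.0, 0.0, 0.0])    # adversary's opinion = 1, rest = 0

x = x0.copy()
for _ in range(200):                        # iterate toward the fixed point
    x = (np.eye(n) - S) @ W @ x + S @ x0

print(np.round(x, 3))  # benign agents end near 0.64, pulled toward the adversary
```

The benign agents end far from their initial opinions despite each holding a small positive stubbornness, which is the cascade effect the claim describes.
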
Claim: Adoption requires hardware (VR headsets, capable GPUs) and integration effort, implying upfront capital expenditure for labs/observatories.
Evidence: The paper explicitly notes hardware requirements (VR headsets, capable GPUs) and integration effort among its adoption considerations; a common-sense assessment of the required capital.
Confidence: high · Direction: negative · Paper: iDaVIE v1.0: A virtual reality tool for interactive analysis... · Outcome: upfront capital expenditure and integration effort required for adoption

Claim: Current models rely heavily on large static datasets and batch training and exhibit poor lifelong/continual learning.
Evidence: Synthesis of common practices in contemporary ML (supervised pretraining and offline training paradigms); no new experiments are provided.
Confidence: high · Direction: negative · Paper: Why AI systems don't learn and what to do about it: Lessons ... · Outcome: continual learning performance; dependence on dataset size and batch training

Claim: When identical replies are labeled as coming from AI rather than from a human, recipients report feeling less heard and less validated (an attribution effect).
Evidence: A controlled attribution-labeling experiment within the study: identical replies were presented with different source labels (AI vs. human), and recipient-rated perceptions of being heard/validated were measured.
Confidence: high · Direction: negative · Paper: Practicing with Language Models Cultivates Human Empathic Co... · Outcome: recipient-rated feelings of being heard and validated

Claim: HindSight scores are negatively correlated with LLM-judged novelty (Spearman ρ = −0.29, p < 0.01), indicating LLM judges tend to overvalue novel-sounding ideas that do not materialize in the literature.
Evidence: Reported Spearman correlation between HindSight scores and LLM-judged novelty across the generated ideas (ρ = −0.29, p < 0.01); the interpretation that LLMs overvalue novel-sounding ideas is drawn from the negative correlation.
Confidence: high · Direction: negative · Paper: HindSight: Evaluating LLM-Generated Research Ideas via Futur... · Outcome: Correlation between HindSight score (downstream impact) and LLM-judged novelty s...

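For readers checking such a statistic, a rank correlation of this kind takes a few lines to compute; the arrays below are synthetic stand-ins, not the paper's data:

```python
import numpy as np
from scipy.stats import spearmanr

# Synthetic stand-ins for illustration only: hindsight = downstream-impact
# scores, novelty = LLM-judged novelty ratings for the same ideas.
rng = np.random.default_rng(0)
novelty = rng.normal(size=200)
hindsight = -0.3 * novelty + rng.normal(size=200)

rho, p = spearmanr(hindsight, novelty)
print(f"Spearman rho = {rho:.2f}, p = {p:.3g}")  # a negative rho of similar size
```
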
Claim: Barriers to adoption include toolchain cost, trace-data storage/transfer demands, IP-security concerns when sharing traces, and organizational inertia.
Evidence: Listed as practical caveats and limitations in the summary; based on the authors' experience and reasoning rather than a quantified study.
Confidence: high · Direction: negative · Paper: ODIN-Based CPU-GPU Architecture with Replay-Driven Simulatio... · Outcome: adoption barriers (cost, storage, security, organizational factors)

Claim: Adoption requires up-front investment in tooling and infrastructure for deterministic capture/replay, plus management of large trace data and integration with existing validation/IP/security workflows.
Evidence: The authors explicitly list these practical caveats in the summary: tooling/infrastructure needs, trace-data management, and integration with validation flows and IP/security constraints. (A descriptive claim based on implementation experience; no cost figures are provided.)
Confidence: high · Direction: negative · Paper: ODIN-Based CPU-GPU Architecture with Replay-Driven Simulatio... · Outcome: required tooling/infrastructure and trace-data management burden

Claim: The empirical validation is performed only on synthetic text-preference data rather than real-world user populations, so field-deployment effects and richer preference models remain to be tested.
Evidence: The experiments section states that a synthetic dataset was used for text preferences and notes the absence of field experiments on real user populations.
Confidence: high · Direction: negative · Paper: Finding Common Ground in a Sea of Alternatives · Outcome: scope of empirical validation (synthetic dataset vs. real-world data)

Claim: The theoretical results (algorithms and sample-complexity bounds) assume truthful, exogenous preferences and simple sampling access; strategic behavior or costly reporting could change the information requirements.
Evidence: Modeling assumptions explicitly stated in the paper (sampling access to truthful preferences), plus discussion in the implications/limitations section noting the need to consider strategic behavior and reporting costs.
Confidence: high · Direction: negative · Paper: Finding Common Ground in a Sea of Alternatives · Outcome: applicability limitations given model assumptions (truthful sampling access vs. ...

Claim: Matching information-theoretic lower bounds are proved, establishing that no algorithm can guarantee finding an (approximate) proportional veto-core element with fewer queries than the stated bounds (i.e., the sample complexity is optimal).
Evidence: Lower-bound proofs in the theoretical section of the paper showing impossibility results that match the upper-bound rates.
Confidence: high · Direction: negative · Paper: Finding Common Ground in a Sea of Alternatives · Outcome: information-theoretic lower bound on sample/query complexity (optimality claim)

Claim: Static ACLs evaluate deterministic rules that ignore partial execution paths and can therefore capture only a subset of organizational constraints.
Evidence: Formal argument and examples showing that static ACLs map to Policy functions that do not depend on partial_path; illustrative limitations are presented.
Confidence: high · Direction: negative · Paper: Runtime Governance for AI Agents: Policies on Paths · Outcome: coverage of organizational constraints by static ACLs (proportion of constraints...

Claim: Runtime evaluation imposes additional compute, latency, logging, and engineering costs that increase the marginal cost of deploying agents.
Evidence: Operational discussion in the paper outlining additional runtime compute and logging requirements; cost implications are argued qualitatively, with no empirical cost measurements provided.
Confidence: high · Direction: negative · Paper: Runtime Governance for AI Agents: Policies on Paths · Outcome: marginal deployment cost (compute/latency/engineering overhead)

Claim: Prompt-level instructions and static access control lists (ACLs) are limited special cases of a more general runtime policy-evaluation framework and cannot, in general, enforce path-dependent rules.
Evidence: Formalization showing that prompt/system messages and static ACLs map to restricted forms of the Policy(agent_id, partial_path, proposed_action, org_state) function; a logical proof/argument in the paper plus illustrative counterexamples.
Confidence: high · Direction: negative · Paper: Runtime Governance for AI Agents: Policies on Paths · Outcome: ability to detect/enforce path-dependent policy violations (yes/no / coverage of...

Claim: LLM-based agent behavior is non-deterministic and path-dependent: an agent's safety/compliance risk depends on the entire execution path, not just the current prompt or single action.
Evidence: A formal/abstract execution model defined in the paper (states, actions, execution paths) and conceptual arguments/illustrative examples showing how earlier states/actions affect later behavior; no large-scale empirical dataset is reported.
Confidence: high · Direction: negative · Paper: Runtime Governance for AI Agents: Policies on Paths · Outcome: path-dependent compliance/safety risk (probability of policy violation condition...

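The Policy(agent_id, partial_path, proposed_action, org_state) signature quoted above makes the contrast concrete. A minimal sketch, not the paper's implementation (the rule, data types, and helper names are hypothetical), shows why a static ACL, which never sees partial_path, cannot enforce a path-dependent constraint:

```python
from dataclasses import dataclass, field

@dataclass
class Action:
    name: str
    resource: str

@dataclass
class OrgState:
    restricted: set = field(default_factory=set)  # hypothetical org_state content

# Static ACL: the decision depends only on (agent, action name).
ACL = {("agent-7", "read"), ("agent-7", "send_email")}

def static_acl(agent_id: str, action: Action) -> bool:
    return (agent_id, action.name) in ACL

def policy(agent_id: str, partial_path: list, proposed_action: Action,
           org_state: OrgState) -> bool:
    # Hypothetical path-dependent rule: after reading a restricted resource,
    # the agent may not send email (an exfiltration guard). A static ACL
    # cannot encode this because it never sees the earlier read.
    read_restricted = any(a.name == "read" and a.resource in org_state.restricted
                          for a in partial_path)
    if read_restricted and proposed_action.name == "send_email":
        return False
    return static_acl(agent_id, proposed_action)

state = OrgState(restricted={"salaries.db"})
path = [Action("read", "salaries.db")]
print(static_acl("agent-7", Action("send_email", "smtp")))           # True
print(policy("agent-7", path, Action("send_email", "smtp"), state))  # False
```

The static ACL approves send_email in isolation; only the runtime policy, which sees the earlier read of a restricted resource in the path, can block the pattern.
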
Claim: Qualitative case studies show modality-specific failures, such as correct entity recognition paired with a wrong factual attribute.
Evidence: The paper includes qualitative examples/case studies from the benchmark in which models identify entities in images correctly but produce incorrect time-sensitive attributes (e.g., current officeholder or company status).
Confidence: high · Direction: negative · Paper: V-DyKnow: A Dynamic Benchmark for Time-Sensitive Knowledge i... · Outcome: case-study examples of modality-specific failure modes

Claim: Real-world deployment will require representative data coverage and online adaptation despite the method's robustness mechanisms.
Evidence: The authors' discussion/limitations section: theoretical requirements for persistently exciting/representative trajectories for DeePC, plus a recommendation for online adaptation and continual data collection before deployment.
Confidence: high · Direction: negative · Paper: Data-driven generalized perimeter control: Zürich case study · Outcome: data representativeness and need for online adaptation (deployment readiness/ris...

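"Persistently exciting" has a standard operational test in this line of work (Willems' fundamental lemma): the recorded input must produce a full-row-rank Hankel matrix. A generic illustration, not the Zürich study's code:

```python
import numpy as np

# An input sequence u is persistently exciting of order L when the depth-L
# Hankel matrix built from it has full row rank; DeePC-style methods need
# this richness in the recorded data. Illustrative only.
def hankel(u: np.ndarray, L: int) -> np.ndarray:
    return np.column_stack([u[i:i + L] for i in range(len(u) - L + 1)])

def persistently_exciting(u: np.ndarray, L: int) -> bool:
    return np.linalg.matrix_rank(hankel(u, L)) == L

rng = np.random.default_rng(1)
print(persistently_exciting(rng.normal(size=100), L=10))  # True: rich signal
print(persistently_exciting(np.ones(100), L=10))          # False: constant input
```
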
Claim: Agent performance degrades markedly as environment complexity, stochasticity, and non-stationarity increase, revealing core limitations of current LLM-based agents for long-horizon, multi-factor decision problems.
Evidence: Experimental results across progressively harder RetailBench environments showing performance falloff for multiple LLMs under increased task complexity and non-stationarity.
Confidence: high · Direction: negative · Paper: RetailBench: Evaluating Long-Horizon Autonomous Decision-Mak... · Outcome: overall agent performance across increasing environment complexity (e.g., fulfil...

Claim: The behavioral memorization probe (TS-Guessing) signaled above-chance memorization for 72.5% of prompts across all models and items.
Evidence: Experiment 3: the TS-Guessing behavioral probe was applied exhaustively to all 513 MMLU questions × six models (total prompts = 513 × 6); statistical thresholds were used to classify above-chance memorization signals, flagging 72.5% of prompts.
Confidence: high · Direction: negative · Paper: Are Large Language Models Truly Smarter Than Humans? · Outcome: fraction of prompt-model pairs with statistically significant memorization signa...

Claim: Paraphrase/indirect-reference diagnostic: on a 100-question subset, average accuracy dropped by 7.0 percentage points under indirect referencing.
Evidence: Experiment 2: a paraphrase/indirect-reference diagnostic applied to a 100-question subset of MMLU; the delta between original and paraphrased question accuracy averaged 7.0 percentage points.
Confidence: high · Direction: negative · Paper: Are Large Language Models Truly Smarter Than Humans? · Outcome: mean accuracy drop (percentage points) under paraphrase/indirect prompts

Claim: STEM items show higher lexical contamination (18.1%) relative to the overall rate.
Evidence: Category-level results from Experiment 1 (lexical matching) on the MMLU dataset (513 questions), aggregated by subject domain, give an 18.1% contamination rate for STEM categories.
Confidence: high · Direction: negative · Paper: Are Large Language Models Truly Smarter Than Humans? · Outcome: category-level contamination prevalence (STEM)

Claim: Overall lexical contamination: 13.8% of MMLU items show evidence of exposure in training data.
Evidence: Experiment 1: a lexical contamination-detection pipeline searched training-era public corpora and the open web for literal or near-literal occurrences of the 513 MMLU questions/answers; per-item contamination flags were aggregated to produce the 13.8% figure (see the sketch after this paper's entries).
Confidence: high · Direction: negative · Paper: Are Large Language Models Truly Smarter Than Humans? · Outcome: contamination prevalence (fraction of benchmark items with lexical matches)

Claim: Public leaderboards overstate modern LLM capabilities because substantial portions of benchmark QA items appear in (or are memorized from) training data, inflating measured accuracy.
Evidence: A multi-method contamination audit across six frontier LLMs (GPT-4o, GPT-4o-mini, DeepSeek-R1, DeepSeek-V3, Llama-3.3-70B, Qwen3-235B) evaluated on the MMLU benchmark (513 questions, 57 subjects), using lexical matching, paraphrase sensitivity, and behavioral memorization probes that together show systematic leakage.
Confidence: high · Direction: negative · Paper: Are Large Language Models Truly Smarter Than Humans? · Outcome: inflation of measured benchmark accuracy / overstatement of model capability

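A lexical-matching step of the kind described in Experiment 1 usually reduces to an n-gram overlap search. The sketch below is a hypothetical reduction: the helper names, the n-gram length of 8, and the prebuilt corpus index are assumptions, not the paper's pipeline:

```python
# Flag a benchmark item as contaminated if any long n-gram from it occurs
# verbatim in an index of training-era corpora. Hypothetical reduction of a
# lexical-matching pipeline; n = 8 is an assumed threshold.
def ngrams(text: str, n: int = 8) -> set:
    toks = text.lower().split()
    return {" ".join(toks[i:i + n]) for i in range(len(toks) - n + 1)}

def is_contaminated(item: str, corpus_ngrams: set, n: int = 8) -> bool:
    return bool(ngrams(item, n) & corpus_ngrams)

# Aggregating is_contaminated over all 513 items would yield a prevalence
# figure comparable in kind to the reported 13.8%.
```
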
Claim: None of the 13 systems reports end-to-end evaluation on real quantum hardware (Layer 3b).
Evidence: A systematic check of the reported experiments for each of the 13 systems found no documented real-device, end-to-end hardware execution results (explicit Layer 3b reporting was absent).
Confidence: high · Direction: negative · Paper: Generative AI for Quantum Circuits and Quantum Code: A Techn... · Outcome: presence/absence of real-device end-to-end hardware execution reporting

Claim: Proactive AI at national scale amplifies concerns around transparency, accountability, privacy, and potential misuse, necessitating robust regulatory and ethical frameworks.
Evidence: Normative and ethical analysis in the paper, supported by the general literature on large-scale AI governance; no empirical assessment of regulatory effectiveness in Russia is included.
Confidence: high · Direction: negative · Paper: DIGITAL TRANSFORMATION OF THE RUSSIAN FEDERATION’S SOCIOECON... · Outcome: risks to transparency, accountability, privacy and potential for misuse

Claim: Aggregating informal and recommendation data raises privacy and consent issues in low-regulation contexts, requiring governance safeguards.
Evidence: A policy and ethical consideration based on the nature of the data used; no specific privacy-impact assessment is reported in the summary.
Confidence: high · Direction: negative · Paper: AI-Driven Skill Mapping and Gig Economy Matching Algorithm f... · Outcome: privacy risk / consent compliance

Claim: NLP/ML systems can inherit biases from their inputs (underrepresentation, noisy self-reports, biased recommendations) and may therefore disadvantage some youth unless transparency and fairness constraints are implemented.
Evidence: A reasoned risk assessment grounded in known properties of ML/NLP; the pilot summary does not report an audit or measured bias outcomes.
Confidence: high · Direction: negative · Paper: AI-Driven Skill Mapping and Gig Economy Matching Algorithm f... · Outcome: bias in match outcomes / differential access by demographic group

Claim: Randomized controlled trials and longitudinal evaluations are scarce, and few studies measure patient-relevant outcomes or economic impacts.
Evidence: Literature synthesis noting the scarcity of RCTs and long-term observational studies, and the absence of widespread patient-outcome and cost-effectiveness evaluations in existing publications.
Confidence: high · Direction: negative · Paper: Human-AI interaction and collaboration in radiology: from co... · Outcome: number of RCTs/longitudinal studies, frequency of patient outcome and economic o...

Claim: Many published studies focus on standalone algorithm accuracy rather than clinician–AI joint performance in routine workflows.
Evidence: A review of the literature categorizing study designs (a preponderance of algorithm development/validation studies; fewer reader-in-the-loop, simulation, or deployment studies).
Confidence: high · Direction: negative · Paper: Human-AI interaction and collaboration in radiology: from co... · Outcome: proportion of studies reporting standalone algorithm metrics versus those report...

Claim: Regulators and payers remain central bottlenecks: AI can accelerate discovery but cannot bypass clinical evidence requirements.
Evidence: Policy discussion and regulatory analysis in the paper noting that approvals require clinical evidence independent of discovery modality.
Confidence: high · Direction: negative · Paper: Has AI Reshaped Drug Discovery, or Is There Still a Long Way... · Outcome: regulatory and payer requirements as constraints on the impact of AI-driven disc...

Claim: Downstream clinical development costs and translational failure rates remain the major drivers of total R&D expenditure; early-stage AI savings may not translate into proportionate increases in approved drugs.
Evidence: Economic analysis and discussion in the paper referencing known cost distributions in drug development and historical attrition rates in clinical phases.
Confidence: high · Direction: negative · Paper: Has AI Reshaped Drug Discovery, or Is There Still a Long Way... · Outcome: contribution of clinical development costs and failure rates to total R&D expend...

Claim: Inherent biological complexity and translational gaps between in silico predictions, preclinical models, and human biology constrain downstream success rates.
Evidence: Review of translational failures and literature cited in the paper demonstrating the mismatch between preclinical signals and clinical outcomes; conceptual analysis of biological complexity.
Confidence: high · Direction: negative · Paper: Has AI Reshaped Drug Discovery, or Is There Still a Long Way... · Outcome: translational success rate from preclinical predictions to clinical efficacy

Claim: Gaps exist between computational designs and chemical/experimental feasibility (e.g., synthetic accessibility and assay readiness), limiting the usefulness of some generative outputs.
Evidence: Case studies and critiques in the paper showing generated molecules that are synthetically infeasible or incompatible with experimental constraints; discussion of the missing integration of practical constraints in many generative models.
Confidence: high · Direction: negative · Paper: Has AI Reshaped Drug Discovery, or Is There Still a Long Way... · Outcome: fraction of computationally designed molecules that are synthetically accessible...

Claim: Many models have limited interpretability and insufficient uncertainty quantification, hampering trust and decision-making.
Evidence: Methodological analysis in the paper noting that common deep-learning approaches lack clear interpretability and uncertainty estimates; references to the literature on model explainability and calibration gaps.
Confidence: high · Direction: negative · Paper: Has AI Reshaped Drug Discovery, or Is There Still a Long Way... · Outcome: degree of model interpretability and presence/quality of uncertainty quantificat...

Claim: Poor data quality, fragmentation, and limited accessibility reduce model reliability and generalizability.
Evidence: Survey of data characteristics and limitations presented in the paper; examples of biased or sparse datasets and the paper's discussion of impacts on model performance and transferability.
Confidence: high · Direction: negative · Paper: Has AI Reshaped Drug Discovery, or Is There Still a Long Way... · Outcome: model reliability/generalizability as a function of data quality, coverage, and ...

Claim: AI remains an augmenting technology rather than a standalone solution: no solely AI-originated drug has yet achieved regulatory approval.
Evidence: Review of drug-approval records and company disclosures summarized in the paper; an explicit statement that, to date, no entirely AI-originated molecule has received full regulatory approval.
Confidence: high · Direction: negative · Paper: Has AI Reshaped Drug Discovery, or Is There Still a Long Way... · Outcome: regulatory approval status of AI-originated drug candidates (number of approvals...

Claim: Ethical and legal issues (patient privacy, algorithmic bias, intellectual property, and equitable access) pose risks to AI deployment in drug development.
Evidence: Ethics and legal analyses, policy reports, and documented case examples collated in the review that identify these recurring concerns.
Confidence: high · Direction: negative · Paper: From Algorithm to Medicine: AI in the Discovery and Developm... · Outcome: ethical/legal risk incidence; privacy breaches; bias outcomes; access inequities

Claim: Regulatory uncertainty about validation standards and liability for AI tools raises investment risk and may slow deployment.
Evidence: Regulatory and policy reports included in the narrative review describing evolving standards and open questions about validation, explainability, and liability for ML-based tools.
Confidence: high · Direction: negative · Paper: From Algorithm to Medicine: AI in the Discovery and Developm... · Outcome: regulatory clarity; investment risk and deployment timelines

Claim: Adoption of AI in drug R&D requires high upfront investment in data curation, compute infrastructure, and specialized talent.
Evidence: Industry reports and economic analyses summarized in the review reporting the capital and operational needs for building AI capabilities; a qualitative synthesis rather than quantitative costing across firms.
Confidence: high · Direction: negative · Paper: From Algorithm to Medicine: AI in the Discovery and Developm... · Outcome: fixed upfront costs (data curation, compute, hiring/training)

Claim: Limited transparency and interpretability of many AI algorithms (black-box models) complicate clinical and regulatory trust and adoption.
Evidence: Regulatory reports, methodological critiques, and case examples in the review highlighting interpretability concerns and their impact on clinical/regulatory acceptance.
Confidence: high · Direction: negative · Paper: From Algorithm to Medicine: AI in the Discovery and Developm... · Outcome: clinical/regulatory acceptance, trust, and adoption rates; explainability metric...

Claim: Performance of AI models in drug R&D depends on large, high-quality, representative biomedical datasets; dataset bias or gaps substantially undermine model performance and generalizability.
Evidence: Methodological literature and case studies cited in the review documenting failures or limited generalization when training data are biased, sparse, or non-representative; a thematic synthesis rather than pooled quantification.
Confidence: high · Direction: negative · Paper: From Algorithm to Medicine: AI in the Discovery and Developm... · Outcome: model performance/generalizability across populations and contexts

Claim: Predictions from AI depend on data quality and coverage and still require experimental (wet-lab) validation.
Evidence: Discussion of early failures and limits in case studies and expert observations within the narrative review; a methodological argument about the dependence of ML models on input data.
Confidence: high · Direction: negative · Paper: Learning from the successes and failures of early artificial... · Outcome: predictive validity of computational models / need for experimental validation

Claim: High-quality, standardized, interoperable data (clean, annotated, and connected across modalities) is a critical limiting factor for translating AI capability into sustained impact.
Evidence: Conceptual emphasis and a domain-knowledge argument in the editorial; no empirical measurement of data quality's causal effect is included.
Confidence: high · Direction: negative · Paper: AI as the Catalyst for a New Paradigm in Biomedical Research · Outcome: ability to translate AI capability into sustained impact (dependent on data qual...

Claim: The paper's evidence base is limited: it rests on early-stage projects with little longitudinal outcome data and depends on publicly available project information that may be incomplete or biased.
Evidence: Methods and limitations explicitly stated in the paper (a qualitative review relying on secondary sources and two case studies, with no large-scale quantitative evaluation).
Confidence: high · Direction: negative · Paper: Decentralized Autonomous Organizations in the Pharmaceutical... · Outcome: completeness and robustness of empirical evidence supporting claims about DAO ef...