The Commonplace

Evidence (3492 claims)

Adoption: 7395 claims
Productivity: 6507 claims
Governance: 5877 claims
Human-AI Collaboration: 5157 claims
Innovation: 3492 claims
Org Design: 3470 claims
Labor Markets: 3224 claims
Skills & Training: 2608 claims
Inequality: 1835 claims

Evidence Matrix

Claim counts by outcome category and direction of finding.

Outcome | Positive | Negative | Mixed | Null | Total
Other | 609 | 159 | 77 | 736 | 1615
Governance & Regulation | 664 | 329 | 160 | 99 | 1273
Organizational Efficiency | 624 | 143 | 105 | 70 | 949
Technology Adoption Rate | 502 | 176 | 98 | 78 | 861
Research Productivity | 348 | 109 | 48 | 322 | 836
Output Quality | 391 | 120 | 44 | 40 | 595
Firm Productivity | 385 | 46 | 85 | 17 | 539
Decision Quality | 275 | 143 | 62 | 34 | 521
AI Safety & Ethics | 183 | 241 | 59 | 30 | 517
Market Structure | 152 | 154 | 109 | 20 | 440
Task Allocation | 158 | 50 | 56 | 26 | 295
Innovation Output | 178 | 23 | 38 | 17 | 257
Skill Acquisition | 137 | 52 | 50 | 13 | 252
Fiscal & Macroeconomic | 120 | 64 | 38 | 23 | 252
Employment Level | 93 | 46 | 96 | 12 | 249
Firm Revenue | 130 | 43 | 26 | 3 | 202
Consumer Welfare | 99 | 51 | 40 | 11 | 201
Inequality Measures | 36 | 105 | 40 | 6 | 187
Task Completion Time | 134 | 18 | 6 | 5 | 163
Worker Satisfaction | 79 | 54 | 16 | 11 | 160
Error Rate | 64 | 78 | 8 | 1 | 151
Regulatory Compliance | 69 | 64 | 14 | 3 | 150
Training Effectiveness | 81 | 15 | 13 | 18 | 129
Wages & Compensation | 70 | 25 | 22 | 6 | 123
Team Performance | 74 | 16 | 21 | 9 | 121
Automation Exposure | 41 | 48 | 19 | 9 | 120
Job Displacement | 11 | 71 | 16 | 1 | 99
Developer Productivity | 71 | 14 | 9 | 3 | 98
Hiring & Recruitment | 49 | 7 | 8 | 3 | 67
Social Protection | 26 | 14 | 8 | 2 | 50
Creative Output | 26 | 14 | 6 | 2 | 49
Skill Obsolescence | 5 | 37 | 5 | 1 | 48
Labor Share of Income | 12 | 13 | 12 | | 37
Worker Turnover | 11 | 12 | 3 | | 26
Industry | 1 | | | | 1
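The matrix rows lend themselves to simple programmatic summaries, e.g. the share of positive findings per outcome. A minimal sketch, with a few rows transcribed from the table above (the shares are computed from the four direction columns; the page's printed totals are taken as-is and not recomputed here):

```python
# Share of positive findings per outcome, from a few evidence-matrix rows.
matrix = {
    # outcome: (positive, negative, mixed, null)
    "Firm Productivity": (385, 46, 85, 17),
    "Error Rate": (64, 78, 8, 1),
    "Job Displacement": (11, 71, 16, 1),
}

def positive_share(counts):
    """Fraction of claims with a positive direction of finding."""
    pos, neg, mixed, null = counts
    return pos / (pos + neg + mixed + null)

for outcome, counts in matrix.items():
    print(f"{outcome}: {positive_share(counts):.0%} positive")
```

The same pattern extends to any other direction or to the full table.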
Active filter: Innovation
FederatedFactory reframes federated learning by exchanging generative modules (priors) instead of exchanging discriminative model weights.
Methodological description in the paper: design of FederatedFactory where each client trains/contributes generative modules (class-specific priors) and shares those modules rather than classifier weights. Evidence is the described protocol and experiments that implement that protocol on the reported datasets.
medium · positive · FederatedFactory: Generative One-Shot Learning for Extremely... · unit of federation / protocol (generative modules vs. discriminative weights)
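The module-exchange idea in this claim can be sketched generically. A minimal toy, assuming Gaussian class-conditional densities stand in for the "generative modules" (the paper's actual module design is not reproduced here): clients fit per-class modules locally and share only the fitted parameters, never raw data or classifier weights; the server pools modules and classifies by maximum likelihood.

```python
import numpy as np

# Toy sketch: each client fits per-class Gaussian "generative modules"
# (mean + diagonal variance) and shares ONLY those modules, never raw data
# or discriminative classifier weights. Hypothetical, not the paper's design.

def fit_modules(X, y):
    """Fit one Gaussian module per class present in the local data."""
    modules = {}
    for c in np.unique(y):
        Xc = X[y == c]
        modules[c] = (Xc.mean(axis=0), Xc.var(axis=0) + 1e-6)
    return modules

def merge_modules(client_modules):
    """Server-side pooling: average module parameters per class."""
    merged = {}
    for mods in client_modules:
        for c, (mu, var) in mods.items():
            merged.setdefault(c, []).append((mu, var))
    return {c: (np.mean([m for m, _ in ps], axis=0),
                np.mean([v for _, v in ps], axis=0))
            for c, ps in merged.items()}

def classify(x, modules):
    """Assign x to the class whose pooled module gives highest log-likelihood."""
    def loglik(mu, var):
        return -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var)
    return max(modules, key=lambda c: loglik(*modules[c]))

rng = np.random.default_rng(0)
# Two clients, each with data for class 0 (near -2) and class 1 (near +2).
clients = []
for _ in range(2):
    X = np.vstack([rng.normal(-2.0, 0.5, size=(50, 3)),
                   rng.normal(+2.0, 0.5, size=(50, 3))])
    y = np.array([0] * 50 + [1] * 50)
    clients.append(fit_modules(X, y))

global_modules = merge_modules(clients)
print(classify(np.full(3, -2.0), global_modules))  # class 0
print(classify(np.full(3, +2.0), global_modules))  # class 1
```

The point of the pattern is visible in what crosses the network: only `(mean, variance)` pairs per class, i.e. the priors, not weights of a trained discriminator.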
There is an economic case for funding access to quantum hardware, standardized benchmarking infrastructure, and shared datasets to reduce deployment uncertainty and enable credible claims of usefulness.
Policy and R&D recommendation inferred from the review's finding of heterogeneous benchmarking and missing hardware tests; argued as a mitigation to the identified deployment gap.
medium · positive · Generative AI for Quantum Circuits and Quantum Code: A Techn... · recommendation for funding/hardware access and standardized benchmarking
Most of the surveyed systems address semantic correctness (Layer 2) to some degree.
The review's application of Layer 2 found that a majority of the 13 systems include semantic-level evaluations (e.g., unitary equivalence tests, functional tests, simulator-based correctness checks), though the depth varied.
medium · positive · Generative AI for Quantum Circuits and Quantum Code: A Techn... · presence and extent of semantic-correctness evaluation
Policies improving data sharing, standardization, and model transparency would increase overall welfare by reducing duplication and improving model performance.
Policy argumentation in the paper drawing on economic theory and examples where shared datasets/standards improved research productivity.
medium · positive · Has AI Reshaped Drug Discovery, or Is There Still a Long Way... · research productivity and welfare as affected by data-sharing, standardization, ...
Organizations that tightly integrate AI teams with experimental groups achieve higher productivity.
Case studies and internal metrics cited in the paper showing improved throughput and candidate progression in integrated teams versus siloed approaches.
medium · positive · Has AI Reshaped Drug Discovery, or Is There Still a Long Way... · organizational productivity (throughput, candidate progression) as a function of...
Value accrues to firms that control high-quality data, integrated platforms, and wet-lab validation—data and experimental capacity are strategic assets.
Market and organizational analysis in the paper citing examples of firms leveraging proprietary data/platforms and wet-lab capabilities to advance candidates more effectively.
medium · positive · Has AI Reshaped Drug Discovery, or Is There Still a Long Way... · firm success/value correlated with possession of high-quality data, integrated p...
AI reduces time and cost in early-stage discovery (discovery-to-candidate), lowering per-candidate screening and design costs.
Reported case studies and cost/time comparisons in the paper showing faster candidate identification and reduced experimental burden in early stages; aggregated industry claims summarized.
medium · positive · Has AI Reshaped Drug Discovery, or Is There Still a Long Way... · time and monetary cost from discovery to candidate selection; per-candidate scre...
Several AI-guided molecules have entered clinical trials and show encouraging early-phase indicators.
Industry reports and trial registries summarized in the paper reporting multiple AI-guided programs reaching Phase I/II; company disclosures and early-phase biomarker or safety readouts referenced.
medium · positive · Has AI Reshaped Drug Discovery, or Is There Still a Long Way... · number of AI-guided molecules entering clinical trials and their early-phase cli...
Recommendations for policy include investing in public data infrastructure and standards, promoting regulatory clarity for AI validation, and supporting equitable access to AI-driven innovations.
Policy recommendations derived from synthesis of challenges and potential remedies presented in the narrative review; based on conceptual policy analysis and examples rather than empirical testing of interventions.
medium · positive · From Algorithm to Medicine: AI in the Discovery and Developm... · policy adoption (infrastructure, standards); measures of equitable access and re...
Policies that incentivize interoperable, privacy-preserving data sharing (e.g., federated data, common standards) can reduce entry barriers and improve social returns from AI in drug R&D.
Policy analysis and recommendations from the review, supported by conceptual arguments and examples of federated/privacy-preserving platforms; limited empirical validation of large-scale impact.
medium · positive · From Algorithm to Medicine: AI in the Discovery and Developm... · data-sharing uptake; entry barriers; measures of social return (access, innovati...
AI has the potential to raise R&D productivity by shortening timelines and reducing certain failure modes, thereby increasing the net present value (NPV) of successful drug projects.
Economic reasoning and projections based on documented process improvements in the reviewed studies and reports; not validated by longitudinal, generalized financial analyses in the literature.
medium · positive · From Algorithm to Medicine: AI in the Discovery and Developm... · R&D productivity metrics (time, success probability) and financial outcomes (NPV...
AI enhances post-market safety signal detection using real-world data analytics.
Industry and regulatory reports and published studies in the review documenting improved detection or earlier identification of safety signals in pharmacovigilance applications using ML on real-world datasets.
medium · positive · From Algorithm to Medicine: AI in the Discovery and Developm... · sensitivity/timeliness of safety signal detection; false positive/negative rates...
AI-enabled adaptive and enrichment trial designs increase trial efficiency and statistical power.
Methodological studies, clinical-trial case studies, and regulatory guidance summarized in the review showing applications of ML to adaptive/enrichment designs; evidence mainly illustrative and context-specific.
medium · positive · From Algorithm to Medicine: AI in the Discovery and Developm... · trial efficiency metrics (sample size, duration, cost) and statistical power or ...
AI improves predictive toxicity and ADMET models, which can reduce late-stage failures.
Multiple empirical studies and industry case reports aggregated in the narrative review demonstrating improved in silico toxicity/ADMET prediction performance in specific settings; heterogeneity across datasets and endpoints; not a formal meta-analysis.
medium · positive · From Algorithm to Medicine: AI in the Discovery and Developm... · predictive accuracy of toxicity/ADMET models; late-stage failure rates
AI can reduce time-to-market and lower some drug development costs.
Synthesis of case studies, industry reports, and empirical studies reported in the narrative review that document examples of compressed timelines and cost savings in parts of the pipeline; review notes lack of long-run, generalized ROI estimates.
medium · positive · From Algorithm to Medicine: AI in the Discovery and Developm... · time-to-market; development costs (component-level, not comprehensive program-le...
AI is materially accelerating discovery and development steps in pharmaceutical R&D, improving target identification, lead optimization, safety prediction, and adaptive trial design.
Narrative review synthesizing published studies, review articles, industry and regulatory reports; evidence primarily consists of empirical studies and case studies covering preclinical and clinical-stage applications. No pooled quantitative meta-analysis; heterogeneous methods and therapeutic areas.
medium · positive · From Algorithm to Medicine: AI in the Discovery and Developm... · discovery and development timeline (time-to-market); stage-specific process metr...
Firms with superior proprietary data and integration capability gain competitive advantage, increasing firm-level heterogeneity in AI returns.
Narrative analysis of market structure implications and examples; no cross-firm empirical heterogeneity study included.
medium · positive · Learning from the successes and failures of early artificial... · differential R&D productivity / market performance across firms
Returns to complementary investments (data infrastructure, experiment automation, cross-disciplinary teams) increase as AI becomes more central to discovery workflows.
Synthesis of adoption lessons and case examples emphasizing complementary capital; no quantitative ROI estimates provided.
medium · positive · Learning from the successes and failures of early artificial... · incremental R&D productivity attributable to complementary investments
Embedding AI into organizational processes, decision-making, and wet-lab validation is crucial to capturing its value.
Narrative review of adoption and integration lessons from large biopharma experience and illustrative case studies.
medium · positive · Learning from the successes and failures of early artificial... · realized R&D productivity gains attributable to AI integration
Successful AI adoption requires investment in data, talent, and workflows rather than reliance on bolt-on point solutions.
Thematic analysis of adoption-level lessons and industry case examples indicating organizational and infrastructural requirements for realized value.
medium · positive · Learning from the successes and failures of early artificial... · likelihood of successful AI-driven productivity gains / ROI from AI initiatives
AI has produced genuine early-stage breakthroughs in drug discovery, accelerating hit identification and early design cycles.
Narrative expert synthesis and thematic analysis of industry experience over the first decade of AI adoption, illustrated by early-case successes and firm-reported accelerations; no new primary experimental data or causal econometric estimates provided.
medium · positive · Learning from the successes and failures of early artificial... · time-to-hit / hit identification rate / iteration cycle time in early discovery
Public policies that lower frictions for secure data sharing, standardize validation metrics, and support workforce retraining can accelerate beneficial diffusion of AI while managing risks.
Policy recommendation based on the paper's synthesis of enablers and constraints; not empirically tested within the paper.
medium · positive · AI as the Catalyst for a New Paradigm in Biomedical Research · speed and equity of AI diffusion and risk management
AI has the potential to reduce marginal cost and time per candidate (shorter design loops, in silico screening), increasing effective productivity of R&D spend if improvements are validated.
Theoretical and conceptual argument referencing capabilities of generative models and simulation; paper states no new quantitative estimates were produced.
medium · positive · AI as the Catalyst for a New Paradigm in Biomedical Research · marginal cost per candidate, time per candidate, R&D productivity
Workforce upskilling and new roles (e.g., ML engineers embedded in biology teams, AI product managers) are required for effective AI integration in pharma R&D.
Descriptive projection based on observed industry hiring trends and organizational needs; no workforce survey data provided.
medium · positive · AI as the Catalyst for a New Paradigm in Biomedical Research · availability of AI-skilled workforce and role integration
Cloud/federated approaches reduce upfront infrastructure investments and facilitate distributed collaboration.
Conceptual argument based on cloud economics and federated architectures; no quantitative cost-savings or collaboration metrics presented.
medium · positive · AI as the Catalyst for a New Paradigm in Biomedical Research · upfront infrastructure investment and degree of distributed collaboration
Cloud and federated approaches enable access to powerful pre-trained or fine-tunable models while allowing proprietary data to remain controlled (privacy-preserving sharing and model-to-data patterns).
Technological synthesis and examples of federated learning and cloud-hosted ML patterns; no empirical performance or privacy-utility tradeoff measurements reported.
medium · positive · AI as the Catalyst for a New Paradigm in Biomedical Research · access to models, data control/privacy preservation, infrastructure investment n...
Startups can leverage pre-trained models, cloud compute, and hosted toolchains to compete on speed and niche innovation against larger incumbents.
Conceptual observation and illustrative examples; not supported by systematic comparison of startup vs incumbent performance metrics in the paper.
medium · positive · AI as the Catalyst for a New Paradigm in Biomedical Research · startup competitive speed and niche innovation capability
AI lowers entry costs for smaller biotech by enabling faster molecular design, simulation, and iteration, allowing earlier translation to clinical stages.
Argument grounded in current capabilities (pre-trained models, cloud compute) and illustrative startup examples; no empirical cost or time-to-clinic data provided.
medium · positive · AI as the Catalyst for a New Paradigm in Biomedical Research · entry costs, speed of molecular design, time to clinical translation
Production-first democratization builds user-friendly, productionized AI tools that non-specialists can use, decentralizing model use and accelerating throughput.
Narrative examples and conceptual reasoning in the editorial; lacks systematic evaluation of throughput gains or decentralization effects.
medium · positive · AI as the Catalyst for a New Paradigm in Biomedical Research · tool adoption by non-specialists, throughput (e.g., number of tasks/candidates p...
Culture-centric transformation embeds AI into everyday scientific and operational decisions and requires organizational change, incentives, and cross-functional workflows.
Conceptual argument and organizational theory applied in the editorial; no empirical measurement of organizational change or success rates provided.
medium · positive · AI as the Catalyst for a New Paradigm in Biomedical Research · degree of AI integration into decision-making and organizational change requirem...
Partnership-driven acceleration lets pharma rapidly access AI capabilities via alliances with AI/tech firms while preserving its focus on core drug expertise and outsourcing model or platform development.
Qualitative description and illustrative examples in the editorial; not supported by systematic case study data or quantified outcomes.
medium · positive · AI as the Catalyst for a New Paradigm in Biomedical Research · speed of capability acquisition and preservation of core focus
DAOs enable distributed collaboration among scientists, patients, and funders to prioritize projects and share results.
Stakeholder mapping and qualitative case descriptions indicating multi-stakeholder participation in DAO projects; no quantitative cross-stakeholder collaboration metrics provided.
medium · positive · Decentralized Autonomous Organizations in the Pharmaceutical... · frequency and scope of cross-stakeholder collaborations, project prioritization ...
DAOs can incentivize contribution with token rewards, milestone-based disbursements, and revenue-sharing/licensing arrangements.
Review of DAO reward and tokenomic mechanisms in the literature and case examples; conceptual synthesis rather than empirical testing of incentive effectiveness.
medium · positive · Decentralized Autonomous Organizations in the Pharmaceutical... · contributor engagement levels, completion rates of milestones, distribution of l...
DAOs democratize decision-making through on-chain voting and reputation systems (example: VitaDAO).
Case-study description of VitaDAO governance structure using on-chain voting and reputation mechanisms documented in public governance records and whitepapers.
medium · positive · Decentralized Autonomous Organizations in the Pharmaceutical... · on-chain voting participation rates, distribution of decision power, number of c...
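The on-chain voting pattern cited for VitaDAO can be illustrated with a hypothetical token-weighted tally. Everything below is invented for illustration (addresses, quorum rule, thresholds); it is not VitaDAO's actual contract logic.

```python
# Hypothetical sketch of token-weighted proposal voting, loosely modeled on
# the on-chain governance pattern described above (not any real DAO's contracts).

def tally(votes, balances, quorum_fraction=0.2):
    """votes: {address: 'yes'|'no'}; balances: {address: token balance}.

    A proposal needs turnout >= quorum_fraction of total supply; it passes
    if yes-weighted tokens exceed no-weighted tokens.
    """
    total_supply = sum(balances.values())
    yes = sum(balances[a] for a, v in votes.items() if v == "yes")
    no = sum(balances[a] for a, v in votes.items() if v == "no")
    turnout = (yes + no) / total_supply
    if turnout < quorum_fraction:
        return "no quorum"
    return "passed" if yes > no else "rejected"

balances = {"alice": 600, "bob": 300, "carol": 100}
print(tally({"alice": "yes", "bob": "no"}, balances))  # passed (600 vs 300)
print(tally({"carol": "yes"}, balances))               # no quorum (100/1000 < 0.2)
```

Real systems layer reputation weighting, delegation, and timelocks on top of this basic tally; the sketch shows only the token-weighted core.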
DAOs can pool capital via tokenized funding and fractionalized IP ownership (example: Molecule).
Case-study description and documentation of Molecule's marketplace and tokenization mechanisms from public sources; demonstration of mechanisms rather than measured financing outcomes at scale.
medium · positive · Decentralized Autonomous Organizations in the Pharmaceutical... · amount of capital pooled via tokens, number/extent of fractionalized IP ownershi...
Early case studies (VitaDAO, Molecule) demonstrate proof-of-concept for tokenized fundraising, collaborative decision-making, and open-science IP models.
Comparative qualitative case-study descriptions based on public documentation, whitepapers, and governance records for two projects (VitaDAO and Molecule); no controlled or longitudinal outcome metrics reported.
medium · positive · Decentralized Autonomous Organizations in the Pharmaceutical... · tokenized fundraising activity (tokens sold/raised), existence and use of collab...
Decentralized Autonomous Organizations (DAOs) present a viable alternative governance and financing model for the pharmaceutical industry that can reduce frictions in drug discovery and development, increase stakeholder participation (scientists, patients, funders, regulators), and accelerate innovation.
Conceptual/review analysis synthesizing literature on DAOs and decentralized science plus comparative case-study analysis of two early projects (VitaDAO and Molecule); no original empirical trials or large-N quantitative evaluation.
medium · positive · Decentralized Autonomous Organizations in the Pharmaceutical... · coordination/friction in R&D processes; stakeholder participation (contributor c...
Regulators should anticipate new forms of intangible capital and data monopolies arising from sensory models and consider standards for data interoperability, public datasets/models, and workforce retraining.
Policy recommendation based on foresight and literature on data governance and platform regulation; no empirical regulatory impact analysis provided.
medium · positive · At the table with Wittgenstein: How language shapes taste an... · policy readiness: existence/adoption of interoperability standards, public senso...
Economics of AI in food must incorporate non-price metrics (perceptual quality, cultural fit) and design ways to monetize and protect sensory intellectual property (trade secrets, data governance).
Normative policy and methodological recommendation derived from literature synthesis and conceptual analysis; not validated with empirical economic valuation studies.
medium · positive · At the table with Wittgenstein: How language shapes taste an... · inclusion of perceptual/cultural metrics in economic valuation and uptake of sen...
Interdisciplinary approaches (cognitive science, behavioral economics, design thinking) are necessary to capture the social, perceptual, and cultural dimensions of food experience.
Normative argument supported by literature synthesis across relevant disciplines; no experimental comparison of mono- vs interdisciplinary approaches provided.
medium · positive · At the table with Wittgenstein: How language shapes taste an... · completeness/adequacy of models for social, perceptual, and cultural aspects of ...
Treating food as a soft-matter system centered on rheology provides a bridge from molecular/structural properties to macroscopic sensory experience.
Conceptual and theoretical argument grounded in soft-matter science and rheology literature; interdisciplinary literature synthesis; no new empirical data or experiments reported.
medium · positive · At the table with Wittgenstein: How language shapes taste an... · ability to link molecular/structural properties to perceived texture and sensory...
Automated closed-loop discovery amplifies the practical impact of predictive-model improvements by converting them into realized experimental throughput, yielding greater productivity gains than prediction improvement alone.
Synthesis of reviewed closed-loop and automation studies illustrating how model-driven acquisition functions coupled to robotics accelerate validation; conceptual evidence from literature (no new experiments).
medium · positive · Machine Learning-Driven R&D of Perovskites and Spinels: From... · experimental throughput, number of validated discoveries per unit time, realized...
Evaluation metrics for materials-AI pipelines should include calibration, robustness, and deployability (not just predictive accuracy) to better gauge practical utility.
Recommendation grounded in the review's identification of calibration and robustness as core bottlenecks and survey of uncertainty/interpretability methods.
medium · positive · Machine Learning-Driven R&D of Perovskites and Spinels: From... · evaluation metric suite adoption and correlation with real-world deployment succ...
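One of the recommended metrics beyond accuracy, uncertainty calibration, can be checked with a simple prediction-interval coverage test: a model emitting mean ± sigma is well calibrated at the 90% level if roughly 90% of true values fall inside its 90% intervals. A synthetic sketch (data and sigmas are simulated; 1.645 is the z-value bounding the central 90% of a standard normal):

```python
import numpy as np

# Calibration check: prediction-interval coverage at the 90% level.
def interval_coverage(y_true, y_mean, y_sigma, z=1.645):
    """Fraction of true values inside the model's central 90% intervals."""
    lo, hi = y_mean - z * y_sigma, y_mean + z * y_sigma
    return float(np.mean((y_true >= lo) & (y_true <= hi)))

rng = np.random.default_rng(0)
y_mean = rng.normal(size=10_000)
# Calibrated model: true values really are N(mean, 1), and sigma = 1 is reported.
y_true = y_mean + rng.normal(size=10_000)
calibrated = interval_coverage(y_true, y_mean, np.ones(10_000))
# Overconfident model: same noise, but sigma = 0.5 is reported.
overconfident = interval_coverage(y_true, y_mean, 0.5 * np.ones(10_000))
print(f"calibrated:    {calibrated:.2f}")     # ~0.90
print(f"overconfident: {overconfident:.2f}")  # well below 0.90
```

The same coverage curve evaluated at several confidence levels gives a fuller calibration diagnostic than a single point.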
To realize practical AI-accelerated materials discovery, the field must shift research priorities from solely maximizing predictive accuracy to ensuring robustness, uncertainty calibration, interpretability, and integration with lab workflows.
Argument and synthesis based on survey of shortcomings in current literature (data scarcity, calibration, interpretability, lack of lab integration) and proposed remedies; recommendation not empirically tested in this paper.
medium · positive · Machine Learning-Driven R&D of Perovskites and Spinels: From... · deployability and robustness of materials-AI pipelines (operational success meas...
Integration of predictive models with automated experimentation (robotic labs) to form closed-loop active-learning discovery systems can rapidly validate predictions and significantly increase experimental throughput.
Synthesis of papers and demonstration systems combining model-driven acquisition with automated synthesis/characterization; conceptual and empirical examples from reviewed literature (paper does not present new closed-loop experiments).
medium · positive · Machine Learning-Driven R&D of Perovskites and Spinels: From... · experimental cycle time, validation rate, and experimental throughput in closed-...
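The closed-loop pattern (surrogate model plus acquisition function plus automated experiment) can be sketched generically. Everything below is a hypothetical stand-in: a 1-D objective in place of a real synthesis/characterization run, a crude distance-weighted surrogate, and an upper-confidence-bound acquisition rule; it illustrates the loop, not any reviewed system.

```python
import numpy as np

def experiment(x):
    """Stand-in for a robotic synthesis + measurement run (to be maximized)."""
    return -(x - 0.7) ** 2

def surrogate(x, X_obs, y_obs):
    """Crude surrogate: distance-weighted mean, distance-based uncertainty."""
    d = np.abs(x - X_obs)
    w = 1.0 / (d + 1e-9)
    return np.sum(w * y_obs) / np.sum(w), d.min()

pool = np.linspace(0.0, 1.0, 101)   # discrete candidate pool (e.g. compositions)
X_obs = np.array([0.0, 1.0])        # two seed measurements
y_obs = experiment(X_obs)

for _ in range(10):                 # ten closed-loop iterations
    scores = []
    for x in pool:
        mean, sigma = surrogate(x, X_obs, y_obs)
        scores.append(mean + 2.0 * sigma)     # upper-confidence-bound acquisition
    x_next = pool[int(np.argmax(scores))]     # most promising candidate
    X_obs = np.append(X_obs, x_next)          # "run" the experiment and
    y_obs = np.append(y_obs, experiment(x_next))  # feed the result back in

best = float(X_obs[np.argmax(y_obs)])
print(round(best, 2))               # lands near the optimum at 0.7
```

The acquisition trade-off (exploit high predicted values vs. explore uncertain regions) is what converts better predictive models into realized experimental throughput, which is the point of the claim above.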
Deep learning is well suited for end-to-end generative models (variational autoencoders, generative adversarial networks, reinforcement learning) enabling inverse design of materials that meet specified property targets.
Survey of generative-model applications in materials design literature included in the review; conceptual and empirical examples drawn from prior work (no new generative experiments in this paper).
medium · positive · Machine Learning-Driven R&D of Perovskites and Spinels: From... · quality and property-conformance of generated candidate materials (success rate ...
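As a crude stand-in for inverse design, the sketch below samples candidates from a simple "generative" distribution and keeps those whose predicted property falls in a target band. Real VAE/GAN/RL approaches condition generation on the target instead of filtering, and the property model here is invented; the sketch only shows the generate-then-evaluate contract.

```python
import numpy as np

rng = np.random.default_rng(0)

def property_model(x):
    """Hypothetical property predictor over a 2-D design vector."""
    return x[0] ** 2 + x[1]

def inverse_design(target, tol=0.05, n_samples=20_000):
    """Sample candidate designs and keep those near the target property."""
    samples = rng.normal(size=(n_samples, 2))  # stand-in "generative" proposals
    return [x for x in samples if abs(property_model(x) - target) < tol]

hits = inverse_design(target=1.0)
print(len(hits) > 0)                                          # True
print(all(abs(property_model(x) - 1.0) < 0.05 for x in hits)) # True
```

A trained conditional generator would place nearly all proposals inside the target band instead of discarding most samples, which is what makes generative inverse design efficient.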
Deep learning models often achieve superior predictive performance in many materials tasks compared to traditional ML that relies on manual feature engineering.
Comparative evaluations surveyed in the review showing performance gains for GNNs and equivariant networks over hand-crafted descriptors in multiple empirical studies (review-level synthesis; no new benchmarks run).
medium · positive · Machine Learning-Driven R&D of Perovskites and Spinels: From... · predictive accuracy / error metrics on materials property prediction tasks
Deep learning enables end-to-end structure→property mapping (from atomic structure to macroscopic properties), moving beyond manual feature-based prediction and enabling faster forward screening and more powerful inverse design.
Synthesis of the reviewed literature comparing traditional feature-engineered ML with deep learning approaches (graph neural networks, convolutional and equivariant networks, and generative models). No new experimental data; evidence drawn from multiple empirical and methodological papers surveyed in the review.
medium · positive · Machine Learning-Driven R&D of Perovskites and Spinels: From... · ability to predict or generate materials with target properties and screening th...
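The end-to-end structure→property flow can be sketched as a minimal, untrained message-passing network: atoms become nodes with feature vectors, bonds become an adjacency matrix, a few rounds of neighbor aggregation update each atom's representation, and a pooled readout yields one scalar property. Random weights throughout; this shows the data flow only, not a trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

def message_passing(node_feats, adj, n_layers=2):
    """A few rounds of sum-over-neighbors aggregation with random weights."""
    d = node_feats.shape[1]
    h = node_feats
    for _ in range(n_layers):
        W = rng.normal(scale=0.1, size=(d, d))
        agg = adj @ h                  # sum messages from bonded neighbors
        h = np.tanh((h + agg) @ W)     # update each atom's representation
    return h

def predict_property(node_feats, adj):
    """Pool atom representations into one graph vector, map to a scalar."""
    h = message_passing(node_feats, adj)
    readout = h.mean(axis=0)           # permutation-invariant readout
    w_out = rng.normal(scale=0.1, size=readout.shape)
    return float(readout @ w_out)      # scalar "property" prediction

# Toy 3-atom chain: one-hot "element type" features, bonds 0-1 and 1-2.
feats = np.eye(3)
adj = np.array([[0, 1, 0],
                [1, 0, 1],
                [0, 1, 0]], dtype=float)
print(predict_property(feats, adj))
```

Training would fit `W` and `w_out` against measured properties; the contrast with feature-engineered ML is that the inputs here are the raw graph, not hand-crafted descriptors.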
Firms can differentiate via domain expertise and partnerships with ecological institutions, and funders should prioritize interdisciplinary teams, long‑term monitoring projects, and data infrastructure to unlock high social returns.
Strategic-implications recommendation drawn from the collection's examples of successful partnerships and long-term data needs (policy/strategy recommendation from synthesis).
medium · positive · Towards ‘digital ecology’: Advances in integrating artificia... · firm competitive advantage and funding impact on social returns
AI advances that improve monitoring and policy implementation generate positive externalities because biodiversity and ecosystem services are public goods, reinforcing the case for subsidized or open‑source solutions.
Externalities/public-goods argument linking technical potential in the collection to economic characteristics of biodiversity (theoretical economic argument supported by examples of public-benefit applications).
medium · positive · Towards ‘digital ecology’: Advances in integrating artificia... · magnitude of positive externalities and justification for subsidized/open-source...