The Commonplace

Evidence (2215 claims)

Adoption: 5126 claims
Productivity: 4409 claims
Governance: 4049 claims
Human-AI Collaboration: 2954 claims
Labor Markets: 2432 claims
Org Design: 2273 claims
Innovation: 2215 claims
Skills & Training: 1902 claims
Inequality: 1286 claims

Evidence Matrix

Claim counts by outcome category and direction of finding.

Outcome                     Positive  Negative  Mixed  Null  Total
Other                            369       105     58   432    972
Governance & Regulation          365       171    113    54    713
Research Productivity            229        95     33   294    655
Organizational Efficiency        354        82     58    34    531
Technology Adoption Rate         277       115     63    27    486
Firm Productivity                273        33     68    10    389
AI Safety & Ethics               112       177     43    24    358
Output Quality                   228        61     23    25    337
Market Structure                 105       118     81    14    323
Decision Quality                 154        68     33    17    275
Employment Level                  68        32     74     8    184
Fiscal & Macroeconomic            74        52     32    21    183
Skill Acquisition                 85        31     38     9    163
Firm Revenue                      96        30     22          148
Innovation Output                100        11     20    11    143
Consumer Welfare                  66        29     35     7    137
Regulatory Compliance             51        61     13     3    128
Inequality Measures               24        66     31     4    125
Task Allocation                   64         6     28     6    104
Error Rate                        42        47      6           95
Training Effectiveness            55        12     10    16     93
Worker Satisfaction               42        32     11     6     91
Task Completion Time              71         5      3     1     80
Wages & Compensation              38        13     19     4     74
Team Performance                  41         8     15     7     72
Hiring & Recruitment              39         4      6     3     52
Automation Exposure               17        15      9     5     46
Job Displacement                   5        28     12           45
Social Protection                 18         8      6     1     33
Developer Productivity            25         1      2     1     29
Worker Turnover                   10        12      3           25
Creative Output                   15         5      3     1     24
Skill Obsolescence                 3        18      2           23
Labor Share of Income              7         4      9           20
Filter: Innovation
Demand for security engineers, privacy specialists, human moderators, and behavioral scientists will rise, increasing wages in these specialties and altering labor allocations in AI/VR firms.
Authors' labor‑market inference drawn from increased needs implied by TVR‑Sec implementation and literature on moderation/security demand; no labor‑market data or forecasts provided.
low positive Securing Virtual Reality: Threat Models, Vulnerabilities, an... labor demand and wage pressure in security/privacy/safety roles (projected, not ...
Platforms that credibly offer strong privacy and socio‑behavioral protections may capture user trust and monetization opportunities (e.g., enterprise, healthcare, education), making safety features a potential competitive differentiator.
Authors' market‑structure reasoning based on synthesized literature and economic theory; no empirical adoption or revenue data provided to validate this claim.
low positive Securing Virtual Reality: Threat Models, Vulnerabilities, an... user trust and monetization/revenue gains tied to privacy/safety features (specu...
Harmonized international norms and transparency measures would reduce transaction costs, limit market fragmentation, and lower the likelihood of destabilizing arms‑race dynamics, thereby improving the environment for cross‑border investment and trade in AI.
Authors' normative/economic argumentation based on comparative findings; proposed as a policy implication rather than an empirically validated result.
low positive Regulating AI in National Security: A Comparative S... transaction costs, market fragmentation, arms‑race likelihood, and cross‑border ...
Aligning domestic rules with international risk‑mitigation norms, increasing transparency in defence procurement/AI operations, and strengthening multilateral confidence measures would reduce escalation and abuse.
Authors' policy argumentation and normative reasoning based on comparative findings (not empirically tested in the paper).
low positive Regulating AI in National Security: A Comparative S... likelihood of escalation, misuse, or abusive applications of military/dual‑use A...
Better consent mechanisms (granular, transferable, delegable) can change the marginal value and liquidity of personal data—enabling new pricing/contracting models (subscriptions, pay-for-privacy, data dividends).
Normative and conceptual claim from the workshop's economics discussion and design provocations; not empirically evaluated within the workshop summary.
low positive Moving Beyond Clicks: Rethinking Consent and User Control in... data market liquidity and pricing structures
We need to move beyond explicit, one-time decisions to broader ways users can influence data use (e.g., delegation, preferences over inference/usage).
Workshop recommendation emerging from co-design exercises, futures scenarios, and position papers; presented as a normative/design agenda rather than an empirically tested intervention.
low positive Moving Beyond Clicks: Rethinking Consent and User Control in... feasibility/effectiveness of alternative consent modalities (delegation, prefere...
THETA outputs can be used to create domain-tailored textual covariates (e.g., narrative indices, topic intensity) for regressions or forecasting, provided researchers validate outputs with human coding and sensitivity checks.
Practical recommendation and implication for economists in the discussion; not an empirical claim directly tested in the reported experiments.
low positive THETA: A Textual Hybrid Embedding-based Topic Analysis Frame... usability of THETA-derived topic indices as covariates in econometric models
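The covariate claim above can be sketched concretely. This is a hypothetical illustration, not THETA's actual API or output: the data, variable names, and coefficients below are simulated, and only show that a THETA-style topic-intensity index enters a regression like any other numeric covariate.

```python
import numpy as np

# Hypothetical sketch: a THETA-style "topic intensity" index used as an OLS
# covariate. All data and names here are simulated, not produced by THETA.
rng = np.random.default_rng(0)
n = 200
control = rng.normal(size=n)                # e.g., a firm-level control
topic_intensity = rng.uniform(0.0, 1.0, n)  # share of a document on a topic

# Simulated outcome with known coefficients (const=1.0, control=0.5, topic=2.0).
y = 1.0 + 0.5 * control + 2.0 * topic_intensity + rng.normal(scale=0.1, size=n)

# OLS: y ~ const + control + topic_intensity, via ordinary least squares.
X = np.column_stack([np.ones(n), control, topic_intensity])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)  # estimates close to [1.0, 0.5, 2.0]
```

In practice the validation step the claim stresses (human coding of a sample, sensitivity checks across topic-model settings) would precede any such regression.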
THETA can surface domain-specific frames, stakeholder positions, and emergent arguments from large comment corpora or filings, assisting policy and regulatory analysis.
Stated implication and example applications (regulatory comment corpora, filings); no direct case-study results or downstream policy-analytic validations included in the summary.
low positive THETA: A Textual Hybrid Embedding-based Topic Analysis Frame... ability to extract domain-specific frames and stakeholder arguments
THETA's DAFT plus the agent workflow reduces the marginal cost of coding and classification, making large-N qualitative analysis more feasible.
Argued implication based on use of parameter-efficient LoRA and human-in-the-loop agent design; no cost analyses, time studies, or economic comparisons provided in the summary.
low positive THETA: A Textual Hybrid Embedding-based Topic Analysis Frame... marginal cost / feasibility of scaling qualitative coding
Clinic-aware designs and reliable validation can enable clearer evidence of value, facilitating payer reimbursement, value-based care contracts, and new pricing models for AI-enabled medical devices and services.
Policy and reimbursement implications discussed by clinicians and industry participants during the workshop and summarized in the workshop report (NSF workshop, Sept 26–27, 2024).
low positive Report for NSF Workshop on Algorithm-Hardware Co-design for ... payer reimbursement approvals, value-based contract adoption, and pricing model ...
Scalable validation ecosystems and continuous objective measures reduce information asymmetries between developers, clinicians, and payers, lowering commercialization and regulatory risk, which raises private returns and speeds adoption.
Economic implications and causal argument set out in the workshop summary based on expert judgement and theory discussed at the NSF workshop (Sept 26–27, 2024).
low positive Report for NSF Workshop on Algorithm-Hardware Co-design for ... information asymmetry indicators, commercialization/regulatory risk measures, fi...
Using CFR avoids the computational and development costs of retraining T2I models to improve color fidelity, providing a lower-cost path to better color authenticity.
Paper emphasizes CFR is training-free and applies at inference, claiming improved color authenticity without model retraining; cost implication is inferred from lack of retraining (quantitative compute savings not provided in the summary).
low positive Too Vivid to Be Real? Benchmarking and Calibrating Generativ... compute/development cost required to improve color fidelity (inference-only CFR ...
Once trained, these simulation-trained summary networks are fast to evaluate and can be used as amortized estimators to enable large-scale counterfactuals, sensitivity analyses, and Monte Carlo-based policy evaluation with much lower per-evaluation cost.
Practical implication claim: based on amortization principle (neural network inference is fast at evaluation time) and reported ability to replace repeated runs of iterative algorithms; the summary asserts reduced per-evaluation cost but does not provide quantitative runtime benchmarks or speedup ratios in the provided text.
low positive ForwardFlow: Simulation only statistical inference using dee... per-evaluation runtime / computational cost (claimed reduction; not quantitative...
Surrogate-accelerated workflows reduce energy consumption and carbon footprint per discovery because they require fewer expensive evaluations.
Stated implication in the paper linking fewer expensive quantum-chemistry/DFT evaluations to lower energy use; no measured energy/emissions data provided in the summary.
low positive Bayesian Optimization with Gaussian Processes to Accelerate ... energy consumption / CO2 emissions per simulated problem (projected)
Order-of-magnitude reductions in expensive evaluations enable faster R&D cycles and higher throughput for exploration of potential-energy landscapes in materials science, catalysis, and drug design.
Policy/economic implication argued in the paper based on empirical reductions in expensive evaluations; no direct time-to-discovery experiments reported in the summary.
low positive Bayesian Optimization with Gaussian Processes to Accelerate ... time-to-solution / throughput in R&D workflows (projected)
QCSC capabilities could change the economics of certain AI model classes that rely on expensive scientific simulations for training data by producing richer, cheaper training datasets.
Theoretical link between simulation output quality/cost and training-data generation for physics-informed ML and generative chemistry models; no empirical studies or cost estimates presented.
low positive Reference Architecture of a Quantum-Centric Supercomputer cost and quality of training datasets for simulation-dependent AI models, downst...
QCSC-enabled faster, higher-fidelity simulation can compress R&D cycles in chemistry and materials, lowering time-to-discovery and increasing returns to computational investment for firms.
Use-case analysis linking simulation fidelity/turnaround to R&D timelines; relies on assumed speedups and fidelity improvements but provides no measured speedup data.
low positive Reference Architecture of a Quantum-Centric Supercomputer R&D cycle time (time-to-discovery), cost per discovery, returns to computational...
Adopting DPS-like efficiencies reduces the marginal compute cost of online prompt-selection workflows (dominated by rollouts), thereby shortening finetuning cycles and increasing developer productivity.
Paper's implications section: logical inference from reported reduction in rollouts and rollout compute; not an empirical market study—no dollar or industry-scale numbers provided.
low positive Dynamics-Predictive Sampling for Active RL Finetuning of Lar... marginal compute cost of RL finetuning; finetuning cycle time; developer product...
Embedding AI produces operational gains: automation of routine tasks, fewer errors, faster decision cycles, and continuous model learning/refinement.
Operational claim articulated conceptually with suggested evaluation metrics (forecast accuracy, latency, false positive/negative rates); the paper does not present empirical measurement, sample sizes, or deployment results.
low positive Next-Generation Financial Analytics Frameworks for AI-Enable... error rates, decision latency, automation rate (tasks automated), model performa...
Platforms combining high-volume generation with effective filtering/curation can create strong network effects and concentration in markets for AI-assisted ideation.
Market-structure reasoning and illustrative platform examples from the literature; no empirical market-wide causal studies reported in the review.
low positive ChatGPT as an Innovative Tool for Idea Generation and Proble... market concentration and network effects for ideation platforms
Firms that embed AI into collaborative workflows and invest in human curation may capture disproportionate returns (first-mover and scale advantages).
Theoretical/strategic argument supported by some applied case evidence and platform-market reasoning cited in the synthesis; the review notes absence of systematic causal firm-level evidence.
low positive ChatGPT as an Innovative Tool for Idea Generation and Proble... firm-level returns, market share, and competitive advantage
Generative AI will create complementarity: increasing returns to skills in evaluation, curation, synthesis, and domain expertise that integrate AI outputs.
Theoretical labor-economics reasoning supported by case studies and task-level studies showing demand for evaluation/curation skills in AI-assisted workflows; direct causal evidence on wage effects is limited in the reviewed literature.
low positive ChatGPT as an Innovative Tool for Idea Generation and Proble... demand for evaluative/curation skills; wage premia for such skills (not directly...
Lowered cost and time of ideation and early-stage R&D due to generative AI may accelerate innovation cycles and reduce firms' search costs.
Inference from studies reporting reduced time-to-prototype and increased ideation; this is an economic interpretation rather than directly measured long-run firm-level innovation rates in the reviewed studies.
low positive ChatGPT as an Innovative Tool for Idea Generation and Proble... time-to-prototype; search costs; firm-level innovation cycle length (largely unm...
Targeted subsidies or support for SMEs to access SECaaS could accelerate secure AI adoption where scale barriers exist.
Economic rationale and proposed field-experiment designs; no empirical trial results presented in the chapter.
low positive Security-as-a-service: enhancing cloud security through m... SME SECaaS adoption rates, AI adoption by SMEs
Clarifying liability and the shared responsibility model will better align incentives between providers and customers and improve security outcomes.
Policy and legal analysis; case studies of incidents where unclear responsibilities hampered response; recommended as an intervention rather than proven by causal evidence.
low positive Security-as-a-service: enhancing cloud security through m... alignment of incentives, incident response effectiveness, legal clarity
Promoting interoperable standards and certification can reduce lock-in and lower search costs for buyers, fostering competition in SECaaS markets.
Policy recommendation grounded in market-design theory and analogies to other standardization efforts; supporting case studies from other technology markets suggested but not empirically established here.
low positive Security-as-a-service: enhancing cloud security through m... buyer switching costs, market competition indicators
Open, linked phenomic–genomic datasets could inform policy and conservation markets (e.g., biodiversity credits) by improving monitoring and trait-based risk assessment models.
Policy implication advanced in the discussion; presented as potential application rather than demonstrated outcome.
low positive High-throughput phenomics of global ant biodiversity potential influence on policy and conservation market analytics (projected)
Paired phenome–genome data increases the scientific and commercial value of the dataset for models predicting phenotype from genotype and vice versa.
Analytical argument in the implications section; no empirical demonstrations in the paper of improved model performance using these pairings.
low positive High-throughput phenomics of global ant biodiversity value for phenotype–genotype predictive modeling (projected)
Open, standardized 3D phenomic datasets reduce the need for individual labs/companies to finance expensive scanning campaigns and democratize access for academic groups and startups.
Argument in the paper's implications section based on the public release of a large standardized dataset; not an empirically tested economic outcome in the study.
low positive High-throughput phenomics of global ant biodiversity reduction in data-acquisition costs/barriers for downstream users (projected)
Demand would grow for liability insurance tailored to EdTech, third‑party audits, fairness certifications, and specialized legal advisory services; these markets would affect costs and differential competitiveness.
Predictive market analysis and policy reasoning (no survey or market data presented).
low positive Civil Rights and the EdTech Revolution size/growth of insurance and certification markets and effect on vendor costs/co...
Stricter legal exposure may slow some risky experimentation but encourage investment in fairness testing, robust evaluation, and explainability tools — potentially increasing the quality and trustworthiness of deployed AI in education.
Normative economic argumentation about incentives for R&D and testing; no empirical measurement of innovation rates provided.
low positive Civil Rights and the EdTech Revolution innovation behavior (risk‑taking vs. investment in fairness/testing) and resulti...
The method can identify frontier topics and cross-field convergence (e.g., methods migrating from NLP to vision) to inform assessments of comparative advantage and specialization across institutions/countries.
Proposed implication: using topic maps and cluster dynamics to detect frontier topics and cross-field migration; no concrete empirical examples or validation presented in summary beyond general mapping claim on ICML/ACL abstracts.
low positive Soft-Prompted Semantic Normalization for Unsupervised Analys... detection of frontier topics and cross-field convergence
The approach is scalable and model-agnostic: different LLMs and embedding models can be swapped into the pipeline without changing the overall method.
Claimed design property in the paper summary (asserted ability to substitute different LLMs/embedding models). No detailed cross-model robustness experiments or scalability benchmarks provided in the summary.
low positive Soft-Prompted Semantic Normalization for Unsupervised Analys... pipeline compatibility across different LLMs/embedding models and computational ...
AI should serve precision and purpose in public policy — improving foresight, enabling better trade-offs, and preserving democratic accountability.
Normative policy prescription and conceptual argumentation in the book; no empirical testing or quantified outcomes reported.
low positive Governing The Future policy foresight quality, decision trade-off management, and preservation of dem...
AI-driven systems should empower people with knowledge and pathways to participate in global markets rather than concentrate gains.
Normative recommendation derived from policy analysis and value judgments in the book; not supported by empirical evidence in the blurb.
low positive Governing The Future distribution of economic gains and levels of participation in global markets
Algorithmic transparency and auditability can reduce systemic risk from opaque automated lending decisions and improve regulator oversight and macroprudential policy.
Conceptual/systemic-risk argument in the "Systemic risk & governance externalities" section; no empirical systemic-risk analysis provided.
low positive Diego Saucedo Portillo Sauceport Research systemic risk indicators related to automated lending (e.g., correlated default ...
Improved algorithmic transparency could reduce information asymmetries, lowering adverse selection and moral hazard over time and potentially expanding credit to underserved populations.
Conceptual economic argument in the "Credit allocation & pricing" section; based on theory rather than empirical testing.
low positive Diego Saucedo Portillo Sauceport Research levels of information asymmetry, incidence of adverse selection/moral hazard, an...
If properly designed and enforced, the protocol measures can improve credit access for underserved populations and reduce biased exclusion, supporting inclusive growth.
Normative claim supported by doctrinal arguments, comparative regulatory literature and technical fairness literature synthesized in the audit (no controlled empirical evaluation reported).
low positive Diego Saucedo Portillo Sauceport Research credit access for underserved populations; incidence of biased exclusion
VIS can be integrated into macro/meso AI-economics models (input–output general equilibrium, growth models) to capture embodied labor and capital effects and to enable counterfactual analysis of AI diffusion scenarios.
Authors propose methodological extensions and modeling directions that embed VIS-style accounting into larger economic models for scenario analysis (conceptual suggestion).
low positive Measuring labor productivity dynamics in U.S. industrial and... feasibility of integrating VIS into macro/meso models for counterfactual AI diff...
VIS metrics can inform policy decisions (workforce retraining, sectoral subsidies, taxation) by revealing where AI-induced productivity changes will propagate through supply chains.
Authors argue policy relevance based on VIS’s ability to map upstream/downstream labor effects; presented as an implication rather than empirically validated policy outcomes.
low positive Measuring labor productivity dynamics in U.S. industrial and... policy-relevant insights on propagation of productivity changes across supply ch...
VIS-based measures can improve measurement of AI’s productivity impacts by better capturing indirect labor displacement or augmentation from AI-driven automation across supply chains.
Conceptual extension: VIS framework captures indirect labor effects that would matter when assessing AI-driven automation impacts; not empirically tested for AI within the paper.
low positive Measuring labor productivity dynamics in U.S. industrial and... comprehensiveness/accuracy of measured AI-induced labor productivity changes (di...
The synthesis of computer science, engineering, and financial policy insights suggests that DRL should be viewed not merely as a mathematical tool but as a transformative agent within the global socio-technical infrastructure of capital markets.
High-level synthesis and interdisciplinary argumentation in the paper; no empirical evidence or longitudinal studies are cited in the excerpt to demonstrate systemic transformation.
low speculative Deep Reinforcement Learning for Dynamic Portfolio Optimizati... transformative impact on socio-technical structures of capital markets (institut...
Modular and cell‑free platforms could enable decentralized, localized manufacturing of specialty compounds, potentially altering trade flows away from centralized petrochemical hubs.
Conceptual synthesis plus small-scale demonstrations of modular/cell-free units in the reviewed literature; limited pilot projects and discussion of potential scalability and portability.
low speculative Harnessing Microbial Factories: Biotechnology at the Edge of... feasibility metrics for localized production (unit throughput, cost per unit at ...
Lower data and compute requirements could decentralize innovation (reducing incumbent advantages tied to massive compute/data), but the complexity of embodied systems and real-world testing could create new specialized incumbents (robotics platforms, simulation providers).
Market-structure hypothesis based on trade-offs between resource needs and platform value; speculative and not empirically tested in the paper.
speculative mixed Why AI systems don't learn and what to do about it: Lessons ... market concentration metrics; emergence of specialized incumbents; level of dece...
Proprietary, high-quality surrogate models could create competitive advantage and barriers to entry, whereas open-source surrogates would democratize access.
This is an implication/policy argument in the paper's discussion about IP and market effects; it is a theoretical/qualitative claim rather than an empirical result from the experiments.
speculative mixed Deep Learning-Driven Black-Box Doherty Power Amplifier with ... market competitive advantage / barriers to entry arising from control of surroga...
Improved throughput and lower travel costs can induce additional travel demand (rebound), partially offsetting congestion/emissions gains unless paired with demand-management measures.
Theoretical economic reasoning presented in the paper as a caveat; not directly measured in the simulation experiments (no induced-demand dynamic experiments reported).
speculative mixed Data-driven generalized perimeter control: Zürich case study net congestion and emissions accounting for possible induced travel demand
Pretraining on diverse temporal resolutions increases upfront costs (data acquisition, storage, compute) but can raise model generalization and reduce downstream retraining costs, improving ROI for platform providers.
Paper discusses trade-offs in AI economics, claiming broader pretraining raises costs but yields returns through better generalization and lower adaptation cost. This is a theoretical/cost–benefit argument rather than an empirical finding reported in the summary.
speculative mixed Bridging the High-Frequency Data Gap: A Millisecond-Resoluti... trade-off between upfront pretraining costs and downstream retraining costs / mo...
Organizational heterogeneity in strategic backing and mentoring explains variation in benefits from AI adoption across firms and sectors, contributing to cross-firm productivity dispersion.
Theoretical claim linking organizational moderators to heterogeneous adoption outcomes; proposed as an empirical research direction without data provided.
speculative mixed Revolutionizing Human Resource Development: A Theoretical Fr... heterogeneity in firm-level AI productivity gains; cross-firm productivity dispe...
Managerial and peer mentoring styles (e.g., directive vs. developmental mentoring) influence how affordances are perceived and actualized, affecting learning, trust, and task allocation in human–AI collaboration.
Theoretical argument drawing on mentoring and organizational behavior literatures integrated with AST/AAT; no empirical tests or sample presented.
speculative mixed Revolutionizing Human Resource Development: A Theoretical Fr... learning outcomes, trust in AI/human–AI teams, task allocation decisions
Large fixed costs to build standardized databases and automated laboratories imply economies of scale that can favor well-capitalized firms and centralized public infrastructures, potentially increasing barriers to entry.
Economic analysis and reasoning in the implications section drawing on the costs of data/infrastructure discussed in the reviewed literature; not empirically measured in the paper.
speculative mixed Machine Learning-Driven R&D of Perovskites and Spinels: From... market concentration, barriers to entry, degree of centralization in materials d...