Evidence (2215 claims)

Claims by category: Adoption (5126) · Productivity (4409) · Governance (4049) · Human-AI Collaboration (2954) · Labor Markets (2432) · Org Design (2273) · Innovation (2215) · Skills & Training (1902) · Inequality (1286)
Evidence Matrix
Claim counts by outcome category and direction of finding.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 369 | 105 | 58 | 432 | 972 |
| Governance & Regulation | 365 | 171 | 113 | 54 | 713 |
| Research Productivity | 229 | 95 | 33 | 294 | 655 |
| Organizational Efficiency | 354 | 82 | 58 | 34 | 531 |
| Technology Adoption Rate | 277 | 115 | 63 | 27 | 486 |
| Firm Productivity | 273 | 33 | 68 | 10 | 389 |
| AI Safety & Ethics | 112 | 177 | 43 | 24 | 358 |
| Output Quality | 228 | 61 | 23 | 25 | 337 |
| Market Structure | 105 | 118 | 81 | 14 | 323 |
| Decision Quality | 154 | 68 | 33 | 17 | 275 |
| Employment Level | 68 | 32 | 74 | 8 | 184 |
| Fiscal & Macroeconomic | 74 | 52 | 32 | 21 | 183 |
| Skill Acquisition | 85 | 31 | 38 | 9 | 163 |
| Firm Revenue | 96 | 30 | 22 | — | 148 |
| Innovation Output | 100 | 11 | 20 | 11 | 143 |
| Consumer Welfare | 66 | 29 | 35 | 7 | 137 |
| Regulatory Compliance | 51 | 61 | 13 | 3 | 128 |
| Inequality Measures | 24 | 66 | 31 | 4 | 125 |
| Task Allocation | 64 | 6 | 28 | 6 | 104 |
| Error Rate | 42 | 47 | 6 | — | 95 |
| Training Effectiveness | 55 | 12 | 10 | 16 | 93 |
| Worker Satisfaction | 42 | 32 | 11 | 6 | 91 |
| Task Completion Time | 71 | 5 | 3 | 1 | 80 |
| Wages & Compensation | 38 | 13 | 19 | 4 | 74 |
| Team Performance | 41 | 8 | 15 | 7 | 72 |
| Hiring & Recruitment | 39 | 4 | 6 | 3 | 52 |
| Automation Exposure | 17 | 15 | 9 | 5 | 46 |
| Job Displacement | 5 | 28 | 12 | — | 45 |
| Social Protection | 18 | 8 | 6 | 1 | 33 |
| Developer Productivity | 25 | 1 | 2 | 1 | 29 |
| Worker Turnover | 10 | 12 | — | 3 | 25 |
| Creative Output | 15 | 5 | 3 | 1 | 24 |
| Skill Obsolescence | 3 | 18 | 2 | — | 23 |
| Labor Share of Income | 7 | 4 | 9 | — | 20 |
Innovation
Demand for security engineers, privacy specialists, human moderators, and behavioral scientists will rise, increasing wages in these specialties and altering labor allocations in AI/VR firms.
Authors' labor‑market inference, drawn from the staffing needs implied by TVR‑Sec implementation and the literature on moderation/security demand; no labor‑market data or forecasts are provided.
Platforms that credibly offer strong privacy and socio‑behavioral protections may capture user trust and monetization opportunities (e.g., enterprise, healthcare, education), making safety features a potential competitive differentiator.
Authors' market‑structure reasoning based on synthesized literature and economic theory; no empirical adoption or revenue data provided to validate this claim.
Harmonized international norms and transparency measures would reduce transaction costs, limit market fragmentation, and lower the likelihood of destabilizing arms‑race dynamics, thereby improving the environment for cross‑border investment and trade in AI.
Authors' normative/economic argumentation based on comparative findings; proposed as a policy implication rather than an empirically validated result.
Aligning domestic rules with international risk‑mitigation norms, increasing transparency in defence procurement/AI operations, and strengthening multilateral confidence measures would reduce escalation and abuse.
Authors' policy argumentation and normative reasoning based on comparative findings (not empirically tested in the paper).
Better consent mechanisms (granular, transferable, delegable) can change the marginal value and liquidity of personal data—enabling new pricing/contracting models (subscriptions, pay-for-privacy, data dividends).
Normative and conceptual claim from the workshop's economics discussion and design provocations; not empirically evaluated within the workshop summary.
We need to move beyond explicit, one-time decisions to broader ways users can influence data use (e.g., delegation, preferences over inference/usage).
Workshop recommendation emerging from co-design exercises, futures scenarios, and position papers; presented as a normative/design agenda rather than an empirically tested intervention.
THETA outputs can be used to create domain-tailored textual covariates (e.g., narrative indices, topic intensity) for regressions or forecasting, provided researchers validate outputs with human coding and sensitivity checks.
Practical recommendation and implication for economists in the discussion; not an empirical claim directly tested in the reported experiments.
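The covariate-construction step described above can be sketched with generic tools. Everything below (the topic weights, the "narrative index," the outcome variable) is simulated for illustration; none of it is THETA output.

```python
import numpy as np

# Hypothetical document-topic weights (stand-in for a THETA-style pipeline):
# rows = documents, columns = topics; values are topic intensities.
rng = np.random.default_rng(0)
theta = rng.uniform(0, 1, size=(200, 3))
narrative_index = theta[:, 0] - theta[:, 1]   # e.g., optimism minus pessimism

# Simulated outcome that loads on the narrative index plus noise.
y = 2.0 + 1.5 * narrative_index + rng.normal(0, 0.5, size=200)

# OLS with the textual covariate: y = b0 + b1 * narrative_index + e.
X = np.column_stack([np.ones(200), narrative_index])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(beta)  # beta[1] should recover a value near the true coefficient 1.5
```

In practice the human-coding validation the claim calls for would mean checking `narrative_index` against hand-labeled documents before using it as a regressor.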
THETA can surface domain-specific frames, stakeholder positions, and emergent arguments from large comment corpora or filings, assisting policy and regulatory analysis.
Stated implication and example applications (regulatory comment corpora, filings); no direct case-study results or downstream policy-analytic validations included in the summary.
THETA's DAFT plus the agent workflow reduces the marginal cost of coding and classification, making large-N qualitative analysis more feasible.
Argued implication based on use of parameter-efficient LoRA and human-in-the-loop agent design; no cost analyses, time studies, or economic comparisons provided in the summary.
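The cost argument rests partly on LoRA's parameter counts, which can be checked with simple arithmetic. The hidden size and rank below are generic transformer values, not THETA's actual configuration.

```python
# Rough parameter-count arithmetic behind LoRA's cost advantage.
d = 4096                       # hidden size of one transformer weight matrix
r = 8                          # LoRA rank
full_weight = d * d            # parameters updated by full fine-tuning
lora_params = 2 * d * r        # low-rank adapters: A (r x d) + B (d x r)
ratio = lora_params / full_weight
print(f"{lora_params:,} vs {full_weight:,} ({ratio:.2%} of full fine-tuning)")
```

At rank 8 the adapters touch well under 1% of the matrix's parameters, which is the mechanism behind the "lower marginal cost" claim; actual savings depend on which matrices are adapted.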
Clinic-aware designs and reliable validation can enable clearer evidence of value, facilitating payer reimbursement, value-based care contracts, and new pricing models for AI-enabled medical devices and services.
Policy and reimbursement implications discussed by clinicians and industry participants during the workshop and summarized in the workshop report (NSF workshop, Sept 26–27, 2024).
Scalable validation ecosystems and continuous objective measures reduce information asymmetries between developers, clinicians, and payers, lowering commercialization and regulatory risk, which raises private returns and speeds adoption.
Economic implications and causal argument set out in the workshop summary based on expert judgement and theory discussed at the NSF workshop (Sept 26–27, 2024).
Using CFR avoids the computational and development costs of retraining T2I models to improve color fidelity, providing a lower-cost path to better color authenticity.
Paper emphasizes CFR is training-free and applies at inference, claiming improved color authenticity without model retraining; cost implication is inferred from lack of retraining (quantitative compute savings not provided in the summary).
Once trained, these simulation-trained summary networks are fast to evaluate and can be used as amortized estimators to enable large-scale counterfactuals, sensitivity analyses, and Monte Carlo-based policy evaluation with much lower per-evaluation cost.
Practical implication claim: based on amortization principle (neural network inference is fast at evaluation time) and reported ability to replace repeated runs of iterative algorithms; the summary asserts reduced per-evaluation cost but does not provide quantitative runtime benchmarks or speedup ratios in the provided text.
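The amortization principle can be illustrated with a toy stand-in: fit a cheap surrogate to an expensive iterative solver once, then reuse it across thousands of Monte Carlo draws. The solver and polynomial surrogate below are illustrative assumptions, not the paper's networks.

```python
import numpy as np

def expensive_solve(p):
    """Stand-in for an iterative algorithm: solves x = 0.5*cos(p*x)."""
    x = 0.0
    for _ in range(200):              # many iterations per call
        x = 0.5 * np.cos(p * x)
    return x

# One-time "training": fit a cheap surrogate on a grid of parameter values.
grid = np.linspace(0.1, 2.0, 50)
solutions = np.array([expensive_solve(p) for p in grid])
surrogate = np.polynomial.Polynomial.fit(grid, solutions, deg=6)

# Amortized Monte Carlo evaluation: thousands of draws, one vectorized
# surrogate call instead of thousands of iterative solves.
draws = np.random.default_rng(1).uniform(0.1, 2.0, size=10_000)
mc_estimate = surrogate(draws).mean()

# Spot check against the expensive solver on a subsample of the draws.
exact = np.mean([expensive_solve(p) for p in draws[:500]])
print(mc_estimate, exact)
```

The per-evaluation cost drops because the loop runs only during fitting; this mirrors the claim's logic without supplying the missing runtime benchmarks.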
Surrogate-accelerated workflows reduce energy consumption and carbon footprint per discovery because they require fewer expensive evaluations.
Stated implication in the paper linking fewer expensive quantum-chemistry/DFT evaluations to lower energy use; no measured energy/emissions data provided in the summary.
Order-of-magnitude reductions in expensive evaluations enable faster R&D cycles and higher throughput for exploration of potential-energy landscapes in materials science, catalysis, and drug design.
Policy/economic implication argued in the paper based on empirical reductions in expensive evaluations; no direct time-to-discovery experiments reported in the summary.
QCSC capabilities could change the economics of certain AI model classes that rely on expensive scientific simulations for training data by producing richer, cheaper training datasets.
Theoretical link between simulation output quality/cost and training-data generation for physics-informed ML and generative chemistry models; no empirical studies or cost estimates presented.
QCSC-enabled faster, higher-fidelity simulation can compress R&D cycles in chemistry and materials, lowering time-to-discovery and increasing returns to computational investment for firms.
Use-case analysis linking simulation fidelity/turnaround to R&D timelines; relies on assumed speedups and fidelity improvements but provides no measured speedup data.
Adopting DPS-like efficiencies reduces the marginal compute cost of online prompt-selection workflows (dominated by rollouts), thereby shortening finetuning cycles and increasing developer productivity.
Paper's implications section: logical inference from reported reduction in rollouts and rollout compute; not an empirical market study—no dollar or industry-scale numbers provided.
Embedding AI produces operational gains: automation of routine tasks, fewer errors, faster decision cycles, and continuous model learning/refinement.
Operational claim articulated conceptually with suggested evaluation metrics (forecast accuracy, latency, false positive/negative rates); the paper does not present empirical measurement, sample sizes, or deployment results.
Platforms combining high-volume generation with effective filtering/curation can create strong network effects and concentration in markets for AI-assisted ideation.
Market-structure reasoning and illustrative platform examples from the literature; no empirical market-wide causal studies reported in the review.
Firms that embed AI into collaborative workflows and invest in human curation may capture disproportionate returns (first-mover and scale advantages).
Theoretical/strategic argument supported by some applied case evidence and platform-market reasoning cited in the synthesis; the review notes absence of systematic causal firm-level evidence.
Generative AI will create complementarity: increasing returns to skills in evaluation, curation, synthesis, and domain expertise that integrate AI outputs.
Theoretical labor-economics reasoning supported by case studies and task-level studies showing demand for evaluation/curation skills in AI-assisted workflows; direct causal evidence on wage effects is limited in the reviewed literature.
Lowered cost and time of ideation and early-stage R&D due to generative AI may accelerate innovation cycles and reduce firms' search costs.
Inference from studies reporting reduced time-to-prototype and increased ideation; this is an economic interpretation rather than directly measured long-run firm-level innovation rates in the reviewed studies.
Targeted subsidies or support for SMEs to access SECaaS could accelerate secure AI adoption where scale barriers exist.
Economic rationale and proposed field-experiment designs; no empirical trial results presented in the chapter.
Clarifying liability and the shared responsibility model will better align incentives between providers and customers and improve security outcomes.
Policy and legal analysis; case studies of incidents where unclear responsibilities hampered response; recommended as an intervention rather than proven by causal evidence.
Promoting interoperable standards and certification can reduce lock-in and lower search costs for buyers, fostering competition in SECaaS markets.
Policy recommendation grounded in market-design theory and analogies to other standardization efforts; supporting case studies from other technology markets suggested but not empirically established here.
Open, linked phenomic–genomic datasets could inform policy and conservation markets (e.g., biodiversity credits) by improving monitoring and trait-based risk assessment models.
Policy implication advanced in the discussion; presented as potential application rather than demonstrated outcome.
Paired phenome–genome data increases the scientific and commercial value of the dataset for models predicting phenotype from genotype and vice versa.
Analytical argument in the implications section; no empirical demonstrations in the paper of improved model performance using these pairings.
Open, standardized 3D phenomic datasets reduce the need for individual labs/companies to finance expensive scanning campaigns and democratize access for academic groups and startups.
Argument in the paper's implications section based on the public release of a large standardized dataset; not an empirically tested economic outcome in the study.
Demand would grow for liability insurance tailored to EdTech, third‑party audits, fairness certifications, and specialized legal advisory services; these markets would affect costs and differential competitiveness.
Predictive market analysis and policy reasoning (no survey or market data presented).
Stricter legal exposure may slow some risky experimentation but encourage investment in fairness testing, robust evaluation, and explainability tools — potentially increasing the quality and trustworthiness of deployed AI in education.
Normative economic argumentation about incentives for R&D and testing; no empirical measurement of innovation rates provided.
The method can identify frontier topics and cross-field convergence (e.g., methods migrating from NLP to vision) to inform assessments of comparative advantage and specialization across institutions/countries.
Proposed implication: using topic maps and cluster dynamics to detect frontier topics and cross-field migration; no concrete empirical examples or validation presented in summary beyond general mapping claim on ICML/ACL abstracts.
The approach is scalable and model-agnostic: different LLMs and embedding models can be swapped into the pipeline without changing the overall method.
Claimed design property in the paper summary (asserted ability to substitute different LLMs/embedding models). No detailed cross-model robustness experiments or scalability benchmarks provided in the summary.
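A minimal sketch of what model-agnosticism could mean in such a pipeline: the embedding backend is an injected callable, so swapping models changes one argument. The toy bag-of-words embedder and plain k-means step are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def bow_embed(texts):
    """Toy bag-of-words embedder; swap in any sentence-embedding model."""
    vocab = sorted({tok for t in texts for tok in t.lower().split()})
    out = np.zeros((len(texts), len(vocab)))
    for i, t in enumerate(texts):
        for tok in t.lower().split():
            out[i, vocab.index(tok)] += 1.0
    return out / np.linalg.norm(out, axis=1, keepdims=True)

def map_topics(abstracts, embed_fn, n_clusters=2, seed=0):
    """Embed abstracts and cluster them; embed_fn is fully swappable."""
    X = embed_fn(abstracts)
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), n_clusters, replace=False)]
    for _ in range(10):                           # plain k-means loop
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for k in range(n_clusters):
            if (labels == k).any():
                centers[k] = X[labels == k].mean(axis=0)
    return labels

abstracts = ["transformer attention language", "convolution image vision",
             "attention language model", "vision image segmentation"]
labels = map_topics(abstracts, bow_embed)
print(labels)   # the two NLP-flavored abstracts should share a cluster
```

Replacing `bow_embed` with an LLM-based embedder leaves `map_topics` untouched, which is the design property the claim asserts.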
AI should serve precision and purpose in public policy — improving foresight, enabling better trade-offs, and preserving democratic accountability.
Normative policy prescription and conceptual argumentation in the book; no empirical testing or quantified outcomes reported.
AI-driven systems should empower people with knowledge and pathways to participate in global markets rather than concentrate gains.
Normative recommendation derived from policy analysis and value judgments in the book; not supported by empirical evidence in the blurb.
Algorithmic transparency and auditability can reduce systemic risk from opaque automated lending decisions and improve regulator oversight and macroprudential policy.
Conceptual/systemic-risk argument in the "Systemic risk & governance externalities" section; no empirical systemic-risk analysis provided.
Improved algorithmic transparency could reduce information asymmetries, lowering adverse selection and moral hazard over time and potentially expanding credit to underserved populations.
Conceptual economic argument in the "Credit allocation & pricing" section; based on theory rather than empirical testing.
If properly designed and enforced, the protocol measures can improve credit access for underserved populations and reduce biased exclusion, supporting inclusive growth.
Normative claim supported by doctrinal arguments, comparative regulatory literature and technical fairness literature synthesized in the audit (no controlled empirical evaluation reported).
VIS can be integrated into macro/meso AI-economics models (input–output general equilibrium, growth models) to capture embodied labor and capital effects and to enable counterfactual analysis of AI diffusion scenarios.
Authors propose methodological extensions and modeling directions that embed VIS-style accounting into larger economic models for scenario analysis (conceptual suggestion).
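The proposed embedding is essentially Leontief accounting. A toy three-sector example shows how a VIS-style labor-coefficient change propagates through the Leontief inverse; all coefficients below are hypothetical, not VIS data.

```python
import numpy as np

# Toy 3-sector input-output model.
# A[i, j] = units of sector i's output used per unit of sector j's output.
A = np.array([
    [0.10, 0.20, 0.05],   # e.g., AI services
    [0.15, 0.10, 0.25],   # manufacturing
    [0.05, 0.30, 0.10],   # logistics
])
labor_per_unit = np.array([0.8, 1.2, 0.6])   # direct labor coefficients

# Leontief inverse: total (direct + indirect) output per unit of final demand.
L = np.linalg.inv(np.eye(3) - A)

# Embodied labor per unit of final demand in each sector.
embodied_labor = labor_per_unit @ L
print(embodied_labor)

# Counterfactual: AI halves direct labor in sector 0; the reduction shows up
# in every sector's embodied labor, not just the adopting sector.
labor_ai = labor_per_unit.copy()
labor_ai[0] *= 0.5
reduction = labor_per_unit @ L - labor_ai @ L
print(reduction)
```

The positive entries of `reduction` outside sector 0 are exactly the supply-chain propagation the claim says VIS-style accounting would capture.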
VIS metrics can inform policy decisions (workforce retraining, sectoral subsidies, taxation) by revealing where AI-induced productivity changes will propagate through supply chains.
Authors argue policy relevance based on VIS’s ability to map upstream/downstream labor effects; presented as an implication rather than empirically validated policy outcomes.
VIS-based measures can improve measurement of AI’s productivity impacts by better capturing indirect labor displacement or augmentation from AI-driven automation across supply chains.
Conceptual extension: VIS framework captures indirect labor effects that would matter when assessing AI-driven automation impacts; not empirically tested for AI within the paper.
By synthesizing computer science, engineering, and financial policy insights, DRL should be viewed not merely as a mathematical tool but as a transformative agent within the global socio-technical infrastructure of capital markets.
High-level synthesis and interdisciplinary argumentation in the paper; no empirical evidence or longitudinal studies are cited in the excerpt to demonstrate systemic transformation.
Modular and cell‑free platforms could enable decentralized, localized manufacturing of specialty compounds, potentially altering trade flows away from centralized petrochemical hubs.
Conceptual synthesis plus small-scale demonstrations of modular/cell-free units in the reviewed literature; limited pilot projects and discussion of potential scalability and portability.
Lower data and compute requirements could decentralize innovation (reducing incumbent advantages tied to massive compute/data), but the complexity of embodied systems and real-world testing could create new specialized incumbents (robotics platforms, simulation providers).
Market-structure hypothesis based on trade-offs between resource needs and platform value; speculative and not empirically tested in the paper.
Proprietary, high-quality surrogate models could create competitive advantage and barriers to entry, whereas open-source surrogates would democratize access.
This is an implication/policy argument in the paper's discussion about IP and market effects; it is a theoretical/qualitative claim rather than an empirical result from the experiments.
Improved throughput and lower travel costs can induce additional travel demand (rebound), partially offsetting congestion/emissions gains unless paired with demand-management measures.
Theoretical economic reasoning presented in the paper as a caveat; not directly measured in the simulation experiments (no induced-demand dynamic experiments reported).
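The rebound caveat follows from standard elasticity arithmetic; a back-of-envelope sketch with assumed numbers (none taken from the paper):

```python
# Illustrative rebound calculation for AI-improved traffic throughput.
cost_reduction = 0.20        # assumed 20% lower generalized travel cost
demand_elasticity = -0.5     # assumed price elasticity of travel demand

induced_demand = -demand_elasticity * cost_reduction   # +10% more trips
gross_gain = 0.25            # assumed 25% emissions cut at constant demand
net_gain = 1 - (1 - gross_gain) * (1 + induced_demand)

print(f"induced demand: {induced_demand:.0%}, net gain: {net_gain:.1%}")
```

Under these assumptions the 25% gross emissions gain shrinks to about 17.5% once induced trips are counted, which is the partial offset the claim describes.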
Pretraining on diverse temporal resolutions increases upfront costs (data acquisition, storage, compute) but can raise model generalization and reduce downstream retraining costs, improving ROI for platform providers.
Paper discusses trade-offs in AI economics, claiming broader pretraining raises costs but yields returns through better generalization and lower adaptation cost. This is a theoretical/cost–benefit argument rather than an empirical finding reported in the summary.
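The trade-off reduces to a break-even count of downstream deployments; a stylized sketch with hypothetical costs in arbitrary units:

```python
# Stylized cost-benefit arithmetic for multi-resolution pretraining.
pretrain_extra_cost = 300.0    # assumed extra one-time pretraining cost
retrain_cost_narrow = 40.0     # assumed per-deployment cost, narrow model
retrain_cost_broad = 10.0      # assumed per-deployment cost, broad model

savings_per_deployment = retrain_cost_narrow - retrain_cost_broad
break_even = pretrain_extra_cost / savings_per_deployment
print(break_even)   # deployments needed before broad pretraining pays off
```

With these numbers the upfront cost is recovered after 10 deployments; the ROI claim holds only if a platform provider expects adaptation demand beyond that break-even point.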
Organizational heterogeneity in strategic backing and mentoring explains variation in benefits from AI adoption across firms and sectors, contributing to cross-firm productivity dispersion.
Theoretical claim linking organizational moderators to heterogeneous adoption outcomes; proposed as an empirical research direction without data provided.
Managerial and peer mentoring styles (e.g., directive vs. developmental mentoring) influence how affordances are perceived and actualized, affecting learning, trust, and task allocation in human–AI collaboration.
Theoretical argument drawing on mentoring and organizational behavior literatures integrated with AST/AAT; no empirical tests or sample presented.
Large fixed costs to build standardized databases and automated laboratories imply economies of scale that can favor well-capitalized firms and centralized public infrastructures, potentially increasing barriers to entry.
Economic analysis and reasoning in the implications section drawing on the costs of data/infrastructure discussed in the reviewed literature; not empirically measured in the paper.