Evidence (2215 claims)
- Adoption: 5126 claims
- Productivity: 4409 claims
- Governance: 4049 claims
- Human-AI Collaboration: 2954 claims
- Labor Markets: 2432 claims
- Org Design: 2273 claims
- Innovation: 2215 claims
- Skills & Training: 1902 claims
- Inequality: 1286 claims
Evidence Matrix
Claim counts by outcome category and direction of finding.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 369 | 105 | 58 | 432 | 972 |
| Governance & Regulation | 365 | 171 | 113 | 54 | 713 |
| Research Productivity | 229 | 95 | 33 | 294 | 655 |
| Organizational Efficiency | 354 | 82 | 58 | 34 | 531 |
| Technology Adoption Rate | 277 | 115 | 63 | 27 | 486 |
| Firm Productivity | 273 | 33 | 68 | 10 | 389 |
| AI Safety & Ethics | 112 | 177 | 43 | 24 | 358 |
| Output Quality | 228 | 61 | 23 | 25 | 337 |
| Market Structure | 105 | 118 | 81 | 14 | 323 |
| Decision Quality | 154 | 68 | 33 | 17 | 275 |
| Employment Level | 68 | 32 | 74 | 8 | 184 |
| Fiscal & Macroeconomic | 74 | 52 | 32 | 21 | 183 |
| Skill Acquisition | 85 | 31 | 38 | 9 | 163 |
| Firm Revenue | 96 | 30 | 22 | — | 148 |
| Innovation Output | 100 | 11 | 20 | 11 | 143 |
| Consumer Welfare | 66 | 29 | 35 | 7 | 137 |
| Regulatory Compliance | 51 | 61 | 13 | 3 | 128 |
| Inequality Measures | 24 | 66 | 31 | 4 | 125 |
| Task Allocation | 64 | 6 | 28 | 6 | 104 |
| Error Rate | 42 | 47 | 6 | — | 95 |
| Training Effectiveness | 55 | 12 | 10 | 16 | 93 |
| Worker Satisfaction | 42 | 32 | 11 | 6 | 91 |
| Task Completion Time | 71 | 5 | 3 | 1 | 80 |
| Wages & Compensation | 38 | 13 | 19 | 4 | 74 |
| Team Performance | 41 | 8 | 15 | 7 | 72 |
| Hiring & Recruitment | 39 | 4 | 6 | 3 | 52 |
| Automation Exposure | 17 | 15 | 9 | 5 | 46 |
| Job Displacement | 5 | 28 | 12 | — | 45 |
| Social Protection | 18 | 8 | 6 | 1 | 33 |
| Developer Productivity | 25 | 1 | 2 | 1 | 29 |
| Worker Turnover | 10 | 12 | — | 3 | 25 |
| Creative Output | 15 | 5 | 3 | 1 | 24 |
| Skill Obsolescence | 3 | 18 | 2 | — | 23 |
| Labor Share of Income | 7 | 4 | 9 | — | 20 |
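As a reading aid, rows of the matrix above can be summarized programmatically. This is a minimal sketch, not part of the source dashboard: the `matrix` dict hand-copies three rows from the table, and `positive_share` is a hypothetical helper. Shares are computed over the four listed direction counts.

```python
# Sketch (hypothetical helper, not from the source dashboard) of how
# evidence-matrix rows can be summarized. Each row maps an outcome
# category to its (positive, negative, mixed, null) counts, hand-copied
# from three rows of the table above.

matrix = {
    "Firm Productivity":    (273, 33, 68, 10),
    "AI Safety & Ethics":   (112, 177, 43, 24),
    "Task Completion Time": (71, 5, 3, 1),
}

def positive_share(counts):
    """Fraction of the four tabulated directions that are positive."""
    return counts[0] / sum(counts)

for category, counts in matrix.items():
    print(f"{category}: {positive_share(counts):.0%} positive")
```

The same pattern extends to negative or null shares by indexing the other tuple positions.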
Innovation
- **Claim:** System B (action-driven learning) should learn through intervention, consequences, and trial-and-error, using active exploration, reinforcement learning, and hierarchical/skill learning.
  **Evidence:** Architectural proposal aligning with RL and hierarchical learning literature; theoretical description without experimental evidence.
- **Claim:** System A (observation-driven learning) should build models of others, social contingencies, and passive affordances through imitation, self-supervised representation learning, and inverse RL.
  **Evidence:** Architectural specification and mapping to existing algorithms (imitation, SSL, inverse RL); no empirical validation provided.
- **Claim:** Integrating observation-driven and action-driven learning with meta-control and evolutionary/developmental priors should improve sample efficiency, robustness, transfer, and lifelong adaptation.
  **Evidence:** Conceptual argument and proposed integration of methods; suggested but untested experimentally in the paper.
- **Claim:** A biologically inspired three-part architecture (System A: observation-driven learning; System B: action-driven learning; System M: internally generated meta-control) can address these limitations.
  **Evidence:** Theoretical proposal and analogy to biological systems; no empirical validation reported in the paper.
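The three-system proposal above can be sketched as a toy arbitration loop. Everything here (the class names, the 10-step experience threshold, the two-action bandit environment) is a hypothetical illustration of the stated division of labor, not the paper's implementation.

```python
# Hypothetical sketch of the proposed three-part architecture: System A
# learns from passive observations, System B from the outcomes of its
# own actions, and System M (meta-control) decides which system drives
# behavior. All names and thresholds are illustrative assumptions.

class SystemA:
    """Observation-driven learner: imitates what it has seen."""
    def __init__(self):
        self.observed = []
    def observe(self, event):
        self.observed.append(event)
    def predict(self):
        # Imitate the most frequently observed action.
        return max(set(self.observed), key=self.observed.count)

class SystemB:
    """Action-driven learner: tracks reward from its own interventions."""
    def __init__(self, actions):
        self.values = {a: 0.0 for a in actions}
    def update(self, action, reward, lr=0.5):
        self.values[action] += lr * (reward - self.values[action])
    def best_action(self):
        return max(self.values, key=self.values.get)

class SystemM:
    """Meta-controller: hands control to System B after enough trials."""
    def choose(self, a, b, experience):
        return b.best_action() if experience >= 10 else a.predict()

# Toy environment: acting "left" pays 1.0, "right" pays 0.0, while
# passive demonstrations (misleadingly) always show "right".
sys_a, sys_b, meta = SystemA(), SystemB(["left", "right"]), SystemM()
for t in range(20):
    sys_a.observe("right")                 # passive observation stream
    action = meta.choose(sys_a, sys_b, t)  # meta-control picks the driver
    reward = 1.0 if action == "left" else 0.0
    sys_b.update(action, reward)           # learn from the intervention
```

In this toy run, action-driven learning eventually overrides the misleading observational prior, which is the kind of complementarity the integration claim appeals to.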
- **Claim:** Barriers to entry may be larger for tacit-capability-driven systems than for rule-based systems, potentially increasing market concentration.
  **Evidence:** Economic argument linking tacit capabilities to requirements for large data, compute, and specialized training dynamics; speculative and not empirically tested in the paper.
- **Claim:** HindSight-style retrospective matching could underpin markets or contingent contracts for ideas by providing an objective payoff rule based on later publications and citations.
  **Evidence:** Paper's implications section proposing that retrospective matching can be used as an objective payoff rule for markets; this is a proposed application rather than an empirical finding.
- **Claim:** Physically plausible reconstructions reduce unsafe behaviors in deployed agents (e.g., collisions) and lower simulation-to-real failure modes.
  **Evidence:** Argument in the paper tying reduced inter-object penetration and realistic contacts to fewer failures in simulation-to-real pipelines and safer agent behavior; not an empirical claim directly validated in real-world deployments within the provided summary.
- **Claim:** Open release of a high-quality 3D dataset and pre-trained models will lower entry barriers and intensify competition in robotics, AR/VR, and 3D content markets.
  **Evidence:** Paper discussion posits that public benchmarks and models reduce dataset/compute barriers and enable broader research and product development. This is a policy/economic implication stated by the authors, not tested empirically in the paper.
- **Claim:** Better monocular multi-object 3D reconstruction can lower perception costs for robots and embodied agents (fewer sensors, less calibration) and accelerate deployment in logistics, household service robots, inspection, and manipulation tasks.
  **Evidence:** Discussion/implications section in the paper arguing that improved single-image multi-object reconstruction reduces reliance on extra sensors and calibration, with downstream benefits for robotic deployment. This is presented as implication/argument rather than empirical evidence in the paper summary.
- **Claim:** The methodology enables modular chiplet economics by removing a key validation bottleneck, which could support modular upgrade paths and lower manufacturing cost via mixed-node IP blocks.
  **Evidence:** Authors propose this as an implication of improved integration and repeatability; argumentative claim without accompanying manufacturing-cost or economic-case studies in the summary.
- **Claim:** Replay-driven validation can reduce engineering labor hours spent chasing non-deterministic bugs, lowering validation cost per project and decreasing the risk of late-stage silicon respins.
  **Evidence:** Economic implication presented by the authors: deterministic, repeatable debugging is argued to reduce manual effort and risk; no empirical labor-hour or cost-savings data provided in the demonstration.
- **Claim:** Replay-driven validation is positioned as a scalable pre-silicon validation strategy for future chiplet-based heterogeneous systems.
  **Evidence:** Authors articulate scalability as a key positioning argument and present the methodology applied to a non-trivial CPU + multiple-GPU-core + NoC demonstrator; however, no large-scale or multi-project scalability study or quantitative scaling metrics are provided.
- **Claim:** Surrogate-assisted inverse design reduces the marginal cost and time of exploring high-dimensional, discrete hardware design spaces by replacing costly EM simulations with fast ML inference, increasing R&D productivity and shortening design cycles.
  **Evidence:** Argument provided in implications: the surrogate replaces EM simulations, enabling faster iteration; no quantitative cost or time savings, or economic measurements, are presented in the summary.
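The surrogate-assisted workflow this claim describes can be sketched generically: fit a cheap model to a handful of expensive simulator calls, then screen many candidates with the surrogate alone. The 1-D toy objective, polynomial surrogate, and grid search below are illustrative assumptions, not the paper's method.

```python
import numpy as np

# Hypothetical sketch of surrogate-assisted design exploration (not the
# paper's code): a cheap surrogate stands in for a costly "simulation"
# so that a dense candidate grid can be screened at near-zero cost.

def expensive_simulation(x):
    """Stand-in for a costly EM solve: a smooth 1-D design response."""
    return np.sin(3 * x) + 0.5 * x

# 1. Run the expensive simulator on a small design-of-experiments set.
x_train = np.linspace(0.0, 2.0, 12)
y_train = expensive_simulation(x_train)

# 2. Fit a cheap surrogate (a degree-5 polynomial, for illustration).
surrogate = np.poly1d(np.polyfit(x_train, y_train, deg=5))

# 3. Screen a dense candidate grid using only the surrogate.
candidates = np.linspace(0.0, 2.0, 2001)
best = candidates[np.argmax(surrogate(candidates))]
# 'best' would then be verified with one final expensive simulation.
```

Twelve simulator calls plus thousands of surrogate evaluations replace thousands of simulator calls, which is the cost mechanism the claim appeals to.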
- **Claim:** There is a market opportunity for scalable 'control-as-a-service' offerings and curated urban traffic datasets enabled by this data-driven control approach.
  **Evidence:** Authors' market and policy discussion extrapolating from technical results to business models and data-infrastructure value; conceptual reasoning rather than empirical market analysis.
- **Claim:** Reductions in travel time and CO2 emissions translate into measurable economic benefits (lower fuel consumption, productivity gains, reduced pollution-related health costs).
  **Evidence:** Economic implications discussed qualitatively in the paper as extrapolation from measured reductions in travel time and emissions; no direct empirical economic quantification within the traffic simulation experiments.
- **Claim:** Benchmarks and standards are needed for evaluating model performance on high-frequency time series, to guide procurement and contracting decisions.
  **Evidence:** Paper recommends establishing standards and benchmarking protocols specifically for high-frequency time series, motivated by observed TSFM brittleness on millisecond data. This is a policy/research recommendation rather than an empirical result.
- **Claim:** Improved short-term forecasting enabled by high-frequency data can translate into operational benefits such as better resource allocation (spectrum, scheduling), reduced service-level violations, and enablement of new latency-sensitive services.
  **Evidence:** Paper argues these application-level benefits as implications of better forecasting for telecom control; these are projected outcomes based on the relevance of the forecasting horizons to control tasks, not empirically demonstrated in the summary.
- **Claim:** High-frequency datasets (like millisecond 5G traces) are economically valuable; firms that collect such domain-specific, high-resolution data can gain competitive advantages in low-latency applications.
  **Evidence:** Paper's implications for AI economics argue that access to high-frequency operational data improves model performance for latency-sensitive tasks and therefore has economic value. This is an economic argument grounded in the empirical observation of model brittleness but not supported by market-level empirical analysis in the summary.
- **Claim:** Research and engineering efforts should develop architectures, multi-scale modeling, and fine-tuning protocols tailored to high-frequency time series.
  **Evidence:** Paper recommends these research directions based on benchmark limitations (poor TSFM performance on high-frequency data). This is a prescriptive claim (future research needed) rather than an empirical result.
- **Claim:** Heterogeneous datasets and missing hardware evaluation create market opportunities for third parties supplying standardized datasets, verification suites, and end-to-end benchmarks (economically valuable public goods).
  **Evidence:** Market-structure inference based on observed heterogeneity in datasets and the Layer 3b gap across the surveyed systems; presented as an implication in the review.
- **Claim:** Effective human–AI collaboration will shift task content toward complementary activities (supervision, interpretation, creative/problem-solving), increasing demand for these complementary skills and potentially raising skill premia for workers who actualize AI affordances.
  **Evidence:** Theoretical prediction grounded in complementarity arguments and affordance actualization; no empirical sample or quantification provided.
- **Claim:** Productivity gains from AI depend not only on the technology's capabilities but on organizational adaptation and successful affordance actualization; therefore, investments in supportive strategy and mentoring can increase the fraction of potential AI productivity realized.
  **Evidence:** Theoretical implication derived from integrating AST and AAT literatures; recommended for empirical testing but not empirically demonstrated in the paper.
- **Claim:** Strategic innovation backing (organizational investments, resource allocation, governance, and incentives) enables experimentation and scaling of human–AI work and thereby increases realized returns to AI investments.
  **Evidence:** Theoretical proposition based on literature integration and normative argument; no empirical sample or original data presented.
- **Claim:** DAOs can enable decentralized data and model marketplaces where participants sell/lease models, training data, or prediction services; AI models become tradable assets linked to IP tokens.
  **Evidence:** Conceptual proposal drawing on DAO/tokenization and AI model-marketplace literature; no empirical marketplace data presented in this paper.
- **Claim:** In AI-economics terms, tokenized funding plus distributed expertise could lower coordination costs and improve allocative efficiency of R&D capital, potentially reducing marginal cost per candidate explored when combined with AI-driven screening.
  **Evidence:** Conceptual economic argument and synthesis of theoretical mechanisms; no empirical calibration or modeling provided in the study.
- **Claim:** Privacy-enhanced DAOs using federated learning, secure multiparty computation, and differential privacy can allow sharing of sensitive health data while preserving privacy (proposed but not empirically tested in this paper).
  **Evidence:** Conceptual exploration of privacy-preserving technical methods and their applicability to DAO contexts; no implementation or empirical evaluation presented.
- **Claim:** Integrating AI for project triage, lead prioritization, and governance analytics is a promising future direction, but the paper reports no original empirical testing of these integrations.
  **Evidence:** Conceptual proposals and theoretical integration discussion; no empirical trials or pilot studies reported in the paper.
- **Claim:** Labor demand will shift toward interdisciplinary practitioners (materials scientists with ML skills and automation engineers), increasing returns to human capital at the ML–lab interface.
  **Evidence:** Workforce implication synthesized from technological trends described in the review; no labor-market data presented in the paper.
- **Claim:** Calibrated uncertainties reduce the risk of costly failed experiments and misallocated capital; regulators and funders should incentivize confidence-aware AI in high-stakes materials domains.
  **Evidence:** Policy recommendation based on surveyed literature on calibration and the practical costs of failed experiments; not supported by new empirical analysis in the paper.
- **Claim:** Investments that prioritize uncertainty quantification, interpretability, and integration with experimental capacity yield higher economic returns than marginal improvements in predictive accuracy alone.
  **Evidence:** Argument synthesizing technical bottlenecks and economic implications from reviewed studies; a recommendation rather than an empirically tested result within this paper.
- **Claim:** Open standardized datasets and shared robotic infrastructure (public or consortium models) can lower barriers to entry and spur broader innovation in materials discovery.
  **Evidence:** Policy and economic arguments in the review supported by literature on public goods and shared research infrastructure; no new empirical evidence provided here.
- **Claim:** Curated, standardized multimodal materials datasets (including computational and experimental measurements and synthesis metadata) are high-value assets that will generate platform effects and first-mover advantages for organizations that build them.
  **Evidence:** Economic and strategic reasoning synthesizing the implications of data value from reviewed materials-AI literature; no original economic data presented.
- **Claim:** Bayesian learning, ensemble methods, and calibration techniques (e.g., temperature scaling, conformal prediction) can provide better-calibrated uncertainty estimates for deep models in materials applications.
  **Evidence:** Surveyed uncertainty-quantification literature and methodological demonstrations in the materials/ML literature; no new empirical calibration studies presented in the review.
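As an illustration of the calibration techniques this claim names, here is a minimal split-conformal sketch. The simulated residuals, the `conformal_halfwidth` helper, and the chosen alpha are assumptions for demonstration, not material from the review.

```python
import numpy as np

# Minimal sketch of split conformal prediction for a regressor.
# Given absolute residuals on a held-out calibration set, a symmetric
# interval of this half-width covers new targets with finite-sample
# probability at least 1 - alpha (under exchangeability).

def conformal_halfwidth(cal_residuals, alpha=0.1):
    """Half-width of the (1 - alpha) conformal prediction interval."""
    n = len(cal_residuals)
    # Conformal correction: the ceil((n+1)(1-alpha))/n empirical quantile.
    q = np.ceil((n + 1) * (1 - alpha)) / n
    return np.quantile(np.abs(cal_residuals), min(q, 1.0))

rng = np.random.default_rng(0)
residuals = rng.normal(0.0, 1.0, size=500)   # stand-in calibration residuals
halfwidth = conformal_halfwidth(residuals, alpha=0.1)
# A 90% prediction interval for a new input x is then
# [f(x) - halfwidth, f(x) + halfwidth].
```

The appeal for materials screening is that the coverage guarantee is distribution-free, so it holds even when the deep model's own uncertainty estimates are poorly calibrated.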
- **Claim:** Economic assessments of ecological AI should go beyond model accuracy to measure conservation outcomes, cost-effectiveness, and policy impact; new metrics and impact-evaluation methods are important for funding decisions.
  **Evidence:** Evaluation-and-measurement recommendation in the paper based on limitations of benchmark-focused evaluation observed in the collection (methodological recommendation).
- **Claim:** There is an evolution from task-specific automation toward systems that incorporate ecological domain knowledge, robustness to ecological heterogeneity, and evaluation on applied conservation objectives.
  **Evidence:** Evolution-of-approach observation based on trends reported across the papers in the collection (comparative description of earlier vs. newer works).
- **Claim:** AI-adopting firms exhibit higher productivity and higher market value after adoption.
  **Evidence:** Estimates showing increases in productivity (e.g., TFP measures) and market-value measures (e.g., market capitalization or Tobin's Q) for adopters relative to nonadopters, using the stacked difference-in-differences design.
- **Claim:** Post-adoption patents include more claims (i.e., are broader/more detailed) for AI-adopting firms.
  **Evidence:** Patent-level analysis using the number of claims per patent as the outcome in the stacked difference-in-differences framework.
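The difference-in-differences logic behind these adoption estimates can be illustrated with simulated data. The sample sizes, common trend, and effect size below are invented for the sketch, and a single 2x2 comparison is a deliberate simplification of a stacked design with multiple adoption cohorts.

```python
import numpy as np

# Illustrative sketch (not the paper's estimation code) of the
# difference-in-differences comparison: adopters' pre/post change in an
# outcome, net of nonadopters' change over the same period.

rng = np.random.default_rng(1)
n = 1000
# Simulated log-productivity: a common time trend of +0.10 for everyone,
# plus a true adoption effect of +0.05 for adopters in the post period.
pre_treat  = rng.normal(0.00, 0.1, n)
post_treat = rng.normal(0.15, 0.1, n)   # trend 0.10 + effect 0.05
pre_ctrl   = rng.normal(0.00, 0.1, n)
post_ctrl  = rng.normal(0.10, 0.1, n)   # trend only

did = (post_treat.mean() - pre_treat.mean()) \
    - (post_ctrl.mean() - pre_ctrl.mean())
# 'did' recovers approximately the true +0.05 adoption effect, while a
# naive pre/post comparison of adopters alone would conflate it with
# the common trend.
```

A stacked design repeats this comparison for each adoption-year cohort against not-yet-treated controls and averages the cohort estimates.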
- **Claim:** Peer-driven digitalization matters not only for firm-level resilience but also for long-term sustainable competitiveness in manufacturing ecosystems.
  **Evidence:** Synthesis and implication drawn from empirical results (peer effects, mediators, and heterogeneity) using Chinese manufacturing A-share firm data from 2013–2022.
- **Claim:** The adoption of AI technologies offers a scalable, resilient strategy for modernizing water management and promoting agricultural sustainability in Iraq.
  **Evidence:** Authors' conclusion based on single-site field experiments, economic and sustainability analyses, and reported robustness in sensitivity analyses; the scalability claim is inferential and extends beyond the experimental site.
- **Claim:** Information Systems (IS) research is critical for achieving joint optimization of technical capabilities and social systems in the context of GenAI.
  **Evidence:** Authors' argumentative positioning based on the socio-technical interpretation of the review; a proposed role for IS scholarship rather than an empirical test within the review.
- **Claim:** Policy tools such as bans on the sale of certain sensitive data, fiduciary duties for data holders, privacy-by-default, and collective data governance (data trusts, regulated commons) are appropriate levers to limit harms from data commodification.
  **Evidence:** Prescriptive policy argument based on normative analysis and literature on governance alternatives; the recommendations are not evaluated using empirical policy-impact studies within the paper.
- **Claim:** Policy-relevant implication (extrapolated): diffusion of AI tools among small firms will likely follow social-network channels and be shaped by peer benchmarking, so aggregate incentives may underperform unless they leverage local networks and trusted intermediaries.
  **Evidence:** Inference and policy implication drawn from main empirical findings on the primacy of social networks and peer effects for entrepreneurial behavior; not directly measured in the dataset for AI-specific adoption.
- **Claim:** China exhibits strong long-run integration between core AI and AI-enhanced robotics and a significant contribution from universities and the public sector to patenting.
  **Evidence:** Country-level decomposition showing (a) a stronger statistical long-run relationship between Chinese core-AI and AI-enhanced-robotics patent series and (b) an actor-type decomposition of Chinese patent filings indicating relatively high shares from universities/public-sector actors (patents 1980–2019). Exact counts/shares are not provided in the summary.
- **Claim:** Policymakers should combine competition policy, data governance, retraining/redistribution measures, and targeted R&D/green-AI incentives to manage the transition and preserve broad-based demand.
  **Evidence:** Normative policy recommendation derived from the integrated theoretical framework and literature synthesis; not empirically validated in the paper.
- **Claim:** Economically, there will be demand for 'temporal-quality' products: neurotech and AI services that explicitly measure, preserve, or enhance experienced temporality (presence, flow, meaning), representing a distinct market segment.
  **Evidence:** Speculative market implication derived from conceptual argument and literature on consumer preferences; no market data or empirical demand studies provided.
- **Claim:** Industrial automation (industrial robots) can be an effective component of green development strategies when paired with finance and policy instruments.
  **Evidence:** Inference drawn from core empirical results: (1) IR reduces IWE; (2) effects are stronger with greater financial depth and policy support; the combined evidence suggests complementarity between automation, finance, and policy.
- **Claim:** Regulators must balance innovation with consumer protection by mandating model auditability, fairness testing, and interoperable data standards to prevent systemic and algorithmic risks.
  **Evidence:** Policy recommendation derived from a synthesis of algorithmic risk, model opacity, and fintech market dynamics; based on normative analysis and best-practice proposals rather than empirical testing.
- **Claim:** Policymakers and firms should prioritize upskilling, standards for model provenance and IP, liability frameworks for AI-generated code, and improved measurement to track AI-driven productivity changes.
  **Evidence:** Policy recommendations derived from identified risks, barriers, and implications in the literature review and practitioner survey; not an empirically tested intervention.
- **Claim:** DPS gives organizations with limited compute budgets a cost advantage for RL finetuning, potentially democratizing access to effective finetuning or shifting demand across cloud-compute products.
  **Evidence:** Economic implications discussed qualitatively by the authors based on reduced rollout requirements; this is a projection rather than an experimental result.
- **Claim:** AI-enabled analytics can increase firm-level decision value and productivity, improving capital allocation, speeding risk mitigation, and raising profitability in affected firms and sectors.
  **Evidence:** Economic implication argued by the paper using theoretical reasoning; no firm-level empirical estimates, sample sizes, or causal-identification strategies are reported (the paper suggests methods such as A/B tests or causal inference for future study).