Evidence (2215 claims)

- Adoption: 5126 claims
- Productivity: 4409 claims
- Governance: 4049 claims
- Human-AI Collaboration: 2954 claims
- Labor Markets: 2432 claims
- Org Design: 2273 claims
- Innovation: 2215 claims
- Skills & Training: 1902 claims
- Inequality: 1286 claims
Evidence Matrix
Claim counts by outcome category and direction of finding.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 369 | 105 | 58 | 432 | 972 |
| Governance & Regulation | 365 | 171 | 113 | 54 | 713 |
| Research Productivity | 229 | 95 | 33 | 294 | 655 |
| Organizational Efficiency | 354 | 82 | 58 | 34 | 531 |
| Technology Adoption Rate | 277 | 115 | 63 | 27 | 486 |
| Firm Productivity | 273 | 33 | 68 | 10 | 389 |
| AI Safety & Ethics | 112 | 177 | 43 | 24 | 358 |
| Output Quality | 228 | 61 | 23 | 25 | 337 |
| Market Structure | 105 | 118 | 81 | 14 | 323 |
| Decision Quality | 154 | 68 | 33 | 17 | 275 |
| Employment Level | 68 | 32 | 74 | 8 | 184 |
| Fiscal & Macroeconomic | 74 | 52 | 32 | 21 | 183 |
| Skill Acquisition | 85 | 31 | 38 | 9 | 163 |
| Firm Revenue | 96 | 30 | 22 | — | 148 |
| Innovation Output | 100 | 11 | 20 | 11 | 143 |
| Consumer Welfare | 66 | 29 | 35 | 7 | 137 |
| Regulatory Compliance | 51 | 61 | 13 | 3 | 128 |
| Inequality Measures | 24 | 66 | 31 | 4 | 125 |
| Task Allocation | 64 | 6 | 28 | 6 | 104 |
| Error Rate | 42 | 47 | 6 | — | 95 |
| Training Effectiveness | 55 | 12 | 10 | 16 | 93 |
| Worker Satisfaction | 42 | 32 | 11 | 6 | 91 |
| Task Completion Time | 71 | 5 | 3 | 1 | 80 |
| Wages & Compensation | 38 | 13 | 19 | 4 | 74 |
| Team Performance | 41 | 8 | 15 | 7 | 72 |
| Hiring & Recruitment | 39 | 4 | 6 | 3 | 52 |
| Automation Exposure | 17 | 15 | 9 | 5 | 46 |
| Job Displacement | 5 | 28 | 12 | — | 45 |
| Social Protection | 18 | 8 | 6 | 1 | 33 |
| Developer Productivity | 25 | 1 | 2 | 1 | 29 |
| Worker Turnover | 10 | 12 | — | 3 | 25 |
| Creative Output | 15 | 5 | 3 | 1 | 24 |
| Skill Obsolescence | 3 | 18 | 2 | — | 23 |
| Labor Share of Income | 7 | 4 | 9 | — | 20 |
Innovation
- **Claim:** RS modules (user model, ranking engine, evaluator) can be integrated as modular, plug-and-play components in existing robot architectures, augmenting LLM and RL modules.
  **Evidence:** Design proposal mapping RS components to robot pipeline stages; no integration experiments reported.
- **Claim:** Interpretability, fairness, and privacy-preserving methods (e.g., explainable recommendations, differential privacy, fairness-aware algorithms) are applicable and important for social-robot personalization.
  **Evidence:** Survey of algorithmic approaches in the RS and privacy/fairness literature; conceptual recommendation without empirical application in robots.
- **Claim:** Optimizing recommendations for diversity, novelty, and serendipity can help avoid echo chambers and repetitive interactions with social robots.
  **Evidence:** Argument based on RS objectives and prior RS findings on diversity/serendipity; no robot-specific empirical evidence provided.
- **Claim:** Multi-objective and constrained optimization techniques from RS can be used to balance engagement, well-being, fairness, privacy, and safety in social-robot behavior selection.
  **Evidence:** Conceptual proposal referencing the multi-objective/constrained recommendation literature; no empirical tests within robots included.
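The constrained, multi-objective selection described in this claim can be sketched as scalarized ranking with hard feasibility floors. This is a minimal illustration, not the paper's method; the objective names, weights, and thresholds are hypothetical.

```python
# Illustrative sketch of constrained, multi-objective behavior selection:
# maximize a weighted sum of objectives subject to hard safety/privacy floors.
# Candidate behaviors, weights, and thresholds are all synthetic.

def select_behavior(candidates, weights, min_safety=0.8, min_privacy=0.5):
    """Return the feasible candidate with the highest scalarized score."""
    feasible = [
        c for c in candidates
        if c["safety"] >= min_safety and c["privacy"] >= min_privacy
    ]
    if not feasible:
        return None  # a real system would fall back to a default safe behavior
    return max(feasible, key=lambda c: sum(w * c[k] for k, w in weights.items()))

candidates = [
    {"name": "tell_joke",  "engagement": 0.9, "wellbeing": 0.6, "safety": 0.9, "privacy": 0.9},
    {"name": "probe_mood", "engagement": 0.7, "wellbeing": 0.9, "safety": 0.9, "privacy": 0.4},
    {"name": "loud_alarm", "engagement": 0.8, "wellbeing": 0.2, "safety": 0.5, "privacy": 0.9},
]
weights = {"engagement": 0.5, "wellbeing": 0.5}
best = select_behavior(candidates, weights)
```

The hard constraints encode the "ethically constrained" part: a behavior that scores well on engagement but violates a safety or privacy floor is never selected.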
- **Claim:** Latent-factor models, embeddings, and hierarchical user models from RS can be used to capture long- and short-term preferences in social robots' user models.
  **Evidence:** Methodological proposal drawing on RS modeling techniques; no experimental validation in robotic systems provided.
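As a toy illustration of the latent-factor idea, user and activity preferences can be learned as low-dimensional embeddings whose dot products approximate observed feedback. All numbers here (ratings, rank, learning rate) are synthetic, not drawn from the reviewed literature.

```python
import numpy as np

# Toy latent-factor sketch: learn 2-D user and activity embeddings from a tiny
# synthetic feedback matrix by full-batch gradient descent on observed entries.
rng = np.random.default_rng(0)
R = np.array([[5.0, 3.0, 0.0],
              [4.0, 0.0, 1.0],
              [1.0, 1.0, 5.0]])       # user x activity feedback; 0 = unobserved
mask = R > 0
k, lr, reg = 2, 0.01, 0.01
U = 0.1 * rng.standard_normal((3, k))   # user latent factors
V = 0.1 * rng.standard_normal((3, k))   # activity latent factors

for _ in range(5000):
    E = (R - U @ V.T) * mask            # error on observed entries only
    U += lr * (E @ V - reg * U)
    V += lr * (E.T @ U - reg * V)

pred = U @ V.T                          # dot products approximate preferences
```

The unobserved cells of `pred` are the model's preference estimates, which is how such a user model would rank untried robot behaviors.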
- **Claim:** Integrating recommender-system techniques across the robot pipeline (user modeling, ranking, contextualization, evaluation) can capture long-term, short-term, and fine-grained user preferences and enable proactive, ethically constrained action selection.
  **Evidence:** Conceptual framework and design proposal synthesizing the recommender-systems (RS) and human–robot interaction (HRI) literature; no novel empirical experiments or sample size reported.
- **Claim:** The main empirical findings are robust to alternative model specifications and checks.
  **Evidence:** Paper reports robustness checks (alternative control sets, specifications, and sensitivity analyses) in which the negative IR–IWE relationship remains qualitatively unchanged.
- **Claim:** Recommendation: support capacity building (digital literacy, agronomic knowledge, and extension systems) to increase adoption and equitable benefits.
  **Evidence:** Authors' recommendation derived from recurring findings on human-capacity constraints in the reviewed studies.
- **Claim:** AI interventions supported economic transformation in some contexts by improving market access and enabling reallocation toward higher-value tasks.
  **Evidence:** Findings from selected studies and institutional reports documenting improved market linkages, price discovery, and shifts in farm-household activities.
- **Claim:** AI applications contributed to environmental resilience via water and fertiliser savings and earlier pest detection in some studies.
  **Evidence:** Reported resource-use metrics and earlier-detection outcomes in several reviewed studies and case reports, synthesized thematically.
- **Claim:** AI-enabled interventions produced technical efficiency gains through better input targeting and reduced waste.
  **Evidence:** Studies in the review reporting improvements in input targeting (e.g., fertiliser/pesticide application) and reductions in waste; aggregated in the thematic synthesis.
- **Claim:** AI deployment has produced measurable supply-chain efficiency improvements and better market integration in reviewed cases.
  **Evidence:** Synthesis of studies and institutional reports with metrics or qualitative evidence on logistics, aggregation, price discovery, and market linkages.
- **Claim:** AI interventions are associated with input cost reductions of up to ~25%.
  **Evidence:** Comparative effect-size synthesis across reviewed studies reporting input-cost outcomes (2020–2025).
- **Claim:** Across reviewed studies (2020–2025), AI interventions are associated with yield gains of roughly 12–45%.
  **Evidence:** Comparative effect-size synthesis of reported impacts across the reviewed studies (>60 articles/reports) that reported yield outcomes.
- **Claim:** AI-powered digital agriculture in developing contexts, especially Sub-Saharan Africa, can materially improve productivity, sustainability, and rural livelihoods.
  **Evidence:** Structured literature review and thematic synthesis of >60 peer-reviewed articles and institutional reports (2020–2025) focused primarily on Sub-Saharan Africa and other developing contexts.
- **Claim:** Standards and open interoperability reduce vendor lock-in and transaction costs, widening market access and competition for AI services built on DT data.
  **Evidence:** Economic reasoning and thematic findings from the literature linking interoperability to reduced transaction costs and broader market participation.
- **Claim:** Public procurement and large asset owners can act as demand-pulls to de-risk early investment and help set standards for DT adoption.
  **Evidence:** Policy recommendation and examples from the literature arguing that large buyers can catalyse adoption; based on case/policy studies in the review.
- **Claim:** Better data continuity across lifecycle phases reduces model-training friction and increases the value of historical data for forecasting and causal analysis.
  **Evidence:** Conceptual argument supported by case evidence in the review showing that fragmented data reduces reusability; the authors infer benefits for AI training and forecasting.
- **Claim:** DTs generate continuous, high-resolution operational data (IoT telemetry, usage patterns, maintenance logs) that can substantially improve AI models for predictive maintenance, scheduling, energy optimisation, and logistics.
  **Evidence:** Logical implication plus examples from pilot studies in the review showing richer telemetry and operational datasets produced by DT pilots; benefits for AI model inputs are argued rather than measured.
- **Claim:** DTs extend BIM in three core ways: (1) bidirectional, automated physical↔digital data exchange; (2) integration of heterogeneous, real-time sources (IoT, operational systems); (3) lifecycle continuity preserving data across handovers.
  **Evidence:** Conceptual synthesis across the reviewed literature (conceptual papers, case studies, pilots) identifying functional distinctions between DT and BIM.
- **Claim:** Digital twin (DT) technology can materially improve construction lifecycle performance beyond what Building Information Modelling (BIM) delivers.
  **Evidence:** Synthesis of 160 reviewed studies, including conceptual papers, case studies, and pilot deployments reporting performance improvements attributed to DT implementations.
- **Claim:** AI/ML-based credit scoring and alternative-data underwriting reduce information asymmetries, lowering search and monitoring costs and expanding effective credit supply to previously rejected MSMEs and startups.
  **Evidence:** Analytical argument supported by illustrative case examples and the literature on machine-learning underwriting; the paper notes limited causal identification and the time-sensitivity of fintech products.
- **Claim:** Government action (digital ID, payment rails, credit guarantees, standards, consumer protection) is vital to enable beneficial outcomes from digital finance for MSMEs.
  **Evidence:** Policy synthesis and comparative evaluation recommending government infrastructure and regulatory measures; conclusion based on institutional analysis rather than experimental evidence.
- **Claim:** Case studies indicate FinTech platforms have meaningfully lowered rejection rates and loan turnaround times for underbanked MSMEs, accelerating working-capital access.
  **Evidence:** Illustrative case studies of FinTech deployments in India reporting lower rejection rates and faster approvals; the paper explicitly notes these cases are illustrative, not nationally representative, and do not establish causal identification.
- **Claim:** Supply-chain financing can meaningfully unlock working capital for MSMEs by leveraging buyer creditworthiness, yielding high impact for MSMEs embedded in modern supply chains.
  **Evidence:** Comparative evaluation and illustrative case studies highlighting supply-chain finance deployments; evidence is demonstrative, not nationally representative or causally identified.
- **Claim:** Optimal financing outcomes generally come from hybrid approaches that combine formal banking credibility and policy support with FinTech speed and data-driven underwriting.
  **Evidence:** Comparative evaluation and policy synthesis recommending co-lending, credit guarantees, and partnerships (banks as liquidity providers combined with FinTech underwriting); based on qualitative tradeoff analysis rather than experimental or causal evidence.
- **Claim:** Compared with traditional bank loans and government schemes, contemporary financing models tend to be faster, more flexible, and more scalable for smaller firms.
  **Evidence:** Comparative qualitative evaluation across five variables plus illustrative case studies showing reduced loan turnaround times and improved accessibility for small firms; no nationally representative sample or causal inference provided.
- **Claim:** Digital technologies, especially FinTech lending platforms, alternative debt/equity products, supply-chain finance, crowdfunding, and emerging blockchain applications, are materially expanding timely access to capital for Indian MSMEs and startups.
  **Evidence:** Multi-criteria comparative evaluation (accessibility, finance cost, flexibility, risk, scalability) plus illustrative case studies of FinTech and alternative-financing deployments in India reporting faster turnaround and inclusion effects; the paper notes the case evidence is illustrative rather than nationally representative and lacks quantitative causal identification.
- **Claim:** Proprietary experimental datasets and curated metagenomic sequences become valuable intellectual assets that can differentiate commercial offerings.
  **Evidence:** The paper lists 'Data as an economic asset' and highlights the value of proprietary datasets and curated metagenomes; no market valuation data are included.
- **Claim:** Faster, cheaper access to structural hypotheses can shorten drug and enzyme discovery cycles, raising R&D productivity and lowering the marginal cost of early-stage screening.
  **Evidence:** Argued as an implication under 'Productivity and R&D acceleration'; presented as an economic consequence rather than demonstrated with empirical cost- or time-saving data in the text.
- **Claim:** Practical applications are already emerging, including accelerating target-structure availability for small-molecule and biologics design, guiding enzyme redesign, and interpreting disease mutations.
  **Evidence:** The paper lists these application areas as emerging uses of AI-predicted structures; evidence is presented as examples and implications rather than empirical case studies within the text.
- **Claim:** Template- and MSA-informed architectures (e.g., RoseTTAFold and the AlphaFold family) deliver near-experimental accuracy for many proteins.
  **Evidence:** The paper names these architectures and links their inputs (MSAs, templates) to high accuracy against experimental structures (PDB); specific evaluation datasets, protein counts, and error metrics are not enumerated in the text.
- **Claim:** Modern AI systems (e.g., AlphaFold variants, RoseTTAFold, and single-sequence models like ESMFold) can approach or reach near-experimental accuracy while greatly increasing speed and scalability.
  **Evidence:** The paper cites specific models (AlphaFold family, RoseTTAFold, ESMFold) and describes benchmarking against structural ground truth (PDB / curated experimental structures) and large-scale pretraining; exact benchmark values and sample sizes are not specified in the text.
- **Claim:** New economic metrics are needed for VR (value of behavioral data streams, cost per reduction in harm, ROI on security investments, welfare metrics capturing trust and adoption).
  **Evidence:** Authors' recommendations based on identified gaps in the literature and the comparative review of 31 studies; proposed as agenda items rather than empirically developed metrics.
- **Claim:** VR generates high-value behavioral and biometric datasets for AI personalization, training, and analytics; firms that extract this data can gain competitive advantages, creating incentives to centralize collection unless counteracted by policy or market forces.
  **Evidence:** Economic implications inferred by the authors from the literature synthesis and standard industrial-organization logic; not supported by original empirical market data in the paper.
- **Claim:** Regulatory standards, industry best practices, and ethics-by-design approaches are needed; interoperable policy frameworks are recommended to govern VR security and privacy.
  **Evidence:** Policy and governance recommendations synthesized from multiple reviewed studies and the authors' integration; presented as prescriptive guidance rather than empirically tested interventions.
- **Claim:** An effective defense mix for VR combines technical controls (secure boot, attestation, encrypted communications), AI tools for anomaly detection and policy enforcement, and human-centered design (transparency, consent, usable controls).
  **Evidence:** Cross-study synthesis showing these categories recur as recommended controls in the 31 reviewed papers; the authors propose combining them in TVR-Sec. No deployment or performance metrics provided.
- **Claim:** Socio-Behavioral Safety measures (moderation, design constraints, psycho-social safeguards) are necessary to prevent harassment, persuasion, addictive interfaces, and other psychological harms in shared virtual spaces.
  **Evidence:** Qualitative synthesis of social-behavioral harms and proposed mitigations reported across the literature review (31 studies); comparative evaluation of socio-technical controls.
- **Claim:** User Privacy in VR requires managing highly sensitive behavioral and biometric traces with privacy-preserving ML approaches (e.g., federated learning, differential privacy), consent mechanisms, and data minimization.
  **Evidence:** Repeated recommendations across the reviewed studies; the authors synthesized privacy-preserving technical approaches and governance mechanisms from the 31-study corpus. No primary experiments demonstrating efficacy are provided.
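One of the privacy-preserving techniques named in this claim, differential privacy, can be illustrated with the Laplace mechanism applied to an aggregate over behavioral traces. The metric (minutes per VR session), bounds, and epsilon below are hypothetical choices for the sketch, not values from the reviewed studies.

```python
import math
import random

# Illustrative Laplace mechanism: release an epsilon-differentially-private
# mean of a bounded per-user metric. Clipping bounds the contribution of any
# single user, which bounds the sensitivity of the mean.

def dp_mean(values, lower, upper, epsilon, rng=random):
    """Epsilon-DP mean of values clipped to [lower, upper], via Laplace noise."""
    clipped = [min(max(v, lower), upper) for v in values]
    true_mean = sum(clipped) / len(clipped)
    sensitivity = (upper - lower) / len(clipped)  # max effect of one user on the mean
    scale = sensitivity / epsilon
    u = rng.random() - 0.5                        # uniform in [-0.5, 0.5)
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_mean + noise

random.seed(7)
sessions = [22.0, 35.0, 18.0, 41.0, 27.0]   # synthetic minutes per user
private_mean = dp_mean(sessions, lower=0.0, upper=60.0, epsilon=1.0)
```

The released value is unbiased but noisy; averaging many independent releases would converge to the true clipped mean, which is why the privacy budget must be tracked across queries.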
- **Claim:** System Integrity defenses should cover hardware, firmware, sensors, and networks to protect against spoofing, device tampering, malware, and supply-chain attacks.
  **Evidence:** Aggregated technical recommendations from the literature corpus (31 studies) and the authors' mapping of integrity threats to controls (secure boot, attestation, encrypted communications). No empirical testing of these controls in the paper.
- **Claim:** The Three-Layer VR Security Framework (TVR-Sec) integrates System Integrity, User Privacy, and Socio-Behavioral Safety into an adaptive, multidimensional defense architecture for VR systems.
  **Evidence:** Conceptual synthesis developed by the authors from a comparative literature review of 31 peer-reviewed studies (2023–2025); the framework maps identified vulnerabilities to technical, AI, and human-centered controls. No empirical validation or deployment testing reported.
- **Claim:** A coordinated Omnibus that clarifies interactions with the DSA and establishes consistent AI-focused enforcement capacity can reduce regulatory frictions, lower compliance costs, and better align incentives for responsible AI deployment.
  **Evidence:** Policy recommendation based on comparative mapping and scenario analysis; qualitative argumentation rather than empirical testing.
- **Claim:** The iterative, human-in-the-loop agent workflow enables evaluation and refinement of algorithmic clusters into logically consistent, theory-ready categories.
  **Evidence:** Describes an iterative loop in which agents evaluate clusters, align semantics, and refine outputs; qualitative assessments are reported, but no formal user-study metrics are included in the summary.
- **Claim:** DAFT via LoRA reshapes semantic vector geometry to highlight domain-relevant distinctions without full model retraining.
  **Evidence:** Methodological claim: LoRA fine-tuning is applied to foundation embeddings to adjust the vector space; no geometric analyses or quantitative illustrations are provided in the summary.
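The "without full model retraining" part of this claim is the defining arithmetic of LoRA: the pretrained weight stays frozen and only a low-rank additive update is trained. A minimal numeric sketch (shapes, rank, and scaling follow the usual LoRA convention, but all values are synthetic and unrelated to THETA's actual models):

```python
import numpy as np

# Minimal LoRA arithmetic: the adapted projection is W + (alpha / r) * B @ A,
# where W (d_out x d_in) is frozen and only A (r x d_in) and B (d_out x r)
# are trained -- 2*r*(d_in + d_out)/... far fewer parameters than W itself.
rng = np.random.default_rng(0)
d_in, d_out, r, alpha = 8, 6, 2, 4

W = rng.standard_normal((d_out, d_in))      # frozen pretrained projection
A = 0.01 * rng.standard_normal((r, d_in))   # trainable down-projection, small init
B = np.zeros((d_out, r))                    # trainable up-projection, zero init

def forward(x, B):
    """Apply the adapted projection; with B = 0 this equals the base model."""
    return x @ (W + (alpha / r) * B @ A).T

x = rng.standard_normal((1, d_in))
base_out = x @ W.T
adapted_out = forward(x, np.ones((d_out, r)))  # any nonzero B shifts the geometry
```

Because B is initialized to zero, the adapted model starts exactly at the pretrained one; training then moves embeddings only along the rank-r directions spanned by B @ A, which is the sense in which the vector geometry is "reshaped" rather than relearned.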
- **Claim:** Across six domains, THETA outperforms LDA, ETM, and CTM on measures of coherence and domain interpretability.
  **Evidence:** Reported comparative experiments across six domains using coherence metrics and qualitative/human interpretability assessments against LDA, ETM, and CTM; the summary does not provide effect sizes, statistical tests, or per-domain breakdowns.
- **Claim:** THETA substantially improves the interpretability and domain-specific coherence of topic/cluster outputs on very large social-text corpora.
  **Evidence:** Reported experiments comparing THETA to traditional topic models (LDA, ETM, CTM) across six domains; evaluation reportedly used topic-coherence metrics and human-in-the-loop interpretability assessments and qualitative comparisons (no numeric results provided in the summary).
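Topic-coherence metrics of the kind cited here are commonly computed as NPMI over document co-occurrence of a topic's top words (the summary does not say which variant THETA used). A self-contained sketch on a toy corpus; the mini-corpus and topics are synthetic:

```python
import math
from itertools import combinations

# Toy NPMI topic coherence: score a topic's top words by their normalized
# pointwise mutual information, estimated from document-level co-occurrence.

def npmi_coherence(top_words, docs, eps=1e-12):
    n = len(docs)
    def p(ws):  # fraction of docs containing all words in ws
        return sum(all(w in d for w in ws) for d in docs) / n
    scores = []
    for w1, w2 in combinations(top_words, 2):
        p1, p2, p12 = p([w1]), p([w2]), p([w1, w2])
        if p12 == 0:
            scores.append(-1.0)   # never co-occur: minimum NPMI
            continue
        pmi = math.log(p12 / (p1 * p2))
        scores.append(pmi / (-math.log(p12 + eps)))
    return sum(scores) / len(scores)

docs = [{"model", "training", "loss"},
        {"model", "training", "gpu"},
        {"market", "price", "demand"},
        {"market", "price", "supply"}]
coherent = npmi_coherence(["model", "training"], docs)    # always co-occur
incoherent = npmi_coherence(["model", "price"], docs)     # never co-occur
```

NPMI ranges from -1 (words never co-occur) to +1 (words always co-occur), so a model whose top words track a real theme scores near the top of the range.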
- **Claim:** Lowering fixed costs via shared resources can enable more entrants and niche innovators (e.g., specialized clinical apps).
  **Evidence:** Economic implications and participant assertions from breakout sessions and the plenary at the NSF workshop (Sept 26–27, 2024).
- **Claim:** Public investment in shared data and compute as nonrival public goods will reduce duplication, lower entry barriers, and increase total R&D productivity.
  **Evidence:** Workshop implications for AI economics articulated by participants and the authors as a policy recommendation; rationale stated in the summary document (NSF workshop, Sept 26–27, 2024).
- **Claim:** De-risk pathways from lab to clinic via reproducible benchmarks, continuous monitoring, and cross-sector collaborations (academia, industry, clinicians, regulators).
  **Evidence:** Translation-focused recommendations and roadmap produced by consensus at the NSF workshop (Sept 26–27, 2024).
- **Claim:** Enable safe, accountable, and resilient platforms (including virtual–physical healthcare ecosystems) to reduce translational risk.
  **Evidence:** Workshop recommendations addressing safety, resilience, and virtual–physical ecosystems from cross-disciplinary discussion at the NSF workshop (Sept 26–27, 2024).