The Commonplace

Evidence (7395 claims)

Categories: Adoption (7395) · Productivity (6507) · Governance (5921) · Human-AI Collaboration (5192) · Org Design (3497) · Innovation (3492) · Labor Markets (3231) · Skills & Training (2608) · Inequality (1842)

Evidence Matrix

Claim counts by outcome category and direction of finding.

| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 609 | 159 | 77 | 738 | 1617 |
| Governance & Regulation | 671 | 334 | 160 | 99 | 1285 |
| Organizational Efficiency | 626 | 147 | 105 | 70 | 955 |
| Technology Adoption Rate | 502 | 176 | 98 | 78 | 861 |
| Research Productivity | 349 | 109 | 48 | 322 | 838 |
| Output Quality | 391 | 121 | 45 | 40 | 597 |
| Firm Productivity | 385 | 46 | 85 | 17 | 539 |
| Decision Quality | 277 | 145 | 63 | 34 | 526 |
| AI Safety & Ethics | 189 | 244 | 59 | 30 | 526 |
| Market Structure | 152 | 154 | 109 | 20 | 440 |
| Task Allocation | 158 | 50 | 56 | 26 | 295 |
| Innovation Output | 178 | 23 | 38 | 17 | 257 |
| Skill Acquisition | 137 | 52 | 50 | 13 | 252 |
| Fiscal & Macroeconomic | 120 | 64 | 38 | 23 | 252 |
| Employment Level | 93 | 46 | 96 | 12 | 249 |
| Firm Revenue | 130 | 43 | 26 | 3 | 202 |
| Consumer Welfare | 99 | 51 | 40 | 11 | 201 |
| Inequality Measures | 36 | 106 | 40 | 6 | 188 |
| Task Completion Time | 134 | 18 | 6 | 5 | 163 |
| Worker Satisfaction | 79 | 54 | 16 | 11 | 160 |
| Error Rate | 64 | 79 | 8 | 1 | 152 |
| Regulatory Compliance | 69 | 66 | 14 | 3 | 152 |
| Training Effectiveness | 82 | 16 | 13 | 18 | 131 |
| Wages & Compensation | 70 | 25 | 22 | 6 | 123 |
| Team Performance | 74 | 16 | 21 | 9 | 121 |
| Automation Exposure | 41 | 48 | 19 | 9 | 120 |
| Job Displacement | 11 | 71 | 16 | 1 | 99 |
| Developer Productivity | 71 | 14 | 9 | 3 | 98 |
| Hiring & Recruitment | 49 | 7 | 8 | 3 | 67 |
| Social Protection | 26 | 14 | 8 | 2 | 50 |
| Creative Output | 26 | 14 | 6 | 2 | 49 |
| Skill Obsolescence | 5 | 37 | 5 | 1 | 48 |
| Labor Share of Income | 12 | 13 | 12 | | 37 |
| Worker Turnover | 11 | 12 | 3 | | 26 |
| Industry | 1 | | | | 1 |
Active filter: Adoption
Digital technologies — especially FinTech lending platforms, alternative debt/equity products, supply‑chain finance, crowdfunding, and emerging blockchain applications — are materially expanding timely access to capital for Indian MSMEs and startups.
Multi‑criteria comparative evaluation (accessibility, finance cost, flexibility, risk, scalability) plus illustrative case studies of FinTech and alternative financing deployments in India that report faster turnaround and inclusion effects. The paper notes case evidence is illustrative rather than nationally representative and lacks quantitative causal identification.
medium positive Traditional vs. contemporary financing models for MSMEs and ... timely access to capital (availability and speed of financing for MSMEs/startups...
Regulating algorithmic transparency, data practices, and truthful sustainability claims is important to preserve digital trust and efficient market outcomes.
Policy recommendations and economic reasoning presented in the paper; grounded in literature on algorithmic governance and consumer trust; not empirically validated within the paper.
medium positive Sustainable Marketing Framework for Strengthening Consumer T... digital trust; market efficiency; regulatory compliance
Network effects from social proof (reviews, UGC) can create winner-takes-most dynamics, advantaging destinations with stronger digital signals and creating visibility frictions for small/emerging destinations.
Theoretical argument drawing on platform/network effects literature and applied to tourism/social proof; paper cites social-proof constructs and suggests measurement via platform data.
medium positive Sustainable Marketing Framework for Strengthening Consumer T... visibility; market concentration; destination attractiveness
Proprietary experimental datasets and curated metagenomic sequences become valuable intellectual assets that can differentiate commercial offerings.
Paper lists 'Data as an economic asset' and highlights the value of proprietary datasets and curated metagenomes; no market valuation data are included.
medium positive Protein structure prediction powered by artificial intellige... commercial value attributed to proprietary sequence/structure datasets and their...
Faster, cheaper access to structural hypotheses can shorten drug and enzyme discovery cycles, raising R&D productivity and lowering marginal costs of early‑stage screening.
Paper argues this as an implication under 'Productivity and R&D acceleration'; it is presented as an economic consequence rather than demonstrated with empirical cost- or time-saving data in the text.
medium positive Protein structure prediction powered by artificial intellige... duration and cost of early‑stage drug/enzyme discovery cycles and marginal cost ...
Practical applications are already emerging, including accelerating target structure availability for small‑molecule and biologics design, guiding enzyme redesign, and interpreting disease mutations.
Paper lists these application areas as emerging uses of AI‑predicted structures; evidence is presented as examples and implications rather than empirical case studies within the text.
medium positive Protein structure prediction powered by artificial intellige... availability of structural hypotheses for drug/biology design, utility in enzyme...
Template‑and‑MSA informed architectures (e.g., RoseTTAFold and AlphaFold family) deliver near‑experimental accuracy for many proteins.
Paper names these architectures and links their inputs (MSAs, templates) to high accuracy against experimental structures (PDB); specific evaluation datasets, protein counts, or error metrics are not enumerated in the text.
medium positive Protein structure prediction powered by artificial intellige... fraction of proteins for which prediction accuracy is near experimental (structu...
Modern AI systems (e.g., AlphaFold variants, RoseTTAFold, single‑sequence models like ESMFold) can approach or reach near‑experimental accuracy while greatly increasing speed and scalability.
Paper cites specific models (AlphaFold family, RoseTTAFold, ESMFold) and describes benchmarking against structural ground truth (PDB / curated experimental structures) and large‑scale pretraining; exact benchmark values or sample sizes are not specified in the text.
medium positive Protein structure prediction powered by artificial intellige... structure prediction accuracy (compared to experimental structures) and inferenc...
New economic metrics are needed for VR (value of behavioral data streams, cost per reduction in harm, ROI on security investments, welfare metrics capturing trust and adoption).
Authors' recommendations based on identified gaps in the literature and the comparative review of 31 studies; proposed as agenda items rather than empirically developed metrics.
medium positive Securing Virtual Reality: Threat Models, Vulnerabilities, an... availability and use of new economic metrics for VR security and privacy (recomm...
VR generates high‑value behavioral and biometric datasets for AI personalization, training, and analytics; firms that extract this data can gain competitive advantages, creating incentives to centralize collection unless counteracted by policy or market forces.
Economic implications inferred by the authors from the literature synthesis and standard industrial‑organization logic; not supported by original empirical market data in the paper.
medium positive Securing Virtual Reality: Threat Models, Vulnerabilities, an... incentives for data centralization and resulting competitive advantage (conceptu...
There is a need for regulatory standards, industry best practices, and ethics‑by‑design approaches; interoperable policy frameworks are recommended to govern VR security and privacy.
Policy and governance recommendations synthesized from multiple reviewed studies and the authors' integration; presented as prescriptive guidance rather than empirically tested interventions.
medium positive Securing Virtual Reality: Threat Models, Vulnerabilities, an... adoption of regulatory/standards frameworks and their expected effect on privacy...
An effective defense mix for VR combines technical controls (secure boot, attestation, encrypted communications), AI tools for anomaly detection and policy enforcement, and human‑centered design (transparency, consent, usable controls).
Cross‑study synthesis showing these categories recur as recommended controls in the 31 reviewed papers; authors propose combining them in TVR‑Sec. No deployment or performance metrics provided.
medium positive Securing Virtual Reality: Threat Models, Vulnerabilities, an... overall defense effectiveness from combined controls (theoretical/proposed)
Socio‑Behavioral Safety measures (moderation, design constraints, psycho‑social safeguards) are necessary to prevent harassment, persuasion, addictive interfaces, and other psychological harms in shared virtual spaces.
Qualitative synthesis of social‑behavioral harms and proposed mitigations reported across the literature review (31 studies); comparative evaluation of socio‑technical controls.
medium positive Securing Virtual Reality: Threat Models, Vulnerabilities, an... incidence or severity of harassment/manipulation/psychological harms (identified...
User Privacy in VR requires managing highly sensitive behavioral and biometric traces with privacy‑preserving ML approaches (e.g., federated learning, differential privacy), consent mechanisms, and data minimization.
Repeated recommendations across the reviewed studies; authors synthesized privacy‑preserving technical approaches and governance mechanisms from the 31‑study corpus. No primary experiments demonstrating efficacy provided.
medium positive Securing Virtual Reality: Threat Models, Vulnerabilities, an... reduction in privacy risk for behavioral/biometric data (proposed, not empirical...
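The privacy-preserving techniques named above (differential privacy, data minimization) can be illustrated with a minimal sketch: releasing the mean of a sensitive VR behavioral trace via the Laplace mechanism after clipping. The gaze-dwell data, bounds, and epsilon below are hypothetical; this is a toy illustration, not a pipeline from any of the reviewed studies.

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_mean(values, lo, hi, epsilon):
    """Differentially private mean via the Laplace mechanism: clip each
    reading to [lo, hi] (a simple form of data minimization), then add
    noise scaled to the sensitivity of the mean."""
    clipped = np.clip(values, lo, hi)
    sensitivity = (hi - lo) / len(clipped)  # one user's reading moves the mean by at most this
    noise = rng.laplace(0.0, sensitivity / epsilon)
    return clipped.mean() + noise

# Hypothetical per-user gaze-dwell times (seconds) on an object in a VR scene.
dwell = rng.normal(1.2, 0.4, 500)
released = dp_mean(dwell, lo=0.0, hi=3.0, epsilon=1.0)
print(f"true mean: {dwell.clip(0, 3).mean():.3f}  DP release: {released:.3f}")
```

With 500 users the noise scale is tiny, so the released statistic stays useful while bounding any single user's influence.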
System Integrity defenses should cover hardware, firmware, sensors, and networks to protect against spoofing, device tampering, malware, and supply‑chain attacks.
Aggregated technical recommendations from the literature corpus (31 studies) and the authors' mapping of integrity threats to controls (secure boot, attestation, encrypted communications). No empirical testing of these controls in the paper.
medium positive Securing Virtual Reality: Threat Models, Vulnerabilities, an... coverage of integrity‑related threat mitigation (conceptual)
The Three‑Layer VR Security Framework (TVR‑Sec) integrates System Integrity, User Privacy, and Socio‑Behavioral Safety into an adaptive, multidimensional defense architecture for VR systems.
Conceptual synthesis developed by the authors from a comparative literature review of 31 peer‑reviewed studies (2023–2025); framework created by mapping identified vulnerabilities to technical, AI, and human‑centered controls. No empirical validation or deployment testing reported.
medium positive Securing Virtual Reality: Threat Models, Vulnerabilities, an... proposed comprehensiveness/coverage of VR security defenses (conceptual architec...
Policy and platform design choices (e.g., provenance metadata, detection/disclosure of AI-generated content, monetization rule alignment) can reinforce or mitigate harms from GenAI-driven creator economies.
Policy recommendations and implications drawn from the qualitative findings across the 377-video sample and normative reasoning; not empirically tested.
medium positive Monetizing Generative AI: YouTubers' Collective Knowledge on... potential mitigation or amplification of harms via platform and policy intervent...
Participants systematically over-bid for privacy-disclosure labels: they were willing to pay more for a privacy-disclosure label than its objective value.
In the experiment (N = 610) participants submitted willingness-to-pay bids for privacy-disclosure labels; observed bids exceeded the objective/reference value, indicating overpayment for transparency information.
medium positive The Data-Dollars Tradeoff: Privacy Harms vs. Economic Risk i... Willingness-to-pay / bidding amounts for privacy-disclosure labels
Policy interventions that raise the reinstatement rate — for example, compensation/transfers to translate AI gains into broad-based purchasing power, faster/stronger fiscal support or automatic stabilizers — can prevent the explosive feedback and stabilize demand.
Model experiments and sensitivity analysis showing that increasing the reinstatement elasticity or direct transfers moves the system from explosive to convergent parameter regions in the calibrated phase-space.
medium positive Abundant Intelligence and Deficient Demand: A Macro-Financia... reinstatement rate, aggregate demand, avoidance of explosive crisis (regime outc...
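The regime behavior described above (explosive vs. convergent demand depending on the reinstatement rate) can be caricatured with a one-line linear difference equation. This toy is not the paper's calibrated model; `alpha` (displacement feedback) and `rho` (reinstatement rate) are hypothetical parameters chosen only to show the regime switch.

```python
def simulate(alpha, rho, d0=1.0, steps=50):
    """Toy demand-gap dynamics: d_{t+1} = (alpha - rho) * d_t.
    The system explodes when |alpha - rho| > 1 and converges when < 1."""
    d = d0
    for _ in range(steps):
        d = (alpha - rho) * d
    return d

explosive = simulate(alpha=1.3, rho=0.1)   # |1.2| > 1 -> divergence
stabilized = simulate(alpha=1.3, rho=0.6)  # |0.7| < 1 -> convergence to 0
print(f"low reinstatement:  |d_50| = {abs(explosive):.1f}")
print(f"high reinstatement: |d_50| = {abs(stabilized):.6f}")
```

Raising `rho` past the critical value `alpha - 1` is the caricature of the claim: policy that lifts the reinstatement rate moves the system from the explosive to the convergent region.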
FutureBoosting generalizes across multiple real-world electricity markets and forecast horizons.
Empirical results reported across 'multiple real-world electricity markets' and several forecasting horizons to capture diverse volatility and regime behavior (details on exact markets/horizons are reported in the experiments section of the paper).
medium positive Regression Models Meet Foundation Models: A Hybrid-AI Approa... MAE (and other error metrics) across different market datasets and horizons
The approach preserves the interpretability of downstream regression models while injecting temporal context.
Use of interpretable regression models (e.g., gradient-boosted decision trees) and XAI analyses (SHAP/feature importance) reported in the paper demonstrating interpretability of feature contributions.
medium positive Regression Models Meet Foundation Models: A Hybrid-AI Approa... Model interpretability (qualitative; feature-level explanations via XAI)
Freezing the TSFM (no joint fine-tuning) makes the framework lightweight and plug-and-play, lowering computational cost relative to joint training.
Architectural design: two-stage pipeline with a frozen TSFM used only to generate forecasted features; paper asserts ability to leverage pretrained TSFMs without end-to-end retraining. (No detailed compute-cost benchmarks given in the summary.)
medium positive Regression Models Meet Foundation Models: A Hybrid-AI Approa... Computational/deployment cost (qualitative claim about lower cost and ease of in...
MAE reductions exceed 30% in many cases when using FutureBoosting.
Reported quantitative results in the paper showing relative MAE reductions (paper text: 'reductions in Mean Absolute Error (MAE) exceeding 30% in many cases'); based on experiments across multiple datasets/horizons.
medium positive Regression Models Meet Foundation Models: A Hybrid-AI Approa... Relative reduction in Mean Absolute Error (percent)
FutureBoosting consistently outperforms state-of-the-art TSFMs and regression baselines.
Head-to-head experiments in the paper comparing the two-stage FutureBoosting pipeline to standalone TSFM models and common regression baselines (e.g., gradient-boosted trees) across multiple markets and horizons under rolling-origin evaluation.
medium positive Regression Models Meet Foundation Models: A Hybrid-AI Approa... MAE (and other forecasting error metrics vs. baselines)
FutureBoosting substantially improves electricity price forecasting.
Empirical evaluation reported in the paper across multiple real-world electricity market datasets and forecasting horizons; comparisons against TSFM-only and regression-only baselines using time-series-aware cross-validation; primary metric: Mean Absolute Error (MAE).
medium positive Regression Models Meet Foundation Models: A Hybrid-AI Approa... Mean Absolute Error (MAE) of electricity price forecasts
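The FutureBoosting entries above describe a two-stage pipeline: a frozen time-series foundation model supplies forecasted features, and only a lightweight, interpretable downstream regressor is trained. A minimal sketch under stated assumptions — the `frozen_tsfm_forecast` stub and the synthetic price series are hypothetical stand-ins, not the paper's TSFM or data:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)

def frozen_tsfm_forecast(history):
    """Hypothetical stub for a frozen time-series foundation model: a naive
    trend-following forecast. In the described design the real TSFM is
    pretrained and never fine-tuned -- only its outputs become features."""
    return history[-1] + (history[-1] - history[-2])

# Synthetic hourly "price" series with daily seasonality plus noise
# (stand-in data, not an actual electricity market).
t = np.arange(400)
prices = 50 + 10 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 1, t.size)

# Stage 1: features = recent lags + the frozen model's one-step forecast.
X, y = [], []
for i in range(3, len(prices)):
    X.append(np.append(prices[i - 3:i], frozen_tsfm_forecast(prices[:i])))
    y.append(prices[i])
X, y = np.array(X), np.array(y)

# Stage 2: only the interpretable regressor is trained (no joint fine-tuning).
split = 300
model = GradientBoostingRegressor(random_state=0).fit(X[:split], y[:split])
mae = np.mean(np.abs(model.predict(X[split:]) - y[split:]))
print(f"test MAE: {mae:.2f}")
print("feature importances:", np.round(model.feature_importances_, 2))
```

The feature-importance printout is the interpretability hook the claims refer to: because the trained component is a tree ensemble, feature-level attributions (here plain importances; SHAP in the paper) remain available even though a foundation model contributed the temporal context.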
The paper's mechanism is strategyproof at an epoch granularity under its assumptions (quasilinear utilities, discrete slice items, decision epochs).
Theoretical mechanism-design claim presented in the paper relying on stated assumptions (quasilinear utility, discrete slices, epoch-based decisions). Empirical simulations assume truthful bidding per epoch consistent with this property but do not evaluate inter-epoch strategic deviations.
medium positive Real-Time AI Service Economy: A Framework for Agentic Comput... incentive compatibility per epoch (absence of profitable misreports within an ep...
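Per-epoch strategyproofness under quasilinear utilities can be illustrated with the textbook second-price (Vickrey) auction, which is truthful within a single epoch. This is a generic example, not the paper's slice-allocation mechanism, and it says nothing about inter-epoch deviations — matching the caveat above.

```python
from itertools import product

def second_price(bids):
    """Sealed-bid auction for one slice in one epoch: highest bid wins and
    pays the second-highest bid (ties go to the lowest index)."""
    order = sorted(range(len(bids)), key=lambda i: -bids[i])
    return order[0], bids[order[1]]  # (winner, price)

def utility(value, bid, others):
    """Quasilinear utility of agent 0 given its bid and the others' bids."""
    winner, price = second_price([bid] + others)
    return value - price if winner == 0 else 0.0

# Exhaustive check: within an epoch, no misreport beats truthful bidding.
value = 5
truthful_ok = True
for others in product(range(10), repeat=2):
    u_truth = utility(value, value, list(others))
    for dev in range(10):
        if utility(value, dev, list(others)) > u_truth + 1e-9:
            truthful_ok = False
print("truthful bidding optimal in every tested epoch:", truthful_ok)
```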
Scaffold choice creates an economic opportunity for third-party tooling and open-source scaffolding because scaffold effects materially affect performance and reproducibility.
Observed performance differences across scaffolds (up to ~5 percentage points) and sensitivity of results to scaffold selection reported in the study.
medium positive Re-Evaluating EVMBench: Are AI Agents Ready for Smart Contra... market opportunity for scaffold tools (qualitative, based on performance impact)
Replacing the binary meta-analysis assumption (fully homogeneous vs fully heterogeneous) with KL-based adaptive pooling reduces inefficiency or bias that can arise under the binary assumption.
Motivating discussion and theoretical/simulation comparisons in the paper showing cases where standard approaches (fixed-effect or random-effect extremes) are inefficient or biased, and the KL method performs better.
medium positive Redefining shared information: a heterogeneity-adaptive fram... relative estimation efficiency and bias compared to standard meta-analytic extre...
Application to the eICU Collaborative Research Database demonstrates the practical performance of the KL-shrinkage method on a heterogeneous, multi-center clinical dataset.
Real-data empirical application described in the paper using the eICU database; reported performance comparisons (specific dataset size and metrics are provided in the paper's empirical section but are not specified in this summary).
medium positive Redefining shared information: a heterogeneity-adaptive fram... empirical performance on eICU data (e.g., predictive accuracy, estimation MSE, i...
Extensive simulation studies show the KL-shrinkage estimator is robust and versatile across varying degrees and structures of heterogeneity.
Comprehensive simulation experiments reported in the paper that vary heterogeneity magnitude and structure (simulation details reported in the empirical evaluation section; exact sample sizes/configurations given in the paper).
medium positive Redefining shared information: a heterogeneity-adaptive fram... estimator performance metrics in simulations (e.g., MSE, bias, coverage) across ...
Using KL divergence as the penalty is a natural and tractable choice because KL measures relative information between distributions and leads to convenient geometric/algebraic properties.
Argumentation and mathematical exposition in the methods section explaining properties of KL divergence and demonstrating resulting tractability in algebraic derivations.
medium positive Redefining shared information: a heterogeneity-adaptive fram... tractability of derivations / geometric justification (qualitative)
Inferential procedures (e.g., confidence intervals and hypothesis tests) based on the KL-shrinkage approach are asymptotically valid without assuming parameter homogeneity across datasets.
Asymptotic theoretical results in the paper establishing validity (coverage and test properties) even under heterogeneity assumptions; details in asymptotic analysis section.
medium positive Redefining shared information: a heterogeneity-adaptive fram... asymptotic coverage of confidence intervals and Type I error control of hypothes...
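A toy version of heterogeneity-adaptive pooling: for Gaussian site estimates the KL penalty reduces to a squared distance, so the penalized solution interpolates between the fixed-effect (full-pooling) and fully heterogeneous (site-specific) extremes. The adaptive choice of `lam` below is a hypothetical rule for illustration, not the paper's estimator.

```python
import numpy as np

rng = np.random.default_rng(1)

def kl_shrink(est, se, lam):
    """Minimize squared error to each site estimate plus lam * KL(site || pooled).
    For Gaussians the KL term is a squared distance, so the solution is a
    precision-weighted blend: lam -> infinity recovers full pooling,
    lam = 0 leaves site estimates untouched."""
    w = 1.0 / se**2
    pooled = np.sum(w * est) / np.sum(w)
    shrunk = (est / se**2 + lam * pooled) / (1.0 / se**2 + lam)
    return shrunk, pooled

K, se = 8, 0.5
scenarios = {
    "homogeneous": rng.normal(2.0, se, K),                     # shared true mean
    "heterogeneous": rng.normal(2.0, 2.0, K) + rng.normal(0, se, K),
}

lams = {}
for name, est in scenarios.items():
    excess = max(np.var(est) - se**2, 1e-6)  # spread beyond sampling noise
    lams[name] = 1.0 / excess                # hypothetical adaptive rule
    shrunk, _ = kl_shrink(est, np.full(K, se), lams[name])
    print(f"{name}: lam={lams[name]:.2f}, "
          f"mean shift toward pooled={np.mean(np.abs(shrunk - est)):.3f}")
```

When sites genuinely share a mean the adaptive weight is large and estimates pool heavily; when sites truly differ the weight collapses and each site keeps its own estimate — the behavior the binary fixed-vs-random choice cannot deliver.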
Lowering fixed costs via shared resources can enable more entrants and niche innovators (e.g., specialized clinical apps).
Workshop economic implications and participant assertions in breakout sessions and plenary at the NSF workshop (Sept 26–27, 2024).
medium positive Report for NSF Workshop on Algorithm-Hardware Co-design for ... number of market entrants, emergence of niche products, diversity of suppliers
Public investment in shared data and compute as nonrival public goods will reduce duplication, lower entry barriers, and increase total R&D productivity.
Workshop implications for AI economics articulated by participants and authors as a policy recommendation; rationale stated in the summary document (NSF workshop, Sept 26–27, 2024).
medium positive Report for NSF Workshop on Algorithm-Hardware Co-design for ... duplication of effort, entry barriers (number of entrants), and aggregate R&D pr...
De-risk pathways from lab to clinic via reproducible benchmarks, continuous monitoring, and cross-sector collaborations (academia, industry, clinicians, regulators).
Workshop translation-focused recommendations and roadmap produced by consensus at the NSF workshop (Sept 26–27, 2024).
medium positive Report for NSF Workshop on Algorithm-Hardware Co-design for ... time-to-market, reproducibility metrics, and rate of successful clinical transla...
Enable safe, accountable, and resilient platforms (including virtual–physical healthcare ecosystems) to reduce translational risk.
Workshop recommendations addressing safety, resilience, and virtual–physical ecosystems from cross-disciplinary discussion at NSF workshop (Sept 26–27, 2024).
medium positive Report for NSF Workshop on Algorithm-Hardware Co-design for ... measures of translational risk (failure rates in translation, incidents, safety ...
Promote scalable validation ecosystems grounded in objective, continuous measures and physics-informed models.
Workshop validation and safety theme recommendations from panels and consensus-building exercises (NSF workshop, Sept 26–27, 2024).
medium positive Report for NSF Workshop on Algorithm-Hardware Co-design for ... presence and scalability of validation ecosystems; reliability/robustness metric...
Develop clinic workflow–aware systems and human–AI collaboration frameworks to fit real clinical practice and decision chains.
Stated systems and workflows recommendation from expert panels and clinician participants at the NSF workshop (Sept 26–27, 2024).
medium positive Report for NSF Workshop on Algorithm-Hardware Co-design for ... compatibility of AI-enabled systems with clinical workflows; measures of clinici...
Build shared compute infrastructures tailored to medical workloads and validation needs.
Workshop recommendation from infrastructure-themed sessions and consensus outcomes (NSF workshop, Sept 26–27, 2024).
medium positive Report for NSF Workshop on Algorithm-Hardware Co-design for ... existence and utilization of shared compute infrastructure for medical R&D (comp...
Sustain investment in shared, standardized data infrastructures (datasets, ontologies, benchmarks) to support medical algorithm–hardware co-design.
Workshop infrastructure call presented during breakout sessions and final recommendations at the NSF workshop (Sept 26–27, 2024).
medium positive Report for NSF Workshop on Algorithm-Hardware Co-design for ... availability and use of standardized medical datasets/ontologies/benchmarks
Principal recommendation: shift from isolated algorithm or hardware efforts to integrated algorithm–hardware–workflow co-design for medical contexts.
Stated workshop recommendation derived from panels and cross-disciplinary consensus at the NSF workshop (Sept 26–27, 2024).
medium positive Report for NSF Workshop on Algorithm-Hardware Co-design for ... alignment and integration of R&D efforts (degree of co-design adoption in projec...
Sustained public investment and new validation, governance, and translation ecosystems are needed to de-risk commercialization and accelerate safe, accountable clinical adoption.
Workshop principal recommendation based on qualitative synthesis of expert judgment from participants and breakout outcomes (NSF workshop, Sept 26–27, 2024).
medium positive Report for NSF Workshop on Algorithm-Hardware Co-design for ... commercialization risk level and speed/rate of clinical adoption
Enabling next-generation medical technologies requires a fundamental reorientation toward algorithm–hardware co-design that is clinic-aware, validated continuously, and backed by shared data and compute infrastructures.
Consensus recommendation from a two-day NSF workshop (Sept 26–27, 2024) in Pittsburgh convening interdisciplinary participants (academic researchers in algorithms and hardware, clinicians, industry leaders). Methods: expert panels, thematic breakout sessions, cross-disciplinary discussions, consensus-building. Documentation at https://sites.google.com/view/nsfworkshop.
medium positive Report for NSF Workshop on Algorithm-Hardware Co-design for ... successful development and clinical adoption of next-generation medical technolo...
A high-level RL agent dynamically adjusts end-effector interaction forces (contact wrench) in real time based on perception feedback of material location.
Method description: the high-level agent outputs adjustments to interaction force/wrench informed by perception of material location inside the vial; the RL algorithm and detailed observation/action representations are not specified in the summary.
medium positive Learning Adaptive Force Control for Contact-Rich Sample Scra... dynamic adjustment of interaction force/wrench and resulting task performance
A low-level Cartesian impedance controller provides stable, compliant physical interaction for contact stability during scraping.
Control architecture description: the paper uses Cartesian impedance control as the low-level controller intended to handle contact compliance and stability; empirical stability metrics are not given in the summary.
medium positive Learning Adaptive Force Control for Contact-Rich Sample Scra... contact stability / compliant interaction (as enabled by the controller)
The learned policy trained in simulation was successfully transferred to a real Franka Research 3 robot (sim-to-real transfer).
Training in a task-representative simulator followed by deployment on a Franka Research 3 setup in real-world scraping experiments; transfer success is asserted in the paper summary. The evaluation included five material setups on the real robot (exact number of trials per setup not specified).
medium positive Learning Adaptive Force Control for Contact-Rich Sample Scra... sim-to-real transfer success measured via real-world task performance (relative ...
An adaptive control framework that combines a low-level Cartesian impedance controller with a high-level reinforcement learning (RL) agent — guided by perception of material location — enables a robot to learn and adapt the optimal contact wrench for scraping heterogeneous samples in a constrained vial environment.
System design and experiments: the paper describes a two-level control architecture (Cartesian impedance + high-level RL) trained in a task-representative simulation and deployed on a real Franka Research 3 robot. Real-world experiments were performed in a constrained vial scraping task (details on trial counts per condition not provided in the summary).
medium positive Learning Adaptive Force Control for Contact-Rich Sample Scra... ability to learn/adapt optimal contact wrench for successful scraping (task perf...
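The two-level architecture above can be sketched as a low-level impedance law plus a high-level loop that adjusts the commanded contact force from perception feedback. The adjustment rule here is a hypothetical stand-in for the learned RL policy, and the 1-D impedance law is a simplification of the full Cartesian wrench controller.

```python
import numpy as np

def impedance_force(x, v, x_des, f_ff, K=300.0, D=30.0):
    """Low-level 1-D impedance law: spring-damper tracking of the desired
    position plus a feedforward contact force set by the high level."""
    return K * (x_des - x) - D * v + f_ff

def high_level_adjust(f_ff, material_seen, f_min=1.0, f_max=15.0, step=0.5):
    """Stand-in for the RL agent (hypothetical rule, not the learned policy):
    raise the commanded contact force while residual material is perceived,
    relax it otherwise."""
    f_ff += step if material_seen else -step
    return float(np.clip(f_ff, f_min, f_max))

# Toy episode: perception reports material until the setpoint reaches 8 N.
f_ff = 2.0
for epoch in range(30):
    material_seen = f_ff < 8.0
    f_ff = high_level_adjust(f_ff, material_seen)

# One low-level command at the final high-level setpoint.
f_cmd = impedance_force(x=0.01, v=0.0, x_des=0.0, f_ff=f_ff)
print(f"final setpoint: {f_ff} N, commanded force: {f_cmd} N")
```

The separation mirrors the claims: the compliant impedance layer keeps contact stable at control rates, while the slower high-level loop only retargets the wrench.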
Automation of routine SE tasks suggests measurable productivity gains at team and firm levels, but quantification requires causal, outcome-based studies (e.g., throughput, defect rates, time-to-market).
Interpretation of literature review findings and survey-reported perceived productivity gains; no causal empirical estimates provided in the paper.
medium positive Artificial Intelligence as a Catalyst for Innovation in Soft... potential productivity metrics (throughput, defect rates, time-to-market) — not ...
Empirical survey evidence shows generally positive perceptions of AI tools among software engineering professionals and growing adoption.
Cross-sectional survey of software engineering professionals asking about current tool usage and perceived benefits (productivity, quality, speed); absolute respondent count and sampling frame not provided in the summary.
medium positive Artificial Intelligence as a Catalyst for Innovation in Soft... self-reported perception of AI tools and self-reported adoption rate
ML enables predictive features in software engineering: effort estimation, defect prediction, work prioritization, and risk forecasting that support Agile planning and continuous delivery.
Literature review of ML-for-SE research and practitioner survey reporting use or expectations of predictive features; specific model performance metrics or dataset sizes not reported in the summary.
medium positive Artificial Intelligence as a Catalyst for Innovation in Soft... availability/use of predictive outputs (e.g., estimated effort, defect risk scor...