The Commonplace

Evidence (5267 claims)

Adoption
5267 claims
Productivity
4560 claims
Governance
4137 claims
Human-AI Collaboration
3103 claims
Labor Markets
2506 claims
Innovation
2354 claims
Org Design
2340 claims
Skills & Training
1945 claims
Inequality
1322 claims

Evidence Matrix

Claim counts by outcome category and direction of finding.

Outcome Positive Negative Mixed Null Total
Other 378 106 59 455 1007
Governance & Regulation 379 176 116 58 739
Research Productivity 240 96 34 294 668
Organizational Efficiency 370 82 63 35 553
Technology Adoption Rate 296 118 66 29 513
Firm Productivity 277 34 68 10 394
AI Safety & Ethics 117 177 44 24 364
Output Quality 244 61 23 26 354
Market Structure 107 123 85 14 334
Decision Quality 168 74 37 19 301
Fiscal & Macroeconomic 75 52 32 21 187
Employment Level 70 32 74 8 186
Skill Acquisition 89 32 39 9 169
Firm Revenue 96 34 22 152
Innovation Output 106 12 21 11 151
Consumer Welfare 70 30 37 7 144
Regulatory Compliance 52 61 13 3 129
Inequality Measures 24 68 31 4 127
Task Allocation 75 11 29 6 121
Training Effectiveness 55 12 12 16 96
Error Rate 42 48 6 96
Worker Satisfaction 45 32 11 6 94
Task Completion Time 78 5 4 2 89
Wages & Compensation 46 13 19 5 83
Team Performance 44 9 15 7 76
Hiring & Recruitment 39 4 6 3 52
Automation Exposure 18 17 9 5 50
Job Displacement 5 31 12 48
Social Protection 21 10 6 2 39
Developer Productivity 29 3 3 1 36
Worker Turnover 10 12 3 25
Skill Obsolescence 3 19 2 24
Creative Output 15 5 3 1 24
Labor Share of Income 10 4 9 23
Active filter: Adoption
AI applications contributed to environmental resilience via water and fertiliser savings and earlier pest detection in some studies.
Reported resource-use metrics and earlier-detection outcomes from several reviewed studies and case reports, synthesized thematically.
medium positive A systematic review of the economic impact of artificial int... water use, fertiliser use, pest detection timeliness
AI-enabled interventions produced technical efficiency gains through better input targeting and reduced waste.
Studies in the review reporting improvements in input targeting (e.g., fertiliser/pesticide application) and reductions in waste; aggregated in thematic synthesis.
medium positive A systematic review of the economic impact of artificial int... technical efficiency (input targeting accuracy, quantity of inputs used, waste r...
AI deployment has produced measurable supply-chain efficiency improvements and better market integration in reviewed cases.
Synthesis of studies and institutional reports reporting metrics/qualitative evidence on logistics, aggregation, price discovery, and market linkages.
medium positive A systematic review of the economic impact of artificial int... supply-chain efficiency and market integration (e.g., logistics time, transactio...
AI interventions are associated with input cost reductions of up to ~25%.
Comparative effect-size synthesis across reviewed studies reporting input cost outcomes (2020–2025).
medium positive A systematic review of the economic impact of artificial int... input costs (% reduction)
Across reviewed studies (2020–2025), AI interventions are associated with yield gains of roughly 12–45%.
Comparative effect-size synthesis of reported impacts across the reviewed studies (>60 articles/reports) that reported yield outcomes.
AI-powered digital agriculture in developing contexts—especially Sub-Saharan Africa—can materially improve productivity, sustainability, and rural livelihoods.
Structured literature review and thematic synthesis of >60 peer-reviewed articles and institutional reports (timeframe 2020–2025) focused primarily on Sub-Saharan Africa and other developing contexts.
medium positive A systematic review of the economic impact of artificial int... aggregate outcomes: productivity, sustainability, rural livelihoods
Standards and open interoperability reduce vendor lock‑in and transaction costs, widening market access and competition for AI services built on DT data.
Economic reasoning and thematic findings from the literature linking interoperability to reduced transaction costs and broader market participation.
medium positive Digital Twins Across the Asset Lifecycle: Technical, Organis... transaction costs, market access/competition for AI services
Public procurement and large asset owners can act as demand‑pulls to de‑risk early investment and help set standards for DT adoption.
Policy recommendation and examples from literature arguing that large buyers can catalyse adoption; based on case/policy studies in the review.
medium positive Digital Twins Across the Asset Lifecycle: Technical, Organis... effect of public procurement/large owners on adoption and standardisation
Better data continuity across lifecycle phases reduces model training friction and increases the value of historical data for forecasting and causal analysis.
Conceptual argument supported by case evidence in the review showing fragmented data reduces reusability; authors infer benefits for AI training and forecasting.
medium positive Digital Twins Across the Asset Lifecycle: Technical, Organis... model training friction / forecasting value of historical data
DTs generate continuous, high‑resolution operational data (IoT telemetry, usage patterns, maintenance logs) that can substantially improve AI models for predictive maintenance, scheduling, energy optimisation, and logistics.
Logical implication and examples from pilot studies in the review showing richer telemetry and operational datasets produced by DT pilots; argued benefits for AI model inputs.
medium positive Digital Twins Across the Asset Lifecycle: Technical, Organis... AI model performance or potential improvement via richer data inputs
Three core differences by which DTs extend BIM: (1) bidirectional automated physical↔digital data exchange; (2) integration of heterogeneous, real‑time sources (IoT, operational systems); (3) lifecycle continuity preserving data across handovers.
Conceptual synthesis across the literature reviewed (conceptual papers, case studies, pilots) identifying functional distinctions between DT and BIM.
medium positive Digital Twins Across the Asset Lifecycle: Technical, Organis... functional capabilities/features distinguishing DT from BIM
Digital twin (DT) technology can materially improve construction lifecycle performance beyond what Building Information Modelling (BIM) delivers.
Synthesis of 160 reviewed studies including conceptual papers, case studies and pilot deployments reporting performance improvements attributed to DT implementations.
medium positive Digital Twins Across the Asset Lifecycle: Technical, Organis... construction lifecycle performance (overall)
ANN analysis ranks information barriers as the most important predictor of organizational inertia.
ANN feature-importance analysis reported in the paper that ranks predictors for inertia, identifying information barriers as the top predictor; methodological specifics (sample size, ANN parameters) are not provided in the abstract.
Artificial neural network (ANN) analysis ranks functional values as the most important predictor of initial trust.
ANN feature-importance analysis reported in the paper that ranks predictors for initial trust, with functional values highest; method described as ANN-based relative importance ranking (details such as network architecture, training sample size, or validation metrics not reported in the abstract).
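The abstract does not report the ANN architecture or how relative importance was derived; permutation importance is one common way to rank predictors from a trained network. A minimal sketch below, where `trust_model` and its coefficients are an illustrative stand-in, not the paper's fitted ANN:

```python
import random

# Hypothetical stand-in for a trained network mapping
# (functional value, instrumental value, social value) -> initial trust.
def trust_model(x):
    functional, instrumental, social = x
    return 0.6 * functional + 0.3 * instrumental + 0.1 * social

def permutation_importance(model, X, n_repeats=20, seed=0):
    """Rank inputs by how much shuffling each one perturbs predictions."""
    rng = random.Random(seed)
    baseline = [model(x) for x in X]
    importances = []
    for j in range(len(X[0])):
        total = 0.0
        for _ in range(n_repeats):
            col = [x[j] for x in X]
            rng.shuffle(col)
            perturbed = [list(x) for x in X]
            for i, v in enumerate(col):
                perturbed[i][j] = v
            # Mean absolute prediction change when feature j is broken
            total += sum(abs(model(p) - b)
                         for p, b in zip(perturbed, baseline)) / len(X)
        importances.append(total / n_repeats)
    return importances

rng = random.Random(1)
X = [[rng.random() for _ in range(3)] for _ in range(200)]
imp = permutation_importance(trust_model, X)
top_feature = imp.index(max(imp))  # index 0 = functional value
```

With the illustrative weights above, the functional-value input ranks first, mirroring the claimed ordering.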
Human interaction, information, and norm barriers increase organizational inertia (resistance to change) toward GAICS.
Qualitative phase surfaced these barriers; quantitative validation showed statistically significant positive relationships between (a) need for human interaction barriers, (b) information barriers (lack of knowledge/clarity), and (c) norm barriers (cultural/social norms) and organizational inertia.
medium positive Reimagining Stakeholder Engagement Through Generative AI: A ... Organizational inertia / resistance to change regarding GAICS
Functional and instrumental values increase initial trust in GAICS.
Mixed-methods evidence: qualitative exploratory phase identified functional and instrumental value as drivers; quantitative phase (inferential analysis) found positive, statistically significant effects of functional value (system usefulness/quality) and instrumental value (task-related benefits) on initial trust.
AI/ML–based credit scoring and alternative‑data underwriting reduce information asymmetries, lowering search and monitoring costs and expanding effective credit supply to previously rejected MSMEs and startups.
Analytical argument supported by illustrative case examples and literature on machine‑learning underwriting; the paper notes limited causal identification and time‑sensitivity of fintech products.
medium positive Traditional vs. contemporary financing models for MSMEs and ... information asymmetry reduction, search/monitoring costs, credit supply expansio...
Government action (digital ID, payments rails, credit guarantees, standards, consumer protection) is vital to enable beneficial outcomes from digital finance for MSMEs.
Policy synthesis and comparative evaluation recommending government infrastructure and regulatory measures; conclusion based on institutional analysis rather than experimental evidence.
medium positive Traditional vs. contemporary financing models for MSMEs and ... effectiveness of digital finance ecosystem (enabled by infrastructure and policy...
Case studies indicate FinTech platforms have meaningfully lowered rejection rates and loan turnaround times for underbanked MSMEs, accelerating working‑capital access.
Illustrative case studies of FinTech deployments in India reporting lower rejection rates and faster approvals; paper explicitly notes these cases are illustrative and not nationally representative and do not establish causal identification.
medium positive Traditional vs. contemporary financing models for MSMEs and ... loan rejection rate, loan turnaround time, working‑capital access
Supply‑chain financing can meaningfully unlock working capital for MSMEs by leveraging buyer creditworthiness, yielding high impact for MSMEs embedded in modern supply chains.
Comparative evaluation and illustrative case studies highlighting supply‑chain finance deployments; evidence is demonstrative and not nationally representative or causally identified.
medium positive Traditional vs. contemporary financing models for MSMEs and ... working capital availability for MSMEs, impact magnitude for supply‑chain‑embedd...
Optimal financing outcomes generally come from hybrid approaches that combine formal banking credibility and policy support with FinTech speed and data-driven underwriting.
Comparative evaluation and policy synthesis recommending co‑lending, credit guarantees, and partnerships (banks as liquidity providers combined with FinTech underwriting); based on qualitative tradeoff analysis rather than experimental/causal evidence.
medium positive Traditional vs. contemporary financing models for MSMEs and ... overall financing outcomes (access, cost, risk mitigation)
Compared with traditional bank loans and government schemes, contemporary financing models tend to be faster, more flexible, and more scalable for smaller firms.
Comparative qualitative evaluation across five variables and illustrative case studies showing reduced loan turnaround times and improved accessibility for small firms; no nationally representative sample or causal inference provided.
medium positive Traditional vs. contemporary financing models for MSMEs and ... loan turnaround time, flexibility of repayment, scalability to small firms
Digital technologies — especially FinTech lending platforms, alternative debt/equity products, supply‑chain finance, crowdfunding, and emerging blockchain applications — are materially expanding timely access to capital for Indian MSMEs and startups.
Multi‑criteria comparative evaluation (accessibility, finance cost, flexibility, risk, scalability) plus illustrative case studies of FinTech and alternative financing deployments in India that report faster turnaround and inclusion effects. The paper notes case evidence is illustrative rather than nationally representative and lacks quantitative causal identification.
medium positive Traditional vs. contemporary financing models for MSMEs and ... timely access to capital (availability and speed of financing for MSMEs/startups...
Regulating algorithmic transparency, data practices, and truthful sustainability claims is important to preserve digital trust and efficient market outcomes.
Policy recommendations and economic reasoning presented in the paper; grounded in literature on algorithmic governance and consumer trust; not empirically validated within the paper.
medium positive Sustainable Marketing Framework for Strengthening Consumer T... digital trust; market efficiency; regulatory compliance
Network effects from social proof (reviews, UGC) can create winner-takes-most dynamics, advantaging destinations with stronger digital signals and creating visibility frictions for small/emerging destinations.
Theoretical argument drawing on platform/network effects literature and applied to tourism/social proof; paper cites social-proof constructs and suggests measurement via platform data.
medium positive Sustainable Marketing Framework for Strengthening Consumer T... visibility; market concentration; destination attractiveness
Proprietary experimental datasets and curated metagenomic sequences become valuable intellectual assets that can differentiate commercial offerings.
Paper lists 'Data as an economic asset' and highlights the value of proprietary datasets and curated metagenomes; no market valuation data are included.
medium positive Protein structure prediction powered by artificial intellige... commercial value attributed to proprietary sequence/structure datasets and their...
Faster, cheaper access to structural hypotheses can shorten drug and enzyme discovery cycles, raising R&D productivity and lowering marginal costs of early‑stage screening.
Paper argues this as an implication under 'Productivity and R&D acceleration'; it is presented as an economic consequence rather than demonstrated with empirical cost‑ or time‑saving data in the text.
medium positive Protein structure prediction powered by artificial intellige... duration and cost of early‑stage drug/enzyme discovery cycles and marginal cost ...
Practical applications are already emerging, including accelerating target structure availability for small‑molecule and biologics design, guiding enzyme redesign, and interpreting disease mutations.
Paper lists these application areas as emerging uses of AI‑predicted structures; evidence is presented as examples and implications rather than empirical case studies within the text.
medium positive Protein structure prediction powered by artificial intellige... availability of structural hypotheses for drug/biology design, utility in enzyme...
Template‑and‑MSA informed architectures (e.g., RoseTTAFold and AlphaFold family) deliver near‑experimental accuracy for many proteins.
Paper names these architectures and links their inputs (MSAs, templates) to high accuracy against experimental structures (PDB); specific evaluation datasets, protein counts, or error metrics are not enumerated in the text.
medium positive Protein structure prediction powered by artificial intellige... fraction of proteins for which prediction accuracy is near experimental (structu...
Modern AI systems (e.g., AlphaFold variants, RoseTTAFold, single‑sequence models like ESMFold) can approach or reach near‑experimental accuracy while greatly increasing speed and scalability.
Paper cites specific models (AlphaFold family, RoseTTAFold, ESMFold) and describes benchmarking against structural ground truth (PDB / curated experimental structures) and large‑scale pretraining; exact benchmark values or sample sizes are not specified in the text.
medium positive Protein structure prediction powered by artificial intellige... structure prediction accuracy (compared to experimental structures) and inferenc...
New economic metrics are needed for VR (value of behavioral data streams, cost per reduction in harm, ROI on security investments, welfare metrics capturing trust and adoption).
Authors' recommendations based on identified gaps in the literature and the comparative review of 31 studies; proposed as agenda items rather than empirically developed metrics.
medium positive Securing Virtual Reality: Threat Models, Vulnerabilities, an... availability and use of new economic metrics for VR security and privacy (recomm...
VR generates high‑value behavioral and biometric datasets for AI personalization, training, and analytics; firms that extract this data can gain competitive advantages, creating incentives to centralize collection unless counteracted by policy or market forces.
Economic implications inferred by the authors from the literature synthesis and standard industrial‑organization logic; not supported by original empirical market data in the paper.
medium positive Securing Virtual Reality: Threat Models, Vulnerabilities, an... incentives for data centralization and resulting competitive advantage (conceptu...
There is a need for regulatory standards, industry best practices, and ethics‑by‑design approaches; interoperable policy frameworks are recommended to govern VR security and privacy.
Policy and governance recommendations synthesized from multiple reviewed studies and the authors' integration; presented as prescriptive guidance rather than empirically tested interventions.
medium positive Securing Virtual Reality: Threat Models, Vulnerabilities, an... adoption of regulatory/standards frameworks and their expected effect on privacy...
An effective defense mix for VR combines technical controls (secure boot, attestation, encrypted communications), AI tools for anomaly detection and policy enforcement, and human‑centered design (transparency, consent, usable controls).
Cross‑study synthesis showing these categories recur as recommended controls in the 31 reviewed papers; authors propose combining them in TVR‑Sec. No deployment or performance metrics provided.
medium positive Securing Virtual Reality: Threat Models, Vulnerabilities, an... overall defense effectiveness from combined controls (theoretical/proposed)
Socio‑Behavioral Safety measures (moderation, design constraints, psycho‑social safeguards) are necessary to prevent harassment, persuasion, addictive interfaces, and other psychological harms in shared virtual spaces.
Qualitative synthesis of social‑behavioral harms and proposed mitigations reported across the literature review (31 studies); comparative evaluation of socio‑technical controls.
medium positive Securing Virtual Reality: Threat Models, Vulnerabilities, an... incidence or severity of harassment/manipulation/psychological harms (identified...
User Privacy in VR requires managing highly sensitive behavioral and biometric traces with privacy‑preserving ML approaches (e.g., federated learning, differential privacy), consent mechanisms, and data minimization.
Repeated recommendations across the reviewed studies; authors synthesized privacy‑preserving technical approaches and governance mechanisms from the 31‑study corpus. No primary experiments demonstrating efficacy provided.
medium positive Securing Virtual Reality: Threat Models, Vulnerabilities, an... reduction in privacy risk for behavioral/biometric data (proposed, not empirical...
System Integrity defenses should cover hardware, firmware, sensors, and networks to protect against spoofing, device tampering, malware, and supply‑chain attacks.
Aggregated technical recommendations from the literature corpus (31 studies) and the authors' mapping of integrity threats to controls (secure boot, attestation, encrypted communications). No empirical testing of these controls in the paper.
medium positive Securing Virtual Reality: Threat Models, Vulnerabilities, an... coverage of integrity‑related threat mitigation (conceptual)
The Three‑Layer VR Security Framework (TVR‑Sec) integrates System Integrity, User Privacy, and Socio‑Behavioral Safety into an adaptive, multidimensional defense architecture for VR systems.
Conceptual synthesis developed by the authors from a comparative literature review of 31 peer‑reviewed studies (2023–2025); framework created by mapping identified vulnerabilities to technical, AI, and human‑centered controls. No empirical validation or deployment testing reported.
medium positive Securing Virtual Reality: Threat Models, Vulnerabilities, an... proposed comprehensiveness/coverage of VR security defenses (conceptual architec...
Policy and platform design choices (e.g., provenance metadata, detection/disclosure of AI-generated content, monetization rule alignment) can reinforce or mitigate harms from GenAI-driven creator economies.
Policy recommendations and implications drawn from the qualitative findings across the 377-video sample and normative reasoning; not empirically tested.
medium positive Monetizing Generative AI: YouTubers' Collective Knowledge on... potential mitigation or amplification of harms via platform and policy intervent...
Participants systematically over-bid for privacy-disclosure labels: they were willing to pay more for a label than its objective value.
In the experiment (N = 610) participants submitted willingness-to-pay bids for privacy-disclosure labels; observed bids exceeded the objective/reference value, indicating overpayment for transparency information.
medium positive The Data-Dollars Tradeoff: Privacy Harms vs. Economic Risk i... Willingness-to-pay / bidding amounts for privacy-disclosure labels
Policy interventions that raise the reinstatement rate — for example, compensation/transfers to translate AI gains into broad-based purchasing power, faster/stronger fiscal support or automatic stabilizers — can prevent the explosive feedback and stabilize demand.
Model experiments and sensitivity analysis showing that increasing the reinstatement elasticity or direct transfers moves the system from explosive to convergent parameter regions in the calibrated phase-space.
medium positive Abundant Intelligence and Deficient Demand: A Macro-Financia... reinstatement rate, aggregate demand, avoidance of explosive crisis (regime outc...
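The regime switch described here — explosive versus convergent dynamics depending on the reinstatement rate — can be conveyed with a toy one-parameter feedback loop. This is a hypothetical simplification for intuition, not the paper's calibrated model:

```python
def simulate_demand(reinstatement, displacement=0.05, periods=50, d0=1.0):
    """Each period, automation removes a `displacement` share of labor
    income from aggregate demand, while policy reinstates a
    `reinstatement` share of AI gains as purchasing power. The net
    growth factor decides whether demand decays toward collapse or
    stabilizes."""
    d = d0
    path = [d]
    for _ in range(periods):
        d *= (1 - displacement + reinstatement)
        path.append(d)
    return path

weak_policy = simulate_demand(reinstatement=0.01)    # net factor 0.96: decay
strong_policy = simulate_demand(reinstatement=0.05)  # net factor 1.00: stable
```

Raising the reinstatement parameter past the displacement rate moves the trajectory from a downward spiral to a stable (or growing) demand path, the qualitative pattern the sensitivity analysis reports.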
FutureBoosting generalizes across multiple real-world electricity markets and forecast horizons.
Empirical results reported across 'multiple real-world electricity markets' and several forecasting horizons to capture diverse volatility and regime behavior (details on exact markets/horizons are reported in the experiments section of the paper).
medium positive Regression Models Meet Foundation Models: A Hybrid-AI Approa... MAE (and other error metrics) across different market datasets and horizons
The approach preserves the interpretability of downstream regression models while injecting temporal context.
Use of interpretable regression models (e.g., gradient-boosted decision trees) and XAI analyses (SHAP/feature importance) reported in the paper demonstrating interpretability of feature contributions.
medium positive Regression Models Meet Foundation Models: A Hybrid-AI Approa... Model interpretability (qualitative; feature-level explanations via XAI)
Freezing the TSFM (no joint fine-tuning) makes the framework lightweight and plug-and-play, lowering computational cost relative to joint training.
Architectural design: two-stage pipeline with a frozen TSFM used only to generate forecasted features; paper asserts ability to leverage pretrained TSFMs without end-to-end retraining. (No detailed compute-cost benchmarks given in the summary.)
medium positive Regression Models Meet Foundation Models: A Hybrid-AI Approa... Computational/deployment cost (qualitative claim about lower cost and ease of in...
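The two-stage design described above can be sketched as follows. The `frozen_forecaster` here is a naive seasonal-persistence stand-in for a real frozen TSFM, and the feature names are illustrative, not the paper's:

```python
def frozen_forecaster(history, horizon):
    """Stand-in for a frozen time-series foundation model: queried only
    for forecasts, never fine-tuned. Here: seasonal persistence over a
    24-hour cycle (hypothetical, not the paper's TSFM)."""
    season = 24
    return [history[-season + (h % season)] for h in range(horizon)]

def build_features(history, horizon):
    """Stage 1: augment standard lag features with the frozen model's
    forecasted values as extra covariates for the downstream regressor."""
    tsfm_forecast = frozen_forecaster(history, horizon)
    rows = []
    for h in range(horizon):
        rows.append({
            "lag_24": history[-24 + (h % 24)],
            "lag_168": history[-168 + (h % 168)],
            "tsfm_forecast": tsfm_forecast[h],  # injected temporal context
            "horizon": h,
        })
    return rows

# Stage 2 would fit an interpretable regressor (e.g. gradient-boosted
# trees) on these rows, so feature attributions remain inspectable.
history = [50 + (t % 24) for t in range(24 * 14)]  # two weeks of toy prices
features = build_features(history, horizon=24)
```

Because the pretrained model is never updated, swapping in a different TSFM only changes the `tsfm_forecast` column, which is what makes the framework plug-and-play.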
MAE reductions exceed 30% in many cases when using FutureBoosting.
Reported quantitative results in the paper showing relative MAE reductions (paper text: 'reductions in Mean Absolute Error (MAE) exceeding 30% in many cases'); based on experiments across multiple datasets/horizons.
medium positive Regression Models Meet Foundation Models: A Hybrid-AI Approa... Relative reduction in Mean Absolute Error (percent)
FutureBoosting consistently outperforms state-of-the-art TSFMs and regression baselines.
Head-to-head experiments in the paper comparing the two-stage FutureBoosting pipeline to standalone TSFM models and common regression baselines (e.g., gradient-boosted trees) across multiple markets and horizons under rolling-origin evaluation.
medium positive Regression Models Meet Foundation Models: A Hybrid-AI Approa... MAE (and other forecasting error metrics vs. baselines)
FutureBoosting substantially improves electricity price forecasting.
Empirical evaluation reported in the paper across multiple real-world electricity market datasets and forecasting horizons; comparisons against TSFM-only and regression-only baselines using time-series-aware cross-validation; primary metric: Mean Absolute Error (MAE).
medium positive Regression Models Meet Foundation Models: A Hybrid-AI Approa... Mean Absolute Error (MAE) of electricity price forecasts
The paper's mechanism is strategyproof at an epoch granularity under its assumptions (quasilinear utilities, discrete slice items, decision epochs).
Theoretical mechanism-design claim presented in the paper relying on stated assumptions (quasilinear utility, discrete slices, epoch-based decisions). Empirical simulations assume truthful bidding per epoch consistent with this property but do not evaluate inter-epoch strategic deviations.
medium positive Real-Time AI Service Economy: A Framework for Agentic Comput... incentive compatibility per epoch (absence of profitable misreports within an ep...
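The summary does not specify the mechanism's form; the canonical example of a per-epoch strategyproof allocation under quasilinear utilities is a second-price (Vickrey) auction, sketched below as an illustrative stand-in rather than the paper's actual mechanism:

```python
def epoch_auction(bids):
    """Single-item Vickrey auction for one compute slice in one decision
    epoch: the highest bidder wins and pays the second-highest bid.
    Under quasilinear utility, truthful bidding is a dominant strategy
    within the epoch."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]
    price = ranked[1][1] if len(ranked) > 1 else 0.0
    return winner, price

winner, price = epoch_auction({"agent_a": 7.0, "agent_b": 5.5, "agent_c": 3.0})
# agent_a (value 7.0) pays 5.5 for utility 1.5; bidding below 5.5 forfeits
# the slice, and bidding above 7.0 never changes what it pays.
```

Note this property holds only epoch-by-epoch, matching the paper's caveat that inter-epoch strategic deviations are not evaluated.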
Scaffold choice creates an economic opportunity for third-party tooling and open-source scaffolding because scaffold selection materially affects performance and reproducibility.
Observed performance differences across scaffolds (up to ~5 percentage points) and sensitivity of results to scaffold selection reported in the study.
medium positive Re-Evaluating EVMBench: Are AI Agents Ready for Smart Contra... market_opportunity_for_scaffold_tools (qualitative_based_on_performance_impact)
Replacing the binary meta-analysis assumption (fully homogeneous vs fully heterogeneous) with KL-based adaptive pooling reduces inefficiency or bias that can arise under the binary assumption.
Motivating discussion and theoretical/simulation comparisons in the paper showing cases where standard approaches (fixed-effect or random-effect extremes) are inefficient or biased, and the KL method performs better.
medium positive Redefining shared information: a heterogeneity-adaptive fram... relative estimation efficiency and bias compared to standard meta-analytic extre...
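The paper's estimator is not detailed here; the toy sketch below conveys the general idea of KL-based adaptive pooling — shrink each study toward the pooled mean by a weight that decays with the KL divergence between the study's distribution and the pooled one. The weighting rule `1/(1+KL)` and all numbers are hypothetical, not the paper's method:

```python
import math

def kl_gauss(m1, v1, m2, v2):
    """KL divergence KL(N(m1, v1) || N(m2, v2)) for univariate Gaussians."""
    return 0.5 * (math.log(v2 / v1) + (v1 + (m1 - m2) ** 2) / v2 - 1)

def adaptive_pool(estimates, variances):
    """Shrink each study estimate toward the inverse-variance pooled mean
    with weight 1/(1+KL): concordant studies pool strongly, while
    heterogeneous outliers mostly keep their own estimate — avoiding
    both all-or-nothing extremes."""
    w = [1.0 / v for v in variances]
    pooled = sum(wi * m for wi, m in zip(w, estimates)) / sum(w)
    pooled_var = 1.0 / sum(w)
    shrunk = []
    for m, v in zip(estimates, variances):
        alpha = 1.0 / (1.0 + kl_gauss(m, v, pooled, pooled_var))
        shrunk.append(alpha * pooled + (1 - alpha) * m)
    return pooled, shrunk

# Two concordant studies and one outlier, equal sampling variances
pooled, shrunk = adaptive_pool([0.10, 0.12, 0.90], [0.04, 0.04, 0.04])
```

The concordant studies move noticeably toward the pooled mean while the outlier is shrunk only slightly, which is the behavior a fully-homogeneous or fully-heterogeneous assumption cannot produce.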