The Commonplace

Evidence (4137 claims)

Adoption: 5267 claims
Productivity: 4560 claims
Governance: 4137 claims
Human-AI Collaboration: 3103 claims
Labor Markets: 2506 claims
Innovation: 2354 claims
Org Design: 2340 claims
Skills & Training: 1945 claims
Inequality: 1322 claims

Evidence Matrix

Claim counts by outcome category and direction of finding.

| Outcome | Positive | Negative | Mixed | Null | Total |
| --- | --- | --- | --- | --- | --- |
| Other | 378 | 106 | 59 | 455 | 1007 |
| Governance & Regulation | 379 | 176 | 116 | 58 | 739 |
| Research Productivity | 240 | 96 | 34 | 294 | 668 |
| Organizational Efficiency | 370 | 82 | 63 | 35 | 553 |
| Technology Adoption Rate | 296 | 118 | 66 | 29 | 513 |
| Firm Productivity | 277 | 34 | 68 | 10 | 394 |
| AI Safety & Ethics | 117 | 177 | 44 | 24 | 364 |
| Output Quality | 244 | 61 | 23 | 26 | 354 |
| Market Structure | 107 | 123 | 85 | 14 | 334 |
| Decision Quality | 168 | 74 | 37 | 19 | 301 |
| Fiscal & Macroeconomic | 75 | 52 | 32 | 21 | 187 |
| Employment Level | 70 | 32 | 74 | 8 | 186 |
| Skill Acquisition | 89 | 32 | 39 | 9 | 169 |
| Firm Revenue | 96 | 34 | 22 |  | 152 |
| Innovation Output | 106 | 12 | 21 | 11 | 151 |
| Consumer Welfare | 70 | 30 | 37 | 7 | 144 |
| Regulatory Compliance | 52 | 61 | 13 | 3 | 129 |
| Inequality Measures | 24 | 68 | 31 | 4 | 127 |
| Task Allocation | 75 | 11 | 29 | 6 | 121 |
| Training Effectiveness | 55 | 12 | 12 | 16 | 96 |
| Error Rate | 42 | 48 | 6 |  | 96 |
| Worker Satisfaction | 45 | 32 | 11 | 6 | 94 |
| Task Completion Time | 78 | 5 | 4 | 2 | 89 |
| Wages & Compensation | 46 | 13 | 19 | 5 | 83 |
| Team Performance | 44 | 9 | 15 | 7 | 76 |
| Hiring & Recruitment | 39 | 4 | 6 | 3 | 52 |
| Automation Exposure | 18 | 17 | 9 | 5 | 50 |
| Job Displacement | 5 | 31 | 12 |  | 48 |
| Social Protection | 21 | 10 | 6 | 2 | 39 |
| Developer Productivity | 29 | 3 | 3 | 1 | 36 |
| Worker Turnover | 10 | 12 | 3 |  | 25 |
| Skill Obsolescence | 3 | 19 | 2 |  | 24 |
| Creative Output | 15 | 5 | 3 | 1 | 24 |
| Labor Share of Income | 10 | 4 | 9 |  | 23 |
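As a quick worked example of reading the matrix, the sketch below (illustrative Python; the row values are copied from the table, while the data structure and helper name are invented here) computes the share of positive findings for two outcome rows.

```python
# Minimal sketch: compute the share of positive findings per outcome row of
# the evidence matrix above. Row values are copied from the table; the dict
# layout and helper name are illustrative, not part of the site.
rows = {
    "Firm Productivity":  {"positive": 277, "negative": 34,  "mixed": 68, "null": 10, "total": 394},
    "AI Safety & Ethics": {"positive": 117, "negative": 177, "mixed": 44, "null": 24, "total": 364},
}

def positive_share(row: dict) -> float:
    """Share of claims in a row whose direction of finding is positive."""
    return row["positive"] / row["total"]

for name, row in rows.items():
    print(f"{name}: {positive_share(row):.1%} positive")
# Firm Productivity comes out near 70% positive; AI Safety & Ethics near 32%.
```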
Active filter: Governance
Standards and open interoperability reduce vendor lock‑in and transaction costs, widening market access and competition for AI services built on DT data.
Economic reasoning and thematic findings from the literature linking interoperability to reduced transaction costs and broader market participation.
medium positive Digital Twins Across the Asset Lifecycle: Technical, Organis... transaction costs, market access/competition for AI services
Public procurement and large asset owners can act as demand‑pulls to de‑risk early investment and help set standards for DT adoption.
Policy recommendation and examples from literature arguing that large buyers can catalyse adoption; based on case/policy studies in the review.
medium positive Digital Twins Across the Asset Lifecycle: Technical, Organis... effect of public procurement/large owners on adoption and standardisation
Better data continuity across lifecycle phases reduces model training friction and increases the value of historical data for forecasting and causal analysis.
Conceptual argument supported by case evidence in the review showing fragmented data reduces reusability; authors infer benefits for AI training and forecasting.
medium positive Digital Twins Across the Asset Lifecycle: Technical, Organis... model training friction / forecasting value of historical data
DTs generate continuous, high‑resolution operational data (IoT telemetry, usage patterns, maintenance logs) that can substantially improve AI models for predictive maintenance, scheduling, energy optimisation, and logistics.
Logical implication and examples from pilot studies in the review showing richer telemetry and operational datasets produced by DT pilots; argued benefits for AI model inputs.
medium positive Digital Twins Across the Asset Lifecycle: Technical, Organis... AI model performance or potential improvement via richer data inputs
Three core differences by which DTs extend BIM: (1) bidirectional automated physical↔digital data exchange; (2) integration of heterogeneous, real‑time sources (IoT, operational systems); (3) lifecycle continuity preserving data across handovers.
Conceptual synthesis across the literature reviewed (conceptual papers, case studies, pilots) identifying functional distinctions between DT and BIM.
medium positive Digital Twins Across the Asset Lifecycle: Technical, Organis... functional capabilities/features distinguishing DT from BIM
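To make the three distinctions above concrete, here is a minimal sketch of a digital-twin asset record, assuming a hypothetical DigitalTwinAsset class; none of the names, fields, or values come from the reviewed paper.

```python
# Minimal sketch of the three DT-over-BIM capabilities named in the entry
# above: bidirectional physical<->digital exchange, heterogeneous real-time
# sources, and lifecycle continuity. All names are illustrative assumptions.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class DigitalTwinAsset:
    asset_id: str
    lifecycle_phase: str = "design"                        # design -> construction -> operation
    telemetry: list[dict] = field(default_factory=list)    # heterogeneous real-time sources
    history: list[dict] = field(default_factory=list)      # data preserved across handovers
    actuate: Callable[[dict], None] = print                # channel back to the physical asset

    def ingest(self, reading: dict) -> None:
        """Physical -> digital: store IoT/operational readings."""
        self.telemetry.append(reading)
        self.history.append({"phase": self.lifecycle_phase, **reading})

    def push_setpoint(self, setpoint: dict) -> None:
        """Digital -> physical: automated command back to the asset."""
        self.actuate(setpoint)

    def handover(self, next_phase: str) -> None:
        """Lifecycle continuity: the same record carries into the next phase."""
        self.lifecycle_phase = next_phase

twin = DigitalTwinAsset("pump-07")
twin.ingest({"source": "iot", "vibration_mm_s": 4.2})
twin.handover("operation")
twin.push_setpoint({"target_rpm": 1450})
```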
Digital twin (DT) technology can materially improve construction lifecycle performance beyond what Building Information Modelling (BIM) delivers.
Synthesis of 160 reviewed studies including conceptual papers, case studies and pilot deployments reporting performance improvements attributed to DT implementations.
medium positive Digital Twins Across the Asset Lifecycle: Technical, Organis... construction lifecycle performance (overall)
AI/ML–based credit scoring and alternative‑data underwriting reduce information asymmetries, lowering search and monitoring costs and expanding effective credit supply to previously rejected MSMEs and startups.
Analytical argument supported by illustrative case examples and literature on machine‑learning underwriting; the paper notes limited causal identification and time‑sensitivity of fintech products.
medium positive Traditional vs. contemporary financing models for MSMEs and ... information asymmetry reduction, search/monitoring costs, credit supply expansio...
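A hedged sketch of alternative-data underwriting follows; the features (digital-payment volume, filing regularity, seller rating), the synthetic data, and the 0.5 cutoff are assumptions for illustration, not the paper's model or data.

```python
# Illustrative sketch (not the paper's model): scoring thin-file MSME
# applicants with a logistic regression over alternative-data features.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
# Alternative-data features (all synthetic): monthly digital-payment volume,
# tax-filing regularity, and platform seller rating.
X = np.column_stack([
    rng.lognormal(10, 1, n),      # payment volume
    rng.uniform(0, 1, n),         # filing regularity
    rng.uniform(1, 5, n),         # seller rating
])
# Synthetic repayment outcome loosely tied to the features.
p = 1 / (1 + np.exp(-(-3 + 0.2 * np.log(X[:, 0]) + 2 * X[:, 1] + 0.3 * X[:, 2])))
y = rng.binomial(1, p)

model = LogisticRegression(max_iter=1000).fit(X, y)
scores = model.predict_proba(X)[:, 1]    # estimated repayment probability
approved = scores > 0.5                  # a simple underwriting cutoff
print(f"approval rate: {approved.mean():.0%}")
```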
Government action (digital ID, payments rails, credit guarantees, standards, consumer protection) is vital to enable beneficial outcomes from digital finance for MSMEs.
Policy synthesis and comparative evaluation recommending government infrastructure and regulatory measures; conclusion based on institutional analysis rather than experimental evidence.
medium positive Traditional vs. contemporary financing models for MSMEs and ... effectiveness of digital finance ecosystem (enabled by infrastructure and policy...
Case studies indicate FinTech platforms have meaningfully lowered rejection rates and loan turnaround times for underbanked MSMEs, accelerating working‑capital access.
Illustrative case studies of FinTech deployments in India reporting lower rejection rates and faster approvals; the paper explicitly notes these cases are illustrative, not nationally representative, and do not establish causal identification.
medium positive Traditional vs. contemporary financing models for MSMEs and ... loan rejection rate, loan turnaround time, working‑capital access
Supply‑chain financing can meaningfully unlock working capital for MSMEs by leveraging buyer creditworthiness, yielding high impact for MSMEs embedded in modern supply chains.
Comparative evaluation and illustrative case studies highlighting supply‑chain finance deployments; evidence is demonstrative and not nationally representative or causally identified.
medium positive Traditional vs. contemporary financing models for MSMEs and ... working capital availability for MSMEs, impact magnitude for supply‑chain‑embedd...
Optimal financing outcomes generally come from hybrid approaches that combine formal banking credibility and policy support with FinTech speed and data-driven underwriting.
Comparative evaluation and policy synthesis recommending co‑lending, credit guarantees, and partnerships (banks as liquidity providers combined with FinTech underwriting); based on qualitative tradeoff analysis rather than experimental/causal evidence.
medium positive Traditional vs. contemporary financing models for MSMEs and ... overall financing outcomes (access, cost, risk mitigation)
Compared with traditional bank loans and government schemes, contemporary financing models tend to be faster, more flexible, and more scalable for smaller firms.
Comparative qualitative evaluation across five variables and illustrative case studies showing reduced loan turnaround times and improved accessibility for small firms; no nationally representative sample or causal inference provided.
medium positive Traditional vs. contemporary financing models for MSMEs and ... loan turnaround time, flexibility of repayment, scalability to small firms
Digital technologies — especially FinTech lending platforms, alternative debt/equity products, supply‑chain finance, crowdfunding, and emerging blockchain applications — are materially expanding timely access to capital for Indian MSMEs and startups.
Multi‑criteria comparative evaluation (accessibility, finance cost, flexibility, risk, scalability) plus illustrative case studies of FinTech and alternative financing deployments in India that report faster turnaround and inclusion effects. The paper notes case evidence is illustrative rather than nationally representative and lacks quantitative causal identification.
medium positive Traditional vs. contemporary financing models for MSMEs and ... timely access to capital (availability and speed of financing for MSMEs/startups...
Regulating algorithmic transparency, data practices, and truthful sustainability claims is important to preserve digital trust and efficient market outcomes.
Policy recommendations and economic reasoning presented in the paper; grounded in literature on algorithmic governance and consumer trust; not empirically validated within the paper.
medium positive Sustainable Marketing Framework for Strengthening Consumer T... digital trust; market efficiency; regulatory compliance
Network effects from social proof (reviews, UGC) can create winner-takes-most dynamics, advantaging destinations with stronger digital signals and creating visibility frictions for small/emerging destinations.
Theoretical argument drawing on platform/network effects literature and applied to tourism/social proof; paper cites social-proof constructs and suggests measurement via platform data.
medium positive Sustainable Marketing Framework for Strengthening Consumer T... visibility; market concentration; destination attractiveness
Proprietary experimental datasets and curated metagenomic sequences become valuable intellectual assets that can differentiate commercial offerings.
Paper lists 'Data as an economic asset' and highlights the value of proprietary datasets and curated metagenomes; no market valuation data are included.
medium positive Protein structure prediction powered by artificial intellige... commercial value attributed to proprietary sequence/structure datasets and their...
Faster, cheaper access to structural hypotheses can shorten drug and enzyme discovery cycles, raising R&D productivity and lowering marginal costs of early‑stage screening.
Paper argues this as an implication under 'Productivity and R&D acceleration'; it is presented as an economic consequence rather than demonstrated with empirical cost‑ or time‑saving data in the text.
medium positive Protein structure prediction powered by artificial intellige... duration and cost of early‑stage drug/enzyme discovery cycles and marginal cost ...
Practical applications are already emerging, including accelerating target structure availability for small‑molecule and biologics design, guiding enzyme redesign, and interpreting disease mutations.
Paper lists these application areas as emerging uses of AI‑predicted structures; evidence is presented as examples and implications rather than empirical case studies within the text.
medium positive Protein structure prediction powered by artificial intellige... availability of structural hypotheses for drug/biology design, utility in enzyme...
Template‑and‑MSA informed architectures (e.g., RoseTTAFold and AlphaFold family) deliver near‑experimental accuracy for many proteins.
Paper names these architectures and links their inputs (MSAs, templates) to high accuracy against experimental structures (PDB); specific evaluation datasets, protein counts, or error metrics are not enumerated in the text.
medium positive Protein structure prediction powered by artificial intellige... fraction of proteins for which prediction accuracy is near experimental (structu...
Modern AI systems (e.g., AlphaFold variants, RoseTTAFold, single‑sequence models like ESMFold) can approach or reach near‑experimental accuracy while greatly increasing speed and scalability.
Paper cites specific models (AlphaFold family, RoseTTAFold, ESMFold) and describes benchmarking against structural ground truth (PDB / curated experimental structures) and large‑scale pretraining; exact benchmark values or sample sizes are not specified in the text.
medium positive Protein structure prediction powered by artificial intellige... structure prediction accuracy (compared to experimental structures) and inferenc...
Policy levers such as requiring third-party audits, setting interoperability/data standards, subsidizing vetted tools, and investing in formative/performance assessment can align AI-enabled tools with public-interest goals in education.
Policy analysis and recommendations synthesized from assessment theory, comparative case studies, and literature on algorithmic governance; prescriptive (not empirically validated within the paper).
medium positive The Future of Assessment: Rethinking Evaluation in an AI-Ass... policy adoption effects on assessment trustworthiness, equity, and alignment
AI supports new forms of formative feedback and personalization that could be used to improve learning measurement.
Synthesis of literature on adaptive learning systems and formative assessment; examples discussed in country case studies based on policy and secondary sources.
medium positive The Future of Assessment: Rethinking Evaluation in an AI-Ass... quality/effectiveness of formative feedback and personalization
New economic metrics are needed for VR (value of behavioral data streams, cost per reduction in harm, ROI on security investments, welfare metrics capturing trust and adoption).
Authors' recommendations based on identified gaps in the literature and the comparative review of 31 studies; proposed as agenda items rather than empirically developed metrics.
medium positive Securing Virtual Reality: Threat Models, Vulnerabilities, an... availability and use of new economic metrics for VR security and privacy (recomm...
VR generates high‑value behavioral and biometric datasets for AI personalization, training, and analytics; firms that extract this data can gain competitive advantages, creating incentives to centralize collection unless counteracted by policy or market forces.
Economic implications inferred by the authors from the literature synthesis and standard industrial‑organization logic; not supported by original empirical market data in the paper.
medium positive Securing Virtual Reality: Threat Models, Vulnerabilities, an... incentives for data centralization and resulting competitive advantage (conceptu...
There is a need for regulatory standards, industry best practices, and ethics‑by‑design approaches; interoperable policy frameworks are recommended to govern VR security and privacy.
Policy and governance recommendations synthesized from multiple reviewed studies and the authors' integration; presented as prescriptive guidance rather than empirically tested interventions.
medium positive Securing Virtual Reality: Threat Models, Vulnerabilities, an... adoption of regulatory/standards frameworks and their expected effect on privacy...
An effective defense mix for VR combines technical controls (secure boot, attestation, encrypted communications), AI tools for anomaly detection and policy enforcement, and human‑centered design (transparency, consent, usable controls).
Cross‑study synthesis showing these categories recur as recommended controls in the 31 reviewed papers; authors propose combining them in TVR‑Sec. No deployment or performance metrics provided.
medium positive Securing Virtual Reality: Threat Models, Vulnerabilities, an... overall defense effectiveness from combined controls (theoretical/proposed)
Socio‑Behavioral Safety measures (moderation, design constraints, psycho‑social safeguards) are necessary to prevent harassment, persuasion, addictive interfaces, and other psychological harms in shared virtual spaces.
Qualitative synthesis of social‑behavioral harms and proposed mitigations reported across the literature review (31 studies); comparative evaluation of socio‑technical controls.
medium positive Securing Virtual Reality: Threat Models, Vulnerabilities, an... incidence or severity of harassment/manipulation/psychological harms (identified...
User Privacy in VR requires managing highly sensitive behavioral and biometric traces with privacy‑preserving ML approaches (e.g., federated learning, differential privacy), consent mechanisms, and data minimization.
Repeated recommendations across the reviewed studies; authors synthesized privacy‑preserving technical approaches and governance mechanisms from the 31‑study corpus. No primary experiments demonstrating efficacy provided.
medium positive Securing Virtual Reality: Threat Models, Vulnerabilities, an... reduction in privacy risk for behavioral/biometric data (proposed, not empirical...
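One of the named approaches, differential privacy, can be sketched as a Laplace-mechanism release of an aggregate behavioral statistic; the telemetry values, bounds, and epsilon below are assumptions, not results from the reviewed studies.

```python
# Minimal sketch of a privacy-preserving release of VR behavioral telemetry:
# a differentially private mean via the Laplace mechanism. Values are synthetic.
import numpy as np

rng = np.random.default_rng(1)
head_speeds = rng.uniform(0.0, 2.0, size=200)   # per-user mean head speed (m/s), synthetic

def dp_mean(values: np.ndarray, lower: float, upper: float, epsilon: float) -> float:
    """Differentially private mean via the Laplace mechanism.

    The sensitivity of the mean of n values bounded in [lower, upper] is
    (upper - lower) / n; Laplace noise is scaled to sensitivity / epsilon.
    """
    clipped = np.clip(values, lower, upper)      # data minimization: bound each contribution
    sensitivity = (upper - lower) / len(clipped)
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return float(clipped.mean() + noise)

print(f"raw mean: {head_speeds.mean():.3f}, DP mean (eps=1): {dp_mean(head_speeds, 0.0, 2.0, 1.0):.3f}")
```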
System Integrity defenses should cover hardware, firmware, sensors, and networks to protect against spoofing, device tampering, malware, and supply‑chain attacks.
Aggregated technical recommendations from the literature corpus (31 studies) and the authors' mapping of integrity threats to controls (secure boot, attestation, encrypted communications). No empirical testing of these controls in the paper.
medium positive Securing Virtual Reality: Threat Models, Vulnerabilities, an... coverage of integrity‑related threat mitigation (conceptual)
The Three‑Layer VR Security Framework (TVR‑Sec) integrates System Integrity, User Privacy, and Socio‑Behavioral Safety into an adaptive, multidimensional defense architecture for VR systems.
Conceptual synthesis developed by the authors from a comparative literature review of 31 peer‑reviewed studies (2023–2025); framework created by mapping identified vulnerabilities to technical, AI, and human‑centered controls. No empirical validation or deployment testing reported.
medium positive Securing Virtual Reality: Threat Models, Vulnerabilities, an... proposed comprehensiveness/coverage of VR security defenses (conceptual architec...
A coordinated Omnibus that clarifies interactions with the DSA and establishes consistent AI-focused enforcement capacity can reduce regulatory frictions, lower compliance costs, and better align incentives for responsible AI deployment.
Policy recommendation based on comparative mapping and scenario analysis; qualitative argumentation rather than empirical testing.
medium positive The Digital Omnibus and the Future of EU Regulation: Implica... regulatory frictions; compliance costs; incentives for responsible AI deployment
Policy and platform design choices (e.g., provenance metadata, detection/disclosure of AI-generated content, monetization rule alignment) can reinforce or mitigate harms from GenAI-driven creator economies.
Policy recommendations and implications drawn from the qualitative findings across the 377-video sample and normative reasoning; not empirically tested.
medium positive Monetizing Generative AI: YouTubers' Collective Knowledge on... potential mitigation or amplification of harms via platform and policy intervent...
Participants systematically over-bid for privacy-disclosure labels: they were willing to pay more for a privacy-disclosure label than its objective value.
In the experiment (N = 610) participants submitted willingness-to-pay bids for privacy-disclosure labels; observed bids exceeded the objective/reference value, indicating overpayment for transparency information.
medium positive The Data-Dollars Tradeoff: Privacy Harms vs. Economic Risk i... Willingness-to-pay / bidding amounts for privacy-disclosure labels
Policy interventions that raise the reinstatement rate — for example, compensation/transfers to translate AI gains into broad-based purchasing power, faster/stronger fiscal support or automatic stabilizers — can prevent the explosive feedback and stabilize demand.
Model experiments and sensitivity analysis showing that increasing the reinstatement elasticity or direct transfers moves the system from explosive to convergent parameter regions in the calibrated phase-space.
medium positive Abundant Intelligence and Deficient Demand: A Macro-Financia... reinstatement rate, aggregate demand, avoidance of explosive crisis (regime outc...
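A toy recursion, not the paper's calibrated model, illustrates the regime switch the sensitivity analysis describes; the functional form c(r) = 1.1 - 0.3r and all numbers below are assumptions chosen only to show explosive versus convergent behavior.

```python
# Toy illustration: x_t is the aggregate demand shortfall and r is a
# "reinstatement rate" that recycles AI gains into purchasing power.
def shortfall_path(r: float, periods: int = 30, x0: float = 1.0) -> list[float]:
    """x_{t+1} = c(r) * x_t with c(r) = 1.1 - 0.3*r.

    If c > 1 the shortfall compounds (explosive regime); if c < 1 it decays
    back toward zero (convergent regime).
    """
    c = 1.1 - 0.3 * r
    path = [x0]
    for _ in range(periods):
        path.append(c * path[-1])
    return path

print(f"low reinstatement (r=0.1): shortfall after 30 periods = {shortfall_path(0.1)[-1]:.2f}")
print(f"high reinstatement (r=0.9): shortfall after 30 periods = {shortfall_path(0.9)[-1]:.2f}")
```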
The iterative, human-in-the-loop agent workflow enables evaluation and refinement of algorithmic clusters into logically consistent, theory-ready categories.
Described iterative loop in which agents evaluate clusters, align semantics, and refine outputs; qualitative assessments are reported, though no formal user-study metrics are included in the summary.
medium positive THETA: A Textual Hybrid Embedding-based Topic Analysis Frame... logical consistency and theory-readiness of resulting topic categories
DAFT via LoRA reshapes semantic vector geometry to highlight domain-relevant distinctions without full model retraining.
Methodological claim: LoRA fine-tuning is applied to foundation embeddings to adjust the vector space; no geometric analyses or quantitative illustrations are provided in the summary.
medium positive THETA: A Textual Hybrid Embedding-based Topic Analysis Frame... changes in semantic vector geometry / enhanced separation of domain-relevant con...
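A minimal numpy sketch of the LoRA update W' = W + (alpha/r)·BA follows; the dimensions, matrices, and example embeddings are invented here rather than taken from THETA, and the adapters are left random rather than trained.

```python
# Minimal sketch of the LoRA idea referenced above: a frozen projection W is
# adjusted by a low-rank update B @ A so domain-relevant directions can be
# emphasized without retraining the full model.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, rank, alpha = 64, 64, 4, 8.0

W = rng.normal(size=(d_out, d_in))          # frozen base projection
A = rng.normal(size=(rank, d_in)) * 0.01    # trainable low-rank factors
B = rng.normal(size=(d_out, rank)) * 0.01
W_adapted = W + (alpha / rank) * (B @ A)    # LoRA: W' = W + (alpha/r) * B A

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

x1, x2 = rng.normal(size=d_in), rng.normal(size=d_in)   # two "document" embeddings
print("similarity under W:      ", round(cosine(W @ x1, W @ x2), 3))
print("similarity under W + BA: ", round(cosine(W_adapted @ x1, W_adapted @ x2), 3))
# In actual domain-adaptive fine-tuning, B and A would be learned so the
# adapted geometry separates domain-relevant concepts; here they are random,
# so the shift is only nominal.
```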
Across six domains THETA outperforms LDA, ETM, and CTM on measures of coherence and domain interpretability.
Reported comparative experiments across six domains using coherence metrics and qualitative/human interpretability assessments against LDA, ETM, CTM. Summary does not provide effect sizes, statistical tests, or per-domain breakdowns.
medium positive THETA: A Textual Hybrid Embedding-based Topic Analysis Frame... topic coherence scores and human-rated domain interpretability
THETA substantially improves the interpretability and domain-specific coherence of topic/cluster outputs on very large social-text corpora.
Reported experiments comparing THETA to traditional topic models (LDA, ETM, CTM) across six domains; evaluation reportedly used topic coherence metrics and human-in-the-loop interpretability assessments/qualitative comparisons (no numeric results provided in summary).
medium positive THETA: A Textual Hybrid Embedding-based Topic Analysis Frame... topic interpretability and domain-specific topic coherence
Empirical calibrations from Moltbook (formulaic fraction ≈56%, self-reflective posting bias, emotion alignment ≈32.7%) can serve as baseline parameters for economic models and stress-testing market designs that include AI-agent communication.
Reported quantitative metrics from the Moltbook dataset (formulaic comment rate, self-referential topic share vs posting volume, emotional alignment percentages) proposed for use in modeling.
medium positive What Do AI Agents Talk About? Emergent Communication Structu... numerical calibration parameters: formulaic fraction, self-reflective volume bia...
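As an illustration of treating these figures as baseline parameters, the sketch below draws synthetic comment traffic from the two reported rates; the parameter names, generator logic, and sample size are assumptions, and the qualitative self-reflective posting bias is omitted because no rate is given for it above.

```python
# Sketch of seeding a simple simulation of AI-agent comment traffic with the
# Moltbook figures quoted in the claim above. Only the two numeric rates are
# used; everything else is invented for illustration.
import random
from dataclasses import dataclass

@dataclass
class AgentDiscourseParams:
    formulaic_fraction: float = 0.56    # ~56% of comments are formulaic (reported)
    emotion_alignment: float = 0.327    # ~32.7% emotion alignment (reported)

def simulate_comments(params: AgentDiscourseParams, n: int = 10_000, seed: int = 0) -> dict:
    rng = random.Random(seed)
    formulaic = sum(rng.random() < params.formulaic_fraction for _ in range(n))
    aligned = sum(rng.random() < params.emotion_alignment for _ in range(n))
    return {"formulaic_share": formulaic / n, "emotion_aligned_share": aligned / n}

print(simulate_comments(AgentDiscourseParams()))
```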
Fear is the leading non-neutral emotion category in agent discourse, primarily reflecting existential uncertainty.
Emotion classification applied to posts and comments (labels: neutral, fear, joy, etc.) across the dataset (~361k posts, ~2.8M comments); frequency counts showing fear as the most frequent non-neutral label; qualitative/lexical inspection indicating existential uncertainty themes.
medium positive What Do AI Agents Talk About? Emergent Communication Structu... frequency (%) of emotion categories; qualitative characterization of fear conten...
For economic and policy analysis, researchers should estimate distributions of effects, account for dynamic adaptation/nonstationarity, pre-register plans, track model versions, and combine RCTs with longitudinal/observational/structural methods.
Implications and recommendations section synthesized from practitioner interviews (n=16) and authors' applied methodological reasoning.
medium positive RCTs & Human Uplift Studies: Methodological Challenges and P... recommended research practices for economically meaningful inference about AI up...
High-stakes deployment, governance, and safety decisions should not rely on single uplift RCTs; they require synthesis across studies, ongoing monitoring, scenario analysis, and explicit uncertainty characterization.
Authors' recommendations drawn from thematic analysis of interview data (n=16) and the mapped validity consequences; policy implications section articulates this guidance.
medium positive RCTs & Human Uplift Studies: Methodological Challenges and P... reliability of decision-making based on uplift evidence
The paper's mechanism is strategyproof at an epoch granularity under its assumptions (quasilinear utilities, discrete slice items, decision epochs).
Theoretical mechanism-design claim presented in the paper relying on stated assumptions (quasilinear utility, discrete slices, epoch-based decisions). Empirical simulations assume truthful bidding per epoch consistent with this property but do not evaluate inter-epoch strategic deviations.
medium positive Real-Time AI Service Economy: A Framework for Agentic Comput... incentive compatibility per epoch (absence of profitable misreports within an ep...
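The paper's own mechanism is not reproduced here; as a generic illustration of per-epoch strategyproofness under quasilinear utility, the sketch below runs a sealed-bid second-price allocation of one compute slice per epoch, with bidder names and values invented for the example.

```python
# Generic illustration (not the paper's exact mechanism): within one decision
# epoch, second-price allocation of a single slice makes truthful bidding a
# weakly dominant strategy for agents with quasilinear utility.
def second_price_epoch(bids: dict[str, float]) -> tuple[str, float]:
    """Allocate one slice for this epoch; the winner pays the second-highest bid."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]
    price = ranked[1][1] if len(ranked) > 1 else 0.0
    return winner, price

true_values = {"agent_a": 9.0, "agent_b": 6.0, "agent_c": 4.0}

# Truthful report: agent_a wins and pays 6, earning quasilinear utility 9 - 6 = 3.
print(second_price_epoch(true_values))

# Misreporting cannot help agent_a within the epoch: shading below 6 loses the
# slice (utility 0), while any bid above 6 still wins at the same price of 6.
print(second_price_epoch({**true_values, "agent_a": 5.0}))
```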
Lowering fixed costs via shared resources can enable more entrants and niche innovators (e.g., specialized clinical apps).
Economic implications asserted by participants in breakout sessions and plenary discussion at the NSF workshop (Sept 26–27, 2024).
medium positive Report for NSF Workshop on Algorithm-Hardware Co-design for ... number of market entrants, emergence of niche products, diversity of suppliers
Public investment in shared data and compute as nonrival public goods will reduce duplication, lower entry barriers, and increase total R&D productivity.
Workshop implications for AI economics articulated by participants and authors as a policy recommendation; rationale stated in the summary document (NSF workshop, Sept 26–27, 2024).
medium positive Report for NSF Workshop on Algorithm-Hardware Co-design for ... duplication of effort, entry barriers (number of entrants), and aggregate R&D pr...
De-risk pathways from lab to clinic via reproducible benchmarks, continuous monitoring, and cross-sector collaborations (academia, industry, clinicians, regulators).
Workshop translation-focused recommendations and roadmap produced by consensus at the NSF workshop (Sept 26–27, 2024).
medium positive Report for NSF Workshop on Algorithm-Hardware Co-design for ... time-to-market, reproducibility metrics, and rate of successful clinical transla...
Enable safe, accountable, and resilient platforms (including virtual–physical healthcare ecosystems) to reduce translational risk.
Workshop recommendations addressing safety, resilience, and virtual–physical ecosystems from cross-disciplinary discussion at NSF workshop (Sept 26–27, 2024).
medium positive Report for NSF Workshop on Algorithm-Hardware Co-design for ... measures of translational risk (failure rates in translation, incidents, safety ...
Promote scalable validation ecosystems grounded in objective, continuous measures and physics-informed models.
Workshop validation and safety theme recommendations from panels and consensus-building exercises (NSF workshop, Sept 26–27, 2024).
medium positive Report for NSF Workshop on Algorithm-Hardware Co-design for ... presence and scalability of validation ecosystems; reliability/robustness metric...
Develop clinic workflow–aware systems and human–AI collaboration frameworks to fit real clinical practice and decision chains.
Stated systems and workflows recommendation from expert panels and clinician participants at the NSF workshop (Sept 26–27, 2024).
medium positive Report for NSF Workshop on Algorithm-Hardware Co-design for ... compatibility of AI-enabled systems with clinical workflows; measures of clinici...
Build shared compute infrastructures tailored to medical workloads and validation needs.
Workshop recommendation from infrastructure-themed sessions and consensus outcomes (NSF workshop, Sept 26–27, 2024).
medium positive Report for NSF Workshop on Algorithm-Hardware Co-design for ... existence and utilization of shared compute infrastructure for medical R&D (comp...