Evidence (2215 claims)
- Adoption: 5126 claims
- Productivity: 4409 claims
- Governance: 4049 claims
- Human-AI Collaboration: 2954 claims
- Labor Markets: 2432 claims
- Org Design: 2273 claims
- Innovation: 2215 claims
- Skills & Training: 1902 claims
- Inequality: 1286 claims
Evidence Matrix
Claim counts by outcome category and direction of finding.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 369 | 105 | 58 | 432 | 972 |
| Governance & Regulation | 365 | 171 | 113 | 54 | 713 |
| Research Productivity | 229 | 95 | 33 | 294 | 655 |
| Organizational Efficiency | 354 | 82 | 58 | 34 | 531 |
| Technology Adoption Rate | 277 | 115 | 63 | 27 | 486 |
| Firm Productivity | 273 | 33 | 68 | 10 | 389 |
| AI Safety & Ethics | 112 | 177 | 43 | 24 | 358 |
| Output Quality | 228 | 61 | 23 | 25 | 337 |
| Market Structure | 105 | 118 | 81 | 14 | 323 |
| Decision Quality | 154 | 68 | 33 | 17 | 275 |
| Employment Level | 68 | 32 | 74 | 8 | 184 |
| Fiscal & Macroeconomic | 74 | 52 | 32 | 21 | 183 |
| Skill Acquisition | 85 | 31 | 38 | 9 | 163 |
| Firm Revenue | 96 | 30 | 22 | — | 148 |
| Innovation Output | 100 | 11 | 20 | 11 | 143 |
| Consumer Welfare | 66 | 29 | 35 | 7 | 137 |
| Regulatory Compliance | 51 | 61 | 13 | 3 | 128 |
| Inequality Measures | 24 | 66 | 31 | 4 | 125 |
| Task Allocation | 64 | 6 | 28 | 6 | 104 |
| Error Rate | 42 | 47 | 6 | — | 95 |
| Training Effectiveness | 55 | 12 | 10 | 16 | 93 |
| Worker Satisfaction | 42 | 32 | 11 | 6 | 91 |
| Task Completion Time | 71 | 5 | 3 | 1 | 80 |
| Wages & Compensation | 38 | 13 | 19 | 4 | 74 |
| Team Performance | 41 | 8 | 15 | 7 | 72 |
| Hiring & Recruitment | 39 | 4 | 6 | 3 | 52 |
| Automation Exposure | 17 | 15 | 9 | 5 | 46 |
| Job Displacement | 5 | 28 | 12 | — | 45 |
| Social Protection | 18 | 8 | 6 | 1 | 33 |
| Developer Productivity | 25 | 1 | 2 | 1 | 29 |
| Worker Turnover | 10 | 12 | — | 3 | 25 |
| Creative Output | 15 | 5 | 3 | 1 | 24 |
| Skill Obsolescence | 3 | 18 | 2 | — | 23 |
| Labor Share of Income | 7 | 4 | 9 | — | 20 |
Innovation
Generative AI is susceptible to social and representational biases and to factual errors or hallucinations; it lacks tacit, contextual domain expertise.
Documented examples in the literature of biased outputs and hallucinations; controlled evaluations and audits of model outputs; qualitative reports highlighting lack of tacit knowledge in domain-specific tasks.
The quality of AI-generated outputs is highly variable; models frequently produce mediocre but plausible-sounding content that requires human filtering.
Multiple user studies and qualitative reports documenting variability in output quality and the need for human curation; outcome measures include error rates, user-rated quality, and time spent vetting.
Algorithmic bias, unequal digital financial literacy, caregiving time constraints, and limited access to personalized solutions can sustain or reproduce gender investment gaps if not addressed.
Synthesis of literature on barriers to financial inclusion and AI fairness concerns, plus platform report observations (review of empirical and conceptual studies; not a single empirical test).
In some settings, women exhibit statistically greater risk aversion than men.
Summary of empirical survey and experimental studies on gender differences in risk attitudes discussed in the review (multiple cross‑sectional and lab/field experiments referenced).
Data privacy and cross-border compliance issues arise from using cloud and SECaaS, complicating legal compliance for firms.
Regulatory analyses and compliance reports; documented examples in case studies and industry guidance on cross-border data flows.
The cloud shared responsibility model creates potential ambiguities in liability between providers and customers.
Regulatory guidance, legal analyses, and documented post-incident case studies showing confusion over responsibilities.
China manages the openness–security trade-off through a centralized, developmentalist, techno‑sovereignty approach that privileges coordinated state direction and control.
Qualitative content analysis of national‑level policy texts: 18 Chinese policy documents coded across four analytical dimensions (coordination objectives, institutional actors, governance mechanisms, stakeholder legitimacy).
There is substantial uncertainty in economic forecasts due to possible scale-up failures, regulatory constraints, feedstock price volatility, and path‑dependent lock‑in effects.
Synthesis of technical failure modes, regulatory uncertainty, and sensitivity analyses reported in TEA/LCA literature and economic modeling sections of the review.
Regulatory and biosafety concerns (including environmental release risks and dual‑use issues) increase fixed costs and create entry barriers that shape industry structure and diffusion.
Policy and governance literature reviewed alongside technical case studies; citations of regulatory requirements, biosafety frameworks, and examples of compliance costs affecting project viability.
Engineering and economic challenges—scale‑up hurdles, process robustness, feedstock cost, and downstream purification—limit industrial deployment of many bio-based processes.
Case study TEA/LCA summaries and process reports in the review highlighting scale-up failures or increased costs at larger scales, purification complexity for low‑concentration products, and sensitivity to feedstock prices.
Technical biological limitations—metabolic burden, pathway crosstalk, byproduct formation, and genetic instability—remain major constraints on strain performance and scalability.
Multiple experimental reports and method papers cited in the review documenting decreased growth/productivity due to engineered pathway burden, unintended interactions between pathways, accumulation of byproducts, and genetic mutations during production runs.
The described pipeline is cross-sectional as presented and should be extended to dynamic models (temporal embeddings, change-point detection) for trend or causal analyses.
Method description in summary indicates cross-sectional pipeline; recommendation to extend for temporal/dynamic modeling when analyzing trends or causal effects.
LLMs and corpora may reflect disciplinary, geographic, or language biases; analyses should adjust or stratify accordingly.
Caveat explicitly stated in summary noting potential biases in LLMs and corpora; recommendation to adjust/stratify analyses.
Cluster reliability should be validated (e.g., bootstrap, perturbations) and automatic labels complemented with expert human validation for critical analyses.
Caveat and recommended validation steps provided in summary; suggests bootstrap/perturbation and manual validation as best practices. No empirical stability metrics provided in summary.
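The bootstrap check described above can be sketched in a few lines: recluster resampled data and measure pairwise co-assignment agreement with the full-data clustering. A minimal illustration assuming 2-D point embeddings and plain k-means, not the pipeline's actual implementation:

```python
import random

def dist2(a, b):
    """Squared Euclidean distance between two points."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, k, iters=50, seed=0):
    """Plain Lloyd's algorithm; returns one cluster label per point."""
    rng = random.Random(seed)
    # distinct initial centers (deduplicate in case of resampled duplicates)
    centers = rng.sample(sorted(set(points)), k)
    for _ in range(iters):
        labels = [min(range(k), key=lambda c: dist2(p, centers[c])) for p in points]
        for c in range(k):
            members = [p for p, lab in zip(points, labels) if lab == c]
            if members:  # keep old center if a cluster empties out
                centers[c] = tuple(sum(xs) / len(members) for xs in zip(*members))
    return [min(range(k), key=lambda c: dist2(p, centers[c])) for p in points]

def pairwise_agreement(l1, l2):
    """Rand-index-style score: fraction of point pairs on which two
    clusterings agree (same-cluster vs. different-cluster); label-permutation
    invariant."""
    n, same = 0, 0
    for i in range(len(l1)):
        for j in range(i + 1, len(l1)):
            n += 1
            if (l1[i] == l1[j]) == (l2[i] == l2[j]):
                same += 1
    return same / n

def bootstrap_stability(points, k, n_boot=20, seed=0):
    """Average agreement between the full-data clustering and clusterings
    refit on bootstrap resamples."""
    rng = random.Random(seed)
    base = kmeans(points, k)
    scores = []
    for b in range(n_boot):
        idx = [rng.randrange(len(points)) for _ in range(len(points))]
        boot_labels = kmeans([points[i] for i in idx], k, seed=b + 1)
        scores.append(pairwise_agreement([base[i] for i in idx], boot_labels))
    return sum(scores) / len(scores)
```

Scores near 1.0 indicate assignments that survive resampling; low scores flag clusters that should not be interpreted without expert validation.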
Results are sensitive to model and prompt choice; researchers should perform robustness checks across LLMs, soft prompts, and embedding models.
Caveat explicitly stated in the paper summary noting model and prompt sensitivity; recommended validation steps include robustness checks across models and prompts.
Higher complaint volume is significantly associated with near-term stock price declines.
Fixed-effects panel path models estimated on monthly data for 261 financial firms (2018–2023) report statistically significant negative associations between firm–month complaint volume and subsequent abnormal returns.
Consumer complaints—measured by monthly volume, topic composition, and VADER sentiment of complaint narratives—contain behavioral signals that predict short-term abnormal stock returns in U.S. financial firms.
CFPB complaint records matched to 261 publicly traded U.S. financial firms (monthly observations, 2018–2023); analyses use fixed-effects panel path models to link firm–month complaint features (volume, LDA topic prevalences, aggregated VADER sentiment) to firm-level abnormal returns; complementary machine-learning models evaluate out-of-sample predictive performance.
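The fixed-effects design can be illustrated with the within transformation at its core. A simplified sketch, assuming a balanced panel and a single regressor (the paper's path models, controls, and standard errors are not reproduced):

```python
def two_way_demean(values, firm, month):
    """Within transformation for two-way fixed effects: subtract firm means
    and month means, add back the grand mean (exact on a balanced panel)."""
    n = len(values)
    grand = sum(values) / n
    fmean = {f: sum(v for v, g in zip(values, firm) if g == f) /
                sum(1 for g in firm if g == f) for f in set(firm)}
    mmean = {m: sum(v for v, g in zip(values, month) if g == m) /
                sum(1 for g in month if g == m) for m in set(month)}
    return [v - fmean[f] - mmean[m] + grand
            for v, f, m in zip(values, firm, month)]

def fe_slope(y, x, firm, month):
    """Within-estimator slope of y on x with firm and month fixed effects."""
    yd = two_way_demean(y, firm, month)
    xd = two_way_demean(x, firm, month)
    sxx = sum(v * v for v in xd)
    sxy = sum(a * b for a, b in zip(xd, yd))
    return sxy / sxx
```

On a balanced panel the transformation removes any additive firm and month effects exactly, so the slope on the demeaned data recovers the within coefficient.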
Federated infrastructures introduce adversarial risks (model/data poisoning, inference attacks on updates) that require robust aggregation, anomaly detection, and other defenses.
Threat modeling and taxonomy of adversarial/privacy threats with mapped mitigations (robust aggregation, anomaly detection, DP). Evidence is conceptual and based on standard threat frameworks; no empirical attack/defense experiments reported at scale.
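Robust aggregation, one of the mapped mitigations, is easy to illustrate: replacing the coordinate-wise mean with a coordinate-wise median bounds the influence of a poisoned update. A toy sketch with illustrative update vectors, not a production defense:

```python
from statistics import median

def mean_aggregate(updates):
    """Coordinate-wise mean (plain averaging): a single poisoned update can
    drag the aggregate arbitrarily far."""
    return [sum(col) / len(col) for col in zip(*updates)]

def median_aggregate(updates):
    """Coordinate-wise median, a standard robust aggregation rule that
    tolerates a minority of arbitrarily corrupted updates."""
    return [median(col) for col in zip(*updates)]
```

With one malicious client among five, the mean is hijacked while the median stays near the honest values.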
Delayed and sparse feedback (clicks/conversions) in advertising complicates credit assignment and timely model updates, degrading learning unless specific methods for delayed/sparse signals are used.
Analytical discussion of learning dynamics with delayed/sparse labels; conceptual solutions suggested (credit assignment methods). No large-scale empirical evaluation presented.
Non-IID and heterogeneous data distributions across devices and publishers impair convergence and degrade personalization unless addressed with algorithmic adaptations.
Analytical modeling of convergence under non-IID conditions; threat/robustness discussion; prototype/simulation illustrations. This claim is supported by established literature and the paper's analytic treatment.
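The client-drift intuition behind this claim can be shown with a toy FedAvg loop: two clients whose local optima differ (a simple form of non-IID data) pull the averaged model toward a compromise between them. A minimal sketch with a one-parameter linear model, not any system evaluated in the paper:

```python
def local_gd(w, data, lr=0.1, steps=50):
    """Full-batch gradient steps on one client's (x, y) pairs for a
    one-parameter linear model y ~ w * x under squared loss."""
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

def fedavg(clients, rounds=10, w0=0.0):
    """FedAvg: each round, clients train locally from the shared model and
    the server averages the results. With heterogeneous clients the average
    settles on a compromise between the client optima."""
    w = w0
    for _ in range(rounds):
        w = sum(local_gd(w, c) for c in clients) / len(clients)
    return w
```

Here one client's data implies slope 1 and the other's slope 3; plain averaging converges near 2, which is why non-IID settings call for algorithmic adaptations rather than naive averaging alone.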
VIS inherits the limitations of input–output assumptions (fixed coefficients, no price feedbacks); AI-driven structural change may violate those assumptions, so dynamic extensions or calibration are needed.
Paper explicitly cautions about input–output model limitations and the need for dynamic extensions/calibration under structural/technological change.
The paper treats data as a new type of production factor and endogenizes it within the production function.
Theoretical/methodological: the paper constructs a macro-level theoretical model that explicitly includes data as an endogenous input in the production function (no empirical/sample data).
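For intuition only (the paper's actual model is not reproduced here), a Cobb-Douglas form with data D as a third factor shows what treating data as a production factor means operationally: the output elasticity with respect to data equals its exponent.

```python
def output(A, K, L, D, alpha=0.3, beta=0.5, gamma=0.2):
    """Illustrative Cobb-Douglas production function with capital K, labor L,
    and data D as a third factor; all exponents are placeholder values."""
    return A * K**alpha * L**beta * D**gamma

def data_elasticity(K, L, D, eps=1e-6, **kw):
    """Numerical output elasticity with respect to data: d ln Y / d ln D,
    via a small forward difference."""
    y0 = output(1.0, K, L, D, **kw)
    y1 = output(1.0, K, L, D * (1 + eps), **kw)
    return (y1 - y0) / y0 / eps
```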
The paper's formalism shows that prompt/system messages shape distributions over possible execution paths (indirect control) but do not evaluate actual partial paths at runtime.
Formal mapping in the paper that treats prompts as shaping prior over paths; conceptual argument and illustrative examples.
These energy reductions are achieved without statistically significant performance loss.
Paper states that performance loss is not statistically significant across the evaluated benchmarks (as reported in the abstract).
The empirical analysis is based on A-share listed companies from 2015 to 2023.
Data description in the paper stating the study sample and time period (A-share listed firms, 2015–2023).
The study uses a mixed-methods approach combining qualitative insights from 1,500 semi-structured customer interviews with quantitative analysis of transaction records, loan repayment histories, and account activity.
Paper states methods explicitly in abstract: 1,500 semi-structured interviews plus quantitative analysis of transaction records, loan repayment histories, and account activity (case-study approach across three platforms).
Three interlocking threads characterize AI for science: (1) AI as research instrument, (2) AI for research infrastructure, and (3) the reshaping of scholarly profiles and incentives by machine-readable metrics.
Conceptual framework presented in the paper; organization of topics rather than empirical measurement. The paper indicates these threads are followed through historical and contemporary examples.
The history of artificial intelligence for scientific discovery is not a two-year story about chatbots learning to write papers; it is a sixty-year story beginning with DENDRAL (1965).
Historical narrative / literature review citing early systems such as DENDRAL (1965) and subsequent developments in scholarly infrastructure (arXiv, Google Scholar, ORCID). No empirical sample or statistical test reported.
At the macroeconomic level, Kazakhstan's state programs (e.g., 'Digital Kazakhstan' and the Industrial and Innovation Development Program) and international indices (WIPO Global Innovation Index, OECD digital assessments, IMF data) are used to evaluate and position Kazakhstan within the global digital economy.
Macro-level analysis using national programs and international indices described in the article to assess Kazakhstan's digital economy standing.
The paper is intentionally public-safe: it omits proprietary implementation details, training recipes, thresholds, hidden-state instrumentation, deployment procedures, and confidential system design choices, and therefore the contribution is theoretical rather than operational.
Statement about the paper's scope and publication choices; directly asserted by the authors regarding omitted content and the theoretical nature of the contribution.
The paper introduces a constraint-coupled reasoning framework with four elements: bounded transition burden, path-load accumulation, dynamically evolving feasible regions, and a capability-stability coupling condition.
Descriptive/theoretical: the paper explicitly defines and enumerates these four framework elements. This is a claim about the paper's content rather than an empirical finding.
The analysis uses data on 31 million users of Ctrip, China's largest online travel platform, to study "Wendao," an LLM-based AI assistant integrated into the platform.
Descriptive statement in the paper about data source: platform logs/usage data for Ctrip covering 31 million users and the Wendao assistant.
The top three platforms (Claude, ChatGPT, and DeepSeek) receive statistically indistinguishable satisfaction ratings despite vast differences in funding, team size, and benchmark performance.
Statistical comparison of self-reported satisfaction ratings collected via the paper's survey (overall N=388); statistical tests reported in paper (specific test and per-platform n not provided in abstract).
Afriat's theorem guarantees that demand satisfies the Generalized Axiom of Revealed Preference (GARP) if and only if it can be generated by maximizing some utility function subject to a budget constraint.
Theoretical claim citing Afriat's theorem (mathematical result used as foundational justification in the paper).
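Afriat's theorem makes GARP the operational test, and checking it on finite price-quantity data is mechanical: build the direct revealed-preference relation, take its transitive closure, and look for a strict reversal. A minimal sketch, not the paper's implementation:

```python
def dot(p, x):
    return sum(a * b for a, b in zip(p, x))

def satisfies_garp(prices, bundles):
    """GARP test on observations (p_i, x_i). x_i is directly revealed
    preferred to x_j when x_j was affordable at observation i
    (p_i . x_i >= p_i . x_j). GARP fails when x_i is (transitively) revealed
    preferred to x_j while x_i was strictly cheaper than x_j at prices p_j."""
    n = len(prices)
    # direct revealed preference relation
    R = [[dot(prices[i], bundles[i]) >= dot(prices[i], bundles[j])
          for j in range(n)] for i in range(n)]
    # transitive closure (Floyd-Warshall style)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                R[i][j] = R[i][j] or (R[i][k] and R[k][j])
    # look for a strict reversal
    for i in range(n):
        for j in range(n):
            if R[i][j] and dot(prices[j], bundles[j]) > dot(prices[j], bundles[i]):
                return False
    return True
```

By Afriat's theorem, data passing this check can be rationalized by some utility function; any strict reversal rules that out.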
We fine-tune Amazon Chronos-2, a transformer-based probabilistic time-series model, on synthetic data generated from utility-maximizing agents.
Methods described in the paper: authors report fine-tuning Chronos-2 on synthetically generated time series from utility-maximizing agents (methodological statement).
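The data-generation side of this setup can be sketched; the Chronos-2 fine-tuning itself is not reproduced, and since the excerpt does not state the paper's utility specification, Cobb-Douglas agents are assumed here because their demand has a closed form.

```python
import random

def cobb_douglas_demand(alphas, prices, income):
    """Demand of a Cobb-Douglas utility maximizer: spend the budget share
    alpha_i on good i, so x_i = alpha_i * income / p_i."""
    return [a * income / p for a, p in zip(alphas, prices)]

def synthetic_series(alphas, periods=100, seed=0):
    """Toy generator of training series: random prices and income each
    period, with demands from the closed-form utility-maximizing solution."""
    rng = random.Random(seed)
    series = []
    for _ in range(periods):
        prices = [rng.uniform(0.5, 2.0) for _ in alphas]
        income = rng.uniform(5.0, 15.0)
        series.append((prices, income, cobb_douglas_demand(alphas, prices, income)))
    return series
```

Every generated observation exhausts the budget by construction, so the resulting series are rationalizable, which is exactly the property the fine-tuned forecaster is meant to learn.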
We conduct an in-depth case study of SWE-bench GitHub issue resolution using two representative models, GPT-5 mini and DeepSeek v3.2.
Descriptive: authors report running an in-depth case study on the SWE-bench GitHub issue resolution dataset using two named models (GPT-5 mini and DeepSeek v3.2).
The authors construct a mean-reverting jump-diffusion stochastic process model and conduct Monte Carlo simulations to evaluate hedging efficiency of the proposed futures contracts.
Methodological claim: explicit description of the mathematical model (mean-reverting jump-diffusion) and simulation method (Monte Carlo) used in the paper.
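A mean-reverting jump-diffusion is straightforward to simulate with an Euler-Maruyama scheme; the paper's calibration and hedging-efficiency metrics are not reproduced, so all parameters below are placeholders.

```python
import math
import random

def simulate_mrjd(x0, kappa, theta, sigma, jump_rate, jump_scale,
                  T=1.0, n_steps=252, seed=0):
    """Euler-Maruyama path of dX = kappa*(theta - X) dt + sigma dW + J dN,
    with Poisson jump arrivals (intensity jump_rate, valid when
    jump_rate * dt << 1) and zero-mean Gaussian jump sizes."""
    rng = random.Random(seed)
    dt = T / n_steps
    x, path = x0, [x0]
    for _ in range(n_steps):
        jump = rng.gauss(0.0, jump_scale) if rng.random() < jump_rate * dt else 0.0
        x += kappa * (theta - x) * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0) + jump
        path.append(x)
    return path

def mc_terminal_mean(n_paths=500, **kw):
    """Monte Carlo average of the terminal value across simulated paths."""
    return sum(simulate_mrjd(seed=i, **kw)[-1] for i in range(n_paths)) / n_paths
```

Starting at the long-run level theta with zero-mean jumps, the Monte Carlo terminal mean stays near theta, a quick sanity check before using such paths to score hedging strategies.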
The paper proposes an original 'Revenue-Sharing as Infrastructure' (RSI) model in which the platform offers its AI infrastructure for free and takes a percentage of the revenues generated by developers' applications, reversing the traditional upstream payment logic.
Theoretical model proposal and conceptual description in the paper; presented as original contribution (no empirical implementation reported).
Recent literature distinguishes three generations of business models: a first generation modeled on cloud computing (pay-per-use), a second characterized by diversification (freemium, subscriptions), and a third, emerging generation exploring multi-layer market architectures with revenue-sharing mechanisms.
Literature review and conceptual synthesis presented in the paper; no empirical study or sample reported.
This research deepens theoretical understanding by integrating CE principles, Industry 4.0 architectures, green innovation theory, and lifecycle assessment into a unified conceptual framework.
Authors' description of theoretical contribution in the abstract, based on their synthesis of the bibliometric and systematic review findings.
This study offers the first comprehensive mixed-methods assessment of how AI transforms industrial production ecosystems in the post-ChatGPT era.
Authors' methodological/novelty claim in the abstract; supported by description of methods (bibliometric analysis of 196 articles and systematic review of 104 studies).
We construct a multidimensional energy justice index to analyze AI’s net effects, pathways, and institutional dependencies.
Methodological statement: authors create an energy justice index (multidimensional) used as dependent variable in empirical analysis.
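Composite-index construction typically follows a normalize-then-aggregate recipe. Since the paper's exact construction is not given in the excerpt, here is a generic min-max-plus-weights sketch of a multidimensional index:

```python
def minmax(values):
    """Rescale one indicator column to [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def composite_index(indicators, weights):
    """Weighted average of min-max-normalized indicator columns, a common
    recipe for multidimensional composite indices. `indicators` is a list of
    columns (one per dimension), each with one value per unit."""
    norm = [minmax(col) for col in indicators]
    n = len(indicators[0])
    return [sum(w * norm[k][i] for k, w in enumerate(weights)) for i in range(n)]
```

The weights encode how much each justice dimension counts; sensitivity of results to the weighting scheme is the usual robustness check for such indices.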
This study uses a panel dataset for 30 Chinese provinces from 2008 to 2022.
Statement of dataset coverage in the paper: 30 provinces, years 2008–2022 (panel data).
This study uses a mixed-method research design combining quantitative ROI modelling and cost–benefit analysis, qualitative synthesis of secondary enterprise case studies, and architectural analysis of Azure-native GenAI services.
Explicit methodological description in the abstract of the paper.
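The quantitative ROI and cost-benefit leg of such a design usually reduces to discounted-cash-flow arithmetic. A generic sketch, not the paper's model or figures:

```python
def npv(rate, cashflows):
    """Net present value of cashflows c_0, c_1, ... discounted at `rate`
    per period (c_0 is undiscounted)."""
    return sum(c / (1 + rate) ** t for t, c in enumerate(cashflows))

def roi(benefits, costs):
    """Simple ROI: (total benefit - total cost) / total cost."""
    return (sum(benefits) - sum(costs)) / sum(costs)
```

A project clears the cost-benefit bar when NPV is positive at the chosen discount rate; ROI gives the undiscounted headline ratio.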
Ninety-five high-quality studies were analyzed using principal component analysis and k-means clustering.
Paper states screening produced 95 high-quality studies which were subjected to PCA and k-means clustering for analysis.
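The first step of a PCA-plus-k-means pipeline can be sketched without a linear-algebra library: the leading principal component falls out of power iteration on the (implicit) covariance matrix. Illustrative only, not the paper's code:

```python
def center(rows):
    """Subtract column means so each feature has mean zero."""
    n, d = len(rows), len(rows[0])
    means = [sum(r[j] for r in rows) / n for j in range(d)]
    return [[r[j] - means[j] for j in range(d)] for r in rows]

def first_pc(rows, iters=200):
    """Leading principal component via power iteration: repeatedly apply
    X^T X (proportional to the covariance matrix) and renormalize."""
    X = center(rows)
    d = len(X[0])
    v = [1.0 / d**0.5] * d
    for _ in range(iters):
        Xv = [sum(r[j] * v[j] for j in range(d)) for r in X]       # X v
        w = [sum(X[i][j] * Xv[i] for i in range(len(X))) for j in range(d)]  # X^T (X v)
        norm = sum(c * c for c in w) ** 0.5
        v = [c / norm for c in w]
    return v
```

Projecting the studies onto the top few such components before k-means is the standard dimensionality-reduction step the paper describes.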
A systematic literature review of 450 records from major databases was conducted using PRISMA 2020 guidelines.
Statement in the paper describing methods: systematic literature review using PRISMA 2020; initial search returned 450 records from major databases.
The paper presents a formal evolutionary taxonomy of generative AI spanning five eras (1943–present) and analyzes frontier lab dynamics, sovereign AI emergence, and post-training alignment evolution from RLHF through GRPO.
Conceptual taxonomy and historical/organizational analysis provided in the paper. No empirical sample size reported in the excerpt.
The framework extends the Sustainability Index of Han et al. (2025) from hardware-level analysis to ecosystem-level analysis.
Conceptual / methodological extension claimed by the authors referencing Han et al. (2025). No empirical sample size reported in the excerpt.
Classical scaling laws model AI performance as monotonically improving with model size.
Statement about prior literature / modeling assumptions (classical scaling laws). No empirical sample size reported in the excerpt.
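The monotone form referred to here is the familiar power law with an irreducible term. A sketch with illustrative, unfitted constants:

```python
def scaling_law_loss(n_params, a=400.0, alpha=0.34, irreducible=1.7):
    """Classical power-law scaling L(N) = E + a * N^(-alpha): loss decreases
    monotonically toward the irreducible term E as parameter count N grows.
    All constants here are placeholders, not fitted values."""
    return irreducible + a * n_params ** (-alpha)
```

Under this form, more parameters always help; the paper's contribution is deriving when orchestration breaks that monotonicity.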
The paper derives formal conditions under which the inversion (smaller, orchestrated models outperforming frontier models) holds.
Mathematical derivations and stated sufficient/necessary conditions presented in the paper.