Evidence (5539 claims)

- Adoption: 5539 claims
- Productivity: 4793 claims
- Governance: 4333 claims
- Human-AI Collaboration: 3326 claims
- Labor Markets: 2657 claims
- Innovation: 2510 claims
- Org Design: 2469 claims
- Skills & Training: 2017 claims
- Inequality: 1378 claims
Evidence Matrix
Claim counts by outcome category and direction of finding.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 402 | 112 | 67 | 480 | 1076 |
| Governance & Regulation | 402 | 192 | 122 | 62 | 790 |
| Research Productivity | 249 | 98 | 34 | 311 | 697 |
| Organizational Efficiency | 395 | 95 | 70 | 40 | 603 |
| Technology Adoption Rate | 321 | 126 | 73 | 39 | 564 |
| Firm Productivity | 306 | 39 | 70 | 12 | 432 |
| Output Quality | 256 | 66 | 25 | 28 | 375 |
| AI Safety & Ethics | 116 | 177 | 44 | 24 | 363 |
| Market Structure | 107 | 128 | 85 | 14 | 339 |
| Decision Quality | 177 | 76 | 38 | 20 | 315 |
| Fiscal & Macroeconomic | 89 | 58 | 33 | 22 | 209 |
| Employment Level | 77 | 34 | 80 | 9 | 202 |
| Skill Acquisition | 92 | 33 | 40 | 9 | 174 |
| Innovation Output | 120 | 12 | 23 | 12 | 168 |
| Firm Revenue | 98 | 34 | 22 | — | 154 |
| Consumer Welfare | 73 | 31 | 37 | 7 | 148 |
| Task Allocation | 84 | 16 | 33 | 7 | 140 |
| Inequality Measures | 25 | 77 | 32 | 5 | 139 |
| Regulatory Compliance | 54 | 63 | 13 | 3 | 133 |
| Error Rate | 44 | 51 | 6 | — | 101 |
| Task Completion Time | 88 | 5 | 4 | 3 | 100 |
| Training Effectiveness | 58 | 12 | 12 | 16 | 99 |
| Worker Satisfaction | 47 | 32 | 11 | 7 | 97 |
| Wages & Compensation | 53 | 15 | 20 | 5 | 93 |
| Team Performance | 47 | 12 | 15 | 7 | 82 |
| Automation Exposure | 24 | 22 | 9 | 6 | 62 |
| Job Displacement | 6 | 38 | 13 | — | 57 |
| Hiring & Recruitment | 41 | 4 | 6 | 3 | 54 |
| Developer Productivity | 34 | 4 | 3 | 1 | 42 |
| Social Protection | 22 | 10 | 6 | 2 | 40 |
| Creative Output | 16 | 7 | 5 | 1 | 29 |
| Labor Share of Income | 12 | 5 | 9 | — | 26 |
| Skill Obsolescence | 3 | 20 | 2 | — | 25 |
| Worker Turnover | 10 | 12 | — | 3 | 25 |
Adoption
Artificial intelligence is emerging as a powerful driver of the circular economy (CE), enabling production systems to become more resource-efficient, less waste-intensive and strategically aligned with sustainability goals.
Mixed-methods assessment combining bibliometric network analysis (196 peer-reviewed articles, 2023–2024) and a systematic review of 104 studies, as reported in the abstract.
AI can reduce production scrap by as much as 30% in documented cases.
Systematic review of studies (paper reports a systematic review of 104 studies); the abstract cites documented cases showing up to 30% reduction in production scrap.
AI can increase resource-efficiency metrics by up to 25% in documented cases.
Systematic review of studies (paper reports a systematic review of 104 studies); the abstract states documented cases showing up to 25% increases in resource-efficiency metrics.
Policy must shift from simply promoting technology to proactively shaping the regulatory and infrastructural ecosystems that govern AI deployment to ensure a just transition.
Policy recommendation based on study’s empirical findings about conditionality and heterogeneity of AI effects; prescriptive statement by authors.
AI markedly improves recognition justice.
Dimension-level analysis of the energy justice index showing significant positive effects of AI on recognition justice component.
AI markedly improves procedural justice.
Dimension-level analysis of the multidimensional energy justice index indicating significant positive effects of AI on procedural justice component.
The benefits of AI for energy justice are concentrated in China’s advanced eastern region.
Spatial heterogeneity analysis reported in the paper showing stronger positive effects in the eastern region compared to other regions.
The positive effect of AI on energy justice is amplified by better digital infrastructure.
Heterogeneity/interaction analysis reported in the paper showing larger AI effects where digital infrastructure is stronger.
The positive effect of AI on energy justice is amplified by stricter environmental regulations.
Heterogeneity/interaction analysis reported in the paper showing stronger AI effects in contexts with stricter environmental regulation.
AI’s positive effect on energy justice is mediated by reduced industrial density.
Mediation/pathway analysis reported in the paper identifying reductions in industrial density as a mechanism.
AI’s positive effect on energy justice is mediated by higher energy prices.
Reported mediation/pathway results indicating higher energy prices are a channel for AI’s impact on the energy justice index.
AI’s positive effect on energy justice is mediated by green innovation.
Mediation/pathway analysis in the paper identifies green innovation as a mechanism through which AI affects energy justice.
AI’s positive effect on energy justice is mediated by improved energy efficiency.
Mediation/pathway analysis reported in paper identifying energy efficiency as one mechanism linking AI adoption to energy justice improvements.
AI adoption significantly enhances overall energy justice.
Panel regression analysis using the constructed energy justice index as outcome; significance reported in findings (based on the stated empirical results across 30 provinces, 2008–2022).
GenAI implementations that are strategically deployed in managed Azure cloud infrastructure provide a positive ROI over time when aligned with business processes, enterprise architecture, and performance metrics.
Conclusion drawn from the paper's mixed-method analysis (quantitative ROI modelling, cost–benefit analysis, and case study synthesis).
Close coupling among Azure OpenAI Service, Azure Machine Learning, and cost governance tooling (FinOps) significantly decreases overall cost of ownership and enhances scalability and compliance.
Architectural analysis of Azure-native GenAI services and cost/governance tooling reported in the paper.
Measurable ROI from GenAI on Azure is driven mainly by productivity gains, operational cost optimization, faster decision-making, and accelerated innovation across business functions.
Reported results from the paper's mixed-method study combining quantitative ROI modelling and cost–benefit analysis plus qualitative synthesis of secondary enterprise case studies.
Microsoft Azure is among the first enterprise-scale platforms to enable GenAI-driven transformation.
Statement in the paper's abstract asserting Azure's market position as an early enterprise-scale platform for GenAI.
This synthesis bridges the gap between values and practice, offering a policy-ready model for secure and sustainable AI governance.
Authors' concluding claim that their integrated governance risk framework and risk-tiering matrix operationalize ethical principles into auditable technical controls and are policy-ready.
The study aligns its integrated risk-tiering model with Sustainable Development Goal 9 on industry, innovation and infrastructure.
Authors state that the developed integrated risk-tiering model is aligned with SDG 9 as part of the study framing and intended policy relevance.
The analysis produced a heat map of governance frameworks, a co-occurrence network of themes, a cluster analysis of framework coverage and an integrated governance risk framework supported by a risk-tiering matrix.
Authors report specific analytical outputs (heat map, co-occurrence network, cluster analysis) and that they developed an integrated governance risk framework with a risk-tiering matrix based on their analysis.
The technology particularly benefits less experienced practitioners by providing comprehensive starting points for legal research, while experienced attorneys can use it for quality control and initial drafts.
Authors' interpretation of AI outputs from the experiment and reasoning about how those outputs map onto different practitioner needs (qualitative judgment).
The analysis reveals AI’s potential to transform law firm economics by dramatically reducing research time while maintaining analytical quality, though careful attorney oversight remains essential.
Inference from the experimental finding that four AI systems produced substantive analysis comparable to junior-associate work on one transcript and the stated observation about traditional research time (8–40 hours); authors' qualitative judgment about economic implications and need for oversight.
Statutory and regulatory citations proved generally accurate and useful.
Authors' examination of statutory and regulatory references produced by the four AI engines in the experiment, judged to be generally correct and helpful.
All four engines successfully spotted legal issues, assessed claim strengths and weaknesses, and suggested follow-up investigation—tasks that traditionally required eight to forty hours of junior attorney research time.
Observed outputs from the four AI engines on the single transcript showing issue-spotting, strengths/weaknesses assessment, and suggested follow-ups; comparison to typical junior attorney research time (stated as 8–40 hours).
Contemporary generative AI performs sophisticated legal analysis comparable to experienced associates, correctly identifying major employment law claims including ADA violations, Title VII discrimination, OSHA retaliation, FMLA interference, and workers’ compensation retaliation.
Qualitative assessment of outputs from the four AI engines applied to the single hypothetical transcript; comparison against expected legal claims (authors' judgment that outputs matched those an experienced associate would produce).
Four major generative AI engines—DeepSeek, Claude, ChatGPT, and Grok—are useful legal analysis tools for employment law practitioners.
Experimental evaluation in which a single hypothetical client interview transcript was submitted to each of the four AI systems and their outputs were assessed by the authors.
Policy recommendations: increase investment in AI research and expansion; promote AI-driven robotics in key sectors; provide targeted skilling programs for elderly workers; invest in digital infrastructure and the ageing industry; and leverage and develop elderly human capital to support inclusive and sustainable economic development.
Paper discussion/conclusion draws policy implications based on empirical finding that AI adoption mitigates negative ageing effects on GDP growth.
Robustness checks using the old-age dependency ratio as the proxy for ageing deliver consistent results.
Paper reports robustness verification: replacing the primary ageing measure with the old-age dependency ratio yields similar threshold/mitigation findings.
When AI adoption (industrial robot penetration) surpasses a critical threshold, the negative effect of ageing on GDP growth is significantly mitigated.
Threshold interaction result from panel threshold regression: AI adoption (robot penetration) as threshold variable; paper reports that beyond a critical robot-adoption threshold the negative ageing–GDP relationship is significantly weakened.
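The threshold logic behind this result can be sketched in a few lines. This is an illustrative simulation, not the paper's panel threshold estimator: the ageing coefficient is estimated separately below and above a candidate robot-penetration threshold, and all variable names and parameter values are assumptions.

```python
import numpy as np

# Simulated data in which ageing drags growth only where AI adoption
# (robot penetration) is below a candidate threshold tau.
rng = np.random.default_rng(0)
n = 500
robots = rng.uniform(0, 10, n)   # AI adoption proxy (robot penetration)
ageing = rng.uniform(0, 1, n)    # ageing measure
tau = 5.0                        # candidate threshold
growth = np.where(robots < tau, -2.0, -0.3) * ageing + rng.normal(0, 0.1, n)

def ageing_coef(mask):
    """OLS slope of growth on ageing within the masked subsample."""
    X = np.column_stack([np.ones(mask.sum()), ageing[mask]])
    beta, *_ = np.linalg.lstsq(X, growth[mask], rcond=None)
    return beta[1]

low, high = ageing_coef(robots < tau), ageing_coef(robots >= tau)
# The ageing drag is much weaker above the threshold (high is closer to zero),
# mirroring the paper's reported mitigation pattern.
```

A full panel threshold regression would additionally search over candidate thresholds and include fixed effects and controls; the split-sample comparison above only illustrates the mitigation pattern.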
The specification provides mechanisms for interoperability between institutions.
Design claim in the specification describing mechanisms enabling institutional interoperability.
ACP operates as an additional layer on top of RBAC and Zero Trust, without replacing them.
Design statement in the specification describing ACP's relationship to existing RBAC and Zero Trust architectures.
ACP defines the mechanisms of cryptographic identity, capability-based authorization, deterministic risk evaluation, verifiable chained delegation, transitive revocation, and immutable auditing that a system must implement for autonomous agents to operate under explicit institutional control.
List of mechanisms and required features presented in the specification text.
ACP is the admission control layer between agent intent and system state mutation: before any agent action reaches execution, it must pass a cryptographic admission check that validates identity, capability scope, delegation chain, and policy compliance simultaneously.
Explicit behavioural/design claim in the specification text describing the admission-control role and the checks performed prior to action execution.
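The four-way admission check described above can be sketched as follows. This is a minimal illustration only: all names, data shapes, and the boolean stand-ins for cryptographic verification are assumptions, not the specification's actual API or wire format.

```python
from dataclasses import dataclass, field

@dataclass
class AgentRequest:
    agent_id: str
    action: str
    signature_valid: bool                 # stand-in for cryptographic identity proof
    capabilities: set = field(default_factory=set)
    delegation_chain: list = field(default_factory=list)  # issuer -> ... -> agent

def admit(request: AgentRequest, policy_allowed: set, revoked: set) -> bool:
    """All four checks must pass before an action may mutate system state."""
    if not request.signature_valid:                    # 1. identity
        return False
    if request.action not in request.capabilities:     # 2. capability scope
        return False
    # 3. delegation chain: revoking any link invalidates everything downstream
    #    (transitive revocation).
    if any(link in revoked for link in request.delegation_chain):
        return False
    if request.action not in policy_allowed:           # 4. policy compliance
        return False
    return True
```

In the specification these checks are cryptographic (signatures, capability tokens, verifiable delegation certificates); the sketch only shows that admission is the conjunction of all four checks evaluated before execution.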
ACP is a formal technical specification for governance of autonomous agents in B2B institutional environments.
Stated in the v1.13 specification header/abstract and repository description (specification text and repository link provided).
We propose a multi-agent discussion framework wherein specialized agents collaboratively process extensive product information, distributing cognitive load to alleviate single-agent attention bottlenecks and capturing critical decision factors through structured dialogue.
Method description: multi-agent discussion architecture described and implemented; claimed to distribute cognitive load and reduce single-agent attention bottlenecks (design + reported behavior).
To enhance simulation stability, we implement a mean-field mechanism designed to model the dynamic interactions between the product environment and customer populations, effectively stabilizing sampling processes within high-dimensional decision spaces.
Method description: implementation of a mean-field mechanism within the simulator; paper asserts this design stabilizes sampling in high-dimensional decision spaces (method + reported simulation behavior).
We introduce a preference learning paradigm in which LLMs are economically aligned via post-training on extensive, heterogeneous transaction records across diverse product categories.
Method description: post-training LLMs on heterogeneous transaction records across product categories to align preferences (methodological / training procedure described).
This paper introduces a Multi-Agent Large Language Model-based Economic Sandbox (MALLES) as a unified simulation framework applicable to cross-domain and cross-category scenarios.
Paper description: design and implementation of MALLES, presented as a unified framework leveraging large-scale LLM generalization for cross-domain/cross-category simulation (methodological contribution).
Retrieval substantially improves reasoning over textual fundamentals.
Result reported from the experiments comparing zero-shot prompting to retrieval-augmented settings on fundamentals-focused questions; the paper asserts that retrieval provided substantial improvement for textual fundamentals reasoning.
Artificial intelligence generates positive spatial spillovers for UCEE (positive effects on neighboring regions).
Spatial Durbin model reported in the abstract indicating positive spillover coefficients for artificial intelligence.
The Global Malmquist–Luenberger (GML) index and its efficiency change (EC) and technological change (TC) components stay above 1, indicating sustained efficiency gains dominated by technological progress.
GML index and decomposition results reported in the abstract based on the panel data and GML computation.
Nationally, the average UCEE index rises from about 0.3 to above 0.7 over the sample period.
Computed UCEE index results from the Super-SBM model applied to the panel of 30 provinces (2013–2022) as reported in the abstract.
Recent advances in large language models, tool-using agents, and financial machine learning are shifting financial automation from isolated prediction tasks to integrated decision systems that can perceive information, reason over objectives, and generate or execute actions.
Literature synthesis and conceptual statement in the paper's introduction describing recent technological advances and their effects on financial automation; no empirical sample size reported.
Given these findings, policymakers should favor "strategic forbearance": applying existing laws rather than creating new regulations that could stifle AI innovation and diffusion.
Authors' normative policy recommendation based on their interpretation of the reviewed empirical literature (risk–benefit assessment); this is a prescriptive conclusion rather than an empirical finding, so no sample size applies.
Generative AI lowers entry costs for startups, facilitating new firm entry and product development.
Cited empirical and descriptive evidence in the literature review indicating reduced development costs and faster product prototyping enabled by AI tools; the brief does not provide a pooled sample size or a single quantitative estimate.
Generative AI significantly boosts productivity in specific tasks like coding, writing, and customer service—often by 15% to 50%.
Synthesis/review of empirical literature through 2025 (multiple empirical studies of task-level impacts, including field and lab studies and observational analyses); the brief reports aggregate reported effect ranges but does not list a single pooled sample size.
The study contributes to theory by empirically integrating technological, human, and institutional dimensions within a single architectural framework, moving beyond isolated analyses of digital credit.
Author-stated contribution based on combining measures of algorithmic credit systems, human capability, and institutional design and testing interactions in the same regression models.
Moderation analysis reveals that higher levels of human capability and stronger institutional design amplify the positive effects of algorithmic credit systems and mitigate their adverse effects (i.e., they strengthen repayment and resilience effects and reduce financial stress).
Reported moderation analyses using interaction terms in the regression models on the 400-user cross-sectional sample; results described as significant moderation by human capability and institutional design.
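The interaction-term design behind this moderation claim can be sketched with simulated data. This is not the study's model; the coefficient values, noise level, and variable names are illustrative assumptions (only the sample size of 400 comes from the source).

```python
import numpy as np

# Simulated outcome: a baseline credit effect (0.5) that is amplified by
# human capability through a positive interaction term (+1.0).
rng = np.random.default_rng(1)
n = 400                              # matches the study's sample size
credit = rng.uniform(0, 1, n)        # algorithmic credit system use
capability = rng.uniform(0, 1, n)    # human capability (moderator)
resilience = 0.5 * credit + 1.0 * credit * capability + rng.normal(0, 0.1, n)

# OLS with an interaction term: intercept, credit, capability, credit*capability.
X = np.column_stack([np.ones(n), credit, capability, credit * capability])
beta, *_ = np.linalg.lstsq(X, resilience, rcond=None)
# A positive beta[3] is the signature of moderation: the credit effect
# grows with the moderator.
```

A significant positive interaction coefficient is what "amplify" means operationally in such moderation analyses; the study reports analogous interactions for institutional design and for stress and repayment outcomes.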
Algorithmic credit systems are positively associated with financial resilience.
Regression analyses reported show a positive relationship between algorithmic credit system use and measures of financial resilience in the sample of 400 users.