Evidence (5267 claims)

Claims by category (a claim may fall under more than one category, so the counts sum to more than the total):

- Adoption: 5267 claims
- Productivity: 4560 claims
- Governance: 4137 claims
- Human-AI Collaboration: 3103 claims
- Labor Markets: 2506 claims
- Innovation: 2354 claims
- Org Design: 2340 claims
- Skills & Training: 1945 claims
- Inequality: 1322 claims
Evidence Matrix
Claim counts by outcome category and direction of finding.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 378 | 106 | 59 | 455 | 1007 |
| Governance & Regulation | 379 | 176 | 116 | 58 | 739 |
| Research Productivity | 240 | 96 | 34 | 294 | 668 |
| Organizational Efficiency | 370 | 82 | 63 | 35 | 553 |
| Technology Adoption Rate | 296 | 118 | 66 | 29 | 513 |
| Firm Productivity | 277 | 34 | 68 | 10 | 394 |
| AI Safety & Ethics | 117 | 177 | 44 | 24 | 364 |
| Output Quality | 244 | 61 | 23 | 26 | 354 |
| Market Structure | 107 | 123 | 85 | 14 | 334 |
| Decision Quality | 168 | 74 | 37 | 19 | 301 |
| Fiscal & Macroeconomic | 75 | 52 | 32 | 21 | 187 |
| Employment Level | 70 | 32 | 74 | 8 | 186 |
| Skill Acquisition | 89 | 32 | 39 | 9 | 169 |
| Firm Revenue | 96 | 34 | 22 | — | 152 |
| Innovation Output | 106 | 12 | 21 | 11 | 151 |
| Consumer Welfare | 70 | 30 | 37 | 7 | 144 |
| Regulatory Compliance | 52 | 61 | 13 | 3 | 129 |
| Inequality Measures | 24 | 68 | 31 | 4 | 127 |
| Task Allocation | 75 | 11 | 29 | 6 | 121 |
| Training Effectiveness | 55 | 12 | 12 | 16 | 96 |
| Error Rate | 42 | 48 | 6 | — | 96 |
| Worker Satisfaction | 45 | 32 | 11 | 6 | 94 |
| Task Completion Time | 78 | 5 | 4 | 2 | 89 |
| Wages & Compensation | 46 | 13 | 19 | 5 | 83 |
| Team Performance | 44 | 9 | 15 | 7 | 76 |
| Hiring & Recruitment | 39 | 4 | 6 | 3 | 52 |
| Automation Exposure | 18 | 17 | 9 | 5 | 50 |
| Job Displacement | 5 | 31 | 12 | — | 48 |
| Social Protection | 21 | 10 | 6 | 2 | 39 |
| Developer Productivity | 29 | 3 | 3 | 1 | 36 |
| Worker Turnover | 10 | 12 | — | 3 | 25 |
| Skill Obsolescence | 3 | 19 | 2 | — | 24 |
| Creative Output | 15 | 5 | 3 | 1 | 24 |
| Labor Share of Income | 10 | 4 | 9 | — | 23 |
Adoption
Generative AI platforms (Google AI Studio, OpenAI, Anthropic) provide infrastructure (APIs, models) that is transforming the application development ecosystem.
Statement in paper based on literature review and descriptive framing of current platforms; no empirical sample or quantitative test reported.
Financial digital intelligence enhances innovation by strengthening regional industry–university–research collaboration.
Authors report this channel from mechanism/mediation tests using the same empirical sample (5,731 observations, 2015–2022); specific measures of collaboration or identification strategy not provided in excerpt.
Financial digital intelligence enhances innovation by reducing transaction costs.
Mechanism analysis reported by authors on the same panel dataset (5,731 observations, 2015–2022); reduction in transaction costs is presented as a mediating channel (details of measurement/identification not included in excerpt).
Financial digital intelligence enhances innovation by improving corporate information disclosure.
Mechanism analysis reported in paper using same empirical sample (5,731 observations, 2015–2022); authors identify corporate information disclosure as a mediating channel (specific identification strategy not provided in excerpt).
Financial digital intelligence markedly boosts the innovative development of strategic emerging industries.
Empirical analysis using panel data from 2015–2022 comprising 5,731 observations covering 789 listed companies and 114 prefecture-level cities in China (methods not specified in excerpt; presumably regression analysis on firm/city-level panel).
In production, the system earned high satisfaction ratings from both domain experts and developers, with all participants reporting full satisfaction with communication efficiency.
Post-deployment user feedback / satisfaction reports mentioned in paper (no numeric participant count provided).
The automated workflow saved an estimated 979 engineering hours.
Aggregate time-savings estimate reported in paper (derived from per-API time reduction × number of APIs).
The automated workflow reduces per-API development time from approximately 5 hours to under 7 minutes.
Time-per-API comparison reported in paper based on evaluation on spapi (comparison of manual vs automated per-API time).
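The two time figures above can be cross-checked against the aggregate saving. A back-of-envelope sketch (the implied API count is our inference; the paper's actual number of APIs is not stated in the excerpt):

```python
# Consistency check of the reported time savings.
# Figures taken from the claims above; the API count is inferred, not reported.
manual_hours_per_api = 5.0        # reported manual time per API
automated_hours_per_api = 7 / 60  # "under 7 minutes", in hours
total_hours_saved = 979           # reported aggregate saving

saving_per_api = manual_hours_per_api - automated_hours_per_api
implied_api_count = total_hours_saved / saving_per_api
print(f"Implied number of APIs: {implied_api_count:.0f}")  # ~200
```

At roughly 4.9 hours saved per API, the 979-hour total is consistent with an evaluation covering on the order of 200 APIs.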
The automated workflow achieves 93.7% F1 score.
Empirical evaluation on spapi (F1 reported); presumably computed over the evaluated API items/endpoints.
We address this gap through a graph-based workflow optimization approach that progressively replaces manual coordination with LLM-powered services, enabling incremental adoption without disrupting established practices.
Description of proposed method (graph-based workflow + LLM-powered services) and claim of design enabling incremental adoption; supported by subsequent case evaluation.
The work underscores the urgency of tangible actions aimed at closing the AI divide and allowing Africa to actively shape its AI future.
Concluding normative claim in the paper, supported by the paper's synthesis of identified infrastructural and policy barriers and the illustrative ACT tool.
We introduce the Africa AI Compute Tracker (ACT), an interactive map to monitor the availability of AI-ready HPC systems throughout the continent.
Paper reports development and introduction of the ACT tool; the claim is about the authors' own deliverable (an interactive map consolidating HPC availability data).
Sustainable AI adoption requires robust digital foundations through balanced access to compute, data, and the energy that makes it possible (the 'right enablers').
Normative claim grounded in the paper's stated quantitative and qualitative analysis and synthesis of official declarations; presented as a central conceptual conclusion.
Organizational size moderates the adoption–efficiency relationship such that larger firms realize proportionally greater efficiency gains from AI adoption.
Reported moderation effect in the PLS-PM analysis testing organizational size as a moderator of the relationship between AI adoption and recruitment efficiency metrics across sampled organizations.
Procedural fairness perceptions positively predict employee experience outcomes, including organizational commitment, job satisfaction, and employer trust.
PLS-PM paths from procedural fairness perceptions to employee experience measures (organizational commitment, job satisfaction, employer trust) using survey data from HR professionals' reports.
Algorithmic transparency is a strong predictor of procedural fairness perceptions.
PLS-PM results linking measured algorithmic transparency to procedural fairness perceptions in the survey data (n=523 respondents).
AI adoption is positively associated with improvements in quality-of-hire.
PLS-PM association between AI adoption and reported quality-of-hire improvement from HR respondents across sampled organizations.
AI adoption is positively associated with reductions in cost-per-hire.
PLS-PM association between AI adoption and cost-per-hire reduction reported in the survey (firm-level outcomes across sampled organizations).
AI adoption is positively associated with reductions in time-to-hire (recruitment time).
PLS-PM association between AI adoption and recruitment efficiency metrics reported in the survey (firm-level outcomes across sampled organizations).
Top management support and HR digital readiness are both positively associated with organizational AI adoption, with top management support demonstrating greater explanatory power.
PLS-PM tests of organizational antecedents predicting organizational AI adoption using survey responses aggregated to organization level (184 organizations referenced).
Perceived usefulness and perceived ease of use significantly predict AI adoption intention, with perceived usefulness exhibiting a stronger effect.
PLS-PM results on relationships between TAM constructs (perceived usefulness, perceived ease of use) and adoption intention using survey data (n=523).
A large share of the AI market value within interactive activities (26%) involves transferring information.
Descriptive subcategory statistic: within interactive activities, authors report 26% of market value pertains to information transfer tasks.
Interactive activities (which include both information-based and physical activities) account for 48% of AI market value.
Descriptive aggregate: authors define an 'interactive' category spanning info and physical activities and report it holds 48% of AI market value.
A substantial portion of AI market value (36%) is used in activities that involve creating information.
Descriptive aggregate: subcategory within information-based activities—authors report 36% of market value allocated to 'creating information'.
Most of the AI market value is used in information-based activities (72%).
Descriptive aggregate: authors categorize activities into information-based vs physical and report that 72% of estimated AI market value maps to information-based activities.
There is a highly uneven distribution of AI market value across activities: the top 1.6% of activities account for over 60% of AI market value.
Descriptive statistical result from mapping estimated AI market values to the ~20K activities; authors report concentration metrics (top 1.6% share >60%).
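Under our reading of how these shares nest (the 26% transfer figure is a share of interactive-activity value, while the other percentages are shares of total value), the categories combine as follows. A sketch with the reported figures; the paper's exact category definitions may differ:

```python
# How the reported market-value shares nest (our interpretation of the claims).
info_share = 0.72          # information-based activities, share of total value
physical_share = 1 - info_share          # remainder: physical activities
creating_share = 0.36      # creating information, share of total value
interactive_share = 0.48   # interactive activities, share of total value
transfer_within_interactive = 0.26       # share of *interactive* value

# Implied share of total AI market value spent on transferring information
transfer_of_total = transfer_within_interactive * interactive_share
print(f"{transfer_of_total:.1%}")  # ~12.5% of total
```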
We use the data about AI software and robotic systems to generate graphical displays of how the estimated units and market values of all worldwide AI systems used today are distributed across the work activities that these systems help perform.
Analytic/mapping procedure: authors combine classifications of software (13,275) and robots (20.8M) with market-value estimates to create visual distributions across activities.
We classify a worldwide tally of 20.8 million robotic systems using the developed work-activity ontology.
Empirical classification/counting: authors report mapping 20.8 million robotic systems worldwide to the activity ontology.
We classify descriptions of 13,275 AI software applications using the developed work-activity ontology.
Empirical classification: authors state they mapped 13,275 AI software application descriptions to the ontology.
We disaggregate and then substantially reorganize the approximately 20K activities in the US Department of Labor's O*NET occupational database to produce a comprehensive ontology of work activities.
Methodological: authors report transforming the O*NET activity taxonomy (~20,000 activity-level records) by disaggregation and reorganization into a new ontology.
Models trained in EnterpriseLab remain robust across diverse enterprise benchmarks, including EnterpriseBench (+10%) and CRMArena (+10%).
Benchmark evaluations reported in the paper showing reported +10% improvements on EnterpriseBench and CRMArena relative to baseline; exact baselines, statistical tests, and sample sizes are not specified in the abstract.
8B-parameter models trained in EnterpriseLab reduce inference costs by 8-10x compared to frontier models (implied GPT-4o).
Empirical cost comparison reported in the paper; the abstract states an 8-10x reduction in inference costs for the 8B models trained in EnterpriseLab versus the referenced frontier model(s). Detailed cost accounting and sample sizes not provided in the abstract.
8B-parameter models trained within EnterpriseLab match GPT-4o's performance on complex enterprise workflows.
Empirical evaluation reported in the paper comparing 8B-parameter models trained in EnterpriseLab to GPT-4o on complex enterprise workflows; specific benchmark tests and metrics are referenced but details (sample sizes, exact metrics) are not provided in the abstract.
We validate the platform through EnterpriseArena, an instantiation with 15 applications and 140+ tools across IT, HR, sales, and engineering domains.
Reported instantiation/experimental setup in the paper: EnterpriseArena contains 15 applications and 140+ tools spanning specified domains.
EnterpriseLab provides integrated training pipelines with continuous evaluation.
System/design claim in paper describing integrated training and evaluation tooling as part of the platform.
EnterpriseLab includes automated trajectory synthesis that programmatically generates training data from environment schemas.
System/design claim described in paper; supported by the authors' description of an automated data-generation component.
EnterpriseLab provides a modular environment exposing enterprise applications via a Model Context Protocol, enabling seamless integration of proprietary and open-source tools.
Feature/design claim in paper; supported by implementation details of the 'Model Context Protocol' and reported integration capabilities in the platform description.
We introduce EnterpriseLab, a full-stack platform that unifies tool integration, data generation, and training into a closed-loop framework.
System/design claim describing the contribution of the paper (platform implementation and architecture); supported by the paper's implementation description rather than independent validation.
AIGQ overcomes limitations of traditional HintQ methods (shallow semantics, poor cold-start performance, and low serendipity) that arise from reliance on ID-based matching and co-click heuristics.
Claimed comparative advantage in the abstract; implied support from the paper's offline and online experiments but no detailed quantitative comparisons provided in the abstract.
Extensive offline evaluations and large-scale online A/B experiments on Taobao demonstrate that AIGQ consistently delivers substantial improvements in key business metrics across platform effectiveness and user engagement.
Empirical claim supported by unspecified offline evaluations and large-scale online A/B testing on Taobao as stated in the abstract. The abstract does not report sample sizes, metric names, or numerical effect sizes.
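The abstract reports no effect sizes, but online A/B comparisons of engagement rates like CTR are typically assessed with a two-proportion test. A generic sketch with entirely hypothetical traffic figures (none of these numbers come from the paper):

```python
import math

def two_proportion_z(clicks_a, n_a, clicks_b, n_b):
    """Pooled two-proportion z-statistic for an A/B rate comparison."""
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    p = (clicks_a + clicks_b) / (n_a + n_b)           # pooled rate
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical traffic split for illustration only.
z = two_proportion_z(clicks_a=1100, n_a=10_000, clicks_b=1000, n_b=10_000)
print(f"z = {z:.2f}")  # z = 2.31
```

A |z| above ~1.96 would indicate a difference significant at the 5% level in a test of this form.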
A hybrid offline-online deployment architecture composed of AIGQ-Direct (nearline personalized user-to-query generation) and AIGQ-Think (reasoning-enhanced trigger-to-query mappings) enables meeting strict real-time and low-latency requirements while enriching interest diversity.
System/architecture description in the paper; the abstract states the two-component architecture and its intended operational benefits (real-time/low-latency and increased diversity). The paper references large-scale online deployment and experiments but no concrete latency numbers in the abstract.
IL-GRPO is enhanced by a model-based reward from the online click-through rate (CTR) ranking model.
Methodological detail in the paper: inclusion of a model-based reward signal derived from an online CTR ranking model to augment the policy optimization; described in abstract as part of IL-GRPO's design.
Interest-aware List Group Relative Policy Optimization (IL-GRPO) is a novel policy gradient algorithm with a dual-component reward mechanism that jointly optimizes individual query relevance and global list properties.
Algorithmic contribution described in the paper (policy gradient design and dual-component reward). The abstract states this design and that it is used in experiments; no numeric effect sizes provided in the abstract.
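A dual-component reward of this kind can be pictured as a per-query relevance term plus a list-level term such as diversity. A minimal sketch; the weights, relevance scores, and diversity measure here are our illustrative assumptions, not the paper's actual reward design:

```python
def list_reward(relevances, queries, w_item=1.0, w_list=0.5):
    """Illustrative dual-component reward: mean per-query relevance
    plus a list-level diversity bonus (unique-query ratio)."""
    item_term = sum(relevances) / len(relevances)
    diversity = len(set(queries)) / len(queries)   # crude list-level property
    return w_item * item_term + w_list * diversity

# A duplicate query lowers the list-level component.
r = list_reward([0.8, 0.6, 0.9], ["q1", "q2", "q2"])
print(round(r, 3))
```

The point of the joint formulation is that optimizing each query's relevance in isolation would ignore the second term, so the policy is pushed toward lists that score well on both.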
Interest-Aware List Supervised Fine-Tuning (IL-SFT) is a list-level supervised learning approach that constructs training samples through session-aware behavior aggregation and interest-guided re-ranking to faithfully model nuanced user intent.
Methodological description in the paper: definition of IL-SFT and its training sample construction; supported implicitly by offline evaluations and downstream experiments referenced in the paper (no sample size or numeric results given in abstract).
AIGQ is the first end-to-end generative framework for the HintQ (pre-search query recommendation) scenario.
Explicit novelty/assertion in the paper's introduction/abstract claiming AIGQ as the first end-to-end generative framework for HintQ; no numerical experiment used to support the 'first' claim (methodological/positioning claim).
Organizations can design more effective recruitment strategies by signaling AI adoption to increase attractiveness to prospective applicants.
Practical implication drawn from the combined experimental findings (Study 1 N = 145; Study 2 N = 240; total N = 385) showing AI-adoption signals increase organizational attractiveness via perceived innovation ability, particularly for applicants with high AI self-efficacy.
Conceptualizing AI adoption as an organizational signal extends signaling theory to the context of technology-infused recruitment.
Theoretical argumentation in the paper, supported by the two experimental studies (Study 1 and Study 2) that test signaling mechanisms in recruitment contexts.
The positive indirect effect of AI-adoption signals on organizational attractiveness via perceived innovation ability is stronger for job seekers with high AI self-efficacy (Study 2 moderated mediation).
Study 2: moderated mediation model showing AI self-efficacy moderates the mediated relationship; sample size N = 240; participants were active job seekers.
Perceived innovation ability mediates the positive association between AI-adoption signals and organizational attractiveness (Study 2).
Study 2: moderated mediation analysis in an experiment recruiting active job seekers; sample size N = 240; mediation of AI-signal -> perceived innovation ability -> organizational attractiveness was validated.
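In a moderated mediation model of this form, the indirect effect is the product of the a-path (signal → perceived innovation ability), which is allowed to vary with the moderator, and the b-path (perceived innovation ability → attractiveness). A numerical sketch with made-up coefficients; the paper's actual estimates are not given in the excerpt:

```python
def conditional_indirect_effect(a0, a_mod, b, moderator):
    """Conditional indirect effect: the a-path varies linearly with the
    moderator; indirect effect = a * b (product-of-coefficients)."""
    a = a0 + a_mod * moderator
    return a * b

# Hypothetical coefficients for illustration only.
low  = conditional_indirect_effect(a0=0.30, a_mod=0.20, b=0.50, moderator=-1)
high = conditional_indirect_effect(a0=0.30, a_mod=0.20, b=0.50, moderator=+1)
print(low, high)  # indirect effect is larger at high AI self-efficacy
```

A positive `a_mod` reproduces the reported pattern: the mediated effect strengthens as AI self-efficacy rises.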
AI-adoption signals are significantly positively associated with organizational attractiveness (Study 1).
Study 1: scenario-based experiment comparing AI-adoption signal vs no-signal conditions; sample size N = 145.