Evidence (5539 claims)

- Adoption: 5539 claims
- Productivity: 4793 claims
- Governance: 4333 claims
- Human-AI Collaboration: 3326 claims
- Labor Markets: 2657 claims
- Innovation: 2510 claims
- Org Design: 2469 claims
- Skills & Training: 2017 claims
- Inequality: 1378 claims
Evidence Matrix
Claim counts by outcome category and direction of finding.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 402 | 112 | 67 | 480 | 1076 |
| Governance & Regulation | 402 | 192 | 122 | 62 | 790 |
| Research Productivity | 249 | 98 | 34 | 311 | 697 |
| Organizational Efficiency | 395 | 95 | 70 | 40 | 603 |
| Technology Adoption Rate | 321 | 126 | 73 | 39 | 564 |
| Firm Productivity | 306 | 39 | 70 | 12 | 432 |
| Output Quality | 256 | 66 | 25 | 28 | 375 |
| AI Safety & Ethics | 116 | 177 | 44 | 24 | 363 |
| Market Structure | 107 | 128 | 85 | 14 | 339 |
| Decision Quality | 177 | 76 | 38 | 20 | 315 |
| Fiscal & Macroeconomic | 89 | 58 | 33 | 22 | 209 |
| Employment Level | 77 | 34 | 80 | 9 | 202 |
| Skill Acquisition | 92 | 33 | 40 | 9 | 174 |
| Innovation Output | 120 | 12 | 23 | 12 | 168 |
| Firm Revenue | 98 | 34 | 22 | — | 154 |
| Consumer Welfare | 73 | 31 | 37 | 7 | 148 |
| Task Allocation | 84 | 16 | 33 | 7 | 140 |
| Inequality Measures | 25 | 77 | 32 | 5 | 139 |
| Regulatory Compliance | 54 | 63 | 13 | 3 | 133 |
| Error Rate | 44 | 51 | 6 | — | 101 |
| Task Completion Time | 88 | 5 | 4 | 3 | 100 |
| Training Effectiveness | 58 | 12 | 12 | 16 | 99 |
| Worker Satisfaction | 47 | 32 | 11 | 7 | 97 |
| Wages & Compensation | 53 | 15 | 20 | 5 | 93 |
| Team Performance | 47 | 12 | 15 | 7 | 82 |
| Automation Exposure | 24 | 22 | 9 | 6 | 62 |
| Job Displacement | 6 | 38 | 13 | — | 57 |
| Hiring & Recruitment | 41 | 4 | 6 | 3 | 54 |
| Developer Productivity | 34 | 4 | 3 | 1 | 42 |
| Social Protection | 22 | 10 | 6 | 2 | 40 |
| Creative Output | 16 | 7 | 5 | 1 | 29 |
| Labor Share of Income | 12 | 5 | 9 | — | 26 |
| Skill Obsolescence | 3 | 20 | 2 | — | 25 |
| Worker Turnover | 10 | 12 | — | 3 | 25 |
Adoption
Automation in Japanese manufacturing increased even during periods of slow productivity growth.
Empirical finding from applying the framework to industry-level data in Japanese manufacturing; comparison of inferred automation trends with observed productivity growth periods (exact sample/time not provided in the summary).
Applying the framework to Japanese manufacturing industries shows that automation increased through capital deepening.
Empirical application of the theoretical framework to Japanese manufacturing industries (industry-level analysis); estimation/inference using industry macro observables. (Paper states result; exact sample size/time span not provided in the summary.)
The model provides a transparent mapping from standard macroeconomic observables (capital-labor ratio, output per worker, elasticity of substitution) into the degree of automation, allowing automation to be measured without relying on technology-specific indicators.
Theoretical mapping derived from the CES structure that links observable macro variables to the endogenous degree of automation; methodological claim about inference procedure.
Aggregating task-level decisions generates a CES production function in which the economy-wide degree of automation emerges endogenously.
Analytical derivation in the paper: aggregation of task-level adoption decisions yields a CES aggregate production function with endogenous automation parameter.
The degree of automation is defined as the share of tasks performed by capital rather than labor.
Explicit model definition provided in the paper (conceptual/theoretical definition).
The degree of automation in the aggregate economy emerges endogenously as an equilibrium outcome and can be inferred from standard macroeconomic data.
Theoretical development in a task-based production framework with endogenous technology adoption; mapping from model to observable macro variables (capital-labor ratio, output per worker, elasticity of substitution).
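The mapping the claims above describe can be written out explicitly. The notation below is a reconstruction from the summary, not the paper's own parameterization (σ is the elasticity of substitution, α the degree of automation, A_K and A_L factor-augmenting productivities); the paper's exact CES normalization may differ.

```latex
% Aggregating task-level adoption decisions yields a CES aggregate
% production function in which the automation share \alpha (fraction of
% tasks performed by capital) appears as a distribution parameter:
Y \;=\; \Bigl[\,\alpha^{\frac{1}{\sigma}}\,(A_K K)^{\frac{\sigma-1}{\sigma}}
      \;+\; (1-\alpha)^{\frac{1}{\sigma}}\,(A_L L)^{\frac{\sigma-1}{\sigma}}\Bigr]^{\frac{\sigma}{\sigma-1}}
% Under competitive factor pricing, the ratio of factor income shares
% pins down \alpha from observables:
\frac{s_K}{s_L}
  \;=\; \Bigl(\frac{\alpha}{1-\alpha}\Bigr)^{\frac{1}{\sigma}}
        \Bigl(\frac{A_K K}{A_L L}\Bigr)^{\frac{\sigma-1}{\sigma}}
% Given \sigma, the capital-labor ratio, and output per worker (to back
% out A_K and A_L), \alpha is recoverable without technology-specific
% indicators -- the inference procedure the claim describes.
```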
The results of this regional research outline a multi-dimensional policy roadmap that examines the region's current capabilities and the hurdles it faces in catching up with the AI revolution from a governance and policy perspective, presented as a practical framework for public-sector leaders.
Report summary claiming that the study's results produce a comprehensive roadmap and practical framework (content description).
This executive report provides a roadmap for establishing an AI governance infrastructure through a set of strategic policy recommendations across seven key pillars.
Document assertion describing the content and structure of the report (authors' deliverable).
The reality of limited AI governance capacity calls for a series of policy interventions at both local and regional levels to empower the AI ecosystem in the Arab region.
Authors' policy recommendation derived from the regional study and synthesis of findings.
A governance model linking 'trustworthy AI' practices to competitive advantage yields reduced uncertainty, faster deployment cycles, and higher stakeholder trust.
Central claim of the paper tying the proposed AIGSF to business benefits; supported by conceptual linkage and illustrative examples rather than quantified empirical evidence or controlled evaluation.
Case illustrations across hiring, credit, consumer services, and generative AI draw lessons on controls such as model documentation, algorithmic audits, impact assessments, and human-in-the-loop oversight.
Paper includes qualitative case illustrations in the listed domains to demonstrate governance controls; these are presented as examples and lessons rather than as systematic empirical studies (no sample sizes reported).
The paper develops an AI Governance Strategic Framework (AIGSF) and an implementation roadmap that connect ethical accountability, regulatory readiness, cybersecurity resilience, and performance outcomes.
Paper contribution described as an integrative conceptual framework and roadmap; supported by theoretical grounding and illustrative cases rather than empirical validation; no sample size provided.
AI governance should be treated as a strategic governance function—anchored in board oversight and enterprise risk management—rather than a narrow technical or compliance task.
Central normative recommendation and thesis of the paper; derived from an integrative conceptual framework grounded in corporate governance theory, ERM, and emerging regulation. No empirical testing or sample reported.
AI has moved from a peripheral digital capability to a central driver of corporate strategy, reshaping decision-making, customer engagement, operations, and risk exposure.
Statement presented in the paper's introduction and motivation; supported by integrative conceptual design and literature grounding (theory and descriptive citations). No empirical sample or quantitative analysis reported.
These results demonstrate a practical path toward high-precision, low-latency text-to-SQL applications using domain-specialized, self-hosted language models in large-scale production environments.
Conclusion drawn by the authors based on their implementation, token reduction, and reported accuracy/latency-related claims; generalization to large-scale production is asserted but not supported by detailed production deployment metrics in the excerpt.
The resulting system achieves 98.4% execution success and 92.5% semantic accuracy, substantially outperforming a prompt-engineered baseline using Google's Gemini Flash 2.0 (95.6% execution, 89.4% semantic accuracy).
Reported empirical evaluation comparing the authors' system to a prompt-engineered baseline (Gemini Flash 2.0) with explicit performance percentages for execution success and semantic accuracy; no sample size, test set composition, statistical significance, or evaluation protocol provided in the excerpt.
The approach replaces costly external API calls with efficient local inference.
System design claim: the model is self-hosted and performs local inference instead of using external API-based LLM calls; no cost accounting or latency benchmarks provided in the excerpt.
This reduces input tokens by over 99%, from a 17k-token baseline to fewer than 100.
Reported measurement comparing input token counts before and after applying their approach (explicit numerical baseline and resulting counts provided); no sample size or distribution of token counts reported.
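The ">99%" figure follows directly from the two reported counts; a quick arithmetic check (numbers taken from the claim above, not remeasured — tokenizer details are not given in the excerpt):

```python
# Token counts as reported: a 17k-token long-context schema prompt vs.
# fewer than 100 input tokens after schema internalization via fine-tuning.
baseline_tokens = 17_000
reduced_tokens = 100          # reported upper bound
reduction = 1 - reduced_tokens / baseline_tokens
print(f"input-token reduction: {reduction:.1%}")  # > 99%
```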
A novel two-phase supervised fine-tuning approach enables the model to internalize the entire database schema, eliminating the need for long-context prompts.
Methodological description (two-phase supervised fine-tuning) and claim that this internalization removes reliance on long-context prompts; no detailed experimental protocol or sample size provided in the excerpt.
We present a specialized, self-hosted 8B-parameter model designed for a conversational bot in CriQ, a sister app to Dream11 that answers user queries about cricket statistics.
Stated implementation detail in the paper describing the model architecture and deployment target (CriQ conversational bot). No experimental sample size reported for this statement.
Those extended-model equilibria also show increasing concentration consistent with power-law-like distributions (i.e., winner-take-most / superstar effects).
Theoretical model combining quality heterogeneity and reinforcement dynamics that yields equilibrium distributions with heavy tails; argument and formalization presented in the paper; no empirical testing reported.
Even as the number of producers increases and average attention per producer falls, total output expands (production scales elastically).
Same formal theoretical model (analytical result): production scales elastically in the model despite finite attention; no empirical validation provided.
Mechanisms identified — network structure evolution and increased relational embeddedness — contribute to a broader understanding of how digital transformation shapes innovation dynamics across geographical boundaries in a globalized knowledge economy.
Synthesis of empirical network evolution results and mediation/structural analyses from the 2011–2021 dataset of digital transformation indicators and patent collaboration networks among cities and firms.
These results provide empirical evidence from a major emerging economy (China) that can offer insights to inform policies and strategies in other regions undergoing digital transition.
Generalization claim based on empirical findings from the 2011–2021 analysis of A-share listed companies' digital transformation and patent collaboration patterns in China.
When the volume of digital patent applications surpasses a certain threshold, the positive effect of digital transformation on the quality of cross-regional collaborative innovation accelerates (nonlinear threshold effect).
Threshold regression / nonlinear analysis relating counts of digital patent applications to the marginal effect of digital transformation on collaborative innovation quality, using 2011–2021 patent and digitalization data from A-share listed firms.
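The threshold design can be illustrated with a stdlib-only sketch: search candidate thresholds in the moderator (here, digital patent applications) for the split that minimizes the combined sum of squared errors of two piecewise linear fits. All names and data below are synthetic illustrations of the idea, not the paper's estimator or data (formal threshold regression also involves inference on the threshold, which is omitted here).

```python
def fit_line(points):
    """Least-squares slope/intercept for a list of (x, y) pairs."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sxx = sum((x - mx) ** 2 for x, _ in points)
    sxy = sum((x - mx) * (y - my) for x, y in points)
    slope = sxy / sxx
    return slope, my - slope * mx

def sse(points):
    slope, intercept = fit_line(points)
    return sum((y - (intercept + slope * x)) ** 2 for x, y in points)

def threshold_search(x, y, q, candidates):
    """Pick the threshold in moderator q minimizing combined piecewise SSE."""
    best_t, best_sse = None, None
    for t in candidates:
        lo = [(xi, yi) for xi, yi, qi in zip(x, y, q) if qi <= t]
        hi = [(xi, yi) for xi, yi, qi in zip(x, y, q) if qi > t]
        if len(lo) < 3 or len(hi) < 3:   # require enough points per regime
            continue
        total = sse(lo) + sse(hi)
        if best_sse is None or total < best_sse:
            best_t, best_sse = t, total
    return best_t

# Synthetic data: the slope of y on x triples once the moderator q exceeds 4,
# mimicking an accelerating marginal effect above a threshold.
x, y, q = [], [], []
for qi in range(10):
    for xi in range(4):
        q.append(qi)
        x.append(xi)
        y.append(xi if qi <= 4 else 3 * xi)

print(threshold_search(x, y, q, range(1, 9)))  # recovers the planted threshold
```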
Advancement of digital transformation positively contributes to both the quality and the quantity of cross-regional cooperative innovation.
Empirical econometric analysis (panel regressions) linking measures of corporate/urban digital transformation to indicators of cross-regional cooperative innovation quality and counts, using A-share listed companies' digital transformation indicators and patent collaboration data, 2011–2021.
China’s urban collaborative innovation network demonstrates a notable quadrilateral spatial structure and has evolved toward a multicenter pattern over time.
Spatio-temporal network analysis based on the same 2011–2021 dataset of digital transformation indicators and patent/co-patent links among cities inferred from A-share listed companies' patent data.
The cooperative innovation network exhibits pronounced small-world characteristics.
Network analysis of cross-regional collaborative innovation using digital transformation and patent data from A-share listed companies on the Shanghai and Shenzhen stock exchanges (2011–2021).
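"Small-world" is conventionally diagnosed by high clustering combined with short average path lengths relative to a comparable random graph (the Watts–Strogatz signature). A stdlib-only sketch of the two statistics on a toy undirected graph; the paper's actual co-patent network is not reproduced here.

```python
from collections import deque

def avg_clustering(adj):
    """Mean local clustering coefficient of an undirected graph (dict of sets)."""
    coeffs = []
    for v, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            coeffs.append(0.0)
            continue
        # count edges among v's neighbours (each unordered pair once)
        links = sum(1 for a in nbrs for b in nbrs if a < b and b in adj[a])
        coeffs.append(2 * links / (k * (k - 1)))
    return sum(coeffs) / len(coeffs)

def avg_path_length(adj):
    """Mean shortest-path length over connected ordered pairs, via BFS."""
    total = pairs = 0
    for src in adj:
        dist, queue = {src: 0}, deque([src])
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    queue.append(w)
        total += sum(d for v, d in dist.items() if v != src)
        pairs += len(dist) - 1
    return total / pairs

# Toy network: a 12-node ring lattice, each node linked to its two nearest
# neighbours on either side -- highly clustered by construction.
n = 12
adj = {i: set() for i in range(n)}
for i in range(n):
    for d in (1, 2):
        adj[i].add((i + d) % n)
        adj[(i + d) % n].add(i)

print(avg_clustering(adj), avg_path_length(adj))
```

Adding a few random shortcut edges to such a lattice shortens average paths while clustering stays high, which is the small-world pattern the claim reports for the cooperative innovation network.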
This work offers a cost-effective, scientifically grounded blueprint for ubiquitous AI education.
Authors' concluding statement, based on the SOP, the reported low labor and hardware requirements, and the pilot exam results showing high accuracy with the Shadow Agent on newer 32B models.
This suggests that structured reasoning guidance (as implemented by the Shadow Agent) is the key to unlocking the latent power of modern small language models.
Interpretive claim based on the pilot study's observed large gains for newer 32B models when using Shadow Agent guidance versus smaller gains for older models and stagnation in baselines.
In contrast, older models see only modest gains (~10%) from the Shadow Agent guidance.
Same pilot study reporting that older (unspecified) model generations showed only about a ~10% improvement when using the Shadow Agent versus baseline. No exact accuracy numbers, sample size, or model names provided.
The Shadow Agent, which provides structured reasoning guidance, triggers a massive capability surge in newer 32B models, boosting performance from 74% (Naive RAG) to mastery level (90%).
Pilot study on a full graduate-level final exam reported comparisons between Naive RAG (74% accuracy) and the Shadow Agent (90% accuracy) for newer 32B models. Specific number of exam items or statistical testing not stated.
We used a Vision-Language Model data cleaning strategy and a novel Shadow-RAG architecture as core technical components of the localization pipeline.
Methodological description in the practitioner report; the paper explicitly names these two techniques as the data-cleaning and architectural contributions used to create the tutor.
Using a Vision-Language Model data cleaning strategy and a novel Shadow-RAG architecture, we localized a graduate-level Applied Mathematics tutor using only 3 person-days of non-expert labor and open-weights 32B models deployable on a single consumer-grade GPU.
Practitioner report describing a replicable Standard Operating Procedure (SOP); method claims include Vision-Language Model data cleaning and Shadow-RAG; deployment described as using open-weight 32B models on a single consumer GPU; labor reported as '3 person-days of non-expert labor'. No sample size or independent replication reported in text.
AI adoption and the associated improved governance lead to higher total factor productivity (TFP).
Empirical analysis showing a positive association between firm-level AI application index and measures of total factor productivity in the 2010–2023 Chinese A-share panel.
AI adoption and the associated improved governance lead to a lower cost of debt financing for firms.
Empirical tests linking firm-level AI application and governance improvements to measures of debt financing costs (e.g., interest rates on debt, financing spreads) in the Chinese A-share firm sample.
The governance risk-mitigation effects of AI operate through enhancing external monitoring.
Mechanism analyses showing that AI adoption is associated with measures of stronger external monitoring (e.g., analyst coverage, media scrutiny, regulator activity) in the firm-year panel, linking that channel to reduced misconduct.
The governance risk-mitigation effects of AI operate through strengthening internal control capacity.
Mechanism analyses showing that higher AI application is associated with improved internal control measures (as reported by firms or regulatory/financial-control indicators) in the dataset of Chinese A-share firms.
The governance risk-mitigation effects of AI operate through lowering agency costs.
Mechanism analyses reported by authors linking AI adoption to reductions in measures interpreted as agency costs (e.g., agency-cost proxies, corporate governance metrics) in the same firm-year panel.
AI application significantly reduces the monetary amount of penalties associated with executive misconduct.
Regression analyses on monetary penalty data for Chinese A-share firms (2010–2023) showing a statistically significant negative relationship between firm AI application index and penalty amounts.
AI application significantly reduces the frequency (number) of violations by executives.
Empirical frequency/regression analyses on the firm-year panel of Chinese A-share firms using the AI application index; authors report robust reductions in the number/frequency of violations conditional on AI adoption.
AI application significantly reduces the incidence of executive misconduct.
Empirical analysis on Chinese A-share listed firms (2010–2023) using the constructed firm-level AI application index; reported significant negative association between AI application and whether a firm experiences executive misconduct (incidence).
Using Chinese A-share firms listed in Shanghai and Shenzhen from 2010 to 2023, we construct a firm-level AI application index and examine whether and how AI adoption mitigates executive misconduct.
Authors report building a firm-level AI application index and applying it to Chinese A-share listed firms (Shanghai and Shenzhen) over 2010–2023 to study links between AI adoption and executive misconduct (method: panel analysis using firm-year observations).
Applying our framework to product listings on Etsy, we find that following ChatGPT's release, listings have significantly more machine-usable information about product selection, consistent with systematic mechanudging.
Empirical analysis of Etsy product listings comparing measures of 'machine-usable information about product selection' before and after ChatGPT's release. (The abstract states a significant increase; full paper presumably contains dataset details and statistical tests, but sample size and exact estimates are not provided in the excerpt.)
Adoption of AI can reduce procurement costs by 15.7%.
Field survey data (n=326) and regression analysis; authors report a 15.7% reduction in procurement costs associated with AI adoption.
Adoption of AI can shorten the procurement decision-making cycle by 21.3%.
Field survey data (n=326) analyzed (authors report a 21.3% reduction in procurement decision-making cycle associated with AI adoption); method described as questionnaire surveys and multiple linear regression.
Supplier AI capability positively drives AI adoption in procurement (β = 0.28, p < 0.01).
Same questionnaire survey (n=326) and multiple linear regression analysis; reported coefficient β=0.28 with p<0.01.
Perceived usefulness positively drives AI adoption in procurement (β = 0.32, p < 0.01).
Questionnaire survey of 326 procurement managers/supply chain managers in SMEs (Yangtze River Delta and Pearl River Delta) analyzed using multiple linear regression; reported coefficient β=0.32 with p<0.01.
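The estimation strategy behind the two β claims above (multiple linear regression of AI adoption on drivers such as perceived usefulness and supplier AI capability) can be sketched with stdlib-only OLS via the normal equations. The data below are synthetic and the coefficients are planted to echo the reported β = 0.32 and β = 0.28; this illustrates the method, not a reanalysis of the n = 326 survey.

```python
def ols(X, y):
    """Solve the normal equations (X'X) b = X'y by Gaussian elimination."""
    k = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yi for r, yi in zip(X, y)) for i in range(k)]
    for col in range(k):                       # forward elimination
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            A[r] = [a - f * c for a, c in zip(A[r], A[col])]
            b[r] -= f * b[col]
    beta = [0.0] * k                           # back substitution
    for r in reversed(range(k)):
        beta[r] = (b[r] - sum(A[r][j] * beta[j]
                              for j in range(r + 1, k))) / A[r][r]
    return beta

# Toy rows: [intercept, perceived_usefulness, supplier_ai_capability],
# each predictor on a 1-5 survey scale; the outcome is noiseless so OLS
# recovers the planted coefficients exactly.
X = [[1, u, s] for u in range(1, 6) for s in range(1, 6)]
y = [0.5 + 0.32 * u + 0.28 * s for _, u, s in X]
beta = ols(X, y)
print([round(b, 4) for b in beta])
```

Note that the reported βs are likely standardized coefficients from survey-scale data; with real responses one would also report standard errors and p-values, which this sketch omits.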
The paper provides recommendations for designing strategic indicators to drive adoption, foster innovation, and objectively assess whether digital tools are delivering top-line impact.
Descriptive claim about the content of the perspective article (the authors state they provide these recommendations); the excerpt itself summarizes this contribution.
The shift from expert-driven computer-aided drug design (CADD) to semiautonomous AI necessitates a new framework of impact-oriented KPIs.
Stated by the EFMC2 community authors as a normative conclusion in the perspective piece; based on the characterisation of a technological shift rather than on presented empirical tests in the excerpt.