Evidence (1286 claims)
- Adoption: 5126 claims
- Productivity: 4409 claims
- Governance: 4049 claims
- Human-AI Collaboration: 2954 claims
- Labor Markets: 2432 claims
- Org Design: 2273 claims
- Innovation: 2215 claims
- Skills & Training: 1902 claims
- Inequality: 1286 claims
Evidence Matrix
Claim counts by outcome category and direction of finding.
| Outcome | Positive | Negative | Mixed | Null | Total |
|---|---|---|---|---|---|
| Other | 369 | 105 | 58 | 432 | 972 |
| Governance & Regulation | 365 | 171 | 113 | 54 | 713 |
| Research Productivity | 229 | 95 | 33 | 294 | 655 |
| Organizational Efficiency | 354 | 82 | 58 | 34 | 531 |
| Technology Adoption Rate | 277 | 115 | 63 | 27 | 486 |
| Firm Productivity | 273 | 33 | 68 | 10 | 389 |
| AI Safety & Ethics | 112 | 177 | 43 | 24 | 358 |
| Output Quality | 228 | 61 | 23 | 25 | 337 |
| Market Structure | 105 | 118 | 81 | 14 | 323 |
| Decision Quality | 154 | 68 | 33 | 17 | 275 |
| Employment Level | 68 | 32 | 74 | 8 | 184 |
| Fiscal & Macroeconomic | 74 | 52 | 32 | 21 | 183 |
| Skill Acquisition | 85 | 31 | 38 | 9 | 163 |
| Firm Revenue | 96 | 30 | 22 | — | 148 |
| Innovation Output | 100 | 11 | 20 | 11 | 143 |
| Consumer Welfare | 66 | 29 | 35 | 7 | 137 |
| Regulatory Compliance | 51 | 61 | 13 | 3 | 128 |
| Inequality Measures | 24 | 66 | 31 | 4 | 125 |
| Task Allocation | 64 | 6 | 28 | 6 | 104 |
| Error Rate | 42 | 47 | 6 | — | 95 |
| Training Effectiveness | 55 | 12 | 10 | 16 | 93 |
| Worker Satisfaction | 42 | 32 | 11 | 6 | 91 |
| Task Completion Time | 71 | 5 | 3 | 1 | 80 |
| Wages & Compensation | 38 | 13 | 19 | 4 | 74 |
| Team Performance | 41 | 8 | 15 | 7 | 72 |
| Hiring & Recruitment | 39 | 4 | 6 | 3 | 52 |
| Automation Exposure | 17 | 15 | 9 | 5 | 46 |
| Job Displacement | 5 | 28 | 12 | — | 45 |
| Social Protection | 18 | 8 | 6 | 1 | 33 |
| Developer Productivity | 25 | 1 | 2 | 1 | 29 |
| Worker Turnover | 10 | 12 | — | 3 | 25 |
| Creative Output | 15 | 5 | 3 | 1 | 24 |
| Skill Obsolescence | 3 | 18 | 2 | — | 23 |
| Labor Share of Income | 7 | 4 | 9 | — | 20 |
Inequality
Predictive AI models can facilitate climate-resilient decision-making in agriculture.
Reported as a finding from the Thailand smart-agriculture finance case study, supported by qualitative evidence and by the predictive-model-driven finance decisions implied in the abstract.
Women exhibit higher adoption and savings patterns on AI-enabled financial platforms.
Abstract reports gendered impacts derived from 1,500 semi-structured customer interviews plus account-activity data across the three case studies, noting higher adoption and savings for women.
AI-enabled platforms reduce vulnerability to climate-related income shocks.
Abstract claims findings that AI-enabled platforms reduce vulnerability to climate-related income shocks based on case studies (including smart agriculture finance in Thailand), interviews and transaction/loan data analysis.
AI-enabled platforms promote savings behavior among customers.
Abstract reports findings based on mixed-methods: qualitative interviews (1,500) and quantitative account-activity analysis indicating increased savings behavior on AI-enabled platforms.
AI-enabled platforms significantly improve credit access for low-income and rural customers in the case-study contexts.
Quantitative analysis of transaction records and loan repayment histories combined with qualitative insights from 1,500 interviews across three case studies (M-KOPA, TymeBank, and smart agriculture finance in Thailand) as described in the abstract.
These systems are now being widely used to produce software, conduct business activities, and automate everyday personal tasks.
Authors' statement describing observed applications and uses (policy/legal analysis; specific empirical data or sample size not provided in excerpt).
AI agents have entered the mainstream.
Authors' declarative statement based on their review of recent developments and observed uptake (policy/legal analysis in the paper). No empirical sample size reported in excerpt.
Those extended-model equilibria also show increasing concentration consistent with power-law-like distributions (i.e., winner-take-most / superstar effects).
Theoretical model combining quality heterogeneity and reinforcement dynamics that yields equilibrium distributions with heavy tails; argument and formalization presented in the paper; no empirical testing reported.
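The reinforcement dynamics behind such heavy-tailed equilibria can be illustrated with a quality-weighted rich-get-richer simulation: each new unit of attention goes to a producer with probability proportional to current attention times quality, and holdings concentrate. This is a loose illustration of the mechanism, not the paper's formal model; all parameters are invented.

```python
import numpy as np

rng = np.random.default_rng(5)
n_producers = 200
quality = rng.lognormal(sigma=0.5, size=n_producers)  # heterogeneous producer quality
attention = np.ones(n_producers)                      # everyone starts with one unit

# reinforcement: new attention favors already-attended, higher-quality producers
for _ in range(5000):
    weights = attention * quality
    winner = rng.choice(n_producers, p=weights / weights.sum())
    attention[winner] += 1.0

top10_share = np.sort(attention)[-10:].sum() / attention.sum()
print(f"top 10 of {n_producers} producers hold {top10_share:.0%} of attention")
```

Under a uniform allocation the top 10 would hold 5%; the simulated share is several times larger, the winner-take-most pattern the claim describes.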
Even as the number of producers increases and average attention per producer falls, total output expands (production scales elastically).
Same formal theoretical model (analytical result): production scales elastically in the model despite finite attention; no empirical validation provided.
By enabling developers without initial capital to participate in the digital economy, RSI could unlock the 'latent jobs dividend' in low-income countries and help address local challenges in health, agriculture, and services.
Societal-impact argument in the paper linking the RSI model to potential employment gains and localized solutions; speculative extrapolation, no empirical employment estimates or pilot studies reported.
The RSI model could stimulate innovation in the ecosystem.
Argument based on lowered financial barriers and incentive structures from the paper's theoretical comparative analysis; no empirical measures of innovation provided.
The RSI model aligns stakeholder interests (platforms and developers).
Theoretical argument and incentive-alignment reasoning in the paper's comparative framework; no empirical validation presented.
A comparative analysis in the paper shows that the RSI model lowers entry barriers for developers.
Detailed comparative (theoretical) analysis within the paper contrasting existing models and RSI; no empirical trial, sample, or randomized test reported.
Generative AI platforms (Google AI Studio, OpenAI, Anthropic) provide infrastructures (APIs, models) that are transforming the application development ecosystem.
Statement in paper based on literature review and descriptive framing of current platforms; no empirical sample or quantitative test reported.
The study recommends establishing more accessible AI systems for decision-making, improving digital literacy programmes through regulatory support, and creating special resources for communities that lack essential services.
Authors' policy/research recommendations derived from the study's mixed-methods findings.
AI functions as an essential instrument for advancing financial inclusion in Zimbabwe by enhancing banking access, operational efficiency, and the security of banking services.
Synthesis of mixed-methods findings (survey n=293; interviews n=12) indicating improvements in access, efficiency, and security associated with AI use in banks.
Anomaly detection systems had the greatest impact on financial outcomes, accounting for 62.3% of the variance in outcomes attributable to AI technologies.
Quantitative analysis reported in the paper (presumably regression/variance decomposition) based on the survey data (n=293) showing anomaly detection explains 62.3% of variance in the measured financial outcome.
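A variance-explained figure like 62.3% typically comes from comparing model fit with and without the predictor of interest. A minimal incremental-R² sketch on synthetic data (the variables and coefficients are invented for illustration, not taken from the study):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 293  # mirrors the reported survey size; the data here are synthetic
anomaly = rng.normal(size=n)    # hypothetical anomaly-detection adoption score
other_ai = rng.normal(size=n)   # hypothetical index of other AI technologies
outcome = 2.0 * anomaly + 0.5 * other_ai + rng.normal(size=n)

def r_squared(X, y):
    """R^2 from an OLS fit with an intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - resid.var() / y.var()

r2_full = r_squared(np.column_stack([anomaly, other_ai]), outcome)
r2_reduced = r_squared(other_ai.reshape(-1, 1), outcome)
print(f"incremental R^2 from anomaly detection: {r2_full - r2_reduced:.3f}")
```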
Organisations strongly supported AI systems for decision-making and fraud detection.
Survey responses and/or summary statistics from the questionnaire (n=293) indicating organisational support for AI in decision-making and fraud detection.
AI enables loan processing and makes financial products more accessible through three main functions: usability, safety in transactions, and financial literacy training.
Findings reported from the study's mixed-methods analysis (survey n=293 and interviews n=12) describing perceived AI functions in banking.
Policy must shift from simply promoting technology to proactively shaping the regulatory and infrastructural ecosystems that govern AI deployment to ensure a just transition.
Policy recommendation based on study’s empirical findings about conditionality and heterogeneity of AI effects; prescriptive statement by authors.
AI markedly improves recognition justice.
Dimension-level analysis of the energy justice index showing significant positive effects of AI on recognition justice component.
AI markedly improves procedural justice.
Dimension-level analysis of the multidimensional energy justice index indicating significant positive effects of AI on procedural justice component.
The benefits of AI for energy justice are concentrated in China’s advanced eastern region.
Spatial heterogeneity analysis reported in the paper showing stronger positive effects in the eastern region compared to other regions.
The positive effect of AI on energy justice is amplified by better digital infrastructure.
Heterogeneity/interaction analysis reported in the paper showing larger AI effects where digital infrastructure is stronger.
The positive effect of AI on energy justice is amplified by stricter environmental regulations.
Heterogeneity/interaction analysis reported in the paper showing stronger AI effects in contexts with stricter environmental regulation.
AI’s positive effect on energy justice is mediated by reduced industrial density.
Mediation/pathway analysis reported in the paper identifying reductions in industrial density as a mechanism.
AI’s positive effect on energy justice is mediated by higher energy prices.
Reported mediation/pathway results indicating higher energy prices are a channel for AI’s impact on the energy justice index.
AI’s positive effect on energy justice is mediated by green innovation.
Mediation/pathway analysis in the paper identifies green innovation as a mechanism through which AI affects energy justice.
AI’s positive effect on energy justice is mediated by improved energy efficiency.
Mediation/pathway analysis reported in paper identifying energy efficiency as one mechanism linking AI adoption to energy justice improvements.
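Mediation results like the four above are often estimated with the product-of-coefficients approach: regress the mediator on AI (path a), regress the outcome on the mediator while controlling for AI (path b), and multiply. A sketch on synthetic data (variable names and effect sizes are hypothetical, not the paper's estimates):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 450  # 30 provinces x 15 years, mirroring the panel's shape; data are synthetic
ai = rng.normal(size=n)                       # AI adoption (hypothetical measure)
efficiency = 0.6 * ai + rng.normal(size=n)    # mediator: energy efficiency
justice = 0.4 * efficiency + 0.2 * ai + rng.normal(size=n)  # outcome: justice index

def ols_coefs(X, y):
    """OLS slope coefficients (intercept dropped)."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]

a = ols_coefs(ai.reshape(-1, 1), efficiency)[0]               # path a: AI -> mediator
b = ols_coefs(np.column_stack([efficiency, ai]), justice)[0]  # path b: mediator -> outcome
indirect = a * b  # product-of-coefficients estimate of the mediated effect
print(f"indirect (mediated) effect: {indirect:.3f}")
```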
AI adoption significantly enhances overall energy justice.
Panel regression analysis using the constructed energy justice index as outcome; significance reported in findings (based on the stated empirical results across 30 provinces, 2008–2022).
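A panel regression over 30 provinces and 15 years typically absorbs time-invariant province heterogeneity with fixed effects via the within transformation. A minimal sketch on synthetic data shaped like that panel (the data-generating process is invented):

```python
import numpy as np

rng = np.random.default_rng(2)
provinces, years = 30, 15
prov_effect = rng.normal(size=provinces).repeat(years)   # unobserved province traits
ai = rng.normal(size=provinces * years) + prov_effect    # AI adoption, correlated with them
justice = 0.5 * ai + prov_effect + rng.normal(size=provinces * years)
ids = np.arange(provinces).repeat(years)

def demean_by(x, groups):
    """Subtract each group's mean (the 'within' transformation)."""
    sums = np.zeros(groups.max() + 1)
    np.add.at(sums, groups, x)
    counts = np.bincount(groups)
    return x - (sums / counts)[groups]

ai_w, justice_w = demean_by(ai, ids), demean_by(justice, ids)
beta_fe = (ai_w @ justice_w) / (ai_w @ ai_w)  # within (fixed-effects) estimator
print(f"fixed-effects estimate of AI on energy justice: {beta_fe:.3f}")
```

Because the province effect enters both `ai` and `justice`, pooled OLS would be biased upward here; the within estimator recovers the true coefficient of 0.5.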
Given these findings, policymakers should favor 'strategic forbearance'—apply existing laws rather than create new regulations that could stifle innovation and diffusion of AI.
Authors' normative policy recommendation based on their interpretation of the reviewed empirical literature (risk–benefit assessment); this is a prescriptive conclusion rather than an empirical finding, so no sample size applies.
Generative AI lowers entry costs for startups, facilitating new firm entry and product development.
Cited empirical and descriptive evidence in the literature review indicating reduced development costs and faster product prototyping enabled by AI tools; the brief does not provide a pooled sample size or a single quantitative estimate.
Generative AI significantly boosts productivity in specific tasks like coding, writing, and customer service—often by 15% to 50%.
Synthesis/review of empirical literature through 2025 (multiple empirical studies of task-level impacts, including field and lab studies and observational analyses); the brief reports aggregate reported effect ranges but does not list a single pooled sample size.
The study contributes to theory by empirically integrating technological, human, and institutional dimensions within a single architectural framework, moving beyond isolated analyses of digital credit.
Author-stated contribution based on combining measures of algorithmic credit systems, human capability, and institutional design and testing interactions in the same regression models.
Moderation analysis reveals that higher levels of human capability and stronger institutional design amplify the positive effects of algorithmic credit systems and mitigate their adverse effects (i.e., they strengthen repayment and resilience effects and reduce financial stress).
Reported moderation analyses using interaction terms in the regression models on the 400-user cross-sectional sample; results described as significant moderation by human capability and institutional design.
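Moderation of this kind is usually tested with interaction terms: the product of the credit-system measure and the moderator enters the regression, and a positive product coefficient indicates amplification. A sketch with synthetic data sized to the reported n = 400 (variables and coefficients hypothetical):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 400
credit = rng.normal(size=n)        # algorithmic credit system use (synthetic)
capability = rng.normal(size=n)    # human capability, the moderator (synthetic)
resilience = (0.3 * credit + 0.2 * capability
              + 0.25 * credit * capability + rng.normal(size=n))

# regression with an interaction term; beta[3] is the moderation coefficient
X = np.column_stack([np.ones(n), credit, capability, credit * capability])
beta, *_ = np.linalg.lstsq(X, resilience, rcond=None)
print(f"interaction coefficient (moderation): {beta[3]:.3f}")
# a positive interaction means the credit effect strengthens as capability rises
```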
Algorithmic credit systems are positively associated with financial resilience.
Regression analyses reported show a positive relationship between algorithmic credit system use and measures of financial resilience in the sample of 400 users.
Algorithmic credit systems are positively associated with repayment behavior.
Multiple regression results reported in the study indicate a positive association between use of algorithmic credit systems and repayment behavior based on cross-sectional survey of 400 users.
Measurement reliability and validity were established through Cronbach's alpha and principal component analysis.
Paper states that Cronbach’s alpha and principal component analysis (PCA) were used to establish measurement reliability and validity.
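Cronbach's alpha has a simple closed form: alpha = k/(k-1) * (1 - sum of item variances / variance of the item sum). A self-contained sketch on simulated scale responses (not the study's data):

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, k_items) matrix of scale responses."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# simulate four items driven by one latent trait -> high internal consistency
rng = np.random.default_rng(4)
latent = rng.normal(size=200)
scale = latent[:, None] + 0.3 * rng.normal(size=(200, 4))
print(f"alpha = {cronbach_alpha(scale):.2f}")
```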
The study used a quantitative, explanatory, cross-sectional design and employed multiple regression and moderation analyses to assess relationships among algorithmic credit systems, human capability, institutional design, and financial-wellbeing outcomes.
Methods described explicitly: quantitative explanatory cross-sectional design; analytical methods named as multiple regression and moderation analyses.
Data were collected from 400 users of algorithmic and digitally mediated credit platforms.
Study reports a quantitative, explanatory, cross-sectional survey of users; sample size explicitly stated as 400.
The code and data used in the study are publicly available at the referenced repository.
Paper statement that code and data are publicly available at a repository (link provided in paper).
A sensitivity analysis over patrol radius, officer count, and citizen reporting probability reveals outcomes are most sensitive to officer deployment levels.
Reported sensitivity analysis across patrol radius, officer count, and reporting probability showing officer count as the most influential parameter in the simulation outcomes.
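One-at-a-time sensitivity analysis of this kind varies each parameter over a range while holding the others at baseline, then compares the output spreads. A generic sketch with a made-up detection-rate function (chosen so that officer count dominates, echoing the reported ranking; it is not the paper's simulator):

```python
import numpy as np

def detection_rate(radius, officers, report_prob):
    """Hypothetical stand-in for the simulator: citizen reports plus patrol coverage."""
    coverage = 1.0 - np.exp(-0.01 * officers * radius)
    return report_prob + (1.0 - report_prob) * coverage

baseline = dict(radius=2.0, officers=50, report_prob=0.3)
sweeps = {
    "radius": np.linspace(1.0, 4.0, 5),
    "officers": np.linspace(20, 200, 5),
    "report_prob": np.linspace(0.1, 0.6, 5),
}

# vary one parameter at a time; the spread of outputs measures sensitivity
spreads = {}
for name, values in sweeps.items():
    outputs = [detection_rate(**{**baseline, name: v}) for v in values]
    spreads[name] = max(outputs) - min(outputs)
    print(f"{name}: output range {spreads[name]:.3f}")
```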
Persistent Gini coefficients of 0.43 to 0.62 across all conditions indicate concentrated detection inequality.
Reported range of Gini coefficients from simulation experiments across conditions.
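A Gini coefficient over per-group detection outcomes can be computed directly from sorted values; 0 means perfectly equal detection, values near 1 mean detection concentrated on a few groups. A minimal sketch:

```python
import numpy as np

def gini(x):
    """Gini coefficient from the standard sorted-index formula."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    total = x.sum()
    # G = 2 * sum(i * x_i) / (n * sum(x)) - (n + 1) / n, with i = 1..n
    return (2.0 * np.sum(np.arange(1, n + 1) * x)) / (n * total) - (n + 1) / n

print(gini([1, 1, 1, 1]))  # perfectly equal detection across four groups
print(gini([0, 0, 0, 1]))  # detection concentrated on one group
```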
Experiments reveal extreme and year-variant bias in Baltimore's detected mode, with mean annual DIR up to 15,714 in 2019.
Reported experimental result from simulations on Baltimore data giving mean annual DIR up to 15,714 for 2019.
We compute four monthly bias metrics across 264 city-year-mode observations: the Disparate Impact Ratio (DIR), Demographic Parity Gap, Gini Coefficient, and a composite Bias Amplification Score.
Statement of metrics computed and the number of observations (264 city-year-mode observations) reported in the paper.
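Two of the four metrics have one-line definitions under a common convention: the Disparate Impact Ratio as the ratio of the highest to the lowest group detection rate, and the Demographic Parity Gap as their difference (conventions vary; some definitions fix a reference group instead). A sketch with hypothetical counts, not the Baltimore or Chicago figures:

```python
import numpy as np

def detection_rates(detected, population):
    """Per-group detection rates from raw counts."""
    return np.asarray(detected, dtype=float) / np.asarray(population, dtype=float)

# two demographic groups, hypothetical counts
rates = detection_rates(detected=[120, 30], population=[1000, 1000])

dir_metric = rates.max() / rates.min()   # Disparate Impact Ratio (max/min convention)
parity_gap = rates.max() - rates.min()   # Demographic Parity Gap
print(f"DIR = {dir_metric:.1f}, parity gap = {parity_gap:.3f}")
```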
The study uses 145,000+ Part 1 crime records from Baltimore (2017–2019) and 233,000+ records from Chicago (2022), augmented with US Census ACS demographic data.
Reported dataset sizes and data sources in the paper (crime records from Baltimore and Chicago; ACS demographic augmentation).
We present a reproducible simulation framework that couples a Generative Adversarial Network (GAN) with a Noisy OR patrol detection model to measure how racial bias propagates through the full enforcement pipeline from crime occurrence to police contact.
Description of methods in paper: coupling a GAN (CTGAN) for synthetic crime generation with a Noisy OR detection/patrol model; method-level claim rather than a numerical result.
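In a Noisy OR detection model, each nearby unit detects an incident independently with its own probability, so the overall detection probability is one minus the product of the per-unit miss probabilities. A minimal sketch with hypothetical parameters (the paper's patrol model will condition these on geometry and reporting):

```python
import numpy as np

def noisy_or_detection(unit_probs):
    """P(incident detected) when each unit detects independently with prob p_i."""
    unit_probs = np.asarray(unit_probs, dtype=float)
    return 1.0 - np.prod(1.0 - unit_probs)

# three patrol units near an incident, each with its own detection chance
p = noisy_or_detection([0.2, 0.1, 0.3])
print(f"P(incident detected) = {p:.3f}")
```

Adding units can only raise the detection probability, which is why, in a simulation like this one, bias in where units are deployed propagates directly into who gets detected.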
The initially selected candidates determine both the benchmark of success and the direction of improvement.
Theoretical result asserted by the authors based on analysis of the closed-loop system (paper's analytical finding).
Rejected individuals exert effort to improve actionable features along directions implied by the decision rule.
Model assumption and dynamic behavior encoded in the proposed framework (assumption/behavioral mechanism in the model).
Immediate practical steps include improved documentation, stakeholder audits, and multi‑metric evaluation; medium‑term steps include standards for participatory evaluation and tooling for transparency and monitoring; long‑term steps include institutional governance, interoperable safety APIs, and public‑interest evaluation infrastructure.
Prescriptive roadmap in the paper based on conceptual analysis and prior literature; these are recommended policy/program milestones rather than empirically validated interventions.