Higher education, gender‑balanced leadership, and digital literacy raise trust in ML credit scoring and increase FinTech adoption in the surveyed developing‑country sample, but perceived algorithmic unfairness and lack of transparency continue to exclude marginalized groups.
This paper investigates whether machine learning-supported FinTech innovations can promote financial inclusion by making access to credit fair and reasonable for everyone in emerging economies. It speaks directly to the problem of algorithmic bias in automated credit scoring systems, which can block marginalized groups from accessing financial services (Agboola, 2025; Nwafor, Nwafor, and Brahma, 2024; Oguntibeju, 2024). The study used a quantitative design: structured questionnaires were administered to 400 respondents in both urban and rural areas of developing countries (Kothandapani, 2022; Sadok, Sakka, and El Maknouzi, 2022; Herrmann and Masawi, 2022). Principal Component Analysis (PCA) identified the main constructs underlying FinTech adoption and perceived algorithmic trust, and Confirmatory Factor Analysis (CFA) and Structural Equation Modeling (SEM) were used to test the relationships among educational background, gender inclusiveness, digital literacy, and perceived algorithmic fairness (Dumitrescu et al., 2022; Chen, Calabrese, and Martin-Barraga, 2024; Moscato, Picariello, and Sperli, 2021). The measurement model was validated with Composite Reliability (CR), Average Variance Extracted (AVE), and several model-fit indicators (Bari, 2024; Fuster et al., 2021; Khandani, Kim, and Lo, 2010). We show that higher education levels and gender-balanced leadership positively influence trust in and acceptance of ML-based credit systems, while algorithmic bias and a lack of transparency continue to produce inequalities (Memarian, 2023; Bello, 2023; Gambacorta et al., 2024). Perceived algorithmic unfairness was mediated by digital literacy and education, which proved crucial for integrating inclusive finance (Zahir, Tonmoy, and Md Arifur, 2023; Abdullah Al et al., 2022; Md Masud, 2022).
These results confirm that the variables are mutually dependent and that combining AI with inclusive policies is necessary for the sustainable realization of financial inclusion (Salami et al., 2025; Berg et al., 2019; Fuster et al., 2021). Technological innovation alone does not guarantee inclusive development; it must be coupled with regulatory interventions that build digital literacy, gender equality, and algorithmic accountability (Jagtiani and Lemieux, 2019; Herrmann and Masawi, 2022; Agboola, 2025).
Summary
Main Finding
Machine-learning driven FinTech can expand credit access in developing economies, but it does not guarantee fairness by itself. Perceptions of algorithmic fairness and actual inclusion improve when ML systems are combined with (1) higher digital literacy and education, (2) gender-balanced leadership and inclusion efforts, and (3) regulatory interventions that promote transparency, accountability, and data standards. Without these social, institutional, and policy complements, ML-based credit scoring risks reproducing or amplifying existing socioeconomic and gendered exclusions.
Key Points
- Algorithmic bias in credit scoring arises primarily from data asymmetries (sparse/poor-quality digital histories, language heterogeneity, unbalanced datasets) and opaque model design.
- Survey evidence indicates education and digital literacy mediate perceived unfairness and increase trust and uptake of ML-based credit systems.
- Gender inclusiveness in governance/leadership correlates with greater perceived algorithmic trust and acceptance.
- Fairness-aware ML techniques and fairness metrics (e.g., demographic parity, equalized odds, counterfactual fairness) are discussed as benchmarks, but few empirical studies apply them to real FinTech data in developing-country contexts.
- Regulatory and institutional gaps (fragmented data systems, lack of interoperability, weak accountability) are key drivers of exclusion and limit the ability of policymakers to detect and correct discriminatory outcomes.
- FinTech innovation is necessary but insufficient: technology must be coupled with policy measures (digital literacy programs, gender equity initiatives, algorithmic responsibility standards) to produce inclusive outcomes.
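The fairness metrics named above can be computed directly from a model's binary approval decisions. A minimal sketch of demographic parity and equalized odds gaps, using hypothetical approval data (not from the study; variable names are illustrative):

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in approval rates between two groups (0/1 predictions)."""
    g = np.asarray(group).astype(bool)
    p = np.asarray(y_pred)
    return abs(p[g].mean() - p[~g].mean())

def equalized_odds_gap(y_true, y_pred, group):
    """Largest gap across groups in TPR (among true 1s) and FPR (among true 0s)."""
    y, p, g = map(np.asarray, (y_true, y_pred, group))
    g = g.astype(bool)
    gaps = []
    for label in (1, 0):  # label=1 compares TPRs, label=0 compares FPRs
        mask = y == label
        gaps.append(abs(p[mask & g].mean() - p[mask & ~g].mean()))
    return max(gaps)

# Hypothetical approvals: group A (1) is favoured over group B (0)
y_true = np.array([1, 1, 0, 0, 1, 0, 1, 0])
y_pred = np.array([1, 1, 0, 1, 0, 0, 1, 0])
group  = np.array([1, 1, 1, 1, 0, 0, 0, 0])
print(demographic_parity_gap(y_pred, group))
print(equalized_odds_gap(y_true, y_pred, group))
```

A gap of 0 on both metrics would indicate parity; counterfactual fairness, by contrast, requires a causal model and cannot be checked from observed rates alone.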
Data & Methods
- Research design: quantitative, survey-based study.
- Sample: structured questionnaires administered to 400 respondents drawn from urban and rural areas in developing-country contexts.
- Primary constructs: adoption of FinTech, perceived algorithmic trust/fairness, education, gender inclusiveness, digital literacy.
- Statistical methods:
- Principal Component Analysis (PCA) to identify main constructs.
- Confirmatory Factor Analysis (CFA) and Structural Equation Modeling (SEM) to examine relationships among constructs (education, gender inclusiveness, digital literacy, perceived algorithmic fairness).
- Reliability and validity checks: Composite Reliability (CR), Average Variance Extracted (AVE), and standard model-fit indicators.
- Important methodological limits (reported or implied):
- The analysis is perception- and survey-based rather than an algorithmic audit of deployed credit models; it does not present performance/fairness metrics from actual ML systems or field experiments.
- Sample size and sampling frame (400 respondents) may limit external generalizability across diverse developing-country settings.
- Causal claims are constrained by cross-sectional survey design.
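The CR and AVE checks listed above follow standard formulas over standardized factor loadings: CR = (Σλ)² / ((Σλ)² + Σ(1 − λ²)) and AVE = mean(λ²). A minimal sketch with hypothetical loadings (the paper's actual loadings are not reproduced here):

```python
import numpy as np

def composite_reliability(loadings):
    """CR = (sum λ)^2 / ((sum λ)^2 + sum(1 - λ^2)), standardized loadings λ."""
    lam = np.asarray(loadings, dtype=float)
    num = lam.sum() ** 2
    return num / (num + (1 - lam ** 2).sum())

def average_variance_extracted(loadings):
    """AVE = mean of squared standardized loadings."""
    lam = np.asarray(loadings, dtype=float)
    return (lam ** 2).mean()

# Hypothetical loadings for a 4-item "perceived algorithmic trust" construct
loadings = [0.82, 0.78, 0.74, 0.70]
print(composite_reliability(loadings))       # CR >= 0.70 conventionally acceptable
print(average_variance_extracted(loadings))  # AVE >= 0.50 indicates convergent validity
```

The conventional cutoffs (CR ≥ 0.70, AVE ≥ 0.50) are rules of thumb, not reported thresholds from this study.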
Implications for AI Economics
- Distributional consequences: Algorithmic credit scoring can reshape credit allocation and credit pricing; biased models risk concentrating exclusion among low-income, rural, and female borrowers, worsening inequality and undermining the SDGs (notably SDGs 5, 8, and 10).
- Market efficiency vs. equity trade-offs: Pursuing highest predictive accuracy without fairness constraints can improve lender risk estimates but produce socially undesirable segmentation and adverse socioeconomic impacts. Policymakers must balance efficiency gains with equity and access goals.
- Regulatory economics: There is a role for regulatory interventions (mandatory fairness audits, transparency/interpretability requirements, data governance standards, interoperability mandates). Properly designed regulation can correct market failures (information asymmetry, externalities from biased algorithms) and increase trust, expanding market participation.
- Investment priorities: Public and private investment in digital literacy, education, and gender-equity programs can yield economic returns by improving take-up and correct use of digital financial services; such investments also reduce the effective bias introduced by sparse/low-quality data.
- Institutional design: Promoting gender-balanced leadership and inclusive governance in FinTech firms and supervisory bodies can change incentives and product designs toward fairer outcomes, affecting the supply side of credit markets.
- Research & measurement needs: AI economics should push for field-level evaluations combining algorithmic audits with outcome-level measures (default rates, access rates, welfare impacts) in developing-country contexts. Policymakers and researchers need standardized datasets and evaluation protocols to measure both predictive performance and fairness across subpopulations.
- Practical policy recommendations implied by the study:
- Require fairness audits and disclosure of fairness metrics for deployed credit models.
- Fund digital literacy and financial education campaigns targeted at marginalized groups.
- Encourage data interoperability and standards to reduce fragmentation and improve model representativeness.
- Incentivize gender diversity in fintech governance and product design.
- Support development and adoption of fairness-aware ML methods tailored to low-data and multilingual environments.
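The fairness-audit disclosure recommended above can start from something as simple as per-subgroup approval and true-positive rates. A sketch with hypothetical data and illustrative subgroup names (the study does not prescribe this report format):

```python
import json
import numpy as np

def audit_report(y_true, y_pred, group_labels):
    """Per-subgroup sample size, approval rate, and TPR, ready for disclosure."""
    y, p, g = map(np.asarray, (y_true, y_pred, group_labels))
    report = {}
    for label in np.unique(g):
        m = g == label
        pos = m & (y == 1)  # truly creditworthy members of this subgroup
        report[str(label)] = {
            "n": int(m.sum()),
            "approval_rate": round(float(p[m].mean()), 3),
            "tpr": round(float(p[pos].mean()), 3) if pos.any() else None,
        }
    return report

# Hypothetical audit over two subgroups
y_true = [1, 0, 1, 0, 1, 1, 0, 0]
y_pred = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["urban", "urban", "urban", "urban", "rural", "rural", "rural", "rural"]
print(json.dumps(audit_report(y_true, y_pred, groups), indent=2))
```

Publishing such per-subgroup rates alongside model releases would let regulators and researchers spot divergences (here, a lower rural approval rate and TPR) without access to the model internals.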
Overall, the paper underscores that the economic impact of AI in finance depends critically on institutional and human-capital complements; without them, algorithmic adoption risks reinforcing pre-existing inequalities rather than delivering inclusive growth.
Assessment
Claims (11)
| Claim | Direction | Confidence | Outcome | Details |
|---|---|---|---|---|
| Machine learning-supported FinTech innovations can be used to promote financial inclusion, making access to credit fair and reasonable for everyone in emerging economies. (Consumer Welfare) | positive | high | financial inclusion / access to credit | n=400; 0.3 |
| Algorithmic bias in automated credit scoring systems may block marginalized groups from accessing financial services. (Consumer Welfare) | negative | high | access to credit for marginalized groups | 0.3 |
| Structured questionnaires were administered to 400 respondents in both urban and rural areas of developing countries. (Other) | null_result | high | survey responses (study data collection) | n=400; 0.5 |
| Principal Component Analysis (PCA) identified the main constructs related to adoption of FinTech and perceived algorithmic trust. (Adoption Rate) | null_result | high | construct validity for adoption and perceived algorithmic trust | n=400; 0.3 |
| Confirmatory Factor Analysis (CFA) and Structural Equation Modeling (SEM) verified correlations among educational background, gender inclusiveness, digital literacy, and perceived algorithmic fairness. (Adoption Rate) | mixed | high | correlations among educational background, gender inclusiveness, digital literacy, perceived algorithmic fairness | n=400; 0.3 |
| Higher level of education and gender-balanced leadership positively impact trust and acceptance toward ML-based credit systems. (Adoption Rate) | positive | high | trust and acceptance of ML-based credit systems | n=400; 0.3 |
| There exist inequalities in the emergence of algorithmic bias and in the transparency of these systems. (Inequality) | negative | high | inequalities related to algorithmic bias and transparency | n=400; 0.3 |
| Perceived unfairness of algorithms can be mediated (reduced) by digital literacy and education, which assist integration of inclusive finance. (Adoption Rate) | positive | high | perceived algorithmic unfairness (mediated by digital literacy/education) | n=400; 0.3 |
| The study's measurement model is supported by Composite Reliability (CR), Average Variance Extracted (AVE), and several model-fit indicators. (Other) | null_result | high | measurement model reliability and validity | n=400; 0.5 |
| These variables (education, gender inclusiveness, digital literacy, perceived fairness) are mutually dependent, and the use of AI combined with inclusive policies is necessary to sustainably realize financial inclusion. (Consumer Welfare) | positive | medium | sustainable financial inclusion | n=400; 0.03 |
| Regulatory interventions to promote digital literacy, gender equality, and algorithmic responsibility should be coupled with technological innovation, because technology alone does not guarantee inclusive development. (Governance And Regulation) | positive | high | effectiveness of policy + technology for inclusive development | 0.05 |