AI tools in hiring and workforce management produce ambivalent labour-market outcomes: they can widen access when used as assistive matchmakers but often deepen exclusion and bias when trained on flawed data or deployed without transparency and accountability. The review argues that technical audits, inclusive upskilling, participatory regulation, and responsible HR practices must work together to align AI with decent, inclusive work.
Artificial Intelligence (AI) is increasingly used in recruitment, performance management, and algorithmic work management, with potentially divergent implications for worker inclusion, exclusion, and discrimination. This systematic review synthesizes peer-reviewed evidence on (i) which AI applications in labor-market settings are linked to inclusion/exclusion outcomes, (ii) the mechanisms and contextual moderators shaping these effects, and (iii) governance and human-resource management responses proposed in the literature. Guided by PRISMA 2020, we searched Scopus and Web of Science (Title/Abstract/Keywords) for English-language journal articles published between 2015 and 2025. Nineteen studies met the eligibility criteria and were analyzed using qualitative thematic synthesis. The evidence indicates an ambivalent pattern: AI can support inclusion through assistive technologies and improved matching, but it can also exacerbate occupational polarization, digital exclusion, and discriminatory outcomes when models are trained on biased data or deployed without transparency and accountability. Outcomes depend on complementary organizational practices, workers’ access to skills, and the regulatory environment. Based on an evidence map of the included studies, we propose a hybrid governance model combining technical and organizational audits, inclusive upskilling/reskilling, participatory regulation, and responsible HR policies to align AI innovation with decent and inclusive work. Given the focused Title/Abstract/Keywords query and the small, heterogeneous corpus, the findings are interpreted as a scoped evidence map rather than an exhaustive census of all AI-and-work research. The model’s contribution lies in integrating four interdependent governance layers—technical, organizational, workforce, and regulatory—within a single labor-market framework. 
Accordingly, the review should be read as a focused qualitative evidence synthesis, and the proposed model as an evidence-informed conceptual framework that warrants future empirical validation.
Summary
Main Finding
AI in recruitment, performance management, and algorithmic work management has ambivalent effects on inclusion: it can enhance inclusion through assistive technologies and better matching, but it can also deepen occupational polarization, digital exclusion, and discrimination when models are trained on biased data or deployed without transparency, accountability, and complementary organizational supports. Outcomes hinge on organizational practices, workers’ skills/access, and the regulatory context. The authors synthesize 19 studies and propose a four-layer hybrid governance model (technical, organizational, workforce, regulatory) as an evidence-informed framework for aligning AI with decent, inclusive work.
Key Points
- Evidence base: 19 peer-reviewed studies (English, 2015–2025) identified via a PRISMA-guided search of Scopus and Web of Science (title/abstract/keywords). Analysis used qualitative thematic synthesis and an evidence map.
- Ambivalent effects:
  - Inclusion benefits: assistive technologies for disabled workers, improved matching between workers and jobs, and automation of mundane tasks that can raise accessibility.
  - Exclusion risks: occupational polarization (automation of routine tasks and high/low-skill bifurcation), digital exclusion (lack of access or skills), and discriminatory outcomes from biased training data and opaque deployment.
- Mechanisms and moderators:
  - Technical: training-data bias, model opacity, lack of fairness-aware design.
  - Organizational: HR policies, transparency, accountability, auditing practices, management incentives.
  - Worker-level: access to upskilling/reskilling, digital literacy, bargaining power.
  - Regulatory/contextual: labor law, anti-discrimination enforcement, industry norms.
- Governance and HR responses identified in the literature:
  - Technical audits (algorithmic, fairness, performance) and model documentation.
  - Organizational audits and accountability mechanisms linking AI to HR practices.
  - Inclusive upskilling/reskilling programs tied to AI adoption.
  - Participatory regulation and stakeholder engagement (workers, unions).
  - Responsible HR policies: transparency, explainability, appeal/recourse mechanisms.
- Limitations noted by authors: focused search strategy (title/abstract/keywords), English-language restriction, small and heterogeneous corpus — so the review is a scoped evidence map, not an exhaustive census. The proposed governance model is conceptual and needs empirical validation.
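The "technical audits" named above can take concrete form as fairness checks on screening outputs. As a minimal sketch, the snippet below implements the four-fifths (80%) rule from US adverse-impact guidance, one common audit heuristic; the applicant data and group names are hypothetical, and a real audit would use many such checks, not this one alone.

```python
# Minimal sketch of one "technical audit" check: the four-fifths (80%)
# rule used in US adverse-impact guidance. All data below is
# hypothetical and for illustration only.

def selection_rates(outcomes):
    """Per-group share of applicants selected by the screening tool."""
    return {g: sum(sel) / len(sel) for g, sel in outcomes.items()}

def four_fifths_check(outcomes, threshold=0.8):
    """Flag adverse impact when a group's selection rate falls below
    `threshold` times the highest group's selection rate."""
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: (r / top, r / top >= threshold) for g, r in rates.items()}

# Hypothetical screening decisions (1 = advanced to interview).
decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],   # 6/8 = 0.75 selected
    "group_b": [1, 0, 0, 0, 1, 0, 0, 0],   # 2/8 = 0.25 selected
}

for group, (ratio, passes) in four_fifths_check(decisions).items():
    status = "ok" if passes else "adverse impact flag"
    print(f"{group}: impact ratio {ratio:.2f} -> {status}")
```

Here `group_b`'s impact ratio (0.25 / 0.75 ≈ 0.33) falls below 0.8, so the check flags possible adverse impact for follow-up, which is exactly the kind of signal an organizational audit process would then investigate.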
Data & Methods
- Search strategy: PRISMA 2020-guided systematic search of Scopus and Web of Science for English-language journal articles published 2015–2025, queried in Title/Abstract/Keywords.
- Inclusion: peer-reviewed empirical/theoretical studies addressing AI applications in labor-market settings and inclusion/exclusion outcomes.
- Corpus: 19 eligible studies meeting criteria.
- Analysis: qualitative thematic synthesis to extract mechanisms, moderators, outcomes, governance responses; synthesis organized into an evidence map and used to develop a hybrid governance framework.
- Interpretation: findings presented as a scoped qualitative evidence synthesis (evidence map), with caveats about limited coverage and heterogeneity.
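The evidence-map step described above amounts to cross-tabulating coded studies by theme and reported outcome direction. The sketch below shows that tabulation in miniature; the study records are hypothetical placeholders, not the review's actual 19-study corpus.

```python
# Minimal sketch of an evidence-map tabulation: counting coded studies
# per (theme, outcome direction) cell. Records are hypothetical.
from collections import Counter

coded_studies = [
    {"id": "S01", "theme": "recruitment",             "direction": "positive"},
    {"id": "S02", "theme": "recruitment",             "direction": "negative"},
    {"id": "S03", "theme": "algorithmic management",  "direction": "negative"},
    {"id": "S04", "theme": "assistive technology",    "direction": "positive"},
    {"id": "S05", "theme": "algorithmic management",  "direction": "mixed"},
]

# Each (theme, direction) pair is one cell of the evidence map.
evidence_map = Counter((s["theme"], s["direction"]) for s in coded_studies)

for (theme, direction), n in sorted(evidence_map.items()):
    print(f"{theme:>24} | {direction:<8} | n={n}")
```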
Implications for AI Economics
- For models of labor markets and automation:
  - Incorporate heterogeneity in worker access to skills and digital tools; treat skill formation and firm-level complementary practices as key mediators.
  - Model task-based substitution and complementarity with explicit attention to polarization and unequal exposure across occupations.
  - Include institutional and regulatory constraints (audit requirements, anti-discrimination enforcement) as factors shaping firm adoption and outcomes.
- For empirical research:
  - Prioritize causal and microdata studies measuring distributional effects (wages, employment, task composition) of AI-enabled HR and management systems.
  - Evaluate the effectiveness and cost-benefit of governance interventions (technical audits, upskilling programs, participatory regulation).
  - Broaden evidence by including gray literature, industry reports, cross-country comparisons, and randomized/field experiments to quantify net employment and inequality effects.
- For policy and regulation:
  - Design multi-layered interventions: require technical audits and documentation, mandate organizational accountability and worker recourse, subsidize inclusive upskilling, and create participatory rulemaking channels (e.g., worker representation in AI governance).
  - Target policies to reduce digital exclusion (access and training) to prevent unequal gains from AI adoption.
  - Monitor distributional outcomes (who gains, who loses) rather than only aggregate productivity measures.
- For firms and HR:
  - Treat AI adoption as contingent on organizational practices; pair algorithmic tools with transparent HR policies, explainability, appeal processes, and proactive reskilling.
  - Assess AI tools not only for predictive performance but for fairness, disparate impacts, and effects on workforce composition.
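The firm-level recommendation above, evaluating screening tools on fairness as well as predictive performance, can be made concrete with a two-metric evaluation: overall accuracy plus the gap in true positive rates across groups (an equal-opportunity check). The sketch below uses hypothetical toy labels and predictions; real evaluations would use held-out data and several fairness metrics.

```python
# Minimal sketch of evaluating a screening model on accuracy AND a
# fairness metric (true-positive-rate gap across groups).
# All labels/predictions below are hypothetical toy data.

def accuracy(y_true, y_pred):
    """Share of candidates the model classifies correctly."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def tpr(y_true, y_pred):
    """True positive rate: share of qualified candidates advanced."""
    positives = [p for t, p in zip(y_true, y_pred) if t == 1]
    return sum(positives) / len(positives)

# y = 1 means the candidate is actually qualified.
y_true = {"group_a": [1, 1, 1, 0, 0, 1], "group_b": [1, 1, 0, 0, 1, 1]}
y_pred = {"group_a": [1, 1, 1, 0, 0, 1], "group_b": [1, 0, 0, 0, 0, 1]}

all_true = y_true["group_a"] + y_true["group_b"]
all_pred = y_pred["group_a"] + y_pred["group_b"]
print(f"overall accuracy: {accuracy(all_true, all_pred):.2f}")

# A high aggregate accuracy can coexist with a large fairness gap.
gap = tpr(y_true["group_a"], y_pred["group_a"]) - tpr(y_true["group_b"], y_pred["group_b"])
print(f"equal-opportunity gap (TPR_a - TPR_b): {gap:.2f}")
```

In this toy case the model looks respectable in aggregate (accuracy about 0.83) while advancing only half of qualified `group_b` candidates, illustrating why aggregate performance alone can mask disparate impacts on workforce composition.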
Overall takeaway for AI economics: assessing AI’s labor-market consequences requires integrating technical, firm-level, worker-level, and regulatory dimensions. Policy and modeling approaches that ignore these complementarities risk underestimating distributional harms or overstating inclusionary benefits.
Assessment
Claims (9)
| Claim | Category | Direction | Confidence | Outcome | Details |
|---|---|---|---|---|---|
| We conducted a systematic review guided by PRISMA 2020, searching Scopus and Web of Science (Title/Abstract/Keywords) for English-language journal articles published between 2015 and 2025. | Other | null_result | high | N/A (method description) | 0.4 |
| Nineteen studies met the eligibility criteria and were analyzed using qualitative thematic synthesis. | Other | null_result | high | number of included studies | n=19; 0.4 |
| The evidence indicates that AI can support inclusion through assistive technologies and improved matching in labor-market settings. | Hiring | positive | high | worker inclusion in recruitment/placement | n=19; 0.24 |
| AI can exacerbate occupational polarization, digital exclusion, and discriminatory outcomes when models are trained on biased data or deployed without transparency and accountability. | Inequality | negative | high | distributional and equity outcomes (polarization, exclusion, discrimination) | n=19; 0.24 |
| Outcomes of AI deployment in labor-market settings depend on complementary organizational practices, workers' access to skills, and the regulatory environment. | Skill Acquisition | mixed | high | inclusion/exclusion outcomes contingent on moderators | n=19; 0.24 |
| Based on an evidence map of the included studies, we propose a hybrid governance model combining technical and organizational audits, inclusive upskilling/reskilling, participatory regulation, and responsible HR policies to align AI innovation with decent and inclusive work. | Governance And Regulation | positive | high | alignment of AI innovation with decent and inclusive work (governance intervention) | n=19; 0.04 |
| Given the focused Title/Abstract/Keywords query and the small, heterogeneous corpus, the findings are interpreted as a scoped evidence map rather than an exhaustive census of all AI-and-work research. | Other | null_result | high | scope and generalizability of the review findings | n=19; 0.4 |
| The model's contribution lies in integrating four interdependent governance layers (technical, organizational, workforce, regulatory) within a single labor-market framework. | Governance And Regulation | positive | high | conceptual integration of governance layers | 0.04 |
| The review is a focused qualitative evidence synthesis and the proposed governance model is an evidence-informed conceptual framework that warrants future empirical validation. | Research Productivity | null_result | high | need for future empirical validation | n=19; 0.04 |