Higher-education AI readiness is highly uneven: limited hands-on training, patchy institutional policies and poor infrastructure—particularly in the Global South—leave many students and educators underprepared for AI-enabled labor markets.
As artificial intelligence (AI) rapidly transforms the landscape of higher education, there is a critical need to develop AI competency among both educators and students. However, current AI policies and guidelines are often top-down and lack grassroots insights from key stakeholders. Drawing on the recently released UNESCO AI competency frameworks for educators and students (2024), this study presents findings from a global survey of over 600 students and educators. The results highlight significant disparities in AI engagement across groups, disciplines, and regions, as well as barriers such as inconsistent institutional guidance, limited access to hands-on training, and infrastructural constraints, particularly in Global South contexts. Building on these insights, the study offers practical, evidence-informed recommendations for higher education institutions, educators, and students to support equitable, sustainable, and context-sensitive AI competency development.
Summary
Main Finding
A global survey of more than 600 higher-education students and educators, anchored to the UNESCO 2024 AI competency frameworks, finds large disparities in AI engagement and preparedness across groups, disciplines, and regions. Key barriers are inconsistent institutional guidance, limited access to hands-on training, and infrastructural constraints—problems that are especially acute in Global South contexts. Addressing these gaps requires locally grounded, equity-focused investments in capacity building, infrastructure, and adaptive policy design.
Key Points
**Disparities in engagement and confidence**
- Levels of familiarity and use of AI tools vary widely by role (students vs. educators), discipline (STEM vs. humanities/social sciences), and region (Global North vs. Global South).
- Educators frequently report lower confidence in teaching AI-relevant skills than students report in using AI tools, reducing instructional capacity.
**Institutional guidance and policy gaps**
- Many institutions lack clear, consistent, or context-sensitive policies for AI use in learning, assessment, and academic integrity.
- Top-down guidelines are common; grassroots input from educators and students is often missing, reducing policy relevance and uptake.
**Hands-on training and curricular integration**
- Respondents cite limited opportunities for applied, project-based learning with AI tools; theory-oriented coverage dominates where AI is present.
- Practical barriers (software access, datasets, lab time) limit experiential learning that builds competency.
**Infrastructure and access constraints**
- Infrastructural limitations (bandwidth, computing resources, licensing costs) disproportionately affect respondents in the Global South and smaller institutions.
- These constraints create unequal starting points that can amplify later disparities in labor-market preparedness.
**Equity and context sensitivity**
- One-size-fits-all competency approaches fail to account for local labor markets, pedagogical traditions, and resource realities.
- Respondents favor context-aware frameworks that prioritize core competencies while allowing flexible, discipline-specific adaptation.
**Recommended institutional actions (evidence-informed)**
- Co-design policies and curricula with educators and students to ensure relevance and buy-in.
- Prioritize hands-on, low-cost training (open-source tools, cloud credits, shared labs) and scaffolded learning pathways.
- Invest strategically in infrastructure and pooled resources, with targeted support for under-resourced regions and programs.
- Establish clear, transparent assessment and academic-integrity guidelines that balance enabling learning with preventing misuse.
- Monitor, evaluate, and iterate: collect local data on outcomes to refine competency programs.
Data & Methods
**Sample and scope**
- Cross-sectional online survey of over 600 participants, composed of higher-education students and educators from multiple world regions, including significant representation from both Global North and Global South contexts.
- Recruitment targeted a mix of disciplines and institution types to capture variation in AI engagement.
**Survey design**
- Questions mapped to the UNESCO 2024 AI competency frameworks for educators and students to assess: awareness, self-reported competency, access to tools and training, institutional policies, and perceived barriers.
- Both quantitative (Likert-scale and multiple-choice) and open-ended qualitative items captured experiential detail and practical suggestions.
**Analysis**
- Descriptive statistics to profile engagement and barriers across subgroups (role, discipline, region).
- Thematic coding of open responses to identify recurring concerns and grassroots recommendations.
- Comparative analysis highlighting contrasts between resource-rich and resource-constrained settings.
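The subgroup profiling step described above can be sketched with a minimal example; the records, roles, and scores below are hypothetical stand-ins, not the study's actual data:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical records: each respondent has a role, region, and a 1-5
# Likert self-rating of AI-tool familiarity (illustrative data only).
responses = [
    {"role": "student",  "region": "Global North", "familiarity": 4},
    {"role": "student",  "region": "Global South", "familiarity": 3},
    {"role": "educator", "region": "Global North", "familiarity": 3},
    {"role": "educator", "region": "Global South", "familiarity": 2},
    {"role": "student",  "region": "Global North", "familiarity": 5},
    {"role": "educator", "region": "Global South", "familiarity": 2},
]

def subgroup_means(records, key):
    """Mean familiarity score for each level of a grouping variable."""
    groups = defaultdict(list)
    for r in records:
        groups[r[key]].append(r["familiarity"])
    return {level: round(mean(scores), 2) for level, scores in groups.items()}

by_role = subgroup_means(responses, "role")      # e.g. students vs. educators
by_region = subgroup_means(responses, "region")  # e.g. Global North vs. South
```

The same helper applies to any grouping variable (discipline, institution type), which is all that the descriptive profiling requires.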
**Limitations**
- Non-probability sampling and self-reported measures limit claims about prevalence and causality.
- Cross-sectional design cannot capture dynamics of skill acquisition over time.
- Geographic representation may still under-sample some regions and institution types; further targeted sampling is recommended.
Implications for AI Economics
**Human capital and returns to AI skills**
- Heterogeneous access to AI training implies unequal human-capital accumulation, which can exacerbate income and opportunity gaps as AI-enabled tasks proliferate.
- Economists should measure how differences in hands-on experience versus theoretical exposure affect labor-market returns to AI-related skills.
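One way to operationalize that measurement question is a Mincer-style log-wage regression with indicators for training modality. The sketch below is illustrative only: the data are synthetic and the coefficients are assumptions, not estimates from the study:

```python
import numpy as np

# Synthetic data: wage returns to hands-on vs. theory-only AI training.
# The "true" premia (0.15 and 0.05 log points) are placeholders.
rng = np.random.default_rng(0)
n = 500
hands_on = rng.integers(0, 2, n)   # 1 = applied, project-based AI training
theory = rng.integers(0, 2, n)     # 1 = theory-only AI exposure
exper = rng.uniform(0, 20, n)      # years of labor-market experience
log_wage = (2.0 + 0.15 * hands_on + 0.05 * theory
            + 0.02 * exper + rng.normal(0, 0.1, n))

# OLS via least squares on X = [1, hands_on, theory, exper]
X = np.column_stack([np.ones(n), hands_on, theory, exper])
beta, *_ = np.linalg.lstsq(X, log_wage, rcond=None)
# beta[1] and beta[2] recover the (log-point) returns to each modality
```

In real data, of course, training modality is not randomly assigned, which is exactly why the causal designs discussed later in this section are needed.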
**Skill-biased technological change and inequality**
- Differential institutional capacity to teach and provide tools suggests that AI adoption may be skill-biased in ways that align with existing geographic and disciplinary inequalities.
- Policy models of technological diffusion must incorporate educational infrastructure and competency-building as key frictions.
**Complementarities and task reallocation**
- The study underscores complementarities between domain expertise and AI competencies: effective use of AI in a field depends on both technical and domain-specific skills.
- Evaluations of productivity impacts should account for these complementarities and the time needed for learning.
**Policy design and cost-effectiveness**
- Investments in shared infrastructure (cloud access, open-source toolkits, regional training hubs) may yield high social returns by lowering entry costs and reducing inequality.
- Economic analysis should compare centralized vs. decentralized models of training provision, accounting for local constraints and scaling costs.
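A back-of-the-envelope average-cost comparison illustrates the centralized-vs-decentralized trade-off; all cost figures below are placeholders chosen for illustration, not estimates from the study:

```python
# Toy average-cost model: a fixed setup cost amortized across learners,
# plus a per-learner variable cost. All numbers are illustrative.
def cost_per_learner(fixed_cost, variable_cost, learners):
    """Average cost when a fixed setup cost is spread over all learners."""
    return fixed_cost / learners + variable_cost

# Centralized regional hub: high fixed cost, large enrollment, cheap delivery.
central = cost_per_learner(fixed_cost=500_000, variable_cost=20, learners=10_000)
# Decentralized campus lab: low fixed cost, small enrollment, costlier delivery.
local = cost_per_learner(fixed_cost=50_000, variable_cost=35, learners=800)
```

The centralized option wins on average cost here only because of scale; a fuller analysis would add the access frictions (bandwidth, travel, licensing) that this simple model omits.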
**Research agenda and evaluation methods**
- Randomized evaluations and quasi-experimental studies are needed to estimate causal effects of specific interventions (hands-on labs, instructor training, subsidies for compute) on competencies and earnings.
- Cross-country and within-country studies can quantify how infrastructure constraints mediate the relationship between AI education and economic outcomes.
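For quasi-experimental designs of this kind, the canonical difference-in-differences estimator can be sketched on toy numbers (all values hypothetical):

```python
# Toy difference-in-differences (DiD) estimate of a training intervention's
# effect on a mean competency score. All numbers are illustrative.
treated_before, treated_after = 2.0, 3.4   # institutions receiving, e.g., compute subsidies
control_before, control_after = 2.1, 2.6   # comparison institutions

# DiD nets out both the baseline gap and the common time trend; it
# identifies the intervention effect only under a parallel-trends assumption.
did = (treated_after - treated_before) - (control_after - control_before)
```

Infrastructure constraints could then be studied as moderators, e.g. by estimating the DiD separately for well- and poorly-connected institutions.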
**Institutional incentives and market responses**
- Universities’ incentives (rankings, funding, workload) shape adoption of AI curricula; economists should study how these incentives interact with labor-market demand to influence supply of AI-skilled graduates.
- Consideration of public-good properties (open educational resources, shared datasets) is important for policy to correct under-provision.
In sum, the study points to significant and policy-relevant heterogeneity in AI competency and access in higher education. For researchers in AI economics, this highlights the need to model and empirically estimate the distributional effects of AI education, the returns to different training modalities, and the most cost-effective ways to close capability gaps—especially for under-resourced regions.
Assessment
Claims (14)
| Claim | Direction | Confidence | Outcome | Details |
|---|---|---|---|---|
| The study conducted a cross-sectional online survey of more than 600 higher-education students and educators from multiple world regions. (Research Productivity) | positive | high | sample size and participant composition (number of respondents; roles: students and educators; regional representation) | n=600; 0.09 |
| There are large disparities in AI engagement and preparedness across roles (students vs. educators), academic disciplines, and world regions. (Skill Acquisition) | mixed | medium | AI engagement and preparedness (self-reported familiarity, use, awareness, and confidence) | n=600; 0.05 |
| Levels of familiarity and use of AI tools vary widely by role, discipline, and region. (Skill Acquisition) | mixed | medium | self-reported familiarity with and use of AI tools | n=600; 0.05 |
| Educators frequently report lower confidence in teaching AI-relevant skills than students report in using AI tools, reducing instructional capacity. (Skill Acquisition) | negative | medium | self-reported confidence in teaching AI-relevant skills (educators) vs. confidence in using AI tools (students) | n=600; 0.05 |
| Many institutions lack clear, consistent, or context-sensitive policies for AI use in learning, assessment, and academic integrity. (Governance and Regulation) | negative | medium | presence, clarity, and context-sensitivity of institutional AI policies | n=600; 0.05 |
| Top-down AI guidance from institutions is common, while grassroots input from educators and students is often missing, which reduces policy relevance and uptake. (Governance and Regulation) | negative | low | degree of grassroots input or participatory design in institutional AI policy formation | n=600; 0.03 |
| Respondents cite limited opportunities for applied, project-based learning with AI tools; where AI appears in curricula, coverage is more theory-oriented than hands-on. (Training Effectiveness) | negative | medium | availability of applied/project-based AI learning opportunities versus theoretical coverage | n=600; 0.05 |
| Practical barriers (software access, available datasets, and lab time) limit experiential learning that builds AI competency. (Training Effectiveness) | negative | medium | reported practical barriers to experiential AI learning (software access, datasets, lab time) | n=600; 0.05 |
| Infrastructural limitations (bandwidth, computing resources, licensing costs) disproportionately affect respondents in the Global South and smaller institutions. (Inequality) | negative | medium | infrastructural access measures (bandwidth, compute resources, licensing affordability) by region and institution type | n=600; 0.05 |
| These infrastructural and access constraints create unequal starting points that can amplify later disparities in labor-market preparedness. (Inequality) | negative | low | implied labor-market preparedness (not directly measured in this study) | 0.03 |
| One-size-fits-all AI competency approaches fail to account for local labor markets, pedagogical traditions, and resource realities; respondents favor context-aware frameworks allowing discipline-specific adaptation. (Training Effectiveness) | negative | medium | respondent preferences for competency framework design and adaptability to local context | n=600; 0.05 |
| Respondents recommend co-designing policies and curricula with educators and students, prioritizing hands-on, low-cost training (open-source tools, cloud credits, shared labs), and investing in pooled infrastructure with targeted support for under-resourced regions. (Governance and Regulation) | positive | speculative | recommended institutional actions (policy co-design, training modalities, infrastructure investment) as reported/preferred by respondents | n=600; 0.01 |
| Non-probability sampling and self-reported measures limit claims about prevalence and causality; the cross-sectional design cannot capture dynamics of skill acquisition over time. (Research Productivity) | null_result | high | study design limitations affecting external validity and causal inference | n=600; 0.09 |
| The paper identifies gaps and recommends that economists conduct randomized evaluations and quasi-experimental studies to estimate causal effects of interventions (hands-on labs, instructor training, compute subsidies) on competencies and earnings. (Research Productivity) | positive | high | suggested future measurement targets: causal effects of specific interventions on competencies and earnings (not measured in current study) | 0.09 |