The Commonplace

Women in the UK lag men in generative-AI adoption largely because they worry more about societal risks—not because of access or skills; boosting optimism about AI could raise young women's GenAI use from about 13% to 33%, markedly narrowing the gender gap.

Women Worry, Men Adopt: How Gendered Perceptions Shape the Use of Generative AI
Fabian Stephany, Jedrzej Duszynski · Fetched March 15, 2026 · arXiv.org
Via Semantic Scholar · correlational · medium evidence · 8/10 relevance · DOI · Source
In the UK, women adopt generative AI far less than men primarily because they report higher societal-risk concerns (mental health, privacy, climate, labor disruption), and changing those perceptions could substantially increase young women's uptake.

Generative artificial intelligence (GenAI) is diffusing rapidly, yet its adoption is strikingly unequal. Using nationally representative UK survey data from 2023 to 2024, we show that women adopt GenAI substantially less often than men because they perceive its societal risks differently. We construct a composite index capturing concerns about mental health, privacy, climate impact, and labor market disruption. This index explains between 9 and 18 percent of the variation in GenAI adoption and ranks among the strongest predictors for women across all age groups, surpassing digital literacy and education for young women. Intersectional analyses show that the largest disparities arise among younger, digitally fluent individuals with high societal risk concerns, where gender gaps in personal use exceed 45 percentage points. Using a synthetic twin panel design, we show that increased optimism about AI's societal impact raises GenAI use among young women from 13 percent to 33 percent, substantially narrowing the gender divide. These findings indicate that gendered perceptions of AI's social and ethical consequences, rather than access or capability, are the primary drivers of unequal GenAI adoption, with implications for productivity, skill formation, and economic inequality in an AI-enabled economy.

Summary

Main Finding

Women adopt generative AI (GenAI) substantially less than men in the UK, and this gap is driven primarily by gendered perceptions of GenAI’s societal risks (mental health, privacy, climate impact, labor-market disruption) rather than by access or capability. A composite index of those concerns explains a meaningful share of variation in adoption and is among the strongest predictors of women’s GenAI use; shifting perceptions raises young women’s use markedly and narrows the gender divide.

Key Points

  • Source: nationally representative UK survey data, 2023–2024.
  • Core mechanism: a composite “societal risk concerns” index (mental health, privacy, climate, labor disruption) predicts lower GenAI adoption among women.
  • Quantitative importance:
    • The risk-concerns index explains between 9% and 18% of the variation in GenAI adoption.
    • It ranks among the strongest predictors of adoption for women across all ages and outperforms digital literacy and education for young women.
    • Intersectional patterns: largest gender gaps occur among younger, digitally fluent respondents with high societal-risk concerns — gender differences in personal GenAI use exceed 45 percentage points in these subgroups.
    • Counterfactual/synthetic analysis: raising optimism about AI’s societal impact increases GenAI use among young women from 13% to 33%, substantially narrowing the gender gap.
  • Interpretation: differences in social/ethical perceptions (risk aversion, trust, value judgments) are the primary drivers of unequal GenAI uptake, not differential access or digital skills.
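The composite-index mechanics above can be sketched in a few lines. Everything below is simulated for illustration: the paper's actual survey items, weighting scheme, and estimator are not reproduced here; a z-scored item average and an incremental-R² comparison are just one common way such an index and its "explained variation" are computed.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Four illustrative "societal risk" survey items, correlated by construction
# (stand-ins for the mental-health, privacy, climate, and labor items).
latent = rng.normal(size=n)
items = np.column_stack([latent + rng.normal(size=n) for _ in range(4)])

# Composite index: average of z-scored items.
z = (items - items.mean(axis=0)) / items.std(axis=0)
risk_index = z.mean(axis=1)

# Toy adoption outcome: higher concern lowers adoption; one other covariate.
digital_lit = rng.normal(size=n)
adopt = (0.5 * digital_lit - 0.8 * risk_index + rng.normal(size=n)) > 0


def r2(X, y):
    """R-squared of a linear probability model fit by least squares."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()


y = adopt.astype(float)
base = r2(digital_lit[:, None], y)                       # without the index
full = r2(np.column_stack([digital_lit, risk_index]), y)  # with the index
print(f"incremental R^2 from risk index: {full - base:.3f}")
```

On simulated data like this, the index adds a visible chunk of explained variation over digital literacy alone, which is the shape of the paper's 9–18% finding.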

Data & Methods

  • Data: nationally representative UK survey waves collected in 2023–2024 that measure GenAI awareness and personal use, demographic and socioeconomic covariates, digital skills, and attitudes about AI’s societal impacts.
  • Key variable: composite index of societal risk concerns combining items on mental-health risks, privacy concerns, environmental/climate impact, and fears about labor-market disruption.
  • Empirical approach:
    • Multivariate regression analyses to assess predictors of GenAI adoption, controlling for demographics, education, digital literacy, occupation, and access.
    • Predictor-ranking to compare the explanatory power of the risk index versus other covariates (e.g., digital literacy, education).
    • Intersectional subgroup analyses by age and digital fluency to reveal heterogeneous gender gaps.
    • Synthetic twin panel design: construct matched counterfactual “twins” (synthetic panel) to simulate how changes in AI-related optimism would affect adoption — used to estimate increase in young women’s use from 13% to 33% under increased optimism.
  • Robustness: results hold across age groups and after accounting for access and capability measures (suggesting perceptions, not constraints, drive the gap).
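The synthetic-twin counterfactual can be read, loosely, as covariate matching: each high-concern respondent is paired with an otherwise-similar "optimistic" respondent, whose observed adoption stands in for the counterfactual. A minimal sketch on simulated data follows; the paper's actual matching variables, distance metric, and procedure may differ.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000

# Toy covariates and an "optimism about AI" flag.
digital_lit = rng.normal(size=n)
education = rng.normal(size=n)
optimist = rng.random(size=n) < 0.5

# Adoption depends on covariates and (strongly, in this toy DGP) on optimism.
logit = -1.8 + 0.6 * digital_lit + 0.3 * education + 1.4 * optimist
adopt = rng.random(size=n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([digital_lit, education])
pess, opt = ~optimist, optimist

# Nearest-neighbour "twin" among optimists for each pessimist
# (Euclidean distance in covariate space).
d = np.linalg.norm(X[pess][:, None, :] - X[opt][None, :, :], axis=2)
twin = d.argmin(axis=1)

observed = adopt[pess].mean()             # pessimists' actual adoption rate
counterfactual = adopt[opt][twin].mean()  # their matched twins' adoption rate
print(f"observed {observed:.2f} -> counterfactual {counterfactual:.2f}")
```

The gap between the observed and twin-based rates is the analogue of the paper's 13%-to-33% counterfactual for young women under increased optimism.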

Implications for AI Economics

  • Adoption and productivity: gendered perception gaps may produce unequal adoption of productivity-enhancing GenAI tools, potentially reducing aggregate productivity gains and creating gendered productivity differentials within firms and occupations.
  • Skills and human capital formation: lower early adoption among women — especially young, digitally fluent women — could alter skill accumulation, career trajectories, and comparative advantage as AI reshapes tasks and required skills.
  • Labor-market inequality: differential uptake driven by attitudes (not access) risks widening existing gender wage and employment gaps as AI complements some tasks and skills.
  • Policy and firm intervention levers:
    • Address perceptions directly (communication, education, demonstrations) rather than focusing solely on access or digital skills training.
    • Design outreach and interventions targeted at subgroups with high societal-risk concerns (young women, digitally fluent) to reduce misperceptions and build trust.
    • Improve governance, transparency, and safety assurances (privacy protections, ethical AI practices) to lower perceived societal risks and encourage broader, more inclusive adoption.
  • Research and evaluation: interventions that change beliefs about AI’s societal impacts (information treatments, institutional safeguards, product design) merit experimental testing to assess causal effects on adoption and downstream economic outcomes.
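If such an information treatment were tested experimentally, the paper's counterfactual (13% vs. 33% uptake) implies a large, cheaply detectable effect. A back-of-envelope power calculation using the standard two-proportion normal-approximation formula, purely illustrative:

```python
from math import ceil, sqrt
from statistics import NormalDist


def n_per_arm(p1, p2, alpha=0.05, power=0.80):
    """Sample size per arm to detect p1 vs. p2 with a two-sided
    two-proportion z-test (normal-approximation formula)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    pbar = (p1 + p2) / 2
    num = (z_a * sqrt(2 * pbar * (1 - pbar))
           + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(num / (p1 - p2) ** 2)


# Effect size borrowed from the paper's counterfactual: 13% vs. 33% uptake.
print(n_per_arm(0.13, 0.33))  # on the order of tens of subjects per arm
```

An effect this size would need only a modest sample, which makes randomized information treatments an unusually feasible follow-up design.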

Assessment

Paper Type: correlational
Evidence Strength: medium — Uses nationally representative survey data, rich controls, robustness checks, and a transparent predictor-ranking approach that consistently points to perceptions as a strong correlate of lower female GenAI uptake; however, the design is observational, so causal claims about perceptions driving adoption are vulnerable to unobserved confounding, reverse causality (use affecting attitudes), and measurement error in self-reported use and beliefs.
Methods Rigor: medium — Analysis leverages high-quality, representative data, constructs a coherent composite index of societal-risk concerns, performs subgroup and robustness checks, and implements a plausible matched synthetic-twin counterfactual; but it lacks experimental or quasi-experimental causal leverage (e.g., randomized information treatments or valid instruments), relies on cross-sectional self-reports, and may not fully rule out alternative explanations.
Sample: Nationally representative UK adult survey waves collected in 2023–2024 measuring GenAI awareness and personal use, demographic and socioeconomic covariates, digital skills and access, occupation, and attitudinal items about AI's societal impacts (mental health, privacy, climate, labor-market disruption).
Themes: adoption inequality · labor_markets · skills_training · productivity
Identification: Observational analysis using multivariate regression controlling for demographics, education, digital literacy, occupation, and access; predictor-ranking to compare explanatory power of a composite societal-risk concerns index versus other covariates; intersectional subgroup analysis; and a matched synthetic-twin (counterfactual) design that simulates how changing AI-related optimism would alter adoption — no randomized treatment or instrumental variables for causal identification.
Generalizability:
  • Country-specific (UK) — cultural, regulatory, and media contexts may limit extrapolation to other countries
  • Cross-sectional self-reported measures — may not reflect actual usage or future adoption trajectories
  • Evolving GenAI definitions and products — findings tied to the state of GenAI and public discourse in 2023–24
  • Potential cohort effects — younger cohorts' attitudes and adoption may change rapidly over time
  • Observational design limits causal generalization to downstream economic outcomes (productivity, wages)

Claims (10)

  • Claim: Generative artificial intelligence (GenAI) is diffusing rapidly, yet its adoption is strikingly unequal.
    Outcome: Adoption Rate · Direction: mixed · Confidence: medium (0.18)
    Details: GenAI adoption rates (overall and by demographic groups)
  • Claim: Women adopt GenAI substantially less often than men.
    Outcome: Adoption Rate · Direction: negative · Confidence: medium (0.18)
    Details: Personal use / adoption of GenAI (female vs. male rates)
  • Claim: Women adopt GenAI less often than men because they perceive its societal risks differently.
    Outcome: Adoption Rate · Direction: negative · Confidence: medium (0.18)
    Details: GenAI adoption (mediated by societal-risk concern index)
  • Claim: A composite index capturing concerns about mental health, privacy, climate impact, and labor-market disruption was constructed to measure societal risk perceptions of AI.
    Outcome: AI Safety and Ethics · Direction: null_result · Confidence: high (0.3)
    Details: Societal risk concerns index (constructed measure)
  • Claim: The societal-risk concerns index explains between 9 and 18 percent of the variation in GenAI adoption.
    Outcome: Adoption Rate · Direction: negative · Confidence: medium (0.18)
    Details: Explained variation in GenAI adoption (percent variance attributable to the index); explains 9–18% of variation in adoption
  • Claim: The societal-risk concerns index ranks among the strongest predictors of GenAI adoption for women across all age groups, surpassing digital literacy and education for young women.
    Outcome: Adoption Rate · Direction: negative · Confidence: medium (0.18)
    Details: Predictive strength for GenAI adoption (relative importance of predictors for women and young women)
  • Claim: Intersectional analyses show the largest gender disparities in GenAI use arise among younger, digitally fluent individuals with high societal risk concerns, where gender gaps in personal use exceed 45 percentage points.
    Outcome: Adoption Rate · Direction: negative · Confidence: medium (0.18)
    Details: Gender gap in personal GenAI use (percentage-point difference) within the younger, digitally fluent, high-concern subgroup; gap > 45 percentage points
  • Claim: Using a synthetic twin panel design, increased optimism about AI's societal impact raises GenAI use among young women from 13 percent to 33 percent, substantially narrowing the gender divide.
    Outcome: Adoption Rate · Direction: positive · Confidence: medium (0.18)
    Details: GenAI use rate among young women; increase from 13% to 33% with increased optimism
  • Claim: Gendered perceptions of AI's social and ethical consequences, rather than access or capability, are the primary drivers of unequal GenAI adoption.
    Outcome: Adoption Rate · Direction: negative · Confidence: medium (0.18)
    Details: Primary drivers of unequal GenAI adoption (relative contribution of perceptions vs. access/capability)
  • Claim: Unequal GenAI adoption has implications for productivity, skill formation, and economic inequality in an AI-enabled economy.
    Outcome: Inequality · Direction: negative · Confidence: low (0.09)
    Details: Implied downstream outcomes: productivity, skill formation, economic inequality (speculative consequences)

Notes