Women’s ethical concerns are slowing generative AI adoption
New research shows that women are adopting generative AI tools at significantly lower rates than men, and the gap is rooted not in technical skill, but in how the societal risks of AI are perceived.
The study, titled “Women Worry, Men Adopt: How Gendered Perceptions Shape the Use of Generative AI,” draws on nationally representative UK survey data collected in 2023 and 2024. The research finds that women’s heightened concerns about AI’s broader social, ethical, and environmental consequences play a decisive role in shaping adoption patterns, with implications for future inequality in an AI-driven economy.
Risk perception emerges as the key driver of unequal AI use
Across the UK population, men are more likely than women to report frequent use of generative AI tools, defined in the study as using such tools at least once a week. While the overall difference may appear modest at first glance, the gap widens sharply when societal concerns about AI are taken into account. Women who express worries about AI’s impact on mental health, privacy, climate sustainability, or employment are substantially less likely to use generative AI than men with similar concerns.
The researchers developed a composite index to capture these risk perceptions, combining concerns about psychological harm, environmental impact, data misuse, and labor market disruption. This index emerged as one of the strongest predictors of generative AI adoption for women across all age groups. Among younger women in particular, perceived societal risk was more influential than education level or self-reported digital literacy in explaining whether AI tools were used at all.
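The paper does not publish the exact formula for this index, but a common construction for this kind of measure is to standardize each concern item across respondents and average the standardized scores. The sketch below illustrates that approach with hypothetical Likert-scale data; the domain names, scale, and weighting are assumptions, not the study’s actual specification.

```python
import statistics

def zscore(values):
    """Standardize a list of scores to mean 0, standard deviation 1."""
    mean = statistics.mean(values)
    sd = statistics.stdev(values)
    return [(v - mean) / sd for v in values]

def composite_risk_index(items):
    """items maps each concern domain to per-respondent Likert scores.

    Returns one composite score per respondent: the mean of that
    respondent's standardized scores across all concern domains.
    """
    standardized = [zscore(scores) for scores in items.values()]
    return [statistics.mean(respondent) for respondent in zip(*standardized)]

# Hypothetical 1-5 Likert responses from four respondents
items = {
    "mental_health": [5, 2, 4, 1],
    "environment":   [4, 2, 5, 1],
    "data_misuse":   [5, 1, 4, 2],
    "labor_market":  [4, 3, 5, 1],
}
index = composite_risk_index(items)
```

Because each item is standardized before averaging, no single concern domain dominates the index simply by being measured on a noisier scale, and the composite centers on zero across the sample.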
The findings challenge the dominant assumption that unequal technology adoption is primarily driven by deficits in skills, training, or confidence. Instead, they point to a more complex dynamic in which women are making deliberate choices shaped by ethical and social considerations. The study shows that women’s adoption rates decline most sharply in response to concerns about mental health effects and data privacy, while men’s usage remains comparatively stable under the same conditions.
These patterns become even more pronounced in intersectional analyses. The largest gender gaps in personal AI use are observed among younger, digitally fluent individuals who also hold strong societal risk concerns. In these groups, differences in adoption exceed 45 percentage points. This means that the people most capable of using generative AI tools are sometimes the least likely to do so if they perceive the technology as socially harmful, and this effect is far stronger for women.
At work, the gender gap is generally smaller than in personal contexts, suggesting that organizational expectations and professional incentives can partially offset personal hesitation. However, even in workplace settings, risk perceptions continue to shape adoption, especially where privacy or mental health concerns are salient. The results indicate that professional environments do not fully neutralize value-driven resistance to AI use.
Digital skills alone fail to close the gender divide
To assess whether traditional predictors of technology adoption could account for the observed gaps, the study compared the influence of risk perceptions with factors such as education, occupation, and digital literacy. Using gender-specific models across different age groups, the researchers found that risk perception consistently explained a substantial share of variation in AI use for women, often surpassing the explanatory power of skills-based measures.
Among young adults, risk perception ranked as one of the top predictors of AI adoption for women but played a much smaller role for men. In middle-aged and older groups, societal concerns remained highly influential for both genders but continued to carry more weight for women. Across all models, perceptions of AI-related risk accounted for between 9 and 18 percent of the total explanatory importance, a level comparable to or greater than that of education and occupation.
These findings carry important implications for policy and workforce development strategies. Many current initiatives aimed at promoting AI adoption focus on improving digital literacy, offering technical training, or expanding access to tools. While such interventions increase overall adoption, the study shows that they can unintentionally widen the gender gap, particularly among younger users. When digital skills improve, men’s AI use tends to rise more sharply than women’s, reinforcing existing disparities.
The research suggests that women’s lower adoption rates should not be interpreted as a lack of confidence or competence. Instead, they reflect a pattern of care-driven caution, rooted in concern for broader social consequences rather than personal risk. This distinction matters because it reframes underuse not as a deficit to be corrected, but as a signal of legitimate ethical engagement with emerging technology.
Importantly, the study highlights that women’s concerns are not unfounded. Generative AI systems are associated with significant energy consumption, unresolved data governance issues, labor displacement risks, and well-documented problems related to bias and misinformation. The findings imply that women’s hesitation may serve as an early warning indicator of systemic weaknesses in AI development and deployment.
Shifting attitudes can narrow the adoption gap
While the study documents persistent gender differences in AI use, it also provides evidence that these patterns are not fixed. By tracking individuals across two survey waves using a synthetic twin design, the researchers examined how changes in skills and attitudes translate into changes in behavior over time.
The analysis compared individuals with similar demographic and occupational profiles who experienced different shifts between 2023 and 2024. Two types of change were examined: improvements in digital literacy and increased optimism about AI’s societal impact. The results reveal a clear contrast between the two pathways.
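The matching logic behind a synthetic twin comparison can be sketched in a few lines: pair each person who experienced a given shift with the most demographically similar person who did not, then average the difference in their outcome changes. This is a minimal illustration of that idea, not the study’s actual estimator; the distance weights, variables, and data below are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Person:
    age: int
    education: int        # years of schooling
    digital_skill: float  # 0-1 self-reported literacy
    optimism_gain: bool   # became more optimistic about AI between waves
    use_change: float     # change in AI use, wave 1 -> wave 2

def distance(a, b):
    """Crude profile similarity (illustrative, arbitrary weights)."""
    return (abs(a.age - b.age) / 50
            + abs(a.education - b.education) / 10
            + abs(a.digital_skill - b.digital_skill))

def matched_effect(people):
    """Pair each optimism-gainer with the most similar non-gainer
    (their 'synthetic twin') and average the gap in adoption change."""
    treated = [p for p in people if p.optimism_gain]
    controls = [p for p in people if not p.optimism_gain]
    diffs = []
    for t in treated:
        twin = min(controls, key=lambda c: distance(t, c))
        diffs.append(t.use_change - twin.use_change)
    return sum(diffs) / len(diffs)

# Hypothetical two-wave sample
people = [
    Person(25, 16, 0.90, True,  0.40),
    Person(26, 16, 0.85, False, 0.05),
    Person(40, 12, 0.50, True,  0.25),
    Person(41, 12, 0.55, False, 0.10),
]
effect = matched_effect(people)
```

A positive `effect` here would mean that, relative to their matched twins, people who grew more optimistic increased their AI use more, which is the pattern the study reports for women.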
Gains in digital literacy led to higher AI adoption for both men and women but tended to widen the gender gap, especially among younger adults. By contrast, increased optimism about AI’s social effects produced a markedly different outcome. When women became more positive about AI’s broader impact, their adoption rates rose sharply, often more than those of men, leading to a narrowing of the gender divide.
Among young women, a shift toward greater societal optimism was associated with a substantial increase in personal AI use, while the corresponding increase for men was smaller. This pattern suggests that attitudes about AI’s ethical and social role are particularly influential for women’s engagement with the technology.
The findings point to a critical policy insight: addressing concerns about AI’s societal consequences may be more effective at promoting equitable adoption than focusing narrowly on technical skills. Transparency around data use, stronger safeguards for mental well-being, clearer accountability mechanisms, and credible action on environmental impact could reduce hesitation without pressuring individuals to compromise their values.
The study also raises broader questions about how early adoption patterns shape long-term inequality. If men disproportionately adopt generative AI during a formative period when norms, skills, and expectations are being established, these early advantages may compound over time. This could influence productivity, visibility, and career advancement in ways that mirror earlier digital divides.
By identifying societal risk perception as a central driver of gendered AI adoption, the research reframes the debate around inclusion in the AI economy. Closing the gap, the authors argue, is not simply a matter of teaching more people how to use the tools. It requires addressing what the tools are used for, how they are built, and whose concerns are taken seriously in shaping their future.
FIRST PUBLISHED IN: Devdiscourse

