Female students show higher AI adoption rates, challenging traditional tech trends
New research suggests that who uses AI, and why, depends less on access and more on experience, expectations, and underlying behavioral factors. A recent study finds that while AI adoption among university students is nearly universal, the intensity and likelihood of use are driven by cognitive and demographic variables that institutions have yet to fully address.
Published in Education Sciences, the study titled “Do Gender, Experience, Age, and Expectations Influence the Use of AI? A Binary Logistic Regression Analysis Applied to Entrepreneurship Students” examines how prior experience, expectations, gender, and age shape AI adoption among 208 entrepreneurship students in Mexico.
Experience and expectations drive AI adoption more than access
The study reveals that prior experience with AI is the single strongest predictor of future use. Students who had more familiarity with AI tools were significantly more likely to continue using them, with each increase in experience sharply raising the probability of adoption.
This result aligns with established technology acceptance theories, particularly the Technology Acceptance Model (TAM) and the Unified Theory of Acceptance and Use of Technology (UTAUT), which emphasize perceived ease of use and performance benefits as central drivers of adoption. The research reinforces these frameworks by showing that familiarity reduces cognitive barriers and builds confidence, making AI tools more accessible in practice.
Equally important are expectations. Students who believed AI would improve efficiency, accuracy, or productivity were far more likely to adopt it. Positive expectations increased the likelihood of use by a substantial margin, confirming that perception of value is as critical as technical capability.
The implication, the authors argue, is that institutions cannot rely on passive exposure to AI tools. Instead, they must actively shape how students perceive AI by demonstrating its practical benefits and embedding it into real-world learning scenarios. Without this, adoption risks becoming superficial, limited to occasional or low-impact use.
Despite the strong influence of expectations, the study also highlights a broader challenge. Perceived risk and trust remain significant barriers in AI adoption, particularly in educational environments where concerns about accuracy, reliability, and ethical use persist. While not the central focus of the regression model, the theoretical framework underscores that trust can outweigh perceived usefulness in shaping attitudes toward AI.
This tension between perceived benefits and underlying skepticism explains why adoption, even when widespread, may not translate into deep or consistent use.
Gender and age show mixed but significant effects
The study also uncovers notable differences across demographic groups, particularly in gender and age, though the results are more nuanced than commonly assumed.
In the logistic regression model, gender emerged as a significant predictor, with female students more likely to use AI than their male counterparts. This finding contrasts with earlier research that often suggested male dominance in technology adoption, pointing instead to a shift in how different groups engage with AI tools.
The authors suggest that women may be more likely to adopt AI when they perceive strong utility, trust, and support in its use. This reflects broader trends in digital behavior where confidence and perceived value outweigh traditional assumptions about technological proficiency. However, this effect disappears in the linear regression analysis of usage frequency, where gender no longer plays a significant role. This indicates that while gender may influence whether students adopt AI, it does not necessarily determine how often or how intensively they use it.
Age shows a similar pattern. Within the sample, which consists primarily of students aged 17 to 25, older participants were slightly more likely to use AI. This may reflect greater academic or professional exposure rather than generational differences, as all participants belong to a digitally native cohort.
However, like gender, age does not significantly influence the frequency of AI use once adoption becomes widespread. This suggests that demographic factors matter most at the point of initial adoption but lose importance as AI becomes embedded in everyday academic practice.
The broader takeaway is that sociodemographic variables still shape access and attitudes, but their influence diminishes in environments where AI use is nearly universal. In such contexts, psychological and experiential factors take precedence.
From adoption to meaningful use: why intensity matters
With more than 99 percent of students already using AI, the researchers argue that understanding how often and how effectively students use these tools is more meaningful than simply measuring adoption.
The linear regression analysis shows that experience remains the dominant factor influencing usage frequency, followed by the purpose of use. Students who engage with AI for practical, goal-oriented tasks are more likely to use it regularly, while those with limited or unclear use cases show lower engagement.
This highlights a critical gap in current educational strategies. While many institutions have introduced AI tools, fewer have clearly defined how students should use them in specific academic or entrepreneurial contexts. Without this guidance, usage remains inconsistent and often superficial.
The study also demonstrates strong model performance, with robust predictive power and high reliability across multiple statistical measures. This strengthens confidence in the findings and underscores the importance of the identified predictors.
The research also acknowledges limitations. The sample is restricted to a specific age group and academic context, and the use of oversampling techniques introduces potential biases. Nonetheless, the consistency of the results across different analytical approaches suggests that the core conclusions are reliable.
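The oversampling caveat can be made concrete. With roughly 99 percent adoption, the minority class contains only a handful of rows, and resampling them until the classes balance repeats those few observations many times, which can inflate apparent effect sizes. The sketch below shows generic random oversampling on invented data; the study's exact resampling procedure is not specified here.

```python
# Random oversampling of a rare class (generic technique on synthetic data;
# the study's actual resampling procedure may differ).
import numpy as np

rng = np.random.default_rng(2)
y = np.array([1] * 206 + [0] * 2)  # ~99% adopters, 2 non-adopters
X = rng.normal(size=(208, 3))      # placeholder predictors

# Resample minority rows (with replacement) until the classes balance
minority = np.where(y == 0)[0]
extra = rng.choice(minority, size=(y == 1).sum() - minority.size, replace=True)
X_bal = np.vstack([X, X[extra]])
y_bal = np.concatenate([y, y[extra]])

print(np.bincount(y_bal))  # balanced, but built from only 2 distinct minority rows
```

The balanced dataset looks healthy to a model, yet every minority-class pattern, including its noise, has been copied over a hundred times, which is exactly the bias the authors flag.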
Looking ahead, the authors call for future research to incorporate additional variables such as trust, perceived risk, and digital self-efficacy. These factors are likely to play an increasingly important role as AI systems become more complex and integrated into decision-making processes.
Implications for education policy and AI integration
First, according to the study, experience must be treated as a key component of AI education. Hands-on training, practical exercises, and real-world applications are essential for building familiarity and reducing barriers to use. Simply providing access to AI tools is not enough.
Second, institutions must actively shape expectations. Demonstrating clear benefits, such as improved efficiency or enhanced learning outcomes, can significantly increase adoption and engagement. This requires integrating AI into core curricula rather than treating it as an optional add-on.
Third, trust must be addressed directly. Concerns about accuracy, bias, and ethical use remain significant obstacles. Transparent policies, clear guidelines, and ethical training can help build confidence and reduce perceived risk.
Finally, the study suggests that educational strategies should move beyond demographic assumptions. While gender and age influence adoption to some extent, they are not the primary drivers of sustained use. Instead, focus should shift to cognitive and motivational factors that apply across student populations.
- FIRST PUBLISHED IN: Devdiscourse

