GenAI in STEM education sparks new concerns over bias, identity and equity

CO-EDP, VisionRI | Updated: 29-11-2025 10:26 IST | Created: 29-11-2025 10:26 IST
Representative Image. Credit: ChatGPT

The rapid spread of generative artificial intelligence (GenAI) in classrooms is transforming the science and engineering education landscape, raising new concerns about fairness, identity, and the widening digital divide among students, according to a new peer-reviewed analysis published in Education Sciences.

The study, titled “Subjective Intelligence: A Framework for Generative AI in STEM Education”, examines how identity, culture, language, and social context influence students’ engagement with GenAI systems. The paper proposes a new approach that places student subjectivity at the center of GenAI integration in higher education.

Identity and bias emerge as core challenges

The study warns that the promise of GenAI is undermined by structural inequities embedded in the technologies themselves. As models are trained on large-scale internet data, they reproduce dominant cultural norms while ignoring the linguistic, social, and cultural realities of multilingual, multidialectal, and historically marginalized learners. This, the authors find, leads to feedback and suggestions that are less accurate, less contextual, and less supportive for these students.

The paper cites documented evidence showing how AI systems misidentify demographic information, reproduce racial biases in scholarly datasets, and generate content reflecting narrow cultural norms. Such behaviors can distort how students understand scientific contributions and reinforce stereotypes about who belongs in technical fields.

The authors also highlight how GenAI has intensified the “AI literacy divide,” separating students who possess strong technological familiarity from those who lack access to advanced tools due to cost, bandwidth, or institutional constraints. First-generation and minoritized students face disproportionate barriers, which the study warns could cement new layers of inequality within STEM programs.

The research also points out that many scholars still treat identity as separate from AI use, limiting their analyses to efficiency or fairness metrics rather than the deeper moral and cognitive factors that shape how students interpret technology. The authors position subjective intelligence as a necessary corrective, insisting that understanding students’ identities is essential to assessing how GenAI reshapes learning environments.

A framework rooted in cognitive, moral, and linguistic development

The proposed subjective intelligence framework builds on three interconnected pillars: ethical engagement in data practices, identification of human bias in AI, and attention to multilingual and multidialectal realities in design and learning.

The first pillar stresses that students must understand not only how to use GenAI but also how fairness, appropriateness, and integrity differ depending on cultural, academic, and disciplinary contexts. The authors argue that blanket GenAI bans in classrooms fail to support this development and instead limit opportunities for students to discuss ethics, bias, and responsible use.

The second pillar focuses on how AI tools embed developer assumptions, historical biases, and incomplete representations of cultural experience. Students need structured opportunities to interrogate these forces, especially in engineering fields where design decisions carry societal consequences.

The third pillar addresses language, an area the study identifies as both underexamined and widely misunderstood. According to the authors, GenAI’s handling of language often feels unnatural to bilingual users, as the technology mirrors patterns found in written digital sources rather than the fluid, culturally negotiated practices of real communities. This disconnect can undermine STEM learning for students who rely on mixed linguistic repertoires to solve problems and interpret concepts.

The authors warn that overlooking these pillars results in learning environments where GenAI reinforces dominant identities while constraining diverse ways of thinking. Their cases show that students’ moral reasoning and perceptions of fairness emerge from their own social positions, which must be acknowledged rather than erased in AI policy decisions.

Case studies reveal gaps in GenAI’s support for inclusive STEM learning

The research draws on three instructional cases from undergraduate and graduate STEM programs as evidence of the framework's urgency.

In the first case, students in an engineering course described how their social and academic contexts shaped their willingness to use GenAI. They expressed concerns that reflected personal identities, moral values, and fears about fairness. The authors note that these concerns revealed the limits of assuming that a single policy on GenAI could suit all learners. Students also wrestled with the consequences of algorithms making social judgments, demonstrating how deeply identity factors shape engagement with AI tools.

In the second case, a graduate seminar examined historical harms in scientific and technological fields, such as unethical studies and discriminatory design practices. These discussions helped students connect past scientific injustices with present fears about AI-driven surveillance, migration screening, and data misuse. The authors argue that these connections show how GenAI education must extend beyond tool literacy to include critical sociotechnical understanding.

The third case, an engineering design task focused on the U.S.–Mexico border, exposed the limitations of current GenAI models in engaging with cultural, linguistic, and regional complexity. When prompted to act as designers from multilingual communities, systems such as ChatGPT, Copilot, and Gemini often produced content that felt formulaic, overly technical, or disconnected from lived reality. Only one model showed minimal attention to linguistic nuance, and even then, the result emphasized economic factors over community needs.

The authors conclude that because GenAI's output patterns reflect its training data, its language use and cultural knowledge replicate patterns found in that data rather than genuine human experience. This mismatch poses persistent risks for design-based STEM instruction, especially when students work in multilingual or culturally dynamic contexts.

Rising concerns over agency, power, and student–AI dynamics

The paper also raises deeper questions about future AI systems that may act as agentic collaborators rather than passive tools. The authors suggest that while current models tend to comply with user expectations, future AI may challenge students more forcefully. This could help stretch students' reasoning or, if poorly designed, reinforce harmful biases and misperceptions.

The study cites evidence showing that people’s emotional responses to AI shape their willingness to trust or challenge these systems. Students may feel threatened when AI appears too competent or too humanlike, influencing team dynamics in group projects. The authors caution that as AI becomes more embedded in collaborative STEM work, educators must prepare students for complex emotional and identity-shaped interactions with nonhuman partners.

This dimension, they argue, strengthens the need for a framework grounded in subjective intelligence rather than technical optimization alone.

Widening digital divides and the risk of exclusion

The study argues that GenAI’s increasing presence in higher education exposes systemic inequalities that cannot be ignored. Students who lack subscriptions to advanced models, or who face bandwidth limits, already experience unequal access. Without intervention, these disparities may deepen academic divides and weaken long-term STEM participation for underserved populations.

These gaps intersect with identity, meaning that marginalized groups may face multiple overlapping barriers. The paper calls for designing pedagogical practices and institutional policies that express sensitivity to social context rather than applying uniform rules that mask inequity.

FIRST PUBLISHED IN: Devdiscourse