Technical fixes alone cannot solve AI bias in education
A new international study argues that addressing social inequalities in the use of AI systems in higher education will require more than technical safeguards. It will demand a fundamental shift in how educators engage with AI itself.
The study, "Dialogic Reflection and Algorithmic Bias: Pathways Toward Inclusive AI in Education," published in Trends in Higher Education, examines how structured dialogue and critical reflection among educators can play a decisive role in identifying and mitigating algorithmic bias in educational AI systems. Based on an extensive qualitative intervention in a Latin American university context, the study positions teachers not as passive users of AI tools, but as ethical intermediaries with agency and responsibility.
Algorithmic bias enters the classroom
Algorithmic bias refers to systematic and unfair distortions in AI outputs that disadvantage certain individuals or groups. In education, these biases can manifest in subtle but consequential ways. Language models may favor dominant cultural norms, marginalize non-standard dialects, or reproduce gender and racial stereotypes. Automated assessment tools may misinterpret student responses shaped by diverse linguistic, cultural, or socioeconomic backgrounds. Recommendation systems may steer students toward or away from opportunities based on patterns that reflect historical inequality rather than individual potential.
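To make the idea of a systematic distortion concrete, the sketch below checks whether an automated pass/fail tool treats two groups of students differently, using a simple demographic-parity gap. This is purely illustrative: the study is qualitative and does not prescribe any metric, and the group labels and data here are hypothetical.

```python
# Illustrative only: a simple demographic-parity check on automated
# assessment outcomes. The study does not prescribe this metric;
# the group labels and decisions below are hypothetical.
from collections import defaultdict

def positive_rate_by_group(records):
    """Share of students per group who received a 'pass' from the tool."""
    totals, passes = defaultdict(int), defaultdict(int)
    for group, passed in records:
        totals[group] += 1
        passes[group] += passed
    return {g: passes[g] / totals[g] for g in totals}

# (group, automated decision) pairs -- hypothetical data
records = [
    ("standard_dialect", 1), ("standard_dialect", 1), ("standard_dialect", 0),
    ("regional_dialect", 0), ("regional_dialect", 1), ("regional_dialect", 0),
]

rates = positive_rate_by_group(records)
gap = max(rates.values()) - min(rates.values())
print(rates, f"demographic parity gap: {gap:.2f}")
# A large gap flags outputs worth human review; on its own it does not
# establish unfairness, since context and base rates also matter.
```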
Generative AI systems are increasingly used to generate feedback, summarize readings, assist with grading, and support instructional design. While these tools promise efficiency and personalization, their outputs are shaped by training data drawn largely from Global North contexts and dominant languages. Without critical oversight, their deployment can unintentionally normalize exclusionary assumptions.
The authors argue that algorithmic bias in education is not simply a technical flaw but a socio-technical phenomenon. Bias emerges through the interaction of data, models, institutional norms, and human interpretation. As a result, solutions that focus exclusively on improving datasets or refining algorithms risk overlooking how educators themselves mediate AI outputs in practice.
To explore this dynamic, the authors conducted a qualitative action-research study involving 102 university professors and postgraduate students in the Dominican Republic. Participants engaged in structured workshops and sustained online dialogue designed to surface assumptions about AI, examine real examples of biased outputs, and collectively reflect on ethical responsibilities. Rather than treating bias as an abstract concept, the intervention grounded discussion in concrete classroom scenarios.
The research shows that many participants initially perceived AI as neutral or objective, reflecting a widespread tendency to attribute authority to algorithmic systems. Through dialogic reflection, however, this perception shifted. Participants became more attentive to how prompts, context, and interpretation influence AI behavior, and how uncritical use could reinforce existing inequalities.
Dialogic reflection as an ethical intervention
The study centers on dialogic reflection, a pedagogical approach rooted in collaborative discussion, critical questioning, and shared meaning-making. Unlike top-down training or compliance-focused ethics guidelines, dialogic reflection emphasizes collective engagement with complex issues that lack simple answers.
In the intervention, participants were encouraged to analyze AI-generated outputs, identify potential biases, and discuss their implications for students. These discussions extended beyond identifying problems to exploring why biases arise and how educators might respond. The process was iterative, with insights from one discussion informing subsequent reflection.
The authors qualitatively analyzed participant contributions across five dimensions: participation quality, bias identification strategies, ethical responsibility, perceived social impact, and proposals for inclusive practice. The results show a clear progression: as dialogue deepened, participants demonstrated greater sensitivity to bias, stronger ethical awareness, and increased willingness to adapt their use of AI tools.
Awareness alone, however, was not sufficient. While participants developed a more nuanced understanding of algorithmic bias, translating that awareness into concrete pedagogical change required sustained engagement. Dialogic reflection functioned as a bridge between ethical recognition and practical action, helping educators move from abstract concern to context-specific strategies.
These strategies included redesigning prompts to reduce biased outputs, cross-checking AI-generated feedback with human judgment, and contextualizing AI use within local sociocultural realities. Importantly, the study highlights that such practices are not one-time fixes but ongoing processes that evolve as AI systems and educational contexts change.
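As an illustration of what prompt redesign and human cross-checking might look like in practice, the sketch below frames a feedback request around students' local linguistic context and holds AI-drafted feedback for educator review before release. The function names, prompt wording, and scenario are assumptions made for illustration; the study describes these strategies qualitatively and does not specify an implementation.

```python
# Illustrative sketch of two strategies mentioned in the study: prompt
# redesign and cross-checking AI feedback with human judgment. Names and
# wording are assumptions, not the authors' implementation.

def build_prompt(task: str, local_context: str) -> str:
    """Frame the request so the model is told about the local context
    rather than defaulting to dominant-language norms."""
    return (
        f"{task}\n"
        f"Context: students write in {local_context}. "
        "Do not penalize regional vocabulary or non-standard spelling "
        "unless the assignment explicitly assesses them."
    )

def reviewed_feedback(draft: str, reviewer) -> str:
    """Hold AI-drafted feedback until a human educator approves or edits it."""
    decision = reviewer(draft)          # educator returns revised text or None
    return decision if decision is not None else draft

# Hypothetical usage: the prompt would go to whichever model the
# institution uses (omitted here); the educator revises one biased phrase.
prompt = build_prompt("Give formative feedback on this essay.",
                      "Dominican Spanish with some English code-switching")
draft = "Avoid informal regional expressions; they weaken your argument."
final = reviewed_feedback(draft, lambda d: d.replace(
    "Avoid informal regional expressions; they weaken your argument.",
    "Your regional expressions add voice; make sure each claim is supported."))
print(final)
```

The design choice reflected here is the study's broader point: the educator, not the tool, remains the final arbiter of what reaches students.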
The research also brings to the fore the relational dimension of ethical AI in education. Bias mitigation was most effective when educators engaged collectively rather than individually. Shared dialogue created space for diverse perspectives, challenged assumptions, and reduced reliance on AI as an unquestioned authority. This collective dimension contrasts with prevailing models of AI ethics that emphasize individual compliance or institutional policy.
Rethinking responsibility for inclusive AI in education
The authors propose a hypothesis-driven model linking dialogic reflection to bias awareness and, ultimately, to inclusive teaching practices. In this model, educators are positioned as ethical agents who actively shape how AI affects students, rather than as end-users constrained by technological design.
This framing challenges narratives that locate responsibility primarily with developers or regulators. While acknowledging the importance of technical design and policy oversight, the study argues that ethical outcomes also depend on how AI is embedded in pedagogical relationships. Decisions about when to use AI, how to interpret its outputs, and how to contextualize its recommendations are inherently value-laden.
The Dominican Republic context plays a significant role in this argument. Operating in a Global South setting, participants were acutely aware of how imported technologies can misalign with local realities. The study shows that dialogic reflection enabled educators to critically assess whether AI tools developed elsewhere reflected their students’ linguistic, cultural, and socioeconomic contexts. This sensitivity is particularly important as AI adoption accelerates in regions historically underrepresented in training data.
Rather than relying solely on technical training or ethical checklists, institutions may need to invest in spaces for sustained dialogue about AI use. Faculty development programs that incorporate critical discussion, peer learning, and reflective practice could be more effective in fostering inclusive AI use than one-off workshops.
FIRST PUBLISHED IN: Devdiscourse

