Generative AI threatens human epistemic agency in classrooms
The integration of generative artificial intelligence (GenAI) into education is accelerating across classrooms, policy frameworks, and administrative systems worldwide. As advanced AI systems like ChatGPT, Gemini, and automated lesson generators become embedded in teaching practices, a new wave of scrutiny is emerging.
A recent study titled "Beyond Tools: Generative AI as Epistemic Infrastructure in Education", published on arXiv, warns that current AI deployments risk eroding educators' professional judgment and undermining long-term educational goals unless critical epistemic safeguards are built into their design.
What role does AI now play in shaping how educators produce and evaluate knowledge?
At the heart of the study is the concept of “epistemic infrastructure”: the tools, platforms, and systems that shape how knowledge is produced, validated, and circulated in education. Historically, these infrastructures were primarily human-centered: lesson plans designed through pedagogical reasoning, assessments tailored through years of classroom experience, and knowledge shared through dialogue and deliberation. With generative AI systems now mediating these functions, the authors argue we are witnessing a profound infrastructural transformation.
The study examines how this AI-driven shift impacts "epistemic agency," defined as an individual’s ability to form beliefs, evaluate information, and actively engage in knowledge construction. Drawing from situated cognition theory and value-sensitive design methodology, the research explores how AI systems currently being adopted in schools support, or fail to support, teachers' abilities to exercise skilled judgment.
Two case studies anchor the findings. The first analyzes MagicSchool AI, a platform offering lesson plan generation tools marketed for speed and efficiency. The second focuses on Brisk Teaching, a popular Chrome extension used to automate essay feedback. In both cases, the study finds that while these systems offer time-saving features, they often strip away opportunities for deep pedagogical reasoning, customization, and critical review - cornerstones of professional teaching.
How do current AI systems limit critical thinking, customization, and professional growth in teaching?
The study evaluates AI tools against three core dimensions that influence epistemic agency: support for skilled epistemic actions, cultivation of epistemic sensitivity, and long-term habit formation.
Skilled epistemic actions refer to the teacher’s ability to make nuanced, context-driven instructional decisions. While tools like MagicSchool technically allow for modification and input, their design encourages binary choices (accept or reject) without rewarding the deeper work of tailoring content. Brisk Teaching offers similar functionality, allowing teachers to review and customize AI-generated feedback. Yet the interface subtly promotes uncritical acceptance of default suggestions, eroding the space for educators to apply their pedagogical expertise.
The issue extends to epistemic sensitivity, or the user’s ability to discern when deeper inquiry is needed. AI systems analyzed in the study rarely provide transparent indicators of content quality, alignment with curricular standards, or evidence-based reasoning. This opacity discourages educators from evaluating the validity of suggestions, gradually desensitizing them to essential pedagogical norms.
Lastly, the research points to the danger of habit formation. As teachers interact with AI systems designed for convenience, they may unknowingly develop patterns of passive reliance. For instance, generating feedback at scale using Brisk may slowly diminish the teacher’s own ability to recognize subtle writing flaws or provide motivationally attuned commentary. Over time, these habits could weaken the depth of instructional engagement and erode the professional identities of educators as epistemic authorities.
Can AI be designed to support, rather than erode, human epistemic agency in education?
The study acknowledges the immense potential of AI to augment educational processes, but insists that its design must shift toward reinforcing, not replacing, human epistemic strengths. The authors propose a framework rooted in situated cognition, one that views knowledge as emerging through interaction between the brain, body, and environment. Applying this framework to AI design means rethinking user interfaces, reward structures, and workflow integration.
One recommendation is to design AI systems that incorporate “epistemic speed bumps”: moments where users are prompted to pause, reflect, and make deliberate choices rather than accepting automated outputs. Another suggestion is embedding critical transparency features, such as explanations for why specific lesson elements or feedback comments were generated, allowing educators to assess alignment with pedagogical goals.
The study also urges education stakeholders to move beyond adoption and toward participation. Teachers should be involved not just as end users but as co-designers of AI systems. Their feedback must inform how tools are developed, evaluated, and adapted over time to serve the epistemic mission of education. Without this inclusion, the risk is that AI will increasingly encode the priorities of private developers (speed, scale, cost-effectiveness) at the expense of professional autonomy, critical thinking, and the cultivation of democratic learning environments.
The paper warns that if the current trajectory continues, educators may unwittingly surrender core epistemic responsibilities to black-box algorithms optimized for productivity. AI tools, in this view, are not neutral assistants; they are becoming epistemic infrastructures, deeply embedded in how educational knowledge is produced, judged, and passed on. What’s at stake is not just the quality of lesson plans or feedback forms, but the very purpose of education as a site for cultivating human understanding, deliberation, and agency.
First published in: Devdiscourse

